WORKSPACE
RELEASE 9.2
ADMINISTRATOR'S GUIDE
Copyright 1989-2006 Hyperion Solutions Corporation. All rights reserved. Hyperion, the Hyperion logo, and Hyperion's product names are trademarks of Hyperion. References to other companies and their products use trademarks owned by the respective companies and are for reference purposes only. No portion hereof may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or information storage and retrieval systems, for any purpose other than the recipient's personal use, without the express written permission of Hyperion. The information contained herein is subject to change without notice. Hyperion shall not be liable for errors contained herein or consequential damages in connection with the furnishing, performance, or use hereof. Any Hyperion software described herein is licensed exclusively subject to the conditions set forth in the Hyperion license agreement. Use, duplication or disclosure by the U.S. Government is subject to restrictions set forth in the applicable Hyperion license agreement and as provided in DFARS 227.7202-1(a) and 227.7202-3(a) (1995), DFARS 252.227-7013(c)(1)(ii) (Oct 1988), FAR 12.212(a) (1995), FAR 52.227-19, or FAR 52.227-14, as applicable. Hyperion Solutions Corporation, 5450 Great America Parkway, Santa Clara, California 95054. Printed in the U.S.A.
Contents
Preface  xvii
  Purpose  xvii
  Audience  xviii
  Document Structure  xviii
  Where to Find Documentation  xix
  Help Menu Commands  xx
  Conventions  xxi
  Additional Support  xxii
    Education Services  xxii
    Consulting Services  xxii
    Technical Support  xxii
  Documentation Feedback  xxii

PART I  Administering Workspace  23

CHAPTER 1  Hyperion System 9 BI+ Architecture Overview  25
  About Hyperion System 9  26
  About Hyperion System 9 BI+ Reporting Solution  26
  Hyperion System 9 BI+ Reporting Solution Architecture  27
    Client Layer  27
    Application Layer  29
    Database Layer  35

CHAPTER 2  Administration Tools and Tasks  37
  Understanding Hyperion Home and Install Home  38
  Administration Tools  38
    Administer Module  38
    Impact Manager Module  39
    Job Utilities Calendar Manager  39
    Service Configurators  39
    Servlet Configurator  40
  Starting and Stopping Services  40
    Before Starting Services  41
    Starting Core Services  41
    Starting a Subset of Services  42
    Starting Services and server.dat  43
    Starting Services Individually  43
    Starting Services in Order  45
    Stopping Services  46
    Example of How Services Start  47
  Changing Service Port Assignments  47
  Starting Workspace Servlet  47
  Implementing Process Monitors  48
    Configuring Process Monitors  48
    Starting Services with Process Monitors  50
  Quick Guide to Common Administrative Tasks  51

CHAPTER 3  Administer Module  53
  Overview  54
  Setting General Properties  54
    General Properties  55
    User Interface Properties  55
  Managing Users  55
  Assigning Hyperion System 9 BI+ Default Preferences  55
  Managing Physical Resources  56
    Viewing Physical Resources  57
    Access Control for Physical Resources  57
    Adding Physical Resources  57
    Modifying Physical Resources  57
    Deleting Physical Resources  58
    Printer Properties  58
    Output Directory Properties  58
  Managing MIME Types  59
    Defining MIME Types  59
    Modifying MIME Types  59
    Inactivating or Re-activating MIME Types  60
    Deleting MIME Types  60
  Managing Notifications  60
    Understanding Subscriptions and Notifications  61
    Modifying Notification Properties  62
  Managing SmartCuts  63
  Managing Row-Level Security  64
  Tracking System Usage  65
    Managing Usage Tracking  66
    Tracking Events and Documents  66
    Sample Usage Tracking Reports  67
CHAPTER 4  Using Impact Management Services  69
  About Impact Management Services  70
  Impact Management Assessment Services  70
    About Impact Management Metadata  70
    The Metadata Service  70
  Impact Management Update Services  71
  Running the Update Services  72
  Update Data Model Transformation  72
    Link Between Data Models and Queries  72
  Access to Impact Management Services  73
  Synchronize Metadata Feature  73
    Using the Run Now Option  74
    Using the Schedule Option  74
  Update Data Model Feature  75
    Specifying a Data Model  75
    Viewing Candidates to Update  76
    Reviewing the Confirmation Dialog Box  77
  Accessing Updated Documents  78
  Connecting Interactive Reports  78
    Step 1: Configuring the Hyperion Interactive Reporting Data Access Service  78
    Step 2: Creating Interactive Reporting Database Connections  78
    Step 3: Importing Interactive Reporting Database Connections into Workspace  79
    Step 4: Associating Interactive Reporting Database Connections with Interactive Reports  79
  Using Show Task Status Interactive Report  80
  Using Show Impact of Change Interactive Report  82
  Creating the New Data Model  84
    Renaming Tables or Columns  84
    Using Normalized and Denormalized Data Models  88
    Deleting Columns  90
  Changing Column Data Types  96
  Changing User IDs and Passwords for Interactive Reporting Documents  97
  Service Configuration Parameters  98

CHAPTER 5  Managing Shared Services Models  99
  Overview  100
  About Models  100
  Prerequisites  100
  Registering Applications  100
  About Managing Models  101
  About Sharing Metadata  101
  About Sharing Data  101
  Working with Applications  102
    Working with Private Applications  102
    Working with Shared Applications  103
    Managing Applications for Metadata Synchronization  104
  Working with Models  106
    Synchronizing Models and Folders  108
    Sync Operations  110
    Model Naming Restrictions  112
    Comparing Models  112
    Compare Operations  113
    Viewing and Editing Model Content  115
    Renaming Models  119
    Sharing Models  120
    Filtering the Content of Models  122
    Tracking Model History  125
    Managing Permissions to Models  126
    Viewing and Setting Model Properties  131
  Sharing Data  133
    Prerequisites for Moving Data Between Applications  134
    Assigning Access to Integrations  134
    Accessing Data Integration Functions  134
    Filtering Integration Lists  135
    Creating or Editing a Data Integration  137
    Deleting Integrations  144
    Scheduling Integrations  145
    Managing Scheduled Integrations  146
    Grouping Integrations  149

CHAPTER 6  Automating Activities  153
  Managing Calendars  154
    Viewing Calendar Manager  154
    Creating Calendars  154
    Deleting Calendars  155
    Modifying Calendars  155
    Calendar Manager Properties  155
    Viewing the Job Log  156
    Deleting Job Log Entries  157
  Managing Time Events  158
    Managing Public Recurring Time Events  158
    Creating Externally Triggered Events  158
    Triggering Externally Triggered Events  159
  Administering Public Job Parameters  159
  Managing Interactive Reporting Database Connections  159
  Managing Pass-Through for Jobs and Interactive Reporting Documents  160
  Managing Job Queuing  160
    Scheduled Jobs  160
    Background Jobs  161
    Foreground Jobs  161

CHAPTER 7  Administering Content  163
  Organizing Items and Folders  164
  Administrating Pushed Content  164
  Administering Personal Pages  164
    Configuring the Generated Personal Page  165
    Understanding Broadcast Messages  166
    Providing Optional Personal Page Content to Users  168
    Displaying HTML Files as File Content Windows  168
    Configuring Graphics for Bookmarks  168
    Configuring Exceptions  169
    Viewing Personal Pages  169
    Publishing Personal Pages  169
    Configuring Other Personal Pages Properties  169

CHAPTER 8  Configuring RSC Services  171
  About RSC  172
    Starting RSC  172
    Logging On to RSC  172
    Using RSC  173
  Managing Services  174
    Adding RSC Services  174
    Deleting RSC Services  174
    Pinging RSC Services  175
  Modifying RSC Service Properties  175
    Common RSC Properties  176
    Job Service Properties  178
  Managing Hosts  182
    Adding Hosts  182
    Modifying Hosts  183
    Deleting Hosts  183
  Managing Repository Databases  183
    Defining Database Servers  184
    Changing the Services Repository Database Password  187
    Changing the Repository Database Driver or JDBC URL  187
  Managing Jobs  189
    Optimizing Enterprise-Reporting Applications Performance  189
    From Adding Job Services to Running Jobs  190
  Using the ConfigFileAdmin Utility  190
    About config.dat  191
    Modifying config.dat  192
  Specifying Explicit Access Requirements for Interactive Reporting Documents and Job Output  193
  Setting the ServletUser Password when Interactive Reporting Explicit Access is Enabled  193

CHAPTER 9  Configuring LSC Services  195
  About LSC  196
    Starting LSC  197
    Using LSC  197
  Modifying LSC Service Properties  198
    Common LSC Properties  198
    Assessment and Update Services Properties  199
    Hyperion Interactive Reporting Service Properties  199
    Hyperion Interactive Reporting Data Access Service  201
  Modifying Host Properties  203
    Host General Properties  203
    Host Database Properties  204
    Host Shared Services Properties  205
    Host Authentication Properties  205
  Modifying Properties in portal.properties  206

CHAPTER 10  Configuring the Servlets  207
  Using Servlet Configurator  208
  Modifying Properties with Servlet Configurator  209
    User Interface Properties  209
    Personal Pages Properties  213
    Internal Properties  215
    Cache Properties  216
    Diagnostics Properties  218
    Applications Properties  218
  Zero Administration and Interactive Reporting  220
    6x Server URL Mapping  220
    Client Processing  221
  Load Testing Interactive Reporting  221
    Data Access Servlet Property  222
    Hyperion Interactive Reporting Data Access Service Property  222
    Hyperion Interactive Reporting Service Property  222
    Logging Service  224
    Log Management Helper  224
    Server Synchronization  225
  Log File Basics  225
    Log File Location  225
    Hyperion Interactive Reporting Service and Hyperion Interactive Reporting Data Access Service Local Log Files  225
    Log File Naming Convention  226
    Log Message File Format  227
    Configuration Log  228
  Configuring Log Properties for Troubleshooting  228
    Configuration Files  228
    Configuring Logging Levels  229
    Configuring Appenders  230
    Configuring Log Rotation  231
  Analyzing Log Files  233
    Viewing Log Files  233
    Standard Console Log File  234
    Logs for Importing General Content  234
    Logs for Importing Interactive Reporting Content  234
    Logs for Running Jobs  234
    Logs for Logon and Logoff Errors  235
    Logs for Access Control  235
    Logs for Configuration  236
  Information Needed by Customer Support  236

PART II  Administering Enterprise Metrics  237

CHAPTER 12  Understanding Enterprise Metrics  239
  Enterprise Metrics Components  240
  Metrics and Configuration Environments  240
  Database Overview  241
    Application Data . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241 Catalogs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242 Enterprise Metrics Servers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242 Servlets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244 Clients and Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244 Implementation and Administration Process Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245 Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245 Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245 Administration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246 Troubleshooting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
Contents
ix
CHAPTER 13 Enterprise Metrics Security . . . . . 247
Provisioning Users and Groups to Access Enterprise Metrics . . . . . 248
Using Analytic Services Security . . . . . 248
Supported Security Rule Sets in Enterprise Metrics . . . . . 249
Granting Data Security in Enterprise Metrics . . . . . 249
Enabling Analytic Services Data Security . . . . . 250
About Database Security . . . . . 250
About Application-Level Security . . . . . 251
Authorization . . . . . 251
Data Level Security . . . . . 251
CHAPTER 14 Supporting Clips in Enterprise Metrics . . . . . 253
Overview . . . . . 254
Authentication and Authorization Requirement . . . . . 254
Preference Settings Requirement . . . . . 255
CHAPTER 15 Enterprise Metrics Server Administration . . . . . 257
Administration Overview . . . . . 258
Launching the Server Console . . . . . 258
Monitoring Server Statistics . . . . . 259
Shutting Down the Server . . . . . 260
Restarting the Server . . . . . 260
Viewing the Server Log . . . . . 261
Monitoring Server Settings . . . . . 263
Changing Server Settings . . . . . 264
Setting Passwords . . . . . 264
Exporting Settings to Preference Files . . . . . 264
Monitoring Users . . . . . 265
Exiting the Server Console . . . . . 265
CHAPTER 16 Enterprise Metrics Load Support Programs . . . . . 267
Load Process Overview . . . . . 268
Scheduling the Load Support Programs . . . . . 269
Preference File Settings . . . . . 269
BeginLoad Program . . . . . 271
FinishLoad Program . . . . . 271
Publish Program . . . . . 273
Processed Enrichment Overview . . . . . 273
Roles . . . . . 273
Enrichment Process . . . . . 274
Enrichment Versus ETL . . . . . 276
Enrich Program . . . . . 277
Failure During Enrichment Job Processing . . . . . 278
Studio Utilities in Stand-alone Mode . . . . . 279
Responding to a Finish Load Failure . . . . . 280
Viewing Catalog Metadata . . . . . 280
Running the Studio Utilities in Stand-alone Mode . . . . . 281
Reviewing the Load Support Logs . . . . . 283
mb.Loads.log . . . . . 283
mb.Publish.log . . . . . 284
mb.Enrich.log . . . . . 285
CHAPTER 17 Troubleshooting Enterprise Metrics . . . . . 287
Using Log Files for Tuning and Troubleshooting . . . . . 288
Locating and Viewing the Logs . . . . . 288
Enterprise Metrics Server Logs . . . . . 288
Tools and Client Logs . . . . . 289
Servlet Logs . . . . . 289
Thin Client Logs . . . . . 290
Understanding Which Logs to View . . . . . 290
Reading Log Files . . . . . 291
Log Formats . . . . . 291
Specific Scenarios and Tips . . . . . 295
Using the Deployment Logs . . . . . 301
Using the Metadata Export Utility . . . . . 301
Metadata Export Utility Files . . . . . 302
Configuring the Metadata Export Utility . . . . . 305
Running the Metadata Export Utility . . . . . 305
CHAPTER 18 Evaluating Enterprise Metrics Performance . . . . . 307
Introduction . . . . . 308
Statistics Reporting Background . . . . . 308
Launching the Performance Statistics Utility . . . . . 309
Understanding the Enterprise Metrics Performance Statistics Utility . . . . . 310
Star Stats Summary Pivot . . . . . 311
Query Performance Analysis Pivot . . . . . 312
Query Performance Analysis Over Time Pivot . . . . . 313
Agg Usage Analysis Pivot . . . . . 313
User Performance Analysis Pivot . . . . . 314
Slowest Queries Pivot . . . . . 315
Query Performance Analysis Over Publish Time Pivot . . . . . 316
Query Performance Analysis Using Max Start_Time Pivot . . . . . 316
Query Performance Using Parameter Pivot . . . . . 317
Hierarchy Levels and Column Reference Pivot . . . . . 317
Star Supported Levels Reference Pivot . . . . . 318
Star Levels and Columns Reference Pivot . . . . . 319
Reference of Bursted Supported Levels Pivot . . . . . 319
Query Performance with Reject Reason Pivot . . . . . 320
Using the Performance Statistics Utility to Tune and Troubleshoot . . . . . 321
Star and Aggregate Performance . . . . . 322
Slow Queries . . . . . 322
Needed Versus Supported Levels . . . . . 323
Carpooling . . . . . 324
A Star is Picked but Not Used or Rejected . . . . . 324
Needed Columns and Levels . . . . . 324
Frequently Used Stars . . . . . 325
User Complaints . . . . . 326
Analyze the Performance After Tuning . . . . . 326
Preference File Settings . . . . . 327
CHAPTER 19 Enterprise Metrics Preference File Settings . . . . . 329
Overview . . . . . 330
Metrics_Server.prefs Settings . . . . . 331
Configuration_Server.prefs Settings . . . . . 346
Client.prefs Settings . . . . . 347
Metadata_export.prefs . . . . . 352
PART III Administering Financial Reporting . . . . . 355
CHAPTER 20 Administrative Tasks for Financial Reporting . . . . . 357
Deleting User POVs . . . . . 358
Report Server Tasks . . . . . 359
Specifying the Maximum Number of Calculation Iterations . . . . . 359
Log File Output Management . . . . . 359
Periodic Log File Rolling . . . . . 360
Assigning Financial Reporting TCP Ports for Firewall Environments or Port Conflict Resolution . . . . . 362
Accessing Server Components Through a Device that Performs NAT . . . . . 364
Adding Required Java Arguments on UNIX Systems . . . . . 366
Analytic Services Ports . . . . . 367
Differences Between Analytic Services Ports and Connections . . . . . 367
Scheduler Command Line Interface . . . . . 370
Creating Batch Input Files . . . . . 370
Launching Batches from a Command Line . . . . . 371
Scheduling Batches Using an External Scheduler . . . . . 371
Encoding Passwords . . . . . 371
Modifying Attributes . . . . . 372
Batch Input File XML Tag Reference . . . . . 374
Setting XBRL Schema Registration . . . . . 377
RMI Encryption Implementation . . . . . 378
PART IV Administering Interactive Reporting . . . . . 379
CHAPTER 21 Understanding Connectivity in Interactive Reporting Studio . . . . . 381
About Connection Files . . . . . 382
Working with Interactive Reporting Database Connections . . . . . 383
Creating Interactive Reporting Database Connections . . . . . 383
Setting Connection Preferences . . . . . 385
Creating an OLAP Connection File . . . . . 390
Modifying Interactive Reporting Database Connections . . . . . 391
Connecting to Databases . . . . . 392
Monitoring Connections . . . . . 392
Connecting with a Data Model . . . . . 393
Connecting Without a Data Model . . . . . 393
Setting a Default Interactive Reporting Database Connection . . . . . 394
Logging On Automatically . . . . . 394
Using the Connections Manager . . . . . 395
Logging On to a Database . . . . . 395
Logging Off of a Database . . . . . 396
Modifying an Interactive Reporting Database Connection Using the Connections Manager . . . . . 396
Changing Database Password . . . . . 396
Working with an Interactive Reporting Document and Connecting to a Database . . . . . 397
Connecting to Web Clients . . . . . 399
Connecting to Workspace . . . . . 400
CHAPTER 22 Using Metatopics and Metadata in Interactive Reporting Studio . . . . . 401
About Metatopics and Metadata . . . . . 402
Data Modeling with Metatopics . . . . . 402
Creating Metatopics . . . . . 403
Copying Topic Items to a Metatopic . . . . . 403
Creating Computed Metatopic Items . . . . . 404
Customizing or Removing Metatopics and Metatopic Items . . . . . 404
Viewing Metatopics . . . . . 405
MetaData in Interactive Reporting Studio . . . . . 405
Using the Open Metadata Interpreter . . . . . 406
Accessing the Open Metadata Interpreter . . . . . 406
Configuring the Open Metadata Interpreter . . . . . 407
CHAPTER 23 Data Modeling in Interactive Reporting Studio . . . . . 415
About Data Models . . . . . 416
Building a Data Model . . . . . 417
Adding Topics to a Data Model . . . . . 417
Removing Topics from a Data Model . . . . . 417
Understanding Joins . . . . . 418
Simple Joins . . . . . 419
Cross Joins . . . . . 419
Automatically Joining Topics . . . . . 420
Specifying an Automatic Join Strategy . . . . . 420
Manually Joining Topics . . . . . 421
Showing Icon Joins . . . . . 421
Specifying Join Types . . . . . 422
Removing Joins . . . . . 422
Using Defined Join Paths . . . . . 423
Using Local Joins . . . . . 423
Working with Topics . . . . . 427
Changing Topic Views . . . . . 428
Modifying Topic Properties . . . . . 429
Modifying Topic Item Properties . . . . . 430
Restricting Topic Views . . . . . 430
Working with Data Models . . . . . 431
Changing Data Model Views . . . . . 431
Setting Data Model Options . . . . . 432
Automatically Processing Queries . . . . . 436
Promoting a Query to a Master Data Model . . . . . 436
Synchronizing a Data Model . . . . . 437
Data Model Menu Command Reference . . . . . 438
CHAPTER 24 Managing the Interactive Reporting Studio Document Repository . . . . . 439
About the Document Repository . . . . . 440
Administering a Document Repository . . . . . 440
Creating Repository Tables . . . . . 441
Confirming Repository Table Creation . . . . . 442
Managing Repository Inventory . . . . . 443
Managing Repository Groups . . . . . 444
Working with Repository Objects . . . . . 445
Uploading Interactive Reporting Documents to the Repository . . . . . 445
Modifying Repository Objects . . . . . 446
Controlling Document Versions in Interactive Reporting Studio . . . . . 448
BRIOCAT2 Document Repository Table . . . . . 449
BRIOOBJ2 Document Repository Table . . . . . 449
BRIOBRG2 Document Repository Table . . . . . 450
BRIOGRP2 Document Repository Table . . . . . 450
Controlling Document Versions in Interactive Reporting Web Client . . . . . 450
CHAPTER 25 Auditing with Interactive Reporting Studio . . . . . 453
About Auditing . . . . . 454
Creating an Audit Table . . . . . 455
Defining Audit Events . . . . . 456
Auditing Keyword Variables . . . . . 457
Sample Audit Events . . . . . 458
CHAPTER 26 IBM Information Catalog and Interactive Reporting Studio . . . . . 459
About the IBM Information Catalog . . . . . 460
Registering Documents to the IBM Information Catalog . . . . . 460
Defining Properties . . . . . 461
Selecting Subject Areas . . . . . 461
Administering the IBM Information Catalog . . . . . 461
Creating Object Type Properties . . . . . 462
Deleting Object Types and Properties . . . . . 462
Administering Documents . . . . . 463
Setting Up Object Types . . . . . 464
CHAPTER 27 Row-Level Security in Interactive Reporting Documents . . . . . 465
About Row-Level Security . . . . . 466
The Row-Level Security Paradigm . . . . . 466
Hyperion System 9 BI+ and Row-Level Security . . . . . 466
Row-Level Security Tables . . . . . 468
Creating the Row-Level Security Tables . . . . . 469
The BRIOSECG Table . . . . . 469
The BRIOSECP Table . . . . . 471
The BRIOSECR Table . . . . . 471
OR Logic Between Groups . . . . . 473
Row-Level Security Examples . . . . . 474
Defining the Users and Groups . . . . . 475
Dealing with The Rest of the Users . . . . . 476
Overriding Constraints . . . . . 476
Cascading Restrictions . . . . . 477
Other Important Facts . . . . . 479
Custom SQL . . . . . 479
Limits . . . . . 479
Naming . . . . . 480
Contents
xv
CHAPTER 28 Troubleshooting Interactive Reporting Studio Connectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 483 Connectivity Troubleshooting with dbgprint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 484 dbgprint and Interactive Reporting Studio . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 484 dbgprint and the Interactive Reporting Web Client . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 485 CHAPTER 29 Interactive Reporting Studio INI Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 487 PART V Administering Web Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 489 CHAPTER 1 Web Analysis Configuration Options and Utilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 491 Web Analysis Configuration Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Controlling Result Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Configuring Java Plug-in Versions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Configuring the Repository . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Configuring Hyperion System 9 BI+ Analytic High Availability Services . . . . . . . . . . Considerations for Configuring Analytic High Availability Services . . . . . . . . . . . . . . . . Resolving Analytic Services Subscriptions in Web Analysis . . . . . . . . . . . . . . . . . . . . . . . Configuring a Web Analysis Mail Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Formatting Data Value Tool Tips . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Setting Web Analysis to Log Queries . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Exporting Raw Data Values to Excel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 492 493 493 494 494 495 496 496 496 496 497
Web Analysis Utilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 497 Repository Password Encryption Utility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 497 Web Analysis Configuration Test Servlet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 498 Changing Web Analysis Ports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 499 CHAPTER A Backup Strategies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 501 What to Backup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 502 General Backup Procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 502 Backing Up the Workspace File System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Complete Backup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Post-Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Weekly Full and Daily Incremental . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . As Needed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Reference Table for All File Backups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 502 503 503 504 504 504
Sample Backup Script . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 505 Backing Up the Repository Database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 506 Backing Up Clients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 506 Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 507 Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 521
Preface
Welcome to the Hyperion System 9 BI+ Workspace Administrator's Guide. This preface discusses these topics:
Purpose on page xvii
Audience on page xviii
Document Structure on page xviii
Where to Find Documentation on page xix
Help Menu Commands on page xx
Conventions on page xxi
Additional Support on page xxii
Documentation Feedback on page xxii
Purpose
This guide provides information that you need to administer the entire Hyperion System 9 BI+ Workspace of services, modules, and tools. It explains Workspace features and options and contains the concepts, processes, procedures, formats, tasks, and examples that you need to administer the software. This guide also provides information on administering Hyperion System 9 BI+ Enterprise Metrics, Hyperion System 9 BI+ Financial Reporting, Hyperion System 9 BI+ Interactive Reporting, and Hyperion System 9 BI+ Web Analysis. This guide does not cover end-user tasks. It assumes that you have read the Hyperion System 9 BI+ Workspace Getting Started Guide and the Hyperion System 9 BI+ Workspace User's Guide.
Note: This book covers the entire Workspace system, while you may have installed only a subset of it. Therefore, this book may discuss components and features that your system does not include. For more information, see About Hyperion System 9 BI+ Reporting Solution on page 26.
Audience
This guide is written for all levels of administrators, from those who administer a subset of Workspace to those who oversee the entire Workspace system. In addition, some information is intended for developers of Hyperion System 9 BI+ Production Reporting programs or system customizations, for advanced users of Interactive Reporting, and for administrators of Enterprise Metrics, Financial Reporting, and Web Analysis.
Document Structure
This document contains the following information:
Part I, Administering Workspace, introduces the architecture, administrative tools, and administrative tasks available in Workspace, a DHTML-based, zero-footprint client that provides the user interface for viewing and interacting with the content created by the authoring studios, in addition to enabling users to create queries against relational and multidimensional data sources. It covers administration related to documents and jobs in Workspace, and explains how to configure and maintain the Workspace services, applications, and tools, and how to optimize, back up, and troubleshoot Workspace.
Part II, Administering Enterprise Metrics, provides information on installing, implementing, administering, and troubleshooting Enterprise Metrics, a toolset for creating, configuring, and delivering metrics that enable organizations to assess and improve business performance.
Part III, Administering Financial Reporting, describes administrative tasks specific to Financial Reporting, which provides scheduled or on-demand, highly formatted financial and operational reporting from most data sources.
Part IV, Administering Interactive Reporting Studio, explains advanced features, such as auditing, connectivity, and data modeling, used to administer Interactive Reporting, which provides ad hoc relational query and self-service reporting from ODBC data sources.
Part V, Administering Web Analysis, describes files and utilities used to configure, maintain, and optimize Web Analysis, which provides interactive ad hoc analysis, presentation, and reporting of multidimensional data.
Glossary contains a list of key terms and definitions.
Index contains a list of Workspace terms and page references.
The HTML Information Map is available from the Workspace Help menu for all operating systems; for products installed on Microsoft Windows systems, it is also available from the Start menu. Online help is available from within Workspace. After you log on to the product, you can access online help by clicking the Help button or selecting Help from the menu bar. The Hyperion Download Center can be accessed from the Hyperion Solutions Web site.
2 Enter your e-mail address and password.
3 Select a language and click Login.
4 If you are a member on multiple Hyperion Solutions Download Center accounts, select an account for the current session.
5 To access documentation online, from the Product List, select a product and follow the on-screen instructions.
Online help in PDF and HTML format
Links to related resources to assist you in using Workspace
Launches the Hyperion Technical Support site, where you submit defects and contact Technical Support. Launches the Hyperion Developer Network site, where you access information about known defects and best practices. This site also provides tools and information to assist you in getting started using Hyperion products:
Sample models
A resource library containing FAQs, tips, and technical white papers
Demos and Webcasts demonstrating how Hyperion products are used
Hyperion.com
Launches Hyperion's corporate Web site, where you access a variety of information about Hyperion:
Office locations
The Hyperion Business Intelligence and Business Performance Management product suite
Consulting and partner programs
Customer and education services and technical support
About Workspace
Launches the About Workspace dialog box, which contains copyright and release information, along with version details.
Conventions
The following conventions are used in this document:
Arrows indicate the beginning of procedures consisting of sequential steps or one-step procedures.
Brackets: In examples, brackets indicate that the enclosed elements are optional.
Bold: Bold in procedural steps highlights user interface elements on which the user must perform actions.
CAPITAL LETTERS: Capital letters denote commands and various IDs. (Example: CLEARBLOCK command)
Ctrl+Q: Keystroke combinations shown with the plus sign (+) indicate that you should press the first key and hold it while you press the next key. Do not type the plus sign.
Ctrl+Q, Shift+Q: For consecutive keystroke combinations, a comma indicates that you press the combinations consecutively.
Example text: Courier font indicates that the example text is code or syntax.
Courier italics: Courier italic text indicates a variable field in command syntax. Substitute a value in place of the variable shown in Courier italics.
ARBORPATH: When you see the environment variable ARBORPATH in italics, substitute the value of ARBORPATH from your site.
n, x: Italic n stands for a variable number; italic x can stand for a variable number or a letter. These variables are sometimes found in formulas.
Ellipses (. . .): Ellipsis points indicate that text was omitted from an example.
Mouse orientation: This document provides examples and procedures using a right-handed mouse. If you use a left-handed mouse, adjust the procedures accordingly.
Menu options: Options in menus are shown in the following format. Substitute option names in placeholders, as indicated: Menu name > Menu command > Extended menu command. For example: Select File > Desktop > Accounts.
Additional Support
In addition to providing documentation and online help, Hyperion offers the following product information and support. For details on education, consulting, or support options, click the Services link on the Hyperion Web site at http://www.hyperion.com.
Education Services
Hyperion offers instructor-led training, custom training, and e-Learning covering all Hyperion applications and technologies. Training is geared to administrators, end users, and information systems professionals.
Consulting Services
Experienced Hyperion consultants and partners implement software solutions tailored to clients' reporting, analysis, modeling, and planning requirements. Hyperion also offers specialized consulting packages, technical assessments, and integration solutions.
Technical Support
Hyperion provides enhanced telephone and electronic-based support to clients to resolve product issues quickly and accurately. This support is available for all Hyperion products at no additional cost to clients with current maintenance agreements.
Documentation Feedback
Hyperion strives to provide complete and accurate documentation. Your opinion on the documentation is of value, so please send your comments by going to
http://www.hyperion.com/services/support_programs/doc_survey/index.cfm.
Part I
Administering Workspace
In Administering Workspace:
Chapter 1, Hyperion System 9 BI+ Architecture Overview
Chapter 2, Administration Tools and Tasks
Chapter 3, Administer Module
Chapter 4, Using Impact Management Services
Chapter 5, Managing Shared Services Models
Chapter 6, Automating Activities
Chapter 7, Administering Content
Chapter 8, Configuring RSC Services
Chapter 9, Configuring LSC Services
Chapter 10, Configuring the Servlets
Chapter 1
Hyperion System 9 BI+ Architecture Overview
In This Chapter
About Hyperion System 9 . . . 26
About Hyperion System 9 BI+ Reporting Solution . . . 26
Hyperion System 9 BI+ Reporting Solution Architecture . . . 27
Hyperion System 9 BI+: Management reporting including query and analysis in one coordinated environment
Hyperion System 9 Applications+: Coordinated planning, consolidation, and scorecarding applications
Hyperion System 9 Foundation Services: Used to ease installation and configuration, provide metadata management, and support a common Microsoft Office interface
Enterprise Metrics for management metrics and analysis presented in easy-to-use, personalized, interactive dynamic dashboards
Financial Reporting for scheduled or on-demand, highly formatted financial and operational reporting from most data sources, including Hyperion System 9 Planning and Hyperion System 9 Financial Management
Interactive Reporting for ad hoc relational query, self-service reporting, and dashboards against ODBC data sources
Production Reporting for high-volume, enterprise-wide production reporting
Web Analysis for interactive ad hoc analysis, presentation, and reporting of multidimensional data
Hyperion System 9 BI+, which includes Hyperion System 9 BI+ Analytic Services, is part of a comprehensive BPM system that integrates this business intelligence platform with Hyperion financial applications and Hyperion System 9 Performance Scorecard.
Client Layer
The client layer refers to local interfaces used to author, model, analyze, present, report, and distribute diverse content, and to third-party clients, such as Microsoft Office:
Workspace: DHTML-based, zero-footprint client that provides the user interface for viewing and interacting with content created by the authoring studios, and enables users to create queries against relational and multidimensional data sources:
Analytic Services: High-performance multidimensional modeling, analysis, and reporting
Hyperion System 9 BI+ Enterprise Metrics: Management metrics and analysis presented in personalized, interactive dashboards
Hyperion System 9 BI+ Financial Reporting: Highly formatted financial reporting
Hyperion System 9 BI+ Interactive Reporting: Ad hoc query, analysis, and reporting, including dashboards
Hyperion System 9 BI+ Production Reporting: High-volume enterprise production reporting
Hyperion System 9 BI+ Web Analysis: Advanced interactive ad hoc analysis, presentation, and reporting against multidimensional data sources
Hyperion System 9 BI+ Interactive Reporting Studio: Highly intuitive and easy-to-navigate environment for data exploration and decision making. With a consistent design paradigm for query, pivot, charting, and reporting, all levels of users move fluidly through cascading dashboards, finding answers fast. Trends and anomalies are automatically highlighted, and robust formatting tools enable users to easily build free-form, presentation-quality reports for broad-scale publishing across their organization.
Hyperion System 9 BI+ Interactive Reporting Web Client: Read-only Web plug-in for viewing Interactive Reporting Studio reports.
Hyperion System 9 BI+ Financial Reporting Studio: Windows client for authoring highly formatted financial reports from multidimensional data sources, which features easy, drag-and-drop, reusable components to build and distribute HTML, PDF, and hardcopy output.
Hyperion System 9 BI+ Web Analysis Studio: Java applet that enables you to create, analyze, present, and report multidimensional content. The studio offers the complete Web Analysis feature set to designers creating content, including dashboards for information consumers.
Hyperion System 9 BI+ Production Reporting Studio: Windows client that provides the design environment for creating reports from a wide variety of data sources. Reports can be processed in one pass to produce a diverse array of pixel-perfect output. Processing can be scheduled and independently automated, or designed to use form templates that prompt dynamic user input.
Hyperion System 9 BI+ Enterprise Metrics Personalization Workspace: Java applet that enables you to define metrics that allow users to view business information and trends to better understand business performance. Dynamic charts and reports provide up-to-date information and expedite performance analysis.
Hyperion System 9 BI+ Enterprise Metrics Studio: Java applet for creating personal News pages and customizing Metrics pages.
Hyperion System 9 BI+ Dashboard Development Services: Enables creation of dashboards:
Dashboard Studio: Windows client that utilizes extensible and customizable templates to create interactive, analytical dashboards without the need to code programming logic.
Dashboard Architect: Windows-based integrated development environment that enables programmers to swiftly code, test, and debug components utilized by Dashboard Studio.
Hyperion System 9 Smart View for Office: Hyperion-specific Microsoft Office add-in and toolbar from which users can query Hyperion data sources, including Analytic Services, Financial Management, and Planning. Users can use this environment to interact with Financial Management and Planning forms for data input, and can browse the BI+ repository and embed documents in the Office environment. Documents are updated by user request.
Performance Scorecard: Web-based solution for setting goals and monitoring business performance using recognized scorecarding methodologies. Provides tools that enable users to formulate and communicate organizational strategy and accountability structures:
Key Performance Indicators (KPIs): Create tasks and achievements that indicate progress toward key goals
Performance indicators: Indicate good, acceptable, or poor performance of accountability teams and employees
Strategy maps: Relate high-level mission and vision statements to lower-level actionable strategy elements
Accountability maps: Identify those responsible for actionable objectives
Cause and Effect maps: Depict interrelationships of strategy elements and measure the impact of changing strategies and performance
Application Layer
The application layer, a middle tier that retrieves requested information and manages security, communication, and integration, contains two components:
Application Layer Web Tier on page 29
Application Layer Services Tier on page 30
Because the business intelligence platform is modular, it may consist of various combinations of components, configured in numerous ways. The end result is a comprehensive, flexible architecture that accommodates implementation and business needs.
Local services: Services in the local Install Home that are configured using the Local Service Configurator (LSC); referred to as LSC services.
Remote services: Services on a local or remote host that are configured using the Remote Service Configurator (RSC); referred to as RSC services.
Because most of these services are replicable, you may encounter multiple instances of a service in a system.
Core Services
Core Services are mandatory for authorization, session management, and document publication:
Repository Service: Stores Hyperion system data in supported relational database tables, known collectively as the repository. A system can have only one Repository Service.
Publisher Service: Handles repository communication for other LSC services and some Web application requests; forwards repository requests to Repository Service and passes replies back to initiating services. A system can have only one Publisher Service.
Global Service Manager (GSM): Tracks system configuration information and monitors registered services in the system. A system can have only one GSM.
Local Service Manager (LSM): Created for every instance of an LSC or RSC service, including GSM. When system servers start, they register their services and configuration information with GSM, which supplies and maintains references to all other registered services.
Authentication Service: Checks user credentials at logon time and determines whether they can connect; determines group memberships, which, along with roles, affect what content and other system objects (resources) users can view and modify. Authentication Service is replicable and does not have to be co-located with other services.
Authorization Service: Provides security at the level of resources and actions; manages roles and their associations with operations, users, groups, and other roles. A system must have at least one Authorization Service.
Session Manager Service: Monitors and maintains the number of simultaneous system users. Monitors all current sessions and terminates sessions that are idle for more than a specified time period. While Session Manager is replicable, each instance independently manages a set of sessions.
Service Broker: Supports GSM and LSMs by routing client requests and managing load balancing for RSC services. A system can have multiple Service Brokers.
Name Service: Monitors registered RSC services in the system, and provides them with system configuration information from server.xml. Works in conjunction with Service Broker to route client requests to RSC services. A system can have only one Name Service.
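The register-then-lookup pattern that GSM and the LSMs implement can be sketched in a few lines. This is an illustrative model only, not Hyperion code: the names GlobalServiceManager, register, and lookup are invented for the example, which shows only the core idea that services register themselves at startup and the global manager then supplies references to every registered service.

```python
# Illustrative sketch of the GSM registration pattern (not Hyperion code).

class GlobalServiceManager:
    """Tracks system configuration and registered services (one per system)."""

    def __init__(self):
        self._services = {}              # service name -> (host, properties)

    def register(self, name, host, properties):
        # Called for each service when its server starts.
        self._services[name] = (host, dict(properties))

    def lookup(self, name):
        # Supplies a reference to any registered service, or None if absent.
        return self._services.get(name)


gsm = GlobalServiceManager()
gsm.register("RepositoryService", "host-a", {"replicable": False})
gsm.register("AuthenticationService", "host-b", {"replicable": True})

host, props = gsm.lookup("RepositoryService")
print(host)                              # host-a
```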
Management Services
Management services are Core Services that collect and distribute system messages and events for troubleshooting and usage analysis:
Logging Service: Centralized service for recording system messages to log files. A system can have only one Logging Service.
Usage Service: Records the number and nature of processes addressed by Hyperion Interactive Reporting Service, which enables administrators to review usage statistics such as the number of logons, the most-used files, the most-selected MIME types, and what happens to system output. Systems can have multiple Usage Services.
Functional Services
Functional services are Core Services that are specific to various functional modules:
Job Service: Executes scripts that create reports, which can be prompted by users with permissions or by Event Service. Report output is returned to initiating users or published to the repository. Job Services can be created and configured for every executable.
Event Service: Manages subscriptions to system resources. Tracks user subscriptions, job parameters, events and exceptions, and prompts Job Service to execute scheduled jobs. Event Service is configured to distribute content through e-mail and FTP sites, and to notify users with subscriptions about changing resources. A system can have only one Event Service.
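The subscription tracking that Event Service performs can be modeled minimally. This sketch is not Hyperion code: the EventService class and its method names are invented, and it shows only the bookkeeping side of the pattern, recording who subscribed to which resource and reporting who should be notified when a resource changes (the real service then distributes content by e-mail or FTP).

```python
# Minimal model of resource-subscription tracking (not Hyperion code).

class EventService:
    def __init__(self):
        self._subscriptions = {}         # resource path -> set of user names

    def subscribe(self, user, resource):
        self._subscriptions.setdefault(resource, set()).add(user)

    def resource_changed(self, resource):
        # Returns the users to notify about the changed resource.
        return sorted(self._subscriptions.get(resource, set()))


svc = EventService()
svc.subscribe("alice", "/reports/q1.bqy")
svc.subscribe("bob", "/reports/q1.bqy")
print(svc.resource_changed("/reports/q1.bqy"))   # ['alice', 'bob']
```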
Hyperion Interactive Reporting Service: Runs Interactive Reporting jobs and delivers interactive HTML content for Interactive Reporting files. When actions involving Interactive Reporting documents are requested, Hyperion Interactive Reporting Service fulfills such requests by obtaining and processing the documents and delivering HTML for display.
Hyperion Interactive Reporting Data Access Service: Provides access to relational and multidimensional databases, and carries out database queries for the plug-in, Hyperion Interactive Reporting Service, and Interactive Reporting jobs. Each Hyperion Interactive Reporting Data Access Service supports connectivity to multiple data sources, using the connection information in one or more Interactive Reporting database connection files, so that one Hyperion Interactive Reporting Data Access Service can process a document whose sections require multiple data sources. Hyperion Interactive Reporting Data Access Service maintains a connection pool for database connections.
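Connection pooling, which the Data Access Service uses for its database connections, follows a common pattern: idle connections are kept and handed back out instead of reconnecting for every query. The sketch below is a generic illustration of that pattern, not the Data Access Service implementation; ConnectionPool and open_connection are invented names, with open_connection standing in for a real driver call.

```python
# Generic connection-pool sketch (not the Hyperion implementation).

from collections import deque

class ConnectionPool:
    """Reuses open database connections instead of reconnecting per query."""

    def __init__(self, open_connection, max_idle=4):
        self._open = open_connection
        self._idle = deque()
        self._max_idle = max_idle
        self.opened = 0                  # count of real connections created

    def acquire(self):
        if self._idle:
            return self._idle.popleft()  # reuse an idle connection
        self.opened += 1
        return self._open()

    def release(self, conn):
        if len(self._idle) < self._max_idle:
            self._idle.append(conn)      # keep it for the next request


pool = ConnectionPool(lambda: object())
c1 = pool.acquire()
pool.release(c1)
c2 = pool.acquire()                      # reuses c1; no new connection opened
print(pool.opened)                       # 1
```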
Extended Access for Hyperion Interactive Reporting Service: Enables users to jointly analyze multidimensional and relational sources in one document. It retrieves flattened OLAP results from Web Analysis documents, Production Reporting job output, or Financial Reporting batch reports in the BI+ repository and imports data into Interactive Reporting documents (.bqy) as Results sections.
Hyperion Interactive Reporting Base Service: Starts all LSC and RSC services in one Install Home.
Hyperion Financial Reporting Server: Generates and formats dynamic report or book results, including specified calculations. Hyperion Financial Reporting Server can handle numerous simultaneous requests for report execution from multiple clients, because each request is run on its own execution thread. Hyperion Financial Reporting Server caches data source connections, so multiple requests by the same user do not require a reconnection. Financial Reporting servers are replicable; the number necessary depends on the number of concurrent users who want to execute reports simultaneously through the clients. Multiple Financial Reporting servers can be configured to report against one repository.
Hyperion Financial Reporting Communication Server: Provides a Java RMI Registry to which other Financial Reporting servers are bound.
Hyperion Financial Reporting Print Server: Enables Financial Reporting content to be compiled as PDF output. Runs only on supported Windows platforms, but is replicable to provide scalability for PDF generation.
Hyperion Financial Reporting Scheduler Server: Responds to Financial Reporting scheduled batch requests. At the specified time, Hyperion Financial Reporting Scheduler Server prompts the other Financial Reporting servers to fulfill the request.
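The combination described for the Financial Reporting server, each request on its own execution thread plus a per-user cache of data source connections so repeat requests skip reconnection, can be sketched as follows. This models the behavior only; it is not Financial Reporting code, and run_report, get_connection, and connection_cache are invented names.

```python
# Sketch of per-request threads with a per-user connection cache
# (a behavioral model, not Financial Reporting code).

from concurrent.futures import ThreadPoolExecutor
from threading import Lock

connection_cache = {}                    # user -> cached "connection"
_cache_lock = Lock()

def get_connection(user):
    # Repeat requests by the same user reuse the cached connection.
    with _cache_lock:
        if user not in connection_cache:
            connection_cache[user] = "conn-" + user
        return connection_cache[user]

def run_report(user, report):
    conn = get_connection(user)
    return report + " for " + user + " via " + conn

# Simultaneous requests, each handled on its own worker thread.
with ThreadPoolExecutor(max_workers=4) as executor:
    results = list(executor.map(lambda r: run_report("alice", r),
                                ["balance", "income", "cashflow"]))

print(len(connection_cache))             # 1: one connection served all three
```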
Assessment (Harvester) Service: Harvests metadata from published Interactive Reporting repository documents.
Update (Transformer) Service: Updates published and harvested Interactive Reporting documents or publishes new versions to the repository.
Metrics Server: Metrics engine that issues queries against a data warehouse and one or more Analytic Services sources. It combines result sets, calculates requested metrics, and displays Enterprise Metrics content in Workspace or Personalization Workspace.
Configuration Server: Used solely in conjunction with Enterprise Metrics Studio and Studio utilities to develop and test new catalog content. When appropriate, Configuration Catalog content is published to Metrics Catalog for production use by Metrics Server.
Type | Name | Instances
Core | Authentication Service | Multiple
Core | Authorization Service | Multiple
Core | Global Service Manager | 1 per system
Core | Local Service Manager | Multiple
Core | Publisher Service | 1 per system
Core | Session Manager | Multiple
Impact Management Services | Assessment (Harvester) Service | Multiple
Impact Management Services | Update (Transformer) Service | Multiple
Interactive Reporting | Extended Access for Interactive Reporting Service | Multiple
Interactive Reporting | Hyperion Interactive Reporting Base Service | Multiple
Interactive Reporting | Hyperion Interactive Reporting Data Access Service | Multiple
Interactive Reporting | Hyperion Interactive Reporting Service | Multiple
Management | Logging Service | 1 per system
Management | Usage Service | Multiple
RSC | Name Service | 1 per system
RSC | Repository Service | 1 per system
RSC | Service Broker | Multiple
RSC | Event Service | 1 per system
RSC | Job Service | Multiple
Common Administration Services | Hyperion License Server | 1 per system
Common Administration Services | Shared Services | 1 per system
Enterprise Metrics Servers | Configuration Server | 1 per system
Enterprise Metrics Servers | Metrics Server | 1 per system
Financial Reporting Servers | Hyperion Financial Reporting Communication Server | N/A
Financial Reporting Servers | Hyperion Financial Reporting Print Server | Multiple
Financial Reporting Servers | Hyperion Financial Reporting Scheduler Server | Multiple
Financial Reporting Servers | Hyperion Financial Reporting Server | Multiple
Performance Scorecard Services | Scorecard Module Services | Multiple
Production Reporting Service | Production Reporting Service |
Smart View Services | Smart View Services |
Database Layer
Architecturally, databases fall into two fundamental groups: repositories that store Hyperion system data; and data sources that are the subject of analysis, presentation, and reporting. There are three important repositories for information storage:
Common repository: Hyperion system data in supported relational database tables
Shared Services: User, security, and project data that can be used across Hyperion products
Common Hyperion License Server: Licensing information
Relational data sources, for example, Oracle, IBM DB2, and Microsoft SQL Server
Multidimensional data sources, for example, Analytic Services
Hyperion applications, for example, Financial Management and Planning
Data warehouses
ODBC data sources
For a complete description of supported data sources, see the Hyperion System 9 BI+ Financial Reporting, Interactive Reporting, Production Reporting, Web Analysis Installation Guides for Windows and UNIX.
Chapter 2
Administration Tools and Tasks
Administrative tools enable you to configure and administer Workspace.
In This Chapter
Understanding Hyperion Home and Install Home . . . 38
Administration Tools . . . 38
Starting and Stopping Services . . . 40
Starting Workspace Servlet . . . 47
Implementing Process Monitors . . . 48
Quick Guide to Common Administrative Tasks . . . 51
Administration Tools
Topics that describe Workspace administration tools:
Administer Module on page 38
Impact Manager Module on page 39
Job Utilities Calendar Manager on page 39
Service Configurators on page 39
Servlet Configurator on page 40
Administer Module
Properties managed using the Administer module (accessed from the view pane Navigate panel):
General properties
Your organization, including adding and modifying users, groups, and roles, through the User Management Console
Physical resources, including printers and output directories
MIME types
Notifications
SmartCuts
For detailed information on managing these items, see Administer Module on page 53. For information about common user-interface features among the modules, see the Hyperion System 9 BI+ Workspace Users Guide or the Hyperion System 9 BI+ Workspace Getting Started Guide.
Service Configurators
All Workspace services have configurable properties that you modify using Local Service Configurator (LSC) or Remote Service Configurator (RSC). LSC and RSC handle different services.
RSC
RSC provides a graphical interface to manage a subset of Workspace service types referred to as RSC (or remote) services. You use RSC to configure services on all hosts in the system:
Modify or view RSC service properties
Ping services
Add, modify, or delete hosts
Add, modify, or delete database servers in the system
Delete services
LSC
LSC enables you to configure and manage a subset of Workspace services on a local host, referred to as LSC (or local) services:
View or modify properties of LSC services
View or modify properties of the local Install Home
Configure pass-through settings
Servlet Configurator
Servlet Configurator enables you to customize the Browse, Personal Pages, Scheduler, and Administration servlets for your organization. Settings include how long to cache various types of data on the servlets, the colors of user interface elements, and the locale and language. See Configuring the Servlets on page 207.
Starting and Stopping Services
Topics that describe starting and stopping services:
Before Starting Services on page 41
Starting Core Services on page 41
Starting a Subset of Services on page 42
Starting Services Individually on page 43
Starting Services in Order on page 45
Stopping Services on page 46
Example of How Services Start on page 47
For a usable system, all Core Services must be started (see Core Services on page 30).
Note: Hyperion Interactive Reporting Service and Hyperion Interactive Reporting Data Access Service must be started separately (see Starting Hyperion Interactive Reporting Service and Hyperion Interactive Reporting Data Access Service on page 44). Hyperion recommends that you restart your Web server after restarting Workspace services. If you do not restart the Web server, a delay of several minutes occurs before users can log on.
startCommonServices Method
Running the startCommonServices script is the preferred method of starting services on UNIX and an alternative method on Windows. To start Workspace Core Services (that is, all services except Hyperion Interactive Reporting Service and Hyperion Interactive Reporting Data Access Service), run the startCommonServices script in Install Home\bin:
UNIX: startCommonServices.sh
Windows: startCommonServices.bat
startCommonServices starts the Java services in an Install Home, except for inactivated services.
Table 1: Flags Used in startCommonServices Start Scripts
-Dminimum_password_length: Length of database passwords. Default=5.
-Ddisable_htmlemail: Format for e-mails (HTML or text file). Default is HTML format.
-DPerformance.MaxSTWorkers: Number of job worker threads; determines the speed at which jobs are built and sent to Job Service. Configure based on the number of Job Services, schedules, and events, and the size of the connection pool for the repository. Default=2.
-DPerformance.SchedulerBatchSize: Number of schedules processed at one time by the scheduler worker thread. Default=15.
-DPerformance.SchedulerDelay: Number of seconds job execution is delayed when Job Services are busy. Default=300.
-Djob_limit: Number of concurrent jobs per Job Service. No default limit.
From Administrative Tools, select Services, select Hyperion Interactive Reporting Base Service n, and click Start.
Select Start > Programs > Hyperion System 9+ > Utilities and Administration > Start BI+ Core Services.
LSC services: Using LSC, set Run Type to Hold for each service.
RSC services: In server.dat, delete the names of services you want to inactivate. Before modifying this file, save a copy of the original. Details about server.dat are provided in Starting Services and server.dat on page 43.
serviceType must be one of the strings shown in the first column of Table 2.
Table 2: Service Types
Name Service
Repository Service
Event Service
Job Service
Service Broker
where:
# is a number uniquely identifying the service
localHost is the name of the computer where the service is installed, in the form hostname.domain.com
For example, to inactivate only Service Broker and Event Service on host apollo, remove the following lines from server.dat:
com.sqribe.transformer.ServiceBrokerImpl:SB1_apollo.Hyperion.com com.sqribe.transformer.MultiTypeServerAgentImpl:ES1_apollo.Hyperion.com
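The edit described above is mechanical enough to sketch in code. The helper below is purely illustrative (it is not a shipped utility, and the entry format is inferred from the examples above); always back up server.dat before changing it:

```python
def inactivate_services(server_dat_text, service_ids):
    """Return server.dat content without the entries whose service ID
    (the part between ':' and the first '_') is in service_ids."""
    kept = []
    for line in server_dat_text.splitlines():
        # Entries have the form <implementation class>:<ID>_<hostname>
        entry_id = line.split(":", 1)[-1].split("_", 1)[0]
        if entry_id not in service_ids:
            kept.append(line)
    return "\n".join(kept)

entries = "\n".join([
    "com.sqribe.transformer.ServiceBrokerImpl:SB1_apollo.Hyperion.com",
    "com.sqribe.transformer.MultiTypeServerAgentImpl:ES1_apollo.Hyperion.com",
])
# Removing SB1 and ES1 strips both sample entries
print(inactivate_services(entries, {"SB1", "ES1"}))
```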
Starting Hyperion Interactive Reporting Service and Hyperion Interactive Reporting Data Access Service on page 44
RSC Services Individual Start Scripts on page 44
Starting Hyperion Interactive Reporting Service and Hyperion Interactive Reporting Data Access Service
You must start Hyperion Interactive Reporting Service and Hyperion Interactive Reporting Data Access Service individually. This is true whether the service is installed in an Install Home with the Workspace services or alone in its own Install Home.
Note: When you connect to a computer to start Hyperion Interactive Reporting Service on Windows, make sure the color property setting for the display is 16 bits or higher. If the setting is less than 16 bits, users may encounter extremely long response times when opening Chart sections of Interactive Reporting documents in Workspace. This prerequisite is especially important when starting the services remotely (for example, using VNC, Terminal Services, Remote Administrator, or Timbuktu), because many remote administration clients connect with only 8-bit color by default.
To start Hyperion Interactive Reporting Service (or Hyperion Interactive Reporting Data Access Service):
1 In LSC, verify that Run Type for Hyperion Interactive Reporting Service (or Hyperion Interactive Reporting Data Access Service) is set to Start.
2 Start the common services.
3 Start Hyperion Interactive Reporting Service (or Hyperion Interactive Reporting Data Access Service) in its own process using a process monitor (see Implementing Process Monitors on page 48).
For Windows, to start these services without a process monitor, run the service's start script:
For UNIX, see Starting Services with Process Monitors on page 50.
Table 3: Service Types
Hyperion Interactive Reporting Data Access Service
Event Service
Hyperion Interactive Reporting Service
Job Service
Name Service
Repository Service
Service Broker
Example: Start script for the first Name Service installed on a UNIX host named apollo:
NS1_apollo_start.sh
Start script for the third Job Service installed on a Windows host named zeus:
JF3_zeus_start.bat
Stopping Services
You stop all Workspace services, and services started individually, by stopping their processes on each service's host computer. In all cases, stopping a service constitutes a hard shutdown and causes the service to stop immediately; all work in progress stops. The method for stopping a service must match how it was started:
Individual RSC services started with a start script: Run the service's stop script. The name of a service's stop script matches that of its start script except for the substitution of stop for start. For example, if Job Service's start script is JF1_apollo_start.bat (or .sh), the stop script is JF1_apollo_stop.bat (or .sh).
Caution! Use a service's stop script only if the service was started with its start script. A stop script cannot be used to terminate one service within a multi-service process; it stops all services running in that process.
Process running in a console window: Use a shutdown command, such as shutdown or Ctrl+C on Windows. Using an operating system kill command (such as kill on UNIX) to stop the Workspace services does not cause damage to the system; however, do not use kill -9.
Windows service: Use the Stop command in the Services tool.
If you are running services as different servers (that is, as separate processes), you must stop Repository Service last.
Note: Do not terminate Job Service while it is executing a job. If you do, you cannot restart Job Service until the job exits (or until you terminate it). If a job never exits, you can restart Job Service by terminating the job's process: on Windows systems, use Task Manager; on UNIX systems, use the kill command. As a last resort, you can reboot the computer on which Job Service resides.
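The start-to-stop script naming convention is mechanical: only "start" changes to "stop", while the service prefix, host, and extension are preserved. A hypothetical helper (for illustration only, not a shipped tool) makes the rule explicit:

```python
def stop_script_for(start_script):
    """Derive a service's stop script name from its start script name,
    per the convention above: '_start.' becomes '_stop.'."""
    return start_script.replace("_start.", "_stop.")

print(stop_script_for("JF1_apollo_start.bat"))  # JF1_apollo_stop.bat
print(stop_script_for("NS1_apollo_start.sh"))   # NS1_apollo_stop.sh
```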
2. Reads from the config.dat file.
3. Establishes a connection with Name Service to download Job Service configuration information.
Because a service looks up its configuration information only when it starts, it does not learn about subsequent changes to the environment. Therefore, if you change a service's configuration and want the change to take effect immediately, restart the service.
To change ports in config.dat, use the ConfigFileAdmin utility found in \bin. To change v8_serviceagent, use RSC.
See also Assigning Financial Reporting TCP Ports for Firewall Environments or Port Conflict Resolution on page 362, and Changing Web Analysis Ports on page 499.
where localhost is the name of the Workspace server, and port is the TCP port on which the application server is listening. The default port for Workspace is 19000 if using Apache Tomcat.
3 Start Hyperion Interactive Reporting Service or Hyperion Interactive Reporting Data Access Service using process monitor scripts.
Table 4: Configurable Properties in the Properties Files
Interval for polling the internal status of the service, in seconds. Minimum and default=30, maximum=300.
Number of seconds the service is stopped if polling is not working. Minimum and default=300, maximum=600.
Number of seconds the process continues before a hard shutdown. Maximum and default=30, no minimum.
Number of seconds the process continues during a graceful shutdown; allows a service to continue processing in the background. Default=14400 (4 hours), minimum=3600 (1 hour), maximum=86400 (1 day).
Path to the service's generated data file. Default is C:\\IOR.txt.
Path to the service's standard output file. Default is C:\\DAS_stdout.txt.
Path to the service's standard error file. Default is C:\\DAS_stderr.txt.
You set process monitor logging levels in remoteServiceLog4jConfig.xml (see Configuring Log Properties for Troubleshooting on page 228).
Table 5: Threshold Events for Hyperion Interactive Reporting Service Process Monitors
Set to ON to use the following events.
Number of Interactive Reporting documents retrieved.
Number of Interactive Reporting jobs run.
Total service running time since its first request.
Time of day that the service is not available, in minutes after midnight; for example, 150 means 2:30 AM.
Hyperion Interactive Reporting Data Access Service Process Monitor Event Thresholds
You set threshold events to trigger process monitors to stop and restart the service. Threshold events for Hyperion Interactive Reporting Data Access Service are in server.xml in a property list called DAS_EVENT_MONITOR_PROPERTY_LIST. Set the first property, EVENT_MONITORING, to ON to enable threshold event usage. Comment out or delete the thresholds not in use.
Table 6: Threshold Events for Hyperion Interactive Reporting Data Access Service Process Monitors
Set to ON to use one of the following events.
Number of relational database process requests (Oracle, SQL Server, Sybase, DB2, and so on).
Number of MDD database process requests (Essbase, MSOLAP, SAP, and so on).
Number of all other relational database requests, such as stored procedure calls and get function lists.
Number of all other MDD database requests, such as build outline, get members, and show values.
Total service running time since its first request.
Time of day that the service is not available, in minutes after midnight; for example, 150 means 2:30 AM.
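Both tables express the daily downtime threshold as minutes after midnight. The conversion to clock time is a simple division, illustrated here (this snippet only demonstrates the value format; it is not part of the product):

```python
def downtime_clock(minutes_after_midnight):
    """Convert a 'minutes after midnight' threshold value to HH:MM."""
    hours, minutes = divmod(minutes_after_midnight, 60)
    return f"{hours:02d}:{minutes:02d}"

print(downtime_clock(150))  # 02:30, matching the example in the tables
```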
startIntelligenceService.bat: Hyperion Interactive Reporting Service
startDataAccessService.sh: Hyperion Interactive Reporting Data Access Service
For Windows, you can start Hyperion Interactive Reporting Service and Hyperion Interactive Reporting Data Access Service with process monitors in the Services tool. In the Services tool, in addition to the Workspace Server, there is a Windows service for each Hyperion Interactive Reporting Service and for each Hyperion Interactive Reporting Data Access Service that uses process monitors.
Table 7: System Configuration Tasks (Task | Component | Reference)
See also Starting and Stopping Services on page 40 and Stopping Services on page 46.
Provision users, groups, and roles | User Management Console | Hyperion System 9 Shared Services User Management Guide
Configure generated Personal Page | Explore module | Configuring the Generated Personal Page on page 165
Configure Broadcast Messages | Explore module | Understanding Broadcast Messages on page 166
Provide optional Personal Page content | Explore module | Providing Optional Personal Page Content to Users on page 168
Provide graphics for bookmarks | Explore module | Configuring Graphics for Bookmarks on page 168
Create custom calendars for scheduling jobs | Calendar Manager | Creating Calendars on page 154
Create public job parameters | Schedule module | Administering Public Job Parameters on page 159
Create or modify printers or directories for job output | Administer module | Managing Physical Resources on page 56
Define database servers | RSC | Adding Database Servers on page 184
Configure services | RSC, LSC | Chapter 8, Configuring RSC Services; Chapter 9, Configuring LSC Services
Table 8: System Maintenance Tasks (Task | Component | Reference)
Change which services run in a server | | Starting Services Individually on page 43 or Starting Services and server.dat on page 43
Modify services | RSC, LSC | Chapter 8, Configuring RSC Services; Chapter 9, Configuring LSC Services
Modify Job Service | RSC | Managing Jobs on page 189
Modify system properties | Administer module | Setting General Properties on page 54
Delete services | RSC or the installation program | Chapter 8, Configuring RSC Services, or the Hyperion System 9 BI+ Installation Guide
Modify users, groups, or roles | User Management console | Hyperion System 9 Shared Services User Management Guide
Inactivate obsolete users | User Management console | Hyperion System 9 Shared Services User Management Guide
Create MIME types | Administer module | Defining MIME Types on page 59
Modify MIME types | Administer module | Modifying MIME Types on page 59
Inactivate obsolete MIME types | Administer module | Inactivating or Re-activating MIME Types on page 60
Add hosts | RSC | Adding Hosts on page 182
Add services | Installation program | Hyperion System 9 BI+ Installation Guide
Configure common Metadata Services | Administer module | Host Shared Services Properties on page 205
Chapter 3: Administer Module
Use the Administer module to manage settings that control how end users interact with Hyperion System 9 BI+ Workspace.
Note: You can use various methods to perform most Administer module tasks. For a complete list of all toolbars, menus, and shortcut menus, see the Hyperion System 9 BI+ Workspace Getting Started Guide.
See also Chapter 4, Using Impact Management Services, and Chapter 5, Managing Shared Services Models.
In This Chapter
Overview on page 54
Setting General Properties on page 54
Managing Physical Resources on page 56
Managing MIME Types on page 59
Managing Notifications on page 60
Managing SmartCuts on page 63
Managing Row-Level Security on page 64
Tracking System Usage on page 65
Overview
The Administer module, available from the Workspace View pane and toolbar, enables you to manage Workspace properties, performance, and user interaction. Toolbar icons represent Administer module panel items.
Table 9: Activities Available from Administer Module Toolbar Icons and Panel Items
General Properties: Define general system and user interface properties
Physical Resources
MIME Types
Notifications: Define mail server properties and how end users receive e-mail notifications about jobs
SmartCuts: Specify how to construct SmartCuts (shortcuts to imported documents in Workspace) for inclusion in e-mail notifications
Row-level Security: Manage row-level security settings in data sources used by Interactive Reporting documents
Usage Tracking: Track system usage and define related properties
Event Tracking: Track events, such as document opens, document closes for selected MIME types, and jobs run
General Properties
System Name: Distinguishes the current installation from other Workspace installations. (An installation is defined as a system served by one GSM.)
Broadcast Messages: Specifies the folder in which to store broadcast messages.
Enable users to use Subscription and Notification: Activates import event logging, which enables Event Service to identify subscription matches and notify users of changes in subscribed items. (Effective Date: when logging begins.)
Enable Priority Ratings: Enables users to set priority ratings on items imported to the Explore module.
Enable Harvesting: Activates Harvester Service, which enables users to use Impact Manager to extract and save Interactive Reporting metadata to relational data sources for use in other formats (see Chapter 4, Using Impact Management Services).
Display all users, groups, or roles in the system: Lists all available users, groups, and roles when end users set access control on repository items. Selecting this option may impact system performance.
List up to nn users, groups, or roles: Number of users, groups, or roles displayed when end users set access control on repository items. The default setting is 100. Specifying too low a number may prevent end users from seeing all users, groups, and roles to which they have access.
Managing Users
For information on managing users, groups, and roles, see the Hyperion System 9 Shared Services User Management Guide.
3 Expand the Projects node until a BI+ application is displayed.
4 Right-click the application name and select Assign Preferences.
A three-step wizard is displayed in the Process bar.
5 For step 1 of the Wizard, Select Users, select Available Users or Available Groups.
6 From the left panel, select user names or group names and click the right arrow.
To select consecutive names, select the first name, press and hold down Shift, and select the last name. To select names that are not consecutive, press and hold down Ctrl, and select each item. Use Add All to select all names.
7 Repeat steps 5 and 6 to select a combination of users and groups.
8 When all user and group names are displayed in Selected Users and Groups, click Next.
9 For step 2 of the Wizard, Manage Preferences, specify these default preferences for the selected users and groups:
Default Folder: Repository location of the default folder.
Desktop Folder: Used as a scratch pad or to store items for easy access from the Viewer module. From the Viewer module, all Desktop folder items are displayed as icons. From the Explore module, the name of the folder specified as the default Desktop folder is displayed; for example, /Sample Content, not /Desktop.
New Document Folder: Default folder in which the new document wizard searches for valid data sources, that is, Web Analysis database connection files and Interactive Reporting documents.
Start Page: Hyperion System 9 BI+ interface displayed after logging on. Select None, Explore, Document, Favorite, Desktop, Enterprise Metrics, or Scorecard. If you select Explore or Document for Start Page, you must specify a repository location.
10 When all preferences are specified, click Next.
11 For step 3 of the Wizard, Finish, choose among three tasks:
To configure options for another application, select one from the View pane.
To change preferences for currently selected users and groups, click Back.
To specify another set of users and groups and set their preferences, click Continue.
4 Set Access Control for this resource (see Access Control for Physical Resources on page 57).
5 Click Finish.
Printer Properties
Printers are used for Interactive Reporting job output:
Type: Read-only property; set as Printer.
Name: Name for the printer; visible to end users.
Description: Helps administrators and end users identify the printer.
Printer Address: Network address of the printer (for example, \\f3prt\techpubs); not visible to end users.
General properties:
Type: Read-only property; set as Output Directory.
Name: Name for the output directory; visible to end users.
Description: Helps administrators and end users identify the directory.
Path: Directory's full network path (for example, \\apollo\Inventory_Reports).
FTP properties:
Directory is on FTP Server: Enable if the output directory is located on an FTP server, and set these options:
FTP server address: Address of the FTP server where the output directory is located (for example, ftp2.hyperion.com).
FTP username: Username used to access the FTP output directory.
FTP password: Password for FTP username.
Confirm password: Retype the password entered for FTP password.
5 Click Finish.
Note: Newly defined MIME types are active by default.
4 Click OK.
To inactivate a MIME type, clear Active and click OK. Its traffic-light icon changes to red.
To re-activate a MIME type, select Active and click OK. Its traffic-light icon changes to green.
Managing Notifications
Notification properties control how users receive notifications about the jobs and documents to which they subscribe:
Subscription Types on page 61
How Event Service Obtains Information on page 61
Notification Mechanisms on page 62
Subscription Types
Subscription types that users can subscribe to and receive notifications about:
New or updated versions of items
Changed content in folders
Job completion
Job exceptions
Job completion notifications are sent to:
Owners of scheduled jobs, when job execution finishes
Users who run background jobs, when job execution finishes
Notification Mechanisms
Ways in which Event Service notifies users:
Send e-mails with embedded SmartCuts to notify users about changes to items, folders, new report output, job completion, or exception occurrences. Optionally, Event Service may send file attachments, based on how users chose to be notified on the Subscribe page.
Display notifications of completed scheduled jobs or background jobs in the Schedule module
Display notification of job completion after a job runs in the foreground
Display a red-light icon in the Exceptions Dashboard when output.properties indicates that exceptions occurred
When exceptions occur, the importer of the file sets properties to indicate the presence of exceptions and to specify exception messages. The importer is usually Job Service, and the file is usually job output. Exceptions can be flagged by any of these methods:
Production Reporting code
Manually, by users who import files or job output
APIs that set exception properties on files or output
Hyperion Interactive Reporting Service does not support exceptions, but you can set exceptions on Interactive Reporting documents using the API or manual methods. Users choose whether to include the Exceptions Dashboard on Personal Pages and which jobs to include on the Exceptions Dashboard.
Notifications
Enable e-mail attachment: Allows end users to send file attachments with their e-mail notifications. If a job generates only one output file, that file is attached to the e-mail. If a job generates multiple output files including PDF files, the PDF files are attached to e-mails; otherwise, no files are attached.
Maximum attachment size: Maximum allowed size for attachments, in bytes.
Time to live for entries in the notification log: Number of minutes after which events are removed from the notification log and are no longer displayed in the Explore module.
Expiration times for scheduled jobs and background jobs.
Note: The e-mail service must be installed on the Financial Management Server computer to e-mail batch output correctly.
Note: To send e-mails with embedded SmartCuts, you must also set SmartCut properties.
Require authentication: Makes authentication (ASMTP) mandatory; enter a user name and password when enabled. Default is disabled.
After specifying notification properties, you can click Send Test E-mail to view your mail server entries and enter a destination e-mail address.
Managing SmartCuts
SmartCuts are shortcuts, in URL form, to imported documents in Workspace. SmartCut properties are used to construct the SmartCuts included in e-mail notifications. SmartCut URLs have the form:
http://Host:IP Port/workspace/browse/get/Smartcut
For example:
http://pasts402:19000/workspace/browse/get/Patty/Avalanche_CUI_Style_Guidelines.pdf/
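The URL pattern can be sketched as a small formatting function. This is illustrative only (Workspace constructs SmartCuts internally; the function name is hypothetical):

```python
def smartcut_url(host, port, repository_path):
    """Assemble a SmartCut URL of the form
    http://Host:Port/workspace/browse/get/<repository path>.
    The workspace/browse root must match the deployment name and
    servlet name chosen during installation."""
    return f"http://{host}:{port}/workspace/browse/get/{repository_path}"

print(smartcut_url("pasts402", 19000, "Patty/Avalanche_CUI_Style_Guidelines.pdf"))
```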
Host: Host on which UI Services reside.
IP Port: Port number on which Workspace runs.
Root: Web application deployment name for Workspace, as set in your Web server software. Typically, this is workspace/browse. The last segment (browse) must match the servlet name specified during installation.
Encoding for URLs: How Workspace encodes (and decodes) URLs. This property can take one of two values:
Default (Hexadecimal): Uses the standard encoding of URLs as defined in RFC 2396. The subset of ASCII characters that are valid in URLs is left as-is. The space character is converted to %20. All other characters are converted to the three-character string %xy, where xy is the two-digit hexadecimal representation of the lower 8 bits of the character. Because this encoding uses only the lower 8 bits of a character, it is suitable only for Latin-1 language installations.
UTF-8: Uses the encoding of URLs recommended in RFC 2718. Non-allowable characters are first converted into UTF-8, and each resulting byte is converted to its %xy representation. This encoding must be used for installations supporting non-Latin-1 languages or installations using the WebSphere or Sun ONE native servlet engines.
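The difference between the two encodings can be illustrated with Python's standard urllib. This demonstrates the behavior described above; it is not the product's implementation:

```python
from urllib.parse import quote

# Default (hexadecimal) encoding: valid URL characters pass through,
# a space becomes %20, and other characters are escaped from their
# lower 8 bits (Latin-1 value).
print(quote("annual report.pdf", encoding="latin-1"))  # annual%20report.pdf
print(quote("résumé.pdf", encoding="latin-1"))         # r%E9sum%E9.pdf

# UTF-8 encoding: non-allowable characters are first converted to
# UTF-8, and each resulting byte becomes a %xy escape.
print(quote("résumé.pdf", encoding="utf-8"))           # r%C3%A9sum%C3%A9.pdf
```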
Managing Row-Level Security
At least one Hyperion Interactive Reporting Data Access Service instance must be configured to access the data source storing your row-level security information.
The database client library should be installed on the computer where Hyperion Interactive Reporting Data Access Service is running.
The data source for the Workspace repository that holds the row-level security table information must be configured.
For security reasons, the user name and password used to access the data source should differ from those used for the Workspace user account.
See Chapter 27, Row-Level Security in Interactive Reporting Documents, for information about implementing row-level security in Interactive Reporting documents.
Row-level security properties are stored in the repository; however, the rules about how to give access to the data are stored in the data source.
Enable Row Level Security: Row-level security is disabled by default.
Connectivity: Database connectivity information for the report's source data.
Database Type: Type of database that you are using. Database types available depend on the connectivity selection.
Data Source Name: Host of the report data source database.
Username: Default database user name used by Job Service for running Production Reporting jobs on this database server; used for jobs that were imported with no database user name and password specified.
Password: Valid password for Username.
Tracking System Usage
Usage tracking answers questions such as: Who logged in yesterday? Which Workspace reports are accessed most frequently?
You can configure your system to track numerous activities. For example, you can track opening, closing, and processing of Interactive Reporting documents, or only opening of Interactive Reporting documents. Activities are recorded as events in the repository database, with details that distinguish them from one another. Event times are stored in GMT, and events are deleted from the database after a configurable time frame. Usage Service must be running to track the events set in the user interface; Usage Service can be replicated, with all Usage Services accessing one database. The user name and password used to access the usage tracking information may differ from those used for Workspace. Hyperion recommends that usage tracking use its own schema in the repository database; however, an alternate schema is not required. For more information about configuring the usage tracking schema, see the Hyperion System 9 BI+ Installation Guide for Windows and UNIX. Topics that provide detailed information about tracking usage and events:
Managing Usage Tracking on page 66
Tracking Events and Documents on page 66
Sample Usage Tracking Reports on page 67
General preferences
Usage Tracking Active: Select to turn on usage tracking.
Mark records ready for deletion after _ days: Number of days after which usage tracking events are marked for deletion by the garbage collection utility. Default is 30 days.
Delete records every _ days: Number of days after which the garbage collection utility runs. Default is 7 days.
Connectivity preferences: Username and password are populated from the usage tracking database and should be changed only if the database is moved.
3 Select Apply.
To track events:
1 Navigate to Administer and select Event Tracking.
2 Select an event to track:
System Logons
Database Logons
Timed Query Event
Open Interactive Reporting Document
Process Interactive Reporting Document
Close Interactive Reporting Document
Run Interactive Reporting Job
View Interactive Reporting Job Output
Run Production Reporting Job
View Production Reporting Job Output
Run Generic Job
View Generic Job Output
3 To track documents, move one or more available MIME types to the Selected MIME Types list.
Tracking occurs each time a document of the selected MIME types is opened.
4 Click Apply.
To view the \Administration folder, from Explore in Viewer, select View > Show Hidden.
Caution! The sample reports could contain sensitive company information when used with your data. Use access control when importing the reports so that only the intended audience has access.
67
68
Chapter
Impact Management Services, introduced with the Impact Manager module, enable you to collect and report on metadata, and to update the data models that imported documents use. Impact Management Assessment Services and Impact Management Update Services perform these tasks. Task results are displayed in the Show Task Status and Show Impact of Change interactive reports.
In This Chapter
About Impact Management Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70 Impact Management Assessment Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70 Impact Management Update Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71 Running the Update Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72 Update Data Model Transformation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72 Access to Impact Management Services. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73 Synchronize Metadata Feature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73 Update Data Model Feature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75 Accessing Updated Documents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78 Connecting Interactive Reports. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78 Using Show Task Status Interactive Report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80 Using Show Impact of Change Interactive Report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82 Creating the New Data Model. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . 84 Changing Column Data Types. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96 Changing User IDs and Passwords for Interactive Reporting Documents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97 Service Configuration Parameters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
69
70
The Metadata Service can perform various queries on the metadata. The queries include determining which documents are harvested, retrieving section names from a document for a particular section type, retrieving sections that depend on a particular section, and so on.
The update services work in the following way:
1. Original documents are imported.
2. Documents are harvested as part of import or through a synchronize operation.
3. Documents are used to perform daily tasks until the database requires change.
4. Use an Impact of Change report to identify the documents impacted by proposed changes.
5. Create data models to update impacted imported documents.
6. Documents with replacement data models are harvested as part of import or through a synchronize operation.
7. Transformation parameters are specified.
a. Select a document. The selection criterion is that the document contains an impacted data model.
b. Select a replacement data model.
c. The Impact Manager module displays Interactive Reporting documents that match the selection criteria.
d. Documents selected from the list are composed into a task and are queued for transformation. Currently, only Interactive Reporting documents are processed.
8. Transformation is applied to elements of the Impact Manager task.
a. Documents are converted to XML.
b. Transformation is performed on the XML.
c. The XML is converted back to Interactive Reporting documents.
d. Transformed documents are reimported as new versions of the original documents.
9. Documents are available for use against the new database definition.
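Steps 8a through 8c can be sketched as a simple round-trip. The helper names below are illustrative only, not the actual service API, and a real transformation rewrites data model references inside the document structure rather than doing a plain string replacement.

```python
# Illustrative sketch of the transformation round-trip (step 8).
# to_xml/from_xml stand in for the service's real converters.

def to_xml(document: str) -> str:
    # a. Convert the Interactive Reporting document to XML.
    return "<doc>" + document + "</doc>"

def from_xml(xml: str) -> str:
    # c. Convert the XML back to an Interactive Reporting document.
    return xml[len("<doc>"):-len("</doc>")]

def transform_document(document: str, old_model: str, new_model: str) -> str:
    xml = to_xml(document)
    # b. Transform the XML: swap the impacted data model for its replacement.
    xml = xml.replace(old_model, new_model)
    return from_xml(xml)

print(transform_document("query against OLD_DM", "OLD_DM", "NEW_DM"))
# query against NEW_DM
```

The transformed result is then reimported as a new version of the original document (step 8d).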
71
72
Figure 1
73
Whether the metadata synchronization is run immediately or scheduled for the future, clicking Submit causes Impact Management Assessment Services to receive the request and return a numeric request identifier. The identifier is used to filter the Impact Management Assessment Services task log. See Using Show Task Status Interactive Report on page 80. When Impact Management Assessment Services synchronize the metadata, each imported document is compared with the metadata tables. If the imported document has been modified since it was last parsed, or it is not in the metadata tables, the document is added to a queue of documents to be parsed next.
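The queueing rule described above can be expressed compactly; this function is a sketch of the rule, not Hyperion code.

```python
from datetime import datetime

def needs_parsing(doc_modified: datetime, last_parsed) -> bool:
    """Queue a document for parsing if it changed since it was last parsed,
    or if it has never been recorded in the metadata tables (last_parsed is None)."""
    return last_parsed is None or doc_modified > last_parsed

print(needs_parsing(datetime(2006, 3, 1), None))                  # True  (never parsed)
print(needs_parsing(datetime(2006, 3, 1), datetime(2006, 2, 1)))  # True  (modified since)
print(needs_parsing(datetime(2006, 2, 1), datetime(2006, 3, 1)))  # False (up to date)
```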
74
Specifying a Data Model on page 75 Viewing Candidates to Update on page 76 Reviewing the Confirmation Dialog Box on page 77
Figure 2
4 Choose a data model from Select original data model from list.
Data model sections are created when a query section is created. Because the data model section is not visible as a unique section, users may not be aware that data models are in separate sections under default names.
75
Use Promote to Master Data Model to make a data model section visible. To assist with specifying which data model is to be updated, query names are displayed after the data model in the drop-down list. See Link Between Data Models and Queries on page 72.
6 When both data models are selected, click Next to go to Step 2, Candidates.
76
Figure 3
Candidates to Update
Click Select All to update all candidates. Use Ctrl+click or Shift+click to highlight and select individual or all documents in the list.
2 Optional: To return to Step 1, Specify Data Model, click Back.
3 Optional: To activate the sort feature, in the candidate list table, click a column header.
For example, click Document to sort candidates by document title. The sort feature reorders the selected candidates to be updated.
4 Click Finish.
A Confirmation dialog box is displayed.
77
Note: The data source name is metadata, as created in the ODBC configuration, and references the database instance in MS SQL Server.
78
7 Name the imported file metadata.oce. 8 Specify a default user ID and name to connect reports to the repository tables.
The user ID requires select access to the repository tables.
3 Select
4 From the Connection drop-down list, for each Query/DataModel Name, select metadata.oce.
5 From the Options drop-down list, select Use default username/password.
6 Click OK.
7 Repeat steps 1 through 6 for the document named Task Status.
The Interactive Reporting documents are ready to deliver output.
79
Use the calendars to select time ranges for tasks. Task times are recorded in UTC format in the database.
Use the lists to select the user who submitted the tasks and the task statuses.
to process the query.
4 Click
80
Table 10: Task Status Interactive Report Column Descriptions

Column          Description
Task Submitted  Local submit time and date for the task request
Task Type       Type of task request
Req             Task request number
Command         Request command, for example, harvest or DM update
Document        Document name
Ver             Document version number
Stat            Color code for the status: Green = successful, Yellow = pending, Red = failed

Time taken in milliseconds to perform the request
Name of processor
Priority status of the task
Name of requester
Description of requested task
Path of files for request
Status of request, for example, Execution successful
81
Table 10: Task Status Interactive Report Column Descriptions (Continued)

Local completion time and date for the task request
Coordinated Universal Time, based on the time zone of the application server. (A computed item extracts the time zone offset from a time string; the offset is used to translate the display of the Task Submitted column into local time. The assumption is that the server and client share a time zone; if they do not, the computed item can be edited to reflect the time zone difference between server and clients.)
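The offset translation that the computed item performs is equivalent to the following sketch; the -8 hour offset is an assumption chosen for illustration.

```python
from datetime import datetime, timedelta, timezone

def to_local(task_submitted_utc: datetime, offset_hours: float) -> datetime:
    """Apply a fixed time zone offset to a UTC task time, as the report's
    computed item does with the offset extracted from the time string."""
    return task_submitted_utc.astimezone(timezone(timedelta(hours=offset_hours)))

utc_time = datetime(2006, 3, 1, 12, 0, tzinfo=timezone.utc)
print(to_local(utc_time, -8).hour)  # 4, i.e. 12:00 UTC displayed as 04:00 local
```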
Selections are displayed in Currently Selected Query Limits. In this example, PCW_CUSTOMERS and PCW_SALES are selected.
82
3 Click
The table tabs display the items selected in the Query Panel. In this example, PCW_CUSTOMERS and PCW_SALES are selected. The Impact of Change interactive report contains seven content tabs to help you anticipate changes to the schema:
Documents with RDBMS tables selected: Impacted documents that use the selected tables and columns.
RDBMS/Topic column mappings: Interactive Reporting document topics or items mapped to RDBMS tables or columns.
Topic/RDBMS column mappings: Reverse map of RDBMS tables or columns to Interactive Reporting document topics or items.
Data Models with topics in common: Common data models where impacted tables or columns are used; for example, how many Interactive Reporting documents are updated with one replacement data model.
RDBMS table usage details: Documents and sections in which tables and columns are used.
Custom request items: Custom SQL in request items that Update Data Model may impact.
Custom query limits: Custom SQL in filter items that Update Data Model may impact.
83
Access database software and Interactive Reporting Studio are used in these procedural examples.
84
3 Right-click again and select Paste. 4 In Paste Table As, enter a Table Name.
For example, type Outlets. Ensure that Structure and Data is selected.
5 Click OK.
A copy of the PCW_CUSTOMERS table called Outlets is created.
85
6 Click OK.
86
If Show Detail Information is selected, this dialog box provides information on changes that were made with the synchronization.
2 Click OK.
For example, from the PCW Customers topic, right-click Outlet Id. Topic Item Properties is displayed.
87
4 Optional: Alternatively, to achieve an equivalent end result of changing the display names, perform these actions:
a. Drag a topic, for example Orders, onto the Interactive Reporting Studio content area.
b. Rename the display names of the renamed columns and the topic.
For example, a data model is created that can replace another data model that uses only the Pcw Customers topic. The edited topic now exposes names matching the original topic and is a valid replacement.
88
Figure 5
89
Figure 6
Deleting Columns
Deleted columns are replaced by a computed item with a constant value. For example, string columns may return n/a, and numeric columns may return 0. Replacement enables reports to continue working and display the constant value (for example, n/a) for the deleted columns.
Note: If an entire table is deleted, it is treated as if the table has all columns deleted.
These procedures describe creating a computed item to mask the deletion of columns. Before creating the computed item, a series of processes, such as copying tables, changing names, and synchronizing data models, must be performed.
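The masking behavior described above amounts to substituting a type-appropriate constant when a column is gone. This Python sketch (the column name and defaults mirror the examples in the text, but the code itself is only illustrative) shows the idea:

```python
# Constants used in place of a deleted column, per the guide: string
# columns may return "n/a" and numeric columns may return 0.
DELETED_COLUMN_DEFAULTS = {"string": "n/a", "number": 0}

def computed_item(column_type: str):
    """Stand-in value for a column that no longer exists in the database."""
    return DELETED_COLUMN_DEFAULTS[column_type]

row = {"Item": "Widget"}  # Dealer Price was deleted from the table
dealer_price = row.get("Dealer Price", computed_item("number"))
print(dealer_price)  # 0
```

Reports keep working because every reference to the deleted column resolves to the constant instead of failing.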
3 Right-click again and select Paste. 4 In Paste Table As, enter a Table Name.
For example, type Goods. Ensure that Structure and Data is selected.
5 Click OK.
A copy of the PCW_Items table called Goods is created.
90
4 Right-click the topic header, for example PCW Items, and select Properties.
Topic Properties is displayed.
91
6 Click OK.
If Show Detail Information is selected, the dialog box provides information on synchronization changes. For example, Dealer Price was deleted from the Goods topic.
2 Click OK.
92
Another topic is added to the content area. In this example, the topic is called Meta PCW Items.
2 Right-click the original topic header, for example PCW Items, and select Properties.
Topic Properties is displayed.
4 Right-click the topic header, for example Meta PCW Items, and select Properties.
Topic Properties is displayed.
93
6 Select the topic from step 5, for example PCW Items, and select DataModel > Add Meta Topic Item > Server.
7 Enter the Name of the row that was deleted in the database, and enter a definition.
For example, type Dealer Price in Name, and type 0 as the Definition.
8 Click OK.
The computed item is added to the topic. In this example, Dealer Price is added to PCW Items.
94
9 Select the topic with the computed item added, for example PCW Items, and select DataModel > Data Model View > Meta.
The selected topic is displayed in Meta View, for example PCW Items, and the other topics are removed.
95
Data Type Changes

            string  int   real  date  time  timestamp
string      OK      OK    OK    Warn  Warn  Warn
int         Warn    OK    Warn  Warn  Warn  Warn
real        Warn    OK    OK    Warn  Warn  Warn
date        Warn    Warn  Warn  OK    Warn  Warn
time        Warn    Warn  Warn  Warn  OK    Warn
timestamp   Warn    Warn  Warn  OK    Warn  OK
If the type change affects a Request line item, no action is taken, because request item data types are accessed by clicking Option in Item Properties. If the Impact Manager module changes the data types, unforeseen effects in results, tables, charts, pivots, or reports may occur, especially if computations are applied to the column that is returned.
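The Warn/OK matrix above can be encoded as a simple lookup. Note that the guide's table does not label which axis is the original type and which the new type, so the row-is-original-type orientation here is an assumption.

```python
# Data type change compatibility lookup (rows = original type, an assumption;
# columns = new type, in the order string, int, real, date, time, timestamp).
TYPES = ["string", "int", "real", "date", "time", "timestamp"]
MATRIX = {
    "string":    ["OK", "OK", "OK", "Warn", "Warn", "Warn"],
    "int":       ["Warn", "OK", "Warn", "Warn", "Warn", "Warn"],
    "real":      ["Warn", "OK", "OK", "Warn", "Warn", "Warn"],
    "date":      ["Warn", "Warn", "Warn", "OK", "Warn", "Warn"],
    "time":      ["Warn", "Warn", "Warn", "Warn", "OK", "Warn"],
    "timestamp": ["Warn", "Warn", "Warn", "OK", "Warn", "OK"],
}

def change_status(old_type: str, new_type: str) -> str:
    """Return "OK" or "Warn" for a proposed column type change."""
    return MATRIX[old_type][TYPES.index(new_type)]

print(change_status("real", "int"))  # OK
print(change_status("int", "real"))  # Warn
```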
96
Figure 7
97
Table 12
Table 12: Interactive Reporting Document Before and After Update

Before update: Displays a logon dialog box, and the user supplies a user ID and password to connect. The query connects to the data source using the definition in the Interactive Reporting database connection at the time the connection is attempted.

After update: Connects the query to the data source using the new credentials (user name=sa and password=secret) and processes without asking the user for values and without regard to the contents of the Interactive Reporting database connection.
98
Chapter
5
In This Chapter
This chapter explains Hyperion System 9 Shared Services (formerly called Hyperion Hub) models and how they are shared among multiple Hyperion products.
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100 About Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100 Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100 Registering Applications. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100 About Managing Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101 About Sharing Metadata . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101 About Sharing Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101 Working with Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102 Working with Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106 Sharing Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
99
Overview
Shared Services enables multiple applications to share information within a common framework. The following table lists the high-level tasks that you can perform with Shared Services.
Task              For Information
Managing Models   About Managing Models on page 101
Sharing Metadata  About Sharing Metadata on page 101
Sharing Data      About Sharing Data on page 101
About Models
Shared Services provides a database, organized into applications, in which applications can store, manage, and share metadata models. A model is a container of application-specific data, such as a file or string. There are two types of models: dimensional hierarchies, such as entities and accounts, and nondimensional objects, such as security files, member lists, rules, scripts, and Web forms.

Some Hyperion products require that models be displayed within a folder structure (similar to Windows Explorer). Folder views enable the administrator to migrate an entire folder structure, or a portion of a folder structure, easily using Shared Services.

The process of copying a model or folder from a local application to Shared Services is known as exporting. The process of copying a model or folder from Shared Services to a local application is known as importing.
Prerequisites
Shared Services supports external directories for user authentication. To use Shared Services functionality, you must configure Workspace to use external authentication.
Note: After installation of Shared Services, you must configure external authentication. For more information about installation and configuration of Shared Services, see the Hyperion System 9 Shared Services Installation Guide.
Registering Applications
Before you can use Shared Services, you must register your product with Shared Services using the Configuration Utility. For more information about using the Configuration Utility to register your product with Shared Services, see the Hyperion System 9 BI+ Workspace Installation Guide.
100
Version tracking
Access control
Synchronization between models and folders in the application and corresponding models and folders in Shared Services
Ability to edit model content and set member properties of dimensional models
Ability to rename and delete models
Users must be assigned the Manage Models user role to perform the preceding actions on Shared Services models.
Note: The Manage Models user must have Manage permission for a model via the Shared Services Model Access window in order to assign permissions to it.
See Working with Models on page 106 for detailed information about models. For more information about assigning user roles, see the Hyperion System 9 Shared Services User Management Guide available on the Hyperion Download Center.
101
Users must be assigned the Create Integrations user role to create Shared Services data integrations. As a Create Integrations user, you can perform the following actions on data integrations:
Assign access to integrations
Create an integration
Edit an integration
Copy an integration
Delete an integration
Create a data integration group
View (including filtering the view of) an integration
To view and run Shared Services data integrations, users must be assigned the Run Integrations user role. As a Run Integrations user, you can perform the following actions on data integrations:
View (including filtering the view of) an integration
Run, or schedule to run, an integration
Run, or schedule to run, a group integration
Before data can be moved between applications, the models for both the source and destination application must be synchronized between Shared Services and the product. See Sharing Data on page 133 for details about moving data between applications. For more information about assigning user roles, see the Hyperion System 9 Shared Services User Management Guide available on the Hyperion Download Center.
102
Hyperion Shared Services provides management capabilities to manage models. For example, you can perform the following tasks, among others:
Track model versions
Control access to models
Edit member properties in dimensional models
Synchronize models between the application and Shared Services
See Working with Models on page 106 for detailed information about how to manage models.
103
Figure 8
Creating Applications
Shared Services enables you to create a shared application. Shared Services provides one shared default application called Common. Should additional shared applications be needed, they must be created by application users or administrators.
2 If it is not already selected, select the Browse tab. 3 Click Add. 4 In the Shared Application Name text field, type a name for the application.
See Application Naming Restrictions on page 105 for a list of restrictions on application names.
104
The maximum length is limited to 80 characters regardless of the application in which you are working. Names are not case sensitive. All alphanumeric and special characters can be used, except the forward slash (/) and double quotation mark (") characters.
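These restrictions can be checked with a few lines. This validator is a sketch of the stated rules only, not Shared Services code.

```python
def is_valid_application_name(name: str) -> bool:
    """Check the Shared Services naming restrictions described above:
    at most 80 characters, and no forward slash or double quotation mark.
    (Names are not case sensitive, so no case check is needed.)"""
    return 0 < len(name) <= 80 and not any(c in name for c in '/"')

print(is_valid_application_name("Common"))        # True
print(is_valid_application_name("Sales/Europe"))  # False (contains /)
```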
Deleting Applications
You need Manage permission on an application to delete an application.
Note: Users must have the appropriate product-specific user roles to delete an application. For a listing of product user roles, see the appropriate product-specific appendix in the Hyperion System 9 User Management Guide.
To delete an application:
1 In the View Pane Navigation panel, select Administer to activate the Administer module, then select Manage Projects.
3 Select the application to delete and click Delete. 4 Click OK to confirm deletion of the application.
Sharing Applications
To be able to share models with other applications, you must share a private application with a shared application in Shared Services. Figure 9 shows a sample Select Shared Application window.
Figure 9
105
3 Select the application with which you want to share. 4 Click Share to begin sharing the application with the shared application that you specified.
After you have set up access to a shared application, you can designate models to be shared. See Sharing Models on page 120. You can stop sharing access to a shared application at any time. When you do so, models that are shared with the current application are copied into the application.
3 Select the application with which you want to stop sharing. 4 Click Stop Share to stop sharing with the designated application.
106
Figure 10
Note: If the current application is new, the view might not show models. Application models are displayed in the Browse tab after you explicitly export them to Shared Services. See Synchronizing Models and Folders on page 108 for information.
All models are displayed in ascending order. The Manage Models Browse tab provides information about each model in Shared Services:
Model name
Model type
Last time the model was updated
Whether the model is locked and who locked it
Whether a filter is attached to the model and whether the filter is enabled (one icon indicates a filter that is enabled; another indicates a filter that is disabled)
You can see only the models to which you have at least Read access. If you do not have access to a model, it is not displayed in the Manage Models Browse tab.

Icons indicate where models are located: one icon indicates a private model, and another indicates a shared model.

Some Hyperion products require that models be displayed within a folder structure (similar to Windows Explorer). Folder views enable the administrator to migrate an entire folder structure or a portion of a folder structure easily using Shared Services. Folders are visible on the Manage Models Browse tab, Manage Models Sync tab, and Manage Models Share tab. Path information for folders is displayed directly above the column headers, and path text is hyperlinked to refresh the page within the context of the selected folder.
107
Icons indicate where folders are located: one icon indicates a private folder, and another indicates a shared folder.

From the Manage Models Browse tab, you can perform any of the following operations:

View and edit members and member properties in dimensional models. See Viewing and Editing Model Content on page 115.
Filter content that is imported to an application from a shared model. See Filtering the Content of Models on page 122.
Compare the latest application version of a model to the latest version stored in Hyperion Shared Services. See Comparing Models on page 112.
Track model history. See Tracking Model History on page 125.
View model properties. See Viewing and Setting Model Properties on page 131.
Rename models. See Renaming Models on page 119.
Delete models. See Deleting Models on page 120.
You can synchronize the Shared Services version of a model with the application version, by importing the model from Shared Services to the application, or by exporting the model from the application to Shared Services. To do so, select the Manage Models Sync tab. See Synchronizing Models and Folders on page 108. You can share a model with other applications. To do so, select the Manage Models Share tab. See Sharing Models on page 120.
108
Figure 11
The Sync Preview window lists all models and folders in Shared Services and in the BI+ application. The Sync Operation field provides a recommended operation to apply to each model or folder. For more information about sync operations, see Sync Operations on page 110.
3 Optional: For models with Select Sync Operation, you can compare the latest version of the model in Shared Services to the model in the application by clicking the Compare button. Before clicking Compare, you must select a Sync Operation in the drop-down list box.
The latest version of the model in Shared Services is compared to the latest version in the application. The contents of the two models are shown line-by-line in a side-by-side format. Hub Version refers to the model in Shared Services. Application Version refers to the model in the application. For information on resolving differences between the models, see Comparing Models on page 112. After you resolve the differences in a model, you are returned to the Sync Preview page.
109
6 Click Report to see a report of the operations that have been completed. 7 Click Refresh to update the message. 8 Click Close to return to the Sync Preview window.
Sync Operations
The Sync Preview window lists all models in Shared Services and in the application. The Sync Operation field provides a recommended operation to apply to each model, as follows:
If a model exists in the application but not in Shared Services, the sync operation is Export to Hyperion Hub. You cannot change this operation. If you select the model, when you synchronize, the specified model is copied to Shared Services.
Note: Keep in mind when exporting that Shared Services supports dimensions that contain up to 100,000 members.
If a model exists in Shared Services but not in the application, the sync operation is Import From Hyperion Hub. You cannot change this operation. If you select the model, when you synchronize, the specified model is copied to the application. If a model exists in both the application and Shared Services, the sync operation is selectable. Select from one of the following options:
Note: Remember these factors when deciding which compare operation to perform. With export, the compare operation considers the application model to be the master model. With import, the compare operation considers the Shared Services model to be the master model. In the following descriptions, the master model is underlined.
Export with Merge: Merges the application model content with the content in Shared Services. Notice the following factors:

This option considers any filters during the merge process and ensures that filtered members are not lost.
If a property exists only in the application model, the property is retained in the merged model.
If a property exists only in the Shared Services model, the property is retained in the merged model.
110
If a property exists in both models, the value of the property in the application model will be retained in the merged model.
A member in the application model but not in the Shared Services model will be retained in the merged model.
A member in the Shared Services model but not in the application model will not be retained in the merged model.
A member that exists both in the Shared Services model and in the application model, but at different generation levels, will be merged, and the position in the application model will be maintained.
If an application system member exists only in a Shared Services model, export with merge will not delete this member.
If an application system member exists both in a Shared Services model and in the application model, export with merge will merge the properties as usual and take the system member-specific attributes from the application model. For more information, see Application System Members on page 118.
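The merge precedence for Export with Merge, where the application model is the master, can be sketched as follows; the dict-of-dicts representation of members and properties is purely illustrative.

```python
# Sketch of "Export with Merge": the application model is the master,
# so its members are all kept, Shared Services-only members are dropped,
# and for properties present in both models the application value wins.

def export_with_merge(app_model: dict, hub_model: dict) -> dict:
    merged = {}
    for member, app_props in app_model.items():      # app members retained
        hub_props = hub_model.get(member, {})
        merged[member] = {**hub_props, **app_props}  # app property values win
    return merged

app = {"Sales": {"alias": "Revenue"}, "Costs": {}}
hub = {"Sales": {"alias": "Sales", "desc": "hub only"}, "Margin": {}}
print(export_with_merge(app, hub))
# {'Sales': {'alias': 'Revenue', 'desc': 'hub only'}, 'Costs': {}}
```

Note that hub-only properties of a shared member ("desc" above) survive the merge, while the hub-only member "Margin" does not.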
For properties with attributes, the merge is based on the attribute value. For example, if the following Alias attribute exists in the Shared Services model:
<Alias table="French">Text in French</Alias>
then the merged result will contain both attributes and will look like the following example:
<Alias table="French">Text in French</Alias>
<Alias table="English">Text in English</Alias>
If the value for both Alias attributes is the same in both models, then the value for the application model will be retained in the merged model.
Export with Overwrite: Replaces the Shared Services model with the application model.
Import and Merge: Merges the content from the Shared Services model with the application model content.
Import and Replace: Replaces the application model with the Shared Services model.
Clear before Import: Removes the existing content of the application model and replaces it with the content from the Shared Services model.
111
The maximum length is limited to 80 characters regardless of the application in which you are working. Names are not case sensitive. You can use all alphanumeric and special characters, except the forward slash (/) and double quotation mark (") characters. Therefore, you cannot export to Shared Services a dimension whose name contains forward slash or double quotation mark characters.
Note: The restrictions on names listed in this section are enforced explicitly by Shared Services. Individual Hyperion products may enforce additional restrictions on names. If you are sharing models with one or more other products, be aware of additional naming restrictions that those products may enforce.
Comparing Models
At any time, you can compare a model in Shared Services to its corresponding version in the application. The latest version in Shared Services is compared to the model in the application. To compare different versions in Shared Services, see Tracking Model History on page 125.
To compare the application representation of a model with the Shared Services representation of the model:
1 In the View Pane Navigation panel, select Administer to activate the Administer module, then select Manage Models.
3 Select a Sync Operation in the drop-down list box for the model of interest. 4 Click the Compare button next to the Sync Operation box.
The latest version of the model in the application is compared with the latest version in Shared Services.
112
Compare Operations
The contents of the two models are shown line-by-line in a side-by-side format. Application Version refers to the model in the application. Application versions of a model are displayed on the left side of the Resolve Models (Compare) window. Hub Version refers to the model in Shared Services. Hub versions of a model are displayed on the right side of the Compare Models window. Figure 12 shows a sample Resolve Models (Compare) window.
Figure 12
By default, the Resolve Models window displays up to 50 rows per page, displays any folders in an expanded format, and displays only those models with differences. Color coding highlights any differences between the content of the two models, as follows:
Red indicates that the element has been deleted from the model.
Green indicates that the element has been inserted into the model.
Blue indicates that the element has been changed.
Note: The compare operation filters out any application system members that are not relevant to the product being viewed. For example, if viewing HFM models, Shared Services will filter out any application system members that are not valid for HFM. For more information about application system members, see Application System Members on page 118.
Resolve Models (Compare) Window Elements

Expand All button. Click to display the selected member and any children under the selected member in an expanded format (default).
Collapse All button. Click to display the selected member and any children under the selected member in a collapsed format.
<<FirstDiff button. Click to jump to the first model element with a difference.
<PrevDiff button. Click to display the difference immediately previous to the current difference.
NextDiff> button. Click to display the next difference after the current difference.
LastDiff>> button. Click to jump to the last model element with a difference.
View All button. Click to display all model elements, not just the elements with differences.
Show Diff Only button. Click to display only the model elements with differences (default).

Note: For contextual purposes, Show Diff Only also displays the members immediately previous to and immediately after the member with a difference.

Additional window elements:

Click to display the member property differences for a selected element.
A red arrow indicates a deleted element in the Application Version of a model.
A green arrow indicates an inserted element in the Application Version of a model.
Click to jump to the first page of the model.
Click to display the previous page of the model.
Select a page to display in the Taskflow Listing area.
Click to display in the Taskflow Listing area the page you selected in the Page drop-down list box.
Click to display the next page of the model.
Click to jump to the last page of the model.
Rows
Figure 13
The editor enables you to manage dimension members by performing these tasks:
View all members for a model, including application system members
Add a sibling or a child to a member
Change the description of a member
Rename a member
Move a member up or down in the hierarchy
Move a member left or right (across generations) in the hierarchy
Edit dimension member properties
Enable or disable a filter
If you are renaming a member, keep the following rules in mind:
a. You cannot rename a shared member.
b. You cannot create a duplicate member name (the rename operation performs a uniqueness check).
c. You cannot rename an application system member.
Note: Renaming a member and moving a member across generations within Shared Services enables products to retain the member properties for a shared model. Therefore, if you want to retain member properties across all products for a shared model, perform the rename or move member operation within Shared Services rather than within the individual product.
2 If it is not already selected, select the Browse tab.
3 Select a model and click View.
The dimension editor shows the members of the selected model, including any application system members. For more information, see Application System Members on page 118.
Add a child or sibling member
Rename a member (note the rules about renaming members in the previous section)
Delete a member
Move a member up, down, left, or right in the dimensional hierarchy
Edit member properties (for more information, see Editing Member Properties on page 117)
If a filter exists for a model, enable or disable the filter (for more information about filters, see Filtering the Content of Models on page 122)
Note: If you click on a member and it is not editable, then the member is an application system member. For more information about application system members, see Application System Members on page 118.
That you have not created names that are too long (for example, 20 characters for Hyperion Financial Management, 80 characters for Hyperion Planning)
That you have not created any duplicate names
Note: Shared Services does not perform validations for Alias/UDA uniqueness.
6 Click Save to save the changes that you have made and to create a new version of the model in Shared
Services.
2 If it is not already selected, select the Browse tab.
3 Select a model name and click View.
The dimension editor shows the members of the selected model, including any application system members. For more information, see Application System Members on page 118.
Note: You cannot edit properties for an application system member. For more information about application system members, see Application System Members on page 118.
Figure 14
To view which products share a particular shared property, hover the cursor over the shared property icon. A tool tip is displayed with the names of the products that share the property.
5 Select a tab and use the editing keys to change member property settings as you prefer.
Note: Alias properties may be displayed in a different order in Hyperion Shared Services than in <Hyperion Product Name>. See the discussion following the procedure for details.
6 In the Edit Member window, click Save to save the property settings that you have made.
7 In the Edit Member window, click Close to close the window.
Note: The Edit Member window remains open unless you manually close it.
Save to save the changes you have made and create a new version of the model
Close to return to the Model Listing view
If a member has an alias property, all the aliases and alias table names for the member are displayed in the Edit Member window. For example: <Hyperion Product Name>:
<Alias table="English">MyAlias in English</Alias> <Alias table="German">MyAlias in German</Alias> <Alias table="French">MyAlias in French</Alias>
Shared Services:
Alias (English): MyAlias in English Alias (German): MyAlias in German Alias (French): MyAlias in French
The order in which Shared Services reads the alias tables is not necessarily the order in which the aliases are shown in <Hyperion Product Name>, where the display order can be changed by user preferences.
You can import and export models that contain application system members. Keep the following in mind when performing sync operations:

Import operations import application system members only if they are valid for your product. For instance, if a shared model has a system member called active that is valid only for HFM, Planning ignores this member when importing the model.
Export with Overwrite replaces the Shared Services model with the application model, including any application system members.
Export with Merge merges the application model content with the content in Shared Services. Note the following:
If an application system member exists only in Shared Services, export with merge does not delete this member.
If an application system member exists both in Shared Services and in the product, export with merge merges the properties as usual and takes the system member-specific attributes from the product side of the model.
All other export with merge scenarios behave exactly the same way for system members as they do for normal members.

For more information, see Sync Operations on page 110.
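The export-with-merge rules above can be illustrated with a short sketch. This is a hypothetical model of the behavior for illustration only, not Hyperion code; the dictionary-based member representation and the function name are invented.

```python
# Illustrative sketch of the "export with merge" rules for application
# system members described above. Data structures are hypothetical:
# each model maps member name -> property dictionary.

def export_with_merge(hub_model, app_model, system_member_names):
    """Merge the application model into the Shared Services (hub) model."""
    merged = {}
    for name in set(hub_model) | set(app_model):
        in_hub, in_app = name in hub_model, name in app_model
        if name in system_member_names:
            if in_hub and not in_app:
                # A system member that exists only in the hub is NOT deleted.
                merged[name] = dict(hub_model[name])
            elif in_hub and in_app:
                # Properties merge as usual, but system-member-specific
                # attributes are taken from the product (application) side.
                props = dict(hub_model[name])
                props.update(app_model[name])
                merged[name] = props
            else:
                merged[name] = dict(app_model[name])
        else:
            # Normal members: application content wins on conflict.
            props = dict(hub_model.get(name, {}))
            props.update(app_model.get(name, {}))
            merged[name] = props
    return merged
```

For example, a system member such as active that exists only in Shared Services survives the merge, while a member that exists on both sides takes its merged attributes from the application side.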
Renaming Models
Shared Services enables you to rename models in Shared Services. You might want to rename a model if two applications want to share dimensional models that are named differently; for example, one application uses plural dimension names and the other uses singular names. To share the models, you must rename one or both of them to a common name. Renaming a model changes the name only in Shared Services; the internal representation of the name does not change. If you import a new version of a renamed model to the application, the new version retains the original name. You need Write access to a model to rename it.
To rename a model:
1 In the View Pane Navigation panel, select Administer to activate the Administer module, then select
Manage Models.
2 If it is not already selected, select the Browse tab.
3 Select a model and click Rename.
4 Type a new name in the New Name text box.
5 Click one of these options:
Rename to save the new name Cancel to cancel the name change
See Model Naming Restrictions on page 112 for a list of restrictions on model names.
Deleting Models
You can delete a model if you have Write access to it.
To delete a model:
1 In the View Pane Navigation panel, select Administer to activate the Administer module, then select
Manage Models.
2 If it is not already selected, select the Browse tab.
3 Select a model and click Delete.
4 Click OK to confirm deletion.
Sharing Models
You set up the sharing of models between applications by designating a common shared application to be used by two or more applications. See Working with Shared Applications on page 103 and Sharing Applications on page 105 for details about shared applications. You can select two types of models to share:
You designate models in the private application in Shared Services to share with other applications.
You select models from a shared application that have been made available for sharing by another application.
Note: Models within folders can also be shared using the Shared Services share operation. If a folder is selected, then all the models within that folder and within any subfolders will be shared.
To share models:
1 In the View Pane Navigation panel, select Administer to activate the Administer module, then select
Manage Models.
Figure 15
Icons indicate whether a model is shared:
indicates a private model that is not shared
indicates a shared model
indicates a model with a conflict (the model exists both in the private application and in the shared application in Shared Services)
The Share Operation column provides a recommended operation to apply to each model, as follows:
Note: The Share Operation column displays only the first 10 characters of the shared application name. If the shared application name exceeds 10 characters, then Shared Services appends ellipses (...) to the end of the application name.
Share to <shared_application_name>. Copies the content of the model in the private application to the shared application. The share operation also deletes the model in the private application and creates a link in the private application to the model in the shared application.
Unshare from <shared_application_name>. Copies the content of the model in the shared application to the private application and removes the link to the shared application.
Note: The model remains in the shared application. A copy of this previously shared model is available in the user's private/working application.
If there is a conflict and the model exists in both a private application and a shared application, the share operation is selectable. This conflict sometimes occurs because a model was previously shared and then unshared. Selecting a share operation enables you to reshare a model that was previously shared. Use the drop-down list box to select one of the following options:
Share from <shared_application_name> (Overwrite). Deletes the model in the private application and creates a link to the model in the shared application.
Share to <shared_application_name> (Merge). Merges the content of the model in the private application with the content of the model in the shared application. The model in the private application is then deleted and a link is created to the model in the shared application.
Share to <shared_application_name> (Overwrite). Replaces the content of the model in the shared application with the content of the model in the private application. The model in the private application is then deleted and a link is created to the model in the shared application.
3 Select one or more models to share and, if the share operation for a model is selectable, choose a share
operation.
4 Click Share to begin the sharing operation. 5 Click Refresh to update the status of the operation. 6 Click Report to view information about the status of the operation, including whether it was successful and
the reason for failure if the operation failed.
2 Select the Share tab.
3 In Share Models, select one or more models to remove from sharing.
4 Click Share.
5 When the status is complete, click OK.
Sharing is stopped for the selected models, and a copy of each model is made in the private application in Shared Services.
The Hyperion Planning application conducts budgeting on profit and loss accounts only and therefore does not require any balance sheet accounts from the account dimension. The Hyperion Planning application writes a filter that removes the Total Assets member and all of its descendants and the Total Liabilities member and all of its descendants. You can write filters for dimensional models only, and you cannot have multiple filters on a particular dimension. Writing filters requires Write access to a model.
2 If it is not already selected, select the Browse tab.
3 Select a model and click Filter.
In the Create/Edit Filter window, the Members List area shows the members of the model, and the Filtered Out Members text box shows members that are to be removed from the model on import. Figure 16 shows a sample Members List area of the Create/Edit Filter window.
Figure 16
4 From the Members List area, select a member.
5 Click Add to move the selected member from the Members List area to the Filtered Out Members text box.
The Select Member drop-down list box indicates how much of the hierarchy is to be filtered, as follows:
Descendants (Inc). Filters the selected member and all of its descendants.
Descendants. Filters descendants of the selected member (but not the member itself).
Member. Filters the selected member only.
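The three Select Member scopes correspond to simple operations on the member tree. The following sketch is illustrative only; the tree representation and function names are invented, not Hyperion code.

```python
# Sketch of the three filter scopes. A member hierarchy is modeled as a
# dict mapping each member name to a list of child names (hypothetical).

def descendants(tree, member):
    """All descendants of member, not including member itself."""
    result = []
    for child in tree.get(member, []):
        result.append(child)
        result.extend(descendants(tree, child))
    return result

def filtered_members(tree, member, scope):
    """Return the members removed on import for the chosen scope."""
    if scope == "Descendants (Inc)":   # the member and all descendants
        return [member] + descendants(tree, member)
    if scope == "Descendants":         # descendants only
        return descendants(tree, member)
    if scope == "Member":              # the selected member only
        return [member]
    raise ValueError(scope)
```

For instance, selecting Total Assets with Descendants (Inc) filters out Total Assets itself plus every account beneath it, matching the Hyperion Planning example above.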
You can move selected members back to Members List from Filtered Out Members with the Remove and Remove All buttons.
6 Repeat the two previous steps until you have selected as many members to filter out as needed.
7 Click one of these options:
Save to save the filter
Close to cancel the changes you have made
The filter icon in the Model Listing view indicates that a model has an attached filter.
After a filter is applied to a model, you will see only those members within a model that are not filtered out. If you would like to see all the members in a filtered model, you can disable the filter and then, after viewing, enable the filter again.
2 If it is not already selected, select the Browse tab.
3 Select a filtered model and click Filter.
4 Click Disable.
5 Click Save to view the model in the Model Listing view.
The disabled filter icon in the Model Listing view indicates that a model has an attached filter, but the filter is disabled.
2 If it is not already selected, select the Browse tab.
3 Select a filtered model with a disabled filter icon and click Filter.
4 Click Enable.
5 Click Save to view the model in the Model Listing view.
The enabled filter icon in the Model Listing view indicates that the filter is enabled.
2 If it is not already selected, select the Browse tab.
3 Select a filtered model and click Filter.
4 Click Delete.
5 When prompted to confirm the deletion of the filter, click OK.
Managing Shared Services Models
Figure 17
2 If it is not already selected, select the Browse tab.
3 Select a model and click History.
Shared Services displays a list of model versions, including the name of the person who updated the version, the update date, and comments for each model.
4 From the version list, you can perform any of the following tasks:
View the content of a model version. i. Select a version.
ii. Click View. See Viewing and Editing Model Content on page 115 for more information.
Compare any two model versions to each other. i. Select any two versions.
ii. Click Compare. The contents of the two model versions are shown line-by-line in a side-by-side format. See Comparing Models on page 112 for more detailed information.
Replace the current model in the application with a version in the list. i. Select any version.
The specified version is imported to the application and replaces the current model. If a filter was applied to a previous version of a model, the model is imported with the filter applied.
View or set the properties of a model version. i. Select a version.
ii. Click Properties. See Viewing and Setting Model Properties on page 131 for more information.
To access specific models in Shared Services, users must be assigned access rights individually or inherit access rights by being part of a group that is assigned access rights. If an individual user is assigned to a group and the access rights of the individual user conflict with those of the group, the rights of the individual user take precedence. To give users access to models other than their own, an administrator must add the users and assign their permissions.
Permissions
Model management provides the following types of permissions:
Read. The ability to view the contents of a model. You cannot import a model if you have only Read access to it.
Write. The ability to change a model. Write access includes the ability to export, import, and edit a model. Write access does not automatically include Read permission. You must assign Read permission explicitly, in addition to Write permission, if you want a user to have these permissions.
Manage. The ability to create new users and change permissions for users. Manage access does not automatically include Read and Write permissions. You must assign Read and Write permissions explicitly, in addition to Manage permission, if you want a user to have all these permissions.
The following table summarizes the actions that a user can take in regard to a model with each of the permissions.
Table 14
Action              Write   Manage
Sync                Yes     Yes
Import              Yes     Yes
Export              Yes     Yes
View                Yes     Yes
Filter              Yes     Yes
Compare             Yes     Yes
History             Yes     Yes
Set Properties      Yes     Yes
Assign Access       Yes     Yes
Share               Yes     Yes
Assign Permissions  No      Yes
Edit                Yes     Yes
Rename              Yes     Yes
Delete              Yes     Yes
You can apply permissions to groups and to individual users. Users are automatically granted the permissions of the groups to which they belong. You can, however, explicitly add or deny permissions to a user to override group permissions. For each type of access permission (Read, Write, and Manage), you must apply one of the following actions:
Grant. Explicitly grant the permission to the user or group. Granting permissions to a member of a group overrides permissions inherited from the group. For example, if a group is denied a permission, you can explicitly grant the permission to a member of the group.
Deny. Explicitly deny the permission to the user or group. Denying permissions to a member of a group overrides permissions inherited from the group. For example, if a group is granted a permission, you can explicitly deny the permission to a member of the group.
None. Do not apply the permission to the user or group. Not applying a permission is different from denying a permission. Not applying a permission does not override permissions inherited from a group. Specifying None for particular permissions for individual users enables you to apply permissions on a group basis.
Note: If a user belongs to groups with mutually exclusive permissions to the same model, permissions that are assigned override permissions that are denied. For example, if a user belongs to a group that denies Read access to a particular model and belongs to another group that assigns Read access to the model, the user in fact is granted Read access to the model.
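The precedence rules above (explicit user-level settings override group settings; among groups, a grant overrides a deny; None does not override anything) can be sketched as follows. This is an illustrative model only, not Hyperion code; the function name and string values are invented.

```python
# Sketch of effective-permission resolution for one permission type
# (e.g. Read) on one model. Each setting is "grant", "deny", or "none".

def effective_permission(user_setting, group_settings):
    """Return True if the user effectively holds the permission."""
    # An explicit user-level grant or deny overrides all group settings.
    if user_setting == "grant":
        return True
    if user_setting == "deny":
        return False
    # user_setting == "none": fall back to the groups. Among groups with
    # mutually exclusive settings, a grant overrides a deny.
    return "grant" in group_settings
```

For example, a user with no explicit setting who belongs to one group that denies Read and another that grants it is effectively granted Read, matching the note above.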
2 If it is not already selected, select the Browse tab.
3 Select a model and click Access.
You can view the permissions that are assigned to users and groups for the selected model in the Model Access window. Figure 18 shows a sample Model Access window.
Figure 18
Figure 19
5 In the Available Users/Groups text box, select users or groups to assign to this model (press Ctrl to select
multiple users). Click Add to move the selected users and groups to the Selected Users/Groups text box or click Add All to move all users and groups to the Selected Users/Groups text box.
6 Assign permissions to the selected users and groups by selecting one of the Grant, Deny, or None option
buttons for the Read, Write, and Manage permissions.
Figure 20
Note: Assigning (or denying) a permission does not implicitly assign (or deny) any other permissions; that is, assigning Write permission does not implicitly assign Read permission, and assigning Manage permission does not implicitly assign Read and Write permissions. Likewise, denying Read permission does not implicitly deny Write and Manage permissions, and denying Write permission does not implicitly deny Manage permission. You must explicitly assign all permissions that you want a user to have.
See Permissions on page 127 for details about the Read, Write, and Manage permissions and the Grant, Deny, and None actions that you can apply to each permission.
2 If it is not already selected, select the Browse tab.
3 Select a model name and click Access.
You can view the permissions that are assigned to users and groups for the selected model.
4 Select the check box next to one or more users or groups and click Edit.
The window shows the permissions currently assigned to the selected users or groups.
5 Change permissions for the selected user or group by selecting one of the Grant, Deny, or None option
buttons for the Read, Write, and Manage permissions.
See Permissions on page 127 for details about the Read, Write, and Manage permissions and the Grant, Deny, and None actions that you can apply to each permission.
To view any changes made to model access, you must log out of the product application, close the browser, and then log in to the product application again.
2 If it is not already selected, select the Browse tab.
3 Select a model name and click Access.
You can view the permissions that are assigned to users and groups for the selected model.
4 Select the check box next to one or more users or groups and click Delete.
Note: When you click Delete, the permissions are immediately removed without a warning message being displayed.
Figure 21
Creator. Name of the user who created the model.
Updated By. Name of the person who updated the model. If there have been no updates, the name of the creator is listed and the Updated Date is the same as the Create Date.
Create Date. The date on which the model was created in (exported to) Shared Services.
Updated Date. The date on which the model was last updated in Shared Services.
Versioning. Whether versioning is enabled. If versioning is not enabled, you can enable it by changing this setting. Once versioning is enabled, however, you cannot disable it.
Lock Status. Whether the model is locked or unlocked. You can change this setting to lock the model for your exclusive use or to unlock the model to allow other users to work with it. Models are locked for only 24 hours; after 24 hours, the model is automatically unlocked.
Source Application. The name of the shared application.
Source Model. The path to the model in the shared application.
Transformation. The name of the transformation, if any, that Shared Services applies to the model to make it usable to the application.
Change To button and Dimension Type drop-down list box. Shown only if the Dimension Type value is None. To change the dimension type, select a new dimension type in the Dimension Type drop-down list box, then click Change To.
You need Read access to view model properties and Write access to change model properties.
2 If it is not already selected, select the Browse tab.
3 Select a model and click Properties.
You can view the properties for the model.
If versioning is not enabled, enable it by clicking the Enable button next to Versioning. After versioning is enabled, model management maintains a version history for the model. You cannot disable versioning for a model after you enable it.
Lock or unlock the model by clicking the Lock or Unlock button next to Lock Status. If the Dimension Type value is None, select a new dimension type in the drop-down list box next to the Change To button. After you select a new dimension type, click Change To and accept the resulting confirmation dialog box to invoke the change.
5 Click Close to return to the previous page and save any changes that you have made.
Sharing Data
Shared Services enables you to move data between applications. The method used to move data is called data integration. A data integration specifies the following information:
Source product and application
Destination product and application
Source dimensions and members
Destination dimensions and members
A data integration wizard is provided to facilitate the process of creating a data integration.
Users with Write access to the DataBroker.DataBroker application can create data integrations. Users with Read access to the DataBroker.DataBroker application can run data integrations. Access rights to this application are granted through the Shared Services User Management Console. For more information, see the Hyperion Shared Services User Management Guide. By default, all Shared Services users have full access (Read, Write, and Manage) to all integrations. A data integration can be run manually or scheduled to run at a specific time. Data integrations can also be placed in groups and run sequentially.
To access all data integration functionality, in the View Pane Navigation panel, select Administer to activate the Administer module, then select Manage Data.
Figure 22
A list of integrations is displayed. The list includes names, source applications, and destination applications. An application name identifies a product, application, and a shared application in the form: <Product.Application.Shared Application>, for example, HFM.App1.beta.
Note: When viewing a list of integrations, performance may become slower as you add more integrations and as more users view the list.
Group integrations do not have a source and destination; each integration in a group specifies its individual source and destination. A group icon in the source and destination columns identifies a group integration. The link, View group details, lists the integrations in the group. You can perform any of the following functions from the Integrations page:
Create, edit, or copy an integration (see Creating or Editing a Data Integration on page 137)
Create a data integration group (see Grouping Integrations on page 149)
Delete an integration (see Deleting Integrations on page 144)
Run, or schedule to run, an integration (see Scheduling Integrations on page 145)
A list of integrations is displayed. The integrations for all products and applications are shown by default. For a sample Manage Data window, see Figure 22 on page 135. Two combination boxes, Source and Destination, are displayed above the Filter View button. Each combination box contains two drop-down list boxes, the first to specify a product and the second to specify an application.
Note: A list of integrations is displayed when you create an integration group. If you are creating a group, begin with step 2.
2 Select a product from the product Source or Destination drop-down list box or from both the product
Source and Destination drop-down list boxes.
The second Source or Destination drop-down list box is populated with the applications for the selected product.
4 Click Filter View to update the list based on the selections that you made.
The filter enables the display of integrations that act on a particular source product or application, on a destination product or application, or on a combination of both. For example, if you specify HBM as the source product and Hyperion Planning as the destination product, the list includes all integrations whose source is Hyperion Business Modeling (HBM) or whose destination is Hyperion Planning. The following examples illustrate the different combinations of product and application that you can specify in the Source and Destination combination boxes.
If a source product is specified and the three other drop-down list boxes specify all, the list displays all integrations with the specified source product.
If a source product and a source application are specified and the two destination drop-down list boxes specify all, the list displays all integrations with the specified source application.
If a source product and destination product are specified and the two application drop-down list boxes specify all, the list displays all integrations from the given source product to the given destination product.
If an integration is bidirectional (can be transposed) and either source-to-destination or destination-to-source matches the given products, the integration is listed.
A list of integrations is displayed. For a sample Manage Data window, see Figure 22 on page 135.
If you want to create an integration, click New.
If you want to edit an integration, select an integration and click Edit.
Note: Locking of integration models in edit mode is not supported. As a consequence, it is possible for multiple users to simultaneously open an integration and make changes. If more than one administrator edits the same integration simultaneously, the last one to save takes precedence. The entire integration is overwritten with the last version saved. No warning message is displayed.
If you want to use an existing integration to create a new integration, select an existing integration and click Copy.
Note: Action buttons (New, Edit, Delete, Copy, and Run) that are enabled for a user are defined at the DataBroker application level, not at the model level. However, for existing integration models, the actions that a user can perform are controlled at the model level. For example, if a user has full access rights to the DataBroker application but only Read access to a specific integration model, all buttons are enabled, but when the user tries to edit and save that integration, an error is displayed.
Figure 23
For a new integration, the fields are blank. For an integration to be edited or copied, the fields are populated with existing values.
Figure 24
Source
Bidirectional
System Override
A check box that enables the integration, for performance reasons, not to transfer missing-cell (#missing) values. If the box is checked, to ensure that data is transferred successfully, you must prepare the destination database before running the integration. See Prerequisites for Moving Data Between Applications on page 134 for details.
Scale. A text box for a value that acts as a multiplier for the data. Enter a value with which you want to scale the integration data. For example, to convert data from a positive to a negative value during the data transfer, specify a scale value of -1. Each transferred data value is then multiplied by -1, in effect converting the values to negative values.
Notes. A text box for optional comments and notes.
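Taken together, the missing-value check box and the Scale value amount to a simple per-cell transform on the transferred data. The sketch below is illustrative only, not Hyperion code; it assumes missing cells are represented as None, and the function name and data shapes are invented.

```python
# Sketch of how Scale and the suppress-#missing check box affect the
# data transfer (illustrative only; cell addresses/values hypothetical).

def transform_cells(cells, scale=1.0, suppress_missing=False):
    out = {}
    for address, value in cells.items():
        if value is None:                 # a #missing cell
            if suppress_missing:
                continue                  # not transferred at all
            out[address] = None
        else:
            out[address] = value * scale  # e.g. scale=-1 flips the sign
    return out
```

With scale=-1 and the check box selected, a cell holding 10 arrives as -10 and #missing cells are skipped entirely.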
If you want to specify the shared dimensions, select one or more pairs of dimensions (in source and destination applications) and click Share. A dimension can be shared with only one dimension in the other application. A line is drawn between any two dimensions that are shared.
If you want to unshare any dimensions that are shared by default, select one or more dimensions in either application and click Unshare; or click Unshare All to remove sharing from all dimensions.
If you want to return to the default shared dimensions, click Default.
Note: You are not required to identify every dimension that is in fact identical. The reason to identify shared dimensions is to specify the dimensions for which you want to move a range of members. For any particular integration, if you are interested in only one member for a dimension, you can leave the dimension unshared.
The third page of the wizard enables you to pick ranges of members from the shared dimensions to define the slice of data that will be transferred.
Shared Dimension Members: Dimensions identified as shared on the previous wizard page.
Common POV: Dimensions not identified as shared.

Each POV (point of view) uses the same background POV members and a unique set of dynamic POV members. You specify the dynamic POVs.
If you specify an AllMembers() function, the integration must check all 11 members in this example; however, data is transferred only for 1999, 2000, and 2001, because these years are common to both applications. Warning messages are returned for the other years.
Note: You must select at least one member from each dimension in Common Members or specify a function that identifies a common member.
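The behavior can be sketched with hypothetical Year dimensions (the member lists below are invented for illustration; the real selection is driven by the wizard's member functions):

```python
def select_transfer_members(source_members, destination_members):
    """Return (common, skipped): members present in both applications,
    and members that exist only in the source, for which warning
    messages would be returned."""
    dest = set(destination_members)
    common = [m for m in source_members if m in dest]
    skipped = [m for m in source_members if m not in dest]
    return common, skipped

# Hypothetical Year dimensions: AllMembers() on the source checks every
# year, but only the years common to both applications transfer data.
source_years = [str(y) for y in range(1995, 2006)]   # 11 members
dest_years = ["1999", "2000", "2001"]
common, skipped = select_transfer_members(source_years, dest_years)
print(common)   # ['1999', '2000', '2001']
```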
When using double quotation marks ( " ) and parentheses ( ) in member names in the Create Integration Wizard, follow these guidelines.

Examples of valid member strings: abc abc func(abc) func(abc) func(a,b,c) func(a(b)c) func(a(b)c)

Examples of invalid member strings: func(abc) func(a,b,c) func(a(b)c) func(abc)

If you select invalid member names in the Data Integration Wizard, the wizard automatically adjusts the syntax to be valid before passing the name on. However, if you manually type an incorrect name, the wizard does not correct it, and an error is returned.

The following members may be valid within an application but may behave differently: a,b,c is treated as three members, not one member named a,b,c. Different styles can be mixed in a single shared pair of dimensions value input box, for example: a, b, c, abc, Children(a,b,c), iDescendants(a(b)c), Ancestors(a(bc)
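The quoted-versus-unquoted behavior for commas can be sketched as follows (an illustration only; the wizard's real parser also handles functions and nested parentheses, which this sketch does not):

```python
def split_member_list(text):
    """Split a comma-separated member string, treating a double-quoted
    segment as a single member: "a,b,c" in quotes is one member, while
    a,b,c without quotes is three."""
    members, buf, in_quotes = [], [], False
    for ch in text:
        if ch == '"':
            in_quotes = not in_quotes      # toggle quoted mode, drop the quote
        elif ch == ',' and not in_quotes:
            members.append(''.join(buf).strip())
            buf = []
        else:
            buf.append(ch)
    if buf:
        members.append(''.join(buf).strip())
    return members

print(split_member_list('a,b,c'))      # ['a', 'b', 'c']
print(split_member_list('"a,b,c"'))    # ['a,b,c']
```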
d. Next to a dimension in the destination application, select the magnifying glass.
e. From the list of members, select a single member.
f. Repeat steps d and e for all other dimensions in the destination application for which you want a member selected.
Note: You can leave background POV dimensions blank if the application does not require a value for them.
g. Click the Dynamic POV icon next to a dimension to move the dimension from the Background POV area to the Dynamic POV area.
h. Click Add to create a POV that is based on the static and dynamic members that you have selected.
i. Optional: If you want to create another POV, select a different member and click Add. You can repeat this step, selecting different members for the dynamic POV and clicking Add for each selection. The numbering in the lower right corner identifies the POV, for example, POV 3 of 5. You can navigate to each POV by using the left and right arrow keys. You can also move the dimension in the Dynamic POV area back to the Background POV area, move a different dimension to the dynamic area, and create another set of POVs.
j. Optional: If you want to replace the content of any existing POV that you have access to, complete the following steps.
i. Use the arrow keys in the lower right corner to navigate to a POV.
ii. Change the content in one of the Dynamic POV areas.
iii. Click Replace.

When the integration is run, it copies the data from the dimension member or members in the source application list to the dimension member or members in the destination application list.
9 Optional: If you want to see a list of POVs, click View All.
10 Optional: If you want to remove a POV, complete the following steps.
a. Click the left (<) or right (>) paging icon to navigate to a POV. b. Click Remove.
11 Save the integration, or cancel the changes that you made by taking one of the following actions:
Click Save to save the integration. The Create Integration window remains open. You can make additional changes to the integration and save it again when finished.
Click Save and Close. The integration is saved and the list of integrations is displayed. To schedule the new integration to run, see Scheduling Integrations on page 145.
Click Save and Run. The integration is saved and the page to schedule an integration to run is displayed; see Scheduling Integrations on page 145.
Click Close. Any changes that you made since the last save are lost. A new integration that has not been saved is not created.
Note: Case sensitivity in integration and integration group names is handled differently depending on the relational database. In Oracle configurations, if you save a new integration or group whose name consists of the same characters as an existing one but in a different case, such as ABC versus Abc, you are prompted to overwrite the existing one. If you continue, two integrations exist: Abc with the old contents and a new integration or group named ABC with the new contents. In non-Oracle configurations, if you try to overwrite Abc with ABC, an initial warning about overwriting is displayed. If you continue, an exception states that the name already exists, and you are forced to select a new name.
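The difference can be sketched as a name-conflict check (hypothetical code; the actual comparison is performed by the repository database, and case-sensitive behavior here stands in for the Oracle case described above):

```python
def find_conflict(existing, new_name, case_sensitive):
    """Return the existing integration or group name that conflicts with
    new_name, or None if there is no conflict."""
    if case_sensitive:
        matches = [n for n in existing if n == new_name]
    else:
        matches = [n for n in existing if n.lower() == new_name.lower()]
    return matches[0] if matches else None

# Case-sensitive matching: 'ABC' does not collide with 'Abc', so both
# names can end up stored side by side.
print(find_conflict(["Abc"], "ABC", case_sensitive=True))    # None
# Case-insensitive matching: the names collide and a rename is forced.
print(find_conflict(["Abc"], "ABC", case_sensitive=False))   # Abc
```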
Deleting Integrations
You can delete integrations that are no longer useful.
To delete an integration:
1 In the View Pane Navigation panel, select Administer to activate the Administer module, then select Manage Data.
Click OK to delete the selected integration or integrations, or click Close to cancel the delete operation.
Scheduling Integrations
You can run an integration immediately or schedule it to run at a particular date and time. You can also place an integration in a group and schedule the group to run. See Grouping Integrations on page 149 and Scheduling Group Integrations on page 151.
3 Optional: If the integration is bidirectional, you can reverse the source and destination applications.
Selecting an application in the Source or Destination drop-down list box automatically displays the other application in the remaining list box.
Note: If the source and destination applications are the same, it can be difficult with a bidirectional integration to know in which direction the data is being moved. The first entry in the Source drop-down list box is the original, default source application.
4 Click Run.
A popup window is displayed to schedule the integration to run. Figure 25 shows a sample Run Integration window.
Figure 25
6 To schedule the integration to run at a particular time, click Schedule for and scroll to select the month, day, and time in the drop-down list boxes.
The integration you scheduled is added to the list of scheduled integrations. For information on viewing scheduled integrations, see Viewing the Status of an Integration on page 146.
View the status of a running, completed, or failed integration; see Viewing the Status of an Integration on page 146.
Cancel a running integration; see Canceling an Integration on page 147.
Run a copy of an integration; see Copying an Integration to Run on page 147.
Reschedule an integration; see Rescheduling an Integration on page 148.
Remove an integration from the list of scheduled integrations; see Removing an Integration on page 148.
Figure 26
The Status column indicates whether an integration is pending, running, completed, or failed.
To view details about a completed or failed integration, click the Failed or Completed link in
the Status column.
Note: Data integrations that contain members with parentheses in the name, for example Account1(), will fail. If this is the reason for the failure, you will see an Unknown function name Account1 error.
Canceling an Integration
You can cancel an integration that is scheduled to run or in progress (running).
To cancel an integration:
1 Click Workspace > Scheduled Integrations.
2 Select an integration.
You can select a single integration only.
3 Click Cancel.
A confirmation message is displayed.
5 To schedule the integration to run at a particular time, click Schedule for and scroll to select the month,
day, and time in the drop-down list boxes.
The integration you scheduled is added to the list of scheduled integrations. You can schedule an integration multiple times, which results in the integration being listed multiple times on this page.
Rescheduling an Integration
You can reschedule an integration that is waiting to run, changing the date or time at which it runs.
To reschedule an integration:
1 Click Workspace > Scheduled Integrations.
2 Select an integration.
You can select a single integration only.
5 To schedule the integration to run at a particular time, click Schedule for and scroll to select the month,
day, and time in the drop-down list boxes.
Removing an Integration
You can remove an integration that is pending to run or one that has already run (completed or failed).
To remove an integration:
1 Click Workspace > Scheduled Integrations.
2 Select an integration.
You can select multiple integrations to remove.
3 Click Remove.
A confirmation message is displayed.
OK to remove the integration or integrations you have selected Close to cancel the operation
Note: In some cases, attempting to remove an integration or group that has already been run from the Scheduled Integrations page results in a blank screen. In these cases, click your browser's Back button and refresh the screen using F5 or the browser's Refresh button.
Grouping Integrations
You can create groups of integrations to run at the same time. Before creating a group, you must first create individual integrations that can be added to a group; see Creating or Editing a Data Integration on page 137. In the group, you specify the order in which to run the integrations.
To create a blank new group, click New Group. A Create Integration Group page with blank fields is displayed.
To create a new group with a list of integrations, select one or more integrations from the list of saved integrations, and click New Group. A Create Integration Group page with populated fields is displayed.
To edit an existing group, select the group and click Edit. A Create Integration Group page with populated fields is displayed.
Figure 27
3 Type a name for the group, or change the name for an existing group.
The name must be unique among existing group and integration names.
4 Optional: Type or change comments in the notes field.
5 Click Next to go to the next page.
Note: If you click Save or Save and Close, the group (name and notes) is saved. You can edit the group later and add integrations.
The selected integrations are copied, not moved, to Selected Integrations. You can add an integration multiple times if you want to run it more than once.
8 Optional: If you are editing an existing group, or if you add integrations that you want to remove, select one
or more integrations in Selected Integrations and click Remove to remove them from the group.
You can click Remove All to remove all integrations from the group. Integrations are run in the order that they are shown in Selected Integrations.
9 Optional: Select an integration and click the up or down arrow keys to move the integration up or down in
the list to change the order in which it is run.
10 Save the group, or cancel the changes you have made by taking one of the following actions:
Click Save to save the group. The Create Integration Group window remains open. You can make additional changes to the group and save it again when finished.
Click Save and Close. The group is saved and the list of integrations is displayed. To schedule the new group to run, see Scheduling Group Integrations on page 151.
Click Save and Run. The group is saved and the page to schedule a group to run is displayed; see Scheduling Group Integrations on page 151.
Click Close. Any changes you made since the last save are lost. If it is a new group and it has not been saved yet, no group is created.
1 In the View Pane Navigation panel, select Administer to activate the Administer module, then select Manage Data.
3 Click Run.
A page is displayed to schedule the group to run. Figure 28 shows a sample Run Group Integration window.
Figure 28
5 To schedule the group to run at a particular time, click Schedule for and scroll to select the month, day,
and time in the drop-down list boxes.
The group you scheduled is added to the list of scheduled integrations. For information on viewing scheduled integrations, see Viewing the Status of an Integration on page 146.
Note: If one of the integrations within a group encounters an error while running, the entire group stops running.
Chapter 6
Automating Activities
As an administrator, you can automate activities related to Interactive Reporting, Production Reporting, and generic jobs, as well as schedules and the physical resources used for job output.
In This Chapter Managing Calendars . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154 Managing Time Events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158 Administering Public Job Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159 Managing Interactive Reporting Database Connections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159 Managing Pass-Through for Jobs and Interactive Reporting Documents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160 Managing Job Queuing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
Managing Calendars
You can create, modify, and delete custom calendars using Calendar Manager. You can create calendars to schedule jobs based on fiscal or other internal or organizational calendars. Jobs scheduled with custom calendars resolve dates and variable date limits against quarterly and monthly dates specified in the custom calendars, rather than the default calendar. Topics that provide information on Calendar Manager:
Viewing Calendar Manager on page 154
Creating Calendars on page 154
Deleting Calendars on page 155
Modifying Calendars on page 155
Calendar Manager Properties on page 155
Viewing the Job Log on page 156
Creating Calendars
By default, Calendar Manager uses the standard Gregorian calendar, which cannot be modified except for holiday designations and the starting day of the week.
To create a calendar:
1 Invoke Calendar Manager.
2 Select Calendars from the left navigation pane.
3 Enter a name for the calendar.
4 Enter information on the Calendar Manager windows, clicking Save on each window.
You must select New Year and enter a year before you can save the calendar. For field information, see Calendar Manager Properties on page 155.
Deleting Calendars
You can delete whole calendars or individual calendar years.
Modifying Calendars
You can modify or add years to calendars.
To modify calendars:
1 In Calendar Manager, navigate to a calendar.
Select a calendar name to view calendar properties. Select a year to modify periods or years and non-working days. When modifying periods or years, be sure the dates for weeks or periods are consecutive.
2 Select New Year to add a year to this calendar, and modify properties.
3 Click Save.
Calendar Properties on page 155
Custom Calendar Periods and Years Properties on page 156
Custom Calendar Non-Working Days Properties on page 156
Calendar Properties
Calendar Name: Name cannot be changed after it is saved.
User Defined Weeks: Enables selection of the week start day. The default week contains seven days and is not associated with other time periods. User-defined weeks can be associated with periods, quarters, or months, but cannot span multiple periods. Start and end dates cannot overlap and must be sequential.
Week Start: If using user-defined weeks, select a starting day for the week.
New Year: Any year is valid if no other years are defined. If this is not the first year defined, the year entered must be sequential.
Quarter/Period/Week: The system automatically assigns sequential numbers to quarters. All calendars contain 12 periods.
Start and End: Enter initial Start and End dates. The system automatically populates the remaining periods and start and end dates, and assigns quarters logically. After the fields are populated, you can edit start and end dates, which cannot overlap and must be sequential.
Days of the week: Selecting days of a week populates the calendar automatically. You can select non-working days by day or by day of the week.
Calendar: The calendar reflects the day starting the week as previously selected. Clicking the arrows moves the calendar forward or back one month. You indicate working and nonworking days on a day-by-day basis by selecting and deselecting days.
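The auto-population of periods can be sketched as follows (a simplified illustration assuming all 12 periods have the same length as the first and three periods per quarter; Calendar Manager's actual rules for period lengths and quarter assignment may differ):

```python
from datetime import date, timedelta

def populate_periods(first_start, first_end, count=12):
    """Given the initial Start and End dates of period 1, populate the
    remaining periods with consecutive, non-overlapping date ranges and
    assign each period to a quarter (three periods per quarter here)."""
    length = (first_end - first_start).days + 1
    periods, start = [], first_start
    for i in range(count):
        end = start + timedelta(days=length - 1)
        periods.append({"period": i + 1, "quarter": i // 3 + 1,
                        "start": start, "end": end})
        start = end + timedelta(days=1)  # next period starts the day after
    return periods

cal = populate_periods(date(2006, 1, 1), date(2006, 1, 28))
print(cal[1]["start"])       # 2006-01-29 -> consecutive, non-overlapping
print(cal[3]["quarter"])     # 2 -> period 4 opens the second quarter
```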
3 Click OK to retrieve the log (see Job Log Entries on page 157).
2 Select All users, or select User and enter a user name.
3 Click OK.
Managing Public Recurring Time Events on page 158
Creating Externally Triggered Events on page 158
Triggering Externally Triggered Events on page 159
4 Set access control (see the Hyperion System 9 BI+ Workspace Users Guide) to enable roles, users, or
groups to view and use the public recurring time event.
5 Click Finish.
Scheduled Jobs on page 160
Background Jobs on page 161
Foreground Jobs on page 161
Scheduled Jobs
Scheduled jobs are queued when all Job Services are processing the maximum number of concurrent jobs defined. The queue is maintained by Event Service. Schedules in the queue are sorted by priority and by the order in which they are triggered.

When a schedule is ready for processing, Event Service builds the job and submits it to Service Broker. Service Broker gets a list of all Job Services that can process the job and checks availability based on the number of concurrent jobs that each Job Service is processing. This information is obtained dynamically from each Job Service.

If Service Broker cannot find a Job Service to process a job, it returns a Job Limit Reached exception, which enables queuing in Event Service. The schedule is added to the queue, and job data (including job application and executable information) for selecting a Job Service is cached.

When the next schedule is ready for processing, Event Service builds the job and determines whether that job type is in the queue (based on cached job data). If the job type matches, the job is added to the queue. If not, the job is submitted to Service Broker for processing.
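The queuing decision described above can be sketched as follows (class names, fields, and return values here are illustrative only, not the actual service APIs):

```python
class JobLimitReached(Exception):
    """Raised in this sketch when no Job Service can accept the job."""

class ServiceBroker:
    def __init__(self, job_services):
        self.job_services = job_services  # each: {"name", "running", "limit"}

    def submit(self, job):
        # Check availability based on each Job Service's concurrent-job count.
        for svc in self.job_services:
            if svc["running"] < svc["limit"]:
                svc["running"] += 1
                return svc["name"]
        raise JobLimitReached(job["type"])

class EventService:
    def __init__(self, broker):
        self.broker = broker
        self.queue = []            # schedules waiting for a Job Service
        self.cached_types = set()  # cached job data, keyed by job type here

    def dispatch(self, job):
        # A job whose type is already queued joins the queue directly.
        if job["type"] in self.cached_types:
            self.queue.append(job)
            return "queued"
        try:
            return self.broker.submit(job)
        except JobLimitReached:
            # No Job Service available: queue the schedule and cache job data.
            self.queue.append(job)
            self.cached_types.add(job["type"])
            return "queued"

busy = EventService(ServiceBroker([{"name": "JS1", "running": 2, "limit": 2}]))
print(busy.dispatch({"type": "SQR"}))   # 'queued' -> JS1 is at its job limit

free = EventService(ServiceBroker([{"name": "JS1", "running": 0, "limit": 2}]))
print(free.dispatch({"type": "SQR"}))   # 'JS1'
```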
When Event Service queuing is enabled, a Job Service polling thread is initialized that checks for available Job Services. If one is available, Job Service processes the first schedule it can, based on job data cached in Event Service. Scheduled job data is removed from the cache after the schedule is submitted to Job Service. Modified job properties are used only if the changes were made after the schedule was activated and added to the queue.

Scheduled jobs are managed through the Schedule module (see the Hyperion System 9 BI+ Workspace Users Guide).
Background Jobs
If a Job Service is not available to process a background job (which means job limits are reached), a command is issued to Event Service to create a schedule with a custom event that runs at that time. This command persists schedule information in the database. The schedule uses job parameters associated with the background job, and Event Service processes the job as it does other scheduled jobs.
Foreground Jobs
If a Job Service is not available to process a foreground job, an exception occurs notifying the user that Job Service is busy. The user is given the option to queue the job for processing by the next available Job Service. If the user decides to queue the job, a schedule is created with a custom event that runs at that time, and Event Service processes the job as it does other scheduled jobs. The schedule and event are deleted after the job is submitted to Job Service.
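A minimal sketch of this fallback logic for background and foreground jobs (function names and return values are illustrative, not actual Workspace APIs):

```python
class JobServiceBusy(Exception):
    """Raised in this sketch when every Job Service is at its job limit."""

def submit_to_job_service(available_slots, job):
    if available_slots <= 0:
        raise JobServiceBusy(job)
    return "running"

def run_job(available_slots, job, kind, user_agrees_to_queue=False):
    """When no Job Service can take the job, a background job is silently
    converted to a schedule with a custom event, while a foreground job
    queues only if the user opts in."""
    try:
        return submit_to_job_service(available_slots, job)
    except JobServiceBusy:
        if kind == "background" or user_agrees_to_queue:
            return "scheduled"   # Event Service will run it like any schedule
        return "rejected"        # user declined to queue the foreground job

print(run_job(0, "report.sqr", kind="background"))   # scheduled
print(run_job(0, "report.sqr", kind="foreground"))   # rejected
print(run_job(1, "report.sqr", kind="foreground"))   # running
```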
Chapter 7
Administering Content
This section explains administrative tasks associated with system content stored in the repository.
In This Chapter Organizing Items and Folders . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164 Administrating Pushed Content . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164 Administering Personal Pages. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
A hidden folder named System is designed for administrator use. It is visible only to administrators, and only when hidden items are revealed. Use System to store files you do not want users to see, such as icon files for MIME types. You cannot rename, delete, or move the System folder.
Configuring the Generated Personal Page on page 165
Understanding Broadcast Messages on page 166
Providing Optional Personal Page Content to Users on page 168
Displaying HTML Files as File Content Windows on page 168
Configuring Graphics for Bookmarks on page 168
Configuring Exceptions on page 169
Viewing Personal Pages on page 169
Publishing Personal Pages on page 169
Configuring Other Personal Pages Properties on page 169
One Broadcast Messages content window with links to all items in /Broadcast Messages
One Broadcast Messages file content window for each displayable item in /Broadcast Messages
One content window for each of the first two pre-configured folders
The first (as sorted) displayable HTML item in any pre-configured folder
My Bookmarks content window
Exceptions Dashboard content window
You can customize items included by default by setting Generated Personal Page properties in Servlet Configurator (see Personal Pages: Generated Properties on page 214).
Set Generated Personal Page properties in Servlet Configurator. Populate /Broadcast Messages with combinations of nondisplayable items for which links display on the generated Personal Page, and displayable HTML files or external links, whose content displays there. All these items appear as links and constitute one content window under the Broadcast Messages heading. Some displayable items may be displayed as file content windows, depending on configuration settings in Generated Personal Page properties.
In /Broadcast Messages, create pre-configured subfolders that are displayed when users first log on. Populate these folders with displayable HTML items and nondisplayable items. Each pre-configured folder has a corresponding content window that contains links to all items in the folder. Each displayable item is displayed as a file content window.
Tip: As with any content, only users with required access privileges can see items and folders in /Broadcast Messages and other pre-configured folders. To tailor the generated page for groups, put folders and items intended for those groups in /Broadcast Messages and pre-configured folders, and assign access privileges to the target groups. For example, if each group accesses different subsets of pre-configured folders, then users in each group see different content windows when they first log on.
One content window that displays links to all items in /Broadcast Messages
File content windows for each displayable item in /Broadcast Messages
Unlike other content window types, Broadcast Messages cannot be deleted from users' Personal Pages. If users make another page their default Personal Page, Broadcast Messages remain on the originally generated Personal Page. Users can delete the generated page only if they added the /Broadcast Messages folder to another Personal Page. (A user can acquire multiple pages containing the Broadcast Messages by copying pushed Personal Pages.)
2 Select File > New Folder.
3 Enter a folder name and click OK.
The folder you created is displayed in /Broadcast Messages in Viewer.
Configuring Exceptions
Configuring Exceptions

To enable exceptions to be added to the Exceptions Dashboard, select the Advanced Option Allow users to add this file to the Exceptions Dashboard when importing through Viewer. For information on how users can add exception-enabled jobs or items to their Exceptions Dashboard, see the Hyperion System 9 BI+ Workspace Users Guide.

To give jobs exceptions capability, you must design jobs (usually, Production Reporting programs or Interactive Reporting jobs) to write exceptions to the output.properties file. See the Hyperion System 9 BI+ Workspace Users Guide. For programmers' information about supporting exceptions in jobs, see the Hyperion System 9 BI+ Workspace Users Guide.
Color schemes
Maximum number of Personal Pages
Visibility of content window headings (colored bars that resemble title bars)
Chapter 8
Using RSC

Administrators configure RSC services and their properties using RSC.
In This Chapter About RSC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172 Managing Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174 Modifying RSC Service Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175 Managing Hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182 Managing Repository Databases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183 Managing Jobs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189 Using the ConfigFileAdmin Utility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
About RSC
RSC is a utility that enables you to manage remote services, known as RSC services. RSC can configure services on all hosts of a distributed Workspace system. RSC modifies the config.dat file that resides on the target host. You can run RSC from all server hosts in the system. In addition to modifying services, you can use RSC for these tasks:

Adding, deleting, and modifying hosts
Adding, deleting, and modifying database servers
Changing the database password used by RSC services
To remove RSC services, use the ConfigFileAdmin utility (see Using the ConfigFileAdmin Utility on page 190).
Starting RSC
To start RSC:
1 Start Service Configurator.
Windows: Select Start > Programs > Hyperion System 9 BI+ > Utilities and Tools > Service Configurator. UNIX: Run ServiceConfigurator.sh, installed in Install Home/bin.
2 From the Service Configurator toolbar, select Module > Remote Service Configurator, or click the RSC icon.
Logging On to RSC
To log on to RSC, enter the requested information:
Administrative user ID
Password for user name
Workspace host of the services to configure
Workspace port number for the server host
Using RSC
When you first log on to RSC, the services that are installed on the host that you are logged on to, and basic properties of the highlighted service, are displayed. Toolbar icons represent functions you perform using RSC.
Table 15 RSC Toolbar Icons

Exit Remote Service Configurator: Closes RSC after user confirmation
Updates the list of services and basic properties of the selected service
Ping service: Checks whether a service is alive
Displays the Defined Hosts window, where you define, delete, or modify hosts
Displays the Defined Database Servers window, where you add, delete, and modify database servers
Deletes a service after user confirmation
Managing Services
With RSC, you can modify properties or delete installations of these services:
Event Service
Job Service
Name Service
Repository Service
Service Broker

Adding RSC Services on page 174
Deleting RSC Services on page 174
Pinging RSC Services on page 175
If the service is not responsive, a message is displayed indicating that ping could not connect to the service; for example:
A Brio.Portal error occurred in Ping: ConnectionException: Connection refused: connect
This indicates that the service is not running. If you receive this error, refer to the service log file to investigate why the error occurred.
Common RSC Properties on page 176
Job Service Properties on page 178
Note: RSC services not mentioned explicitly in this section have only common properties.
General RSC Properties on page 176
Advanced RSC Properties on page 176
RSC Storage Properties on page 177
Description: Brief description of the service.
Host: Host on which the service resides. You can select or define a host. If you define a host, enter a name that makes the service easily identifiable within your organization. The maximum number of characters allowed is 64. See Managing Hosts on page 182.
IP Port: Service IP port number. The wizard assigns a unique port to each service. Even if you install multiple services of one type (Job Service, for example) on one host, the wizard automatically enters a unique IP port number for each one.
Directory: Location where the service resides. Adopt a convention for naming the directories where you store service information. For example, for an Event Service named ES_apollo, the directory might be j:\Brio\Brio8\server\ES_apollo.
Note: Changes to Host, IP Port, and Directory properties do not take effect until the service is restarted.
Log Levels: Level at which service errors are logged. See Configuring Logging Levels on page 229. A change to this property takes effect immediately; therefore, when errors occur and you want more debugging information, you can change the logging level without restarting the service.
Max Connections: Maximum number of connections allowed. Consider memory allocation for the connections you allow. You must increase the maximum number of file descriptors on some systems, such as UNIX. A change to this property takes effect immediately. Changing the Max Connections setting without restarting the service is useful for dynamically tuning the service at run time.
Name Service: General configuration information, such as lists of hosts and database servers
Repository Service: Workspace content metadata
Event Service: Schedules and subscriptions
Service Broker and Job Service do not have storage properties. Data for all these services is stored in the repository database, for which storage properties define connectivity:
DB Driver: Name of the driver used to access the database. This is database-dependent and should only be changed by an experienced administrator. If you change DB Driver, you must change other files, properties, data in the database, and the Java classpath. See Changing the Repository Database Driver or JDBC URL on page 187.
JDBC URL: URL for Java access to the database using the JDBC driver. The services use this URL to connect to the database server. If you change JDBC URL, you must change other files, properties, and data in the database. For details, see Changing the Repository Database Driver or JDBC URL on page 187.
User Name: User name for the database account. All services should use one database account.
Password: Password for the database account.
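As a purely hypothetical illustration (driver class names and URL formats vary by database vendor and version; consult your JDBC driver documentation for the actual values in your installation), the storage properties for an Oracle-backed repository might resemble:

```text
DB Driver:  oracle.jdbc.driver.OracleDriver
JDBC URL:   jdbc:oracle:thin:@dbhost.example.com:1521:HYPDB
User Name:  workspace_repo
Password:   ********
```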
Caution! Workspace supports only configurations in which all services connect to one database. For this reason, change the settings on this tab only if you are an experienced Workspace administrator; otherwise, request assistance from Hyperion Solutions Customer Support.
Storage property settings should rarely be changed. Circumstances that would require changes include, for example, reassignment of host names on your network, changes to a database user account (name or password), or a change of database type (such as from Oracle to Sybase). Such changes require extensive configuration changes to external systems as well.
Job Service Dynamic Properties on page 178
Job Service Database Properties on page 178
Job Service Production Reporting Properties on page 179
Job Service Application Properties on page 179
Executable Job Service Properties on page 182
When you modify properties of Job Service, the service receives change notifications and updates its configuration immediately. Properties used while the service is running take effect immediately. Such properties include Max Connections, Logging Levels, and all properties on the Database, Production Reporting, Application, and Executable tabs. Properties only used at start time, however, do not take effect until the next time Job Service starts. Such properties include Directory, Log File, and IP Port.
Job Limit: Maximum number of concurrent jobs to be run by Job Service. If this value is 0 or -1, an unlimited number of concurrent jobs can run. Changes made to Job Limit are picked up by Job Service dynamically, without a restart.
Hold: Determines whether Job Service can accept jobs for processing. When set to true, Job Service continues to process jobs that are already running, but does not process any new jobs.
Both properties can be changed without restarting Job Service. Only Job Service has Dynamic properties.
To delete a database's connectivity from Job Service, click Delete.
To modify the connectivity properties of a database:
1 Select a database from the list and click Modify.
2 Modify or create environment variables using Name and Value.
Application: Name of the application. Select an application or add one. All applications defined in Workspace are listed. Applications can have multiple executables, each on a different Job Service, to distribute the load.
Description: Optional read-only description of the application. Click Modify to change the description.
Command String: Read-only command string to pass to the application when it runs. Click Modify to change the command string.
You can add applications to Job Service, delete applications that have no associated executables, and modify application properties by clicking the corresponding button. The Add button is available only when you must define executables for applications (see Adding Applications for Job Service on page 180). After you add applications, you must define their executable properties (see Executable Job Service Properties on page 182).
179
To add applications:
1 Display the Job Service application properties.
2 Click Add to open Application Properties.
3 Supply a name and description.
4 Enter a command string to pass to the application when it runs.
Use one of these methods:
Select a predefined template.
Enter a command string in the field provided.
Build a command string using command tokens.
5 Click OK, then click the Executable tab to define the executable properties for the application.
See Executable Job Service Properties on page 182.
Command Tokens
You can use command tokens to build command strings to pass to applications when they run:
$CMD: Full path and name of the executable.
$PARAMS: Parameters defined for the program. You can set prompt and default values for the parameters.
$PROGRAM: Program to run. Examples of programs include shell scripts, SQL scripts, and Oracle Reports.
$BPROGRAM: Program name with the file extension removed. Use this in combination with hardcoded text to specify a name for an error file, a log file, or another such file. An example would be log=$BPROGRAM.log.
$FLAGS: Flags associated with the program.
$EFLAGS: Flags associated with the executable or an instance of it. All jobs associated with the executable use these flags.
$DBCONNECT: Database connect string associated with the program. If set, end users cannot specify a connect string at runtime.
$DBUSERNAME: Database user name associated with the program. If set, end users cannot specify a user name at runtime.
$DBPASSWORD: Database password associated with the program. If set, end users cannot specify a password at runtime.
$BPUSERNAME: User name. If the user name is required as an input parameter to the job, specifying this token instructs the system to include the user name in the command line automatically, rather than prompting the user.
Example 1
Command string template that runs Oracle Reports:
$CMD userid=$DBUSERNAME/$DBPASSWORD@$DBCONNECT report=$PROGRAM destype=file desname=$BPROGRAM.html batch=yes errfile=$BPROGRAM.err desformat=html
When the tokens in the above command string are replaced with values, the command executed by Job Service looks like this:
r30run32 userid=scott/tiger@Brio8 report=inventory destype=file desname=inventory.html batch=yes errfile=inventory.err desformat=html
Example 2
Command string template that runs shell scripts on a Job Service running on UNIX:
$CMD $PROGRAM $PARAMS
When the tokens in the above command string are replaced with values, the command executed by Job Service looks like this:
sh runscript.sh p1 p2 p3
Example 3
Command string template that runs batch files on a Job Service running on a Windows system:
$PROGRAM $PARAMS
When the tokens in the above command string for running batch files are replaced with values, the command executed in the Job Service looks like this: Runbat.bat p1 p2 p3
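The substitution the examples above perform can be sketched in a few lines of shell. The variable values below come from Example 1 and Example 2, but the script itself is an illustration of token expansion, not actual Job Service code:

```shell
# Illustrative sketch of command-token substitution; not product code.
CMD="r30run32"                 # $CMD: full path and name of the executable
PROGRAM="inventory"            # $PROGRAM: program to run
DBUSERNAME="scott"             # $DBUSERNAME: database user name
DBPASSWORD="tiger"             # $DBPASSWORD: database password
DBCONNECT="Brio8"              # $DBCONNECT: database connect string

# $BPROGRAM is the program name with any file extension removed:
SCRIPT="runscript.sh"
BPROGRAM="${SCRIPT%.*}"        # strips ".sh", leaving "runscript"

# A template keeps the tokens literal until substitution time:
TEMPLATE='$CMD userid=$DBUSERNAME/$DBPASSWORD@$DBCONNECT report=$PROGRAM'
COMMAND=$(eval echo "$TEMPLATE")
echo "$COMMAND"                # -> r30run32 userid=scott/tiger@Brio8 report=inventory
echo "$BPROGRAM.log"           # -> runscript.log
```

This mirrors how $BPROGRAM can be combined with hardcoded text (here, ".log") to name a log file.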
Executable: Location of the executable program for the application (full path and executable name); must be co-located with Job Service.
Flags: Value used in the command line for the token $EFLAGS, which represents the flags associated with the program.
Environment Variables: Environment variables associated with the application; for example, $PATH or $ORACLE_HOME.
Managing Hosts
The Defined Hosts dialog box lists the currently defined hosts in Workspace and identifies the host name and platform. Topics that describe how to add, modify, and delete hosts:
Adding Hosts on page 182
Modifying Hosts on page 183
Deleting Hosts on page 183
Adding Hosts
After you install services on a computer, you must add the computer as a host in Workspace.
To add hosts:
1 Click the hosts icon, and click Add.
Caution! The host name cannot start with numerals. Hyperion Interactive Reporting Data Access Service and Hyperion Interactive Reporting Service do not work if host names start with numerals.
3 Click OK.
Workspace pings the host to make sure it is on the network. If the ping fails, an error message is displayed. After Workspace successfully pings the host and validates the host name, Workspace adds the host and lists it in the Defined Hosts dialog box.
4 Click OK.
Note: If you change the host name, you must restart the Workspace services and Job Service for the change to take effect.
Modifying Hosts
You modify a host to change its platform designation.
To modify hosts:
1 Click the hosts icon.
2 Select a host from the list, and click Modify.
3 Select a platform for the host, and click OK.
Deleting Hosts
You cannot delete a host if services are installed on it.
To delete hosts:
1 Click the hosts icon.
2 Select a host from the list and click Delete.
3 When prompted, click Yes to delete the host, and click OK.
Defining Database Servers on page 184
Changing the Services Repository Database Password on page 187
Changing the Repository Database Driver or JDBC URL on page 187
Database Server Properties on page 184
Adding Database Servers on page 184
Adding Job Service Database Connectivity on page 185
Modifying Database Servers on page 185
Deleting Database Servers on page 186
Name: Alphanumeric name, at least five characters long, for the database server you want to add.
Database type: Type of database server you are using.
Host: Host where the database server resides.
User name: Default user name used by Job Service for running Production Reporting programs on the database server; used if the job owner does not supply a database user name and password when importing a given job.
Password: Valid password for the user name.
4 Click OK.
Connectivity information: The information needed depends on the database type. For example, for an Oracle database, enter a connect string.
Environment variables: Required only to execute Production Reporting jobs against the database. Used to specify database information and shared library information that may be required by Production Reporting. For example: name=ORACLE_SID, value=PAYROLL.
6 Click OK.
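The environment-variable entry shown above (name=ORACLE_SID, value=PAYROLL) amounts to settings like the following in the environment of the Job Service host when the Production Reporting job runs. The ORACLE_HOME path here is an assumed example, not a documented default:

```shell
# Illustrative only: environment a Production Reporting job might need
# when run against an Oracle database. Values are example assumptions.
ORACLE_SID=PAYROLL
ORACLE_HOME=/opt/oracle/product/10.2
export ORACLE_SID ORACLE_HOME
echo "$ORACLE_SID"
```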
2 Select a database server from the list and click Modify.
3 Make changes as necessary (see Database Server Properties on page 184), and click OK.
2 Select a database server from the list and click Delete.
3 When prompted, click Yes to verify database deletion, and click OK.
Caution! Make sure to change the password in Workspace before changing it in the database. If you perform the steps in the wrong order, you may lose the ability to run Workspace.
If these services use different database accounts, perform this step only for those that use the account whose password you are changing.
5 Close RSC.
6 In LSC, click Show host properties, and select the Database tab.
7 Change the password and click OK.
This password property (like the other properties on the Database tab) applies to all LSC services on the local host, all of which use one database account. For more information about LSC, see Chapter 9, Configuring LSC Services.
8 Repeat step 6 and step 7 on every host that contains LSC services, making certain to enter the password the same way each time.
9 If you are using the same database for row-level security, change the password for row-level security from the Administer module.
10 Stop the Workspace services.
11 Change the password in the database, making certain it matches the password entered for the Workspace services.
Caution! If you perform steps in the wrong order, you may lose the ability to run Workspace.
If parts of the JDBC URL change, such as the database server name, port number, or SID, you must update the JDBC URL property. To do so, perform the JDBC URL portions of the instructions.
8 Type 4 to select Modify Name Server Data.
9 As the program prompts you for each property, refer to the listing you just displayed, and enter the same values for all properties except Name Server JDBC URL and Name Server JDBC Driver.
10 Enter the values for Name Server JDBC URL and Name Server JDBC Driver properties; for example:
Name Server JDBC URL: jdbc:brio:oracle://brio8host:1521;SID=brio8
Name Server JDBC Driver: com.brio.jdbc.Oracle.OracleDriver
For example:
update v8_jdbc set jdbc_driver= 'com.hyperion.jdbc.Oracle.OracleDriver', jdbc_url='jdbc:hyperion:oracle://hyperionhost:1521;SID=hyperion'
13 Add a JDBC driver to Hyperion Home\common\JDBC and set BP_DBDRIVER to the full path of the JAR files.
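Before editing the related files and properties, it can help to double-check the pieces of the new URL (server name, port, SID). A small shell sketch, using the example URL from the SQL statement above; the parsing is illustrative, not a product utility:

```shell
# Hedged sketch: pulling the host, port, and SID out of a Hyperion-style
# JDBC URL with shell parameter expansion. The URL is the example above.
URL='jdbc:hyperion:oracle://hyperionhost:1521;SID=hyperion'
HOSTPORT=${URL#*//}            # strip through "//" -> hyperionhost:1521;SID=hyperion
HOSTPORT=${HOSTPORT%%;*}       # drop ";SID=..."    -> hyperionhost:1521
DBHOST=${HOSTPORT%%:*}         # -> hyperionhost
DBPORT=${HOSTPORT##*:}         # -> 1521
DBSID=${URL##*SID=}            # -> hyperion
echo "$DBHOST $DBPORT $DBSID"  # -> hyperionhost 1521 hyperion
```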
Managing Jobs
Job Service compiles and executes content-creation programs or jobs. Job Service listens for Workspace job requests (such as requests initiated by users from the Scheduler module), manages program execution, returns the results to the requester, and stores the results in the repository. Three job types that Workspace can store and run:
Interactive Reporting: Jobs created with Interactive Reporting Studio.
Production Reporting: Secure or nonsecure jobs created with Production Reporting Studio.
Generic: Jobs created using other applications (for example, Oracle Reports or Crystal Reports) through a command line interface.
For Interactive Reporting jobs, no special configuration is necessary; every Job Service is preconfigured to run Interactive Reporting jobs. For users to run Production Reporting or generic jobs, you must configure a Job Service to run the report engine or application program. One Job Service can run multiple types of jobs, as long as it is configured for each type (except Interactive Reporting). Topics that explain how to configure Job Service to run jobs:
Optimizing Enterprise-Reporting Applications Performance on page 189 From Adding Job Services to Running Jobs on page 190
See also Adding Applications for Job Service on page 180 and Executable Job Service Properties on page 182.
Note: The system automatically creates a textual log file (listed beneath the job) for every job it runs. You can suppress all job log files by adding the Java system property -Dbqlogfile_isprimary=false to the common services and Job Service startup scripts. You must then stop and restart all services. See Chapter 2, Administration Tools and Tasks, for more information on stopping and starting the services.
Replicate Job Services (multiple Job Services assigned to a given data source on different computers) to increase overall reliability and decrease job turnaround time.
Install Job Service on the same computer as the database to conserve valuable network resources.
Note: Normally, there should be one Job Service on a given host. You can configure a Job Service to run multiple applications.
Host: Physical computer identified to the system by host name.
Job Service: A Job Service on the host, managed using RSC.
Application: Third-party application designed to run in the background. Examples include Production Reporting, Oracle Reports, and public domain application shells such as PERL.
Program: Source used to drive an invocation of an application. For example, a user might submit a Production Reporting program that generates a sales report to a Production Reporting application on a given host through Job Service.
About config.dat on page 191
Modifying config.dat on page 192
Specifying Explicit Access Requirements for Interactive Reporting Documents and Job Output on page 193
Setting the ServletUser Password when Interactive Reporting Explicit Access is Enabled on page 193
About config.dat
Regardless of whether services are running on Windows or UNIX, and whether they are running in the common services process or in separate processes, RSC services always use config.dat to begin their startup process.
config.dat resides in \BIPlus\common\config. All RSC services on a host (within an Install Home) share a config.dat file. If you distribute RSC services across several computers, each computer has its own config.dat.
When Name Service starts, it reads config.dat to get database connectivity and logon information. All other RSC services read this file to get their password, host, and port for Name Service. Name Service gets its configuration information directly from the database. Other RSC services connect to Name Service to get their configuration information.
config.dat uses plain ASCII text. Passwords contained in the file are encrypted, and you can modify them only with RSC or the ConfigFileAdmin utility. This ensures that only people who know the config.dat password can modify the service passwords in the file. See Modifying config.dat on page 192. To modify configuration information in config.dat, modify service properties using RSC; RSC writes your changes to config.dat.
Modifying config.dat
You view or modify information in config.dat by using a simple utility run from a command line, named ConfigFileAdmin.bat (Windows) or ConfigFileAdmin.sh (UNIX). This file is in Install Home\bin. To run the ConfigFileAdmin utility, specify the config.dat password on a command line after the file name. For example, with the default password, you would type configfileadmin.bat administrator (on Windows) or ConfigFileAdmin.sh administrator (on UNIX). Tasks you can accomplish with the ConfigFileAdmin utility:
Deleting services
Changing service passwords
Changing the password for access to config.dat
Changing the ServletUser password
To list the properties of Name Service, such as its database logon name and password, select option 3. When the Workspace installation creates a config.dat file, it assigns a default password, namely, administrator. This differs from the admin account password. As a matter of system security, you should change the config.dat password using the ConfigFileAdmin utility, by selecting option 10. You can use option 4 to modify the database password that Name Service uses to connect to the repository database, or you can use RSC to do so.
Specifying Explicit Access Requirements for Interactive Reporting Documents and Job Output
By default, no explicit access to Interactive Reporting database connections is required to process Interactive Reporting documents or job outputs using the plug-in or Workspace. To require explicit access, as when a database is associated with Interactive Reporting documents or job output, use the ConfigFileAdmin utility.
To require explicit Interactive Reporting database connection access to process documents and job output:
1 At a command line, go to the Install Home\bin directory of the Workspace server. Enter:
configfileadmin password
2 Type 14.
. . . 11) 12) 13) 14)
Supply the requested information for the database (user) name, database password, database URL, and database driver. You can find this information in the <xref> section of the server.xml file.
3 Type 1.
0) Exit 1) Toggle the SC_ENABLED flag for ServletUser (enables/disables feature) 2) Update the ServletUser password and re-generate properties file.
4 After toggling, restart the server, because Repository Service caches this information.
Setting the ServletUser Password when Interactive Reporting Explicit Access is Enabled
The special user ServletUser has read-only administrative privileges. When the SC_ENABLED flag is set to true, ServletUser requests access to Interactive Reporting documents or job output on behalf of users without explicit access to the Interactive Reporting database connection associated with the document or job output. When the SC_ENABLED flag is set to false, ServletUser cannot make such requests; only users granted explicit access by the importer to the Interactive Reporting database connection associated with the Interactive Reporting document or job output can access it.
The password for ServletUser is updated in the repository and stored, encrypted, in the sc.properties file. The directory in which this file is located depends on the servlet engine you are using. For example, for Apache Tomcat, this file is in:
Install Home\AppServer\InstalledApps\Tomcat\5.0.28\Workspace\webapps\workspace\WEB-INF\config\sc.properties
2 Type 14.
. . . 11) 12) 13) 14)
3 Type 2.
0) Exit 1) Toggle the SC_ENABLED flag for ServletUser (enables/disables feature) 2) Update the ServletUser password and re-generate properties file.
4 Enter the information requested. 5 Manually update the sc.properties file on all Workspace servlet installations.
Chapter 9
Configuring LSC Services
Administrators configure LSC services and their properties using LSC and the portal.properties file.
In This Chapter About LSC. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196 Modifying LSC Service Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198 Modifying Host Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203 Modifying Properties in portal.properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
About LSC
LSC enables you to modify properties of installed LSC services:
Analytic Bridge Service (ABS): Also known as Extended Access for Hyperion Interactive Reporting Service
Assessment (Harvester) Service (HAR)
Authentication Service (AN)
Authorization Service (AZ)
Global Service Manager (GSM)
Hyperion Interactive Reporting Service (BI)
Hyperion Interactive Reporting Data Access Service (DAS)
Local Service Manager (LSM)
Logging Service (LS)
Publisher Service (PUB)
Session Manager (SM)
Super Service (BPS): Also known as Hyperion Interactive Reporting Base Service
Update (Transformer) Service (TFM)
Usage Service (UT)
LSC only modifies LSC service properties; it neither creates nor removes LSC services. To add services, use the Workspace installation program. To remove services, see Using the ConfigFileAdmin Utility on page 190. LSC cannot configure services on a remote host (nor in another Install Home on the same host) or on a system with no GUI capability. LSC edits repository information and server.xml (in Install Home\common\config), which holds configuration information only for services in that Install Home.
Note: Multiple Workspace installations, or Install Homes, may reside on one host computer. A server installation is a set of installed services in one Install Home directory that run in one process space. If a host has two Install Home directories, they require two separate process spaces. LSC always edits server.xml for its own Install Home.
Starting LSC
To start LSC:
1 Start Service Configurator.
Windows: Select Start > Programs > Hyperion System 9 BI+ > Utilities and Tools > Service Configurator.
UNIX: Run the ServiceConfigurator.sh file, installed in Install Home/bin.
2 Select Module > Local Service Configurator, or click the LSC icon.
3 Enter your user ID and password.
Note: If you log on with a normal user account, some fields, such as the Trusted Password and Pass-through configuration information, are read-only. For full access to all functionality, you must be logged in as a user who is provisioned with the BI+ Global Administrator role.
Using LSC
LSC lists the services that are installed in the Workspace installation (Install Home) from which LSC is running, along with basic properties of the highlighted service. Toolbar icons represent functions you perform using LSC.
Table 16  LSC Toolbar Icons

Tooltip  Description
Exit     Closes LSC after the user's confirmation
Common LSC Properties on page 198
Assessment and Update Services Properties on page 199
Hyperion Interactive Reporting Service Properties on page 199
Hyperion Interactive Reporting Data Access Service on page 201
To view or modify most LSC service properties, double-click the service name, or select the service name and click the modify icon.
To view or modify GSM or LSM properties (which do not appear in the Local Service list box), click the Show host properties icon to display the General Properties tab, which contains these service properties.
Service Name: Read-only name of the service, assigned during installation.
Run Type: Controls whether a service is started with other services (by the startCommonServices script or Hyperion Interactive Reporting Base Service). Setting Run Type to Start makes the service active, so it starts with the others. Setting Run Type to Hold inactivates the service, so it does not start with the others. The Hold setting is useful for troubleshooting, to temporarily limit which services start.
Analytic Bridge Service
Authentication Service
Authorization Service
Hyperion Interactive Reporting Base Service (starts all LSC and RSC services in one Install Home)
Work directory: Name of the directory where the service's temporary files are stored.
Max concurrent threads: Maximum number of concurrent threads the service supports.
Request Queue polling interval: Frequency with which the service checks the Request Queue lock timeout setting. For example, to set the service to poll every 30 seconds, type 30.
Request Queue lock timeout: Number of seconds after which the Request Queue lock timeout expires.
Clear log entries after: Number of hours after which log entries should be cleared.
Hyperion Interactive Reporting Service General Properties on page 199
Fonts for UNIX on page 200
Cache Location: Directory where the service's temporary files are stored. For example, to put the cache on the D drive, type D:\\temp.
Max Concurrent Requests: Maximum number of concurrent requests the service supports; requests that exceed this setting are blocked. For example, to block concurrent requests beyond 4999, type 5000.
Polling Interval: Frequency with which the service checks the Document Unload Timeout setting. For example, to set the service to poll every 180 seconds, type 180.
Min. Disk Space (MB): Minimum disk space required to service requests. For example, to allocate 10 MB as the minimum disk space, type 10.
Document Unload Timeout: Inactive time in seconds after which documents are unloaded from memory to conserve system resources. For example, to retain documents in memory no longer than 30 minutes after last use, type 1800.
Document Unload Threshold: Number of open documents that activates the document unloading mechanism. For example, to set the maximum number of open documents to 15, type 15.
To make Microsoft's TrueType Web fonts available to Hyperion Interactive Reporting Service when you do not have Type1, TrueType, or OpenType fonts:
2 Create a directory.
3 Extract each CAB file (*.exe) into the newly created directory using the cabextract utility in \BIPlus\bin:
\BIPlus\bin\cabextract -d directory <CAB file>
4 Create a fonts.dir file in the directory containing the font files, using the ttmkfdir utility in \BIPlus\bin:
\BIPlus\bin\ttmkfdir -d directory -o directory\fonts.dir
5 Set the environment variable BQ_FONT_PATH to the directory where fonts.dir was created.
Add this variable to the start-up script to save your changes; in the start-up script, set BQ_FONT_PATH=directory and then export BQ_FONT_PATH. This environment variable can contain colon-separated paths to directories containing fonts.dir.
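Step 5, written out as the lines you might add to the start-up script. The font directories here are assumed example paths, and the second path illustrates the colon-separated form:

```shell
# Sketch of the start-up script addition for step 5; directory paths are
# example assumptions, not installation defaults.
BQ_FONT_PATH=/usr/local/fonts/webfonts:/usr/local/fonts/type1
export BQ_FONT_PATH
echo "$BQ_FONT_PATH"
```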
Hyperion Interactive Reporting Data Access Service General Properties on page 201
Hyperion Interactive Reporting Data Access Service Data Source Properties on page 201
Adding Data Sources for Hyperion Interactive Reporting Data Access Service on page 202
Relational Partial Result Cell Count: Maximum number of relational data table cells that a block of results data from a query can contain when sent from Hyperion Interactive Reporting Data Access Service to the client. Default value is 2048; minimum is 1.
Multidimensional Partial Result Row Count: Maximum number of multidimensional data table rows that a block of results data from a query can contain when sent from Hyperion Interactive Reporting Data Access Service to the client. Default value is 512; minimum is 1.
Reap Interval: Frequency in seconds with which Hyperion Interactive Reporting Data Access Service clears query data from memory when the requesting client seems to be disconnected. Default value is 180; minimum is 5.
Minimum Idle Time: Minimum number of seconds to retain query data in memory for client retrieval before assuming that the client is disconnected. Default value is 180; minimum is 0.
Connectivity Type: Data source database driver; must be installed on the host for Hyperion Interactive Reporting Data Access Service.
Database Type: Database type for the data source.
Whether Hyperion Interactive Reporting Data Access Service can connect to databases is determined by the Interactive Reporting database connections and the database drivers installed.
Hostname/Provider: Database host name or logical data source name. For OLE DB database connections, this is the OLE DB Provider identifier.
Server/File (OLE DB only): Server file or data source name used for database connections.
Note: Connectivity Type, Database Type, Name of Data Source, and Server/File properties are used only to route requests to Hyperion Interactive Reporting Data Access Service. Database client software to connect to the requested database must be installed and properly configured on each host where Hyperion Interactive Reporting Data Access Service is configured to accept routed requests for database access.
Maximum Connections to DB: Maximum number of connections permitted from a Hyperion Interactive Reporting Data Access Service process to the data source, using the current driver. Default value is 2048; minimum is 0.
Maximum Queue Size: Maximum number of requests that can simultaneously wait to obtain a connection to the database server. Default value is 100; minimum is 0.
Minimum Idle Time: Minimum number of seconds to keep open unused database connections. Default value is 180; minimum is 0.
Reap Interval: Frequency (in seconds) at which the system checks for unused database connections and closes them. Default value is 180; minimum is 5.
Maximum Connections in Pool: Maximum number of unused database connections to keep open for a database user name and Interactive Reporting database connection combination. Default value is 1000; minimum is 0.
Minimum Pool Idle Time: Minimum number of seconds to keep unused connections for a database user name and Interactive Reporting database connection combination in memory. Default value is 180; minimum is 0.
Adding Data Sources for Hyperion Interactive Reporting Data Access Service
When adding data sources, these Hyperion Interactive Reporting Data Access Service properties, which are set using LSC, must match the specified corresponding Interactive Reporting database connection properties, which are set in Interactive Reporting Studio:
Connectivity type (in LSC) must match Connection software (in Interactive Reporting Studio).
Database type (in LSC) must match Database type (in Interactive Reporting Studio).
Hostname/Provider (in LSC) must match Host or provider (OLE DB) (in Interactive Reporting Studio).
Interactive Reporting Studio uses Interactive Reporting database connections to determine which Hyperion Interactive Reporting Data Access Service to use; Hyperion Interactive Reporting Data Access Service uses Interactive Reporting database connections to connect to databases.
Host General Properties on page 203
Host Database Properties on page 204
Host Shared Services Properties on page 205
Host Authentication Properties on page 205
2 Modify General, Database, Shared Services, or Authentication properties as necessary.
3 Click OK.
Installation Directory: Read-only path to the directory where Workspace services are installed.
Cache Files Directory: Directory where temporary files are stored for caching of user interface elements and content listings.
Root Log Level: Logging level for all services (see Configuring Logging Levels on page 229).
GSM: Name: Read-only name of the GSM that manages this Install Home's services.
GSM: Service Test Interval: Frequency in minutes with which GSM checks that registered services on all hosts are running.
GSM: Host: Computer on which GSM is installed.
GSM: Port: Port number on which GSM is running.
LSM: Log Level: Logging level for LSM (see Configuring Logging Levels on page 229).
LSM: Service Test Interval: Frequency in minutes with which LSM checks that other services are running.
LSM: GSM Sync Time: Frequency in seconds with which LSM synchronizes its information with GSM.
Database Driver: Name of the driver used to access the database. This is database-dependent, and should be changed only by experienced administrators. If you change the database driver, you must also change other files, properties, data in the database, and the Java classpath. See Changing the Repository Database Driver or JDBC URL on page 187.
JDBC URL: URL for Java access to the database using the JDBC driver. If you change the JDBC URL, you must also change other files, properties, and data in the database. See Changing the Repository Database Driver or JDBC URL on page 187.
User Name: User name that services use to access the database that contains their metadata. This name must match for all installations using the same GSM.
Password: Password for the database user name.
Host database properties should rarely be changed, but if modifications are necessary, update the database information in each of these places to keep them in sync:
Every RSC service (you must set properties on every RSC service individually)
The startCommonServices script
Instructions for changing some of the database properties are given in Changing the Services Repository Database Password on page 187, and in Changing the Repository Database Driver or JDBC URL on page 187.
Host: Name of the computer hosting Shared Services.
Port: Port for the Shared Services User Management Console; the default port number is 58080.
Project name: Shared project name, defined through Shared Services.
Application name: Shared application name, defined through Shared Services.
CSS Config File URL: URL used to retrieve external configuration information from Shared Services.
Default URL – URL stored in the database and used by all services
Use this URL instead for this server – Overrides the URL for this Install Home only (typically, it is not necessary to set this property)
The CSS Config File URL is stored in BpmServer.properties, the location of which depends on your servlet engine. For example, with Apache Tomcat, this file is in:
Install Home\AppServer\InstalledApps\Tomcat\5.0.28\Workspace\webapps\WEB-INF\conf
Note: If the Host, Port, or CSS Config File URL changes, you must update the BpmServer.properties file.
Set trusted password – Enables the use of a trusted password
Use user's login credentials for pass-through – Enables pass-through using the user's logon credentials
Allow users to specify credentials for pass-through – Enables pass-through using the credentials that the user specifies in Preferences. If no credentials are specified in Preferences, an error message is displayed each time users attempt to open Interactive Reporting documents or run jobs.
defaultCalendarName
listenerThreadPollingPeriod – Frequency in minutes with which the system should poll for externally triggered events
multiValueSQRParamSeparator – Character to use as a separator between values of a multi-value parameter in Production Reporting jobs
bqDocsTimeOut – Interval in seconds that services should wait for Hyperion Interactive Reporting Service to open Interactive Reporting documents
defaultCategoryUuid – Root folder name
outputLabel – Name of a set of job output files, composed of the outputLabel value followed by the job name
outputLabel1 – Part of a job output label identifying a cycle of an Interactive Reporting job
bqlogfilenameprefix – Log file name for Interactive Reporting job output, without the file extension
bqlogfileext – File extension of the log file for Interactive Reporting job output
Chapter 10
Configuring the servlets enables Workspace to more precisely meet the needs of your organization. Configuration settings depend on aspects of your organization's environment, such as how the system handles user passwords, the volume of usage, and how users interact with Workspace.
Note: For information on customizing parameter forms for Production Reporting and generic jobs, see the Hyperion System 9 BI+ Workspace User's Guide. For information on customizing Web module user interfaces, refer to the Hyperion System 9 BI+ Workspace Developer's Guide.
In This Chapter
Using Servlet Configurator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208 Modifying Properties with Servlet Configurator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209 Zero Administration and Interactive Reporting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220 Load Testing Interactive Reporting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
Windows: Select Start > Programs > Hyperion System 9 BI+ > Utilities and Administration > Servlet Configurator.
UNIX: Run the config.sh file, installed in Install Home/bin.
The configuration toolbar is displayed above the navigation pane and contains these icons:
Icon descriptions:
Sets the visible configuration settings (that is, those currently displayed in the right-hand frame) to their default values
Sets all configuration settings to their default values
User Interface Properties on page 209
Personal Pages Properties on page 213
Internal Properties on page 215
Cache Properties on page 216
Diagnostics Properties on page 218
Applications Properties on page 218
3 Save your settings.
4 Make the settings effective by restarting the servlets.
User Interface: Login Properties on page 210
User Interface: Localization Properties on page 211
User Interface: Subscription Properties on page 212
User Interface: Job Output Properties on page 212
User Interface: SmartCut Properties on page 212
User Interface: Color Properties on page 212
LoginPolicy class for $CUSTOM_LOGIN$ – Name of the class that implements the LoginPolicy interface (the fully package-qualified name, without the .class extension); specify this only if you are using a custom logon implementation. For more information about custom logon, see the loginsamples.jar file in Install Home\docs\samples.
Custom username policy – Possible values are $CUSTOM_LOGIN$ (the custom policy), $HTTP_USER$, $REMOTE_USER$, $SECURITY_AGENT$, or $NONE$:
Set to $NONE$ unless you implement a custom logon or configure transparent logon
If set to a value other than $NONE$, the specified user name policy is used to obtain the user name for all users logging on to Workspace servlets
Use $CUSTOM_LOGIN$ only if you use a custom implementation for the user name value
If set to $SECURITY_AGENT$, the Custom password policy must be set to $TRUSTEDPASS$
Custom password policy – Possible values are $CUSTOM_LOGIN$ (the custom policy), $HTTP_PASSWORD$, $TRUSTEDPASS$, $USERNAME$, or $NONE$:
Set this option to $NONE$ unless you implemented a custom logon or configured transparent logon
If set to a value other than $NONE$, the specified password policy is used to obtain the password for all users logging on to Workspace servlets
Use $CUSTOM_LOGIN$ only if you use a custom implementation for the password value
If the Custom username policy is set to $SECURITY_AGENT$, the Custom password policy must be set to $TRUSTEDPASS$
Allow users to change their password – Displays the Change Password link in Workspace Preferences for native users in Shared Services:
If you do not select this option, the Change Password link is not available to users
If you configured transparent logon, do not select this option
Set default server to – IP address or name of the server hosting GSM, and an optional port number, separated by a colon (:); if the port number is omitted, the default GSM port number of 1800 is used. For example:
apollo:2220 – Uses port 2220
apollo – Uses default port 1800
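The host-and-port convention above can be sketched as a small parser. This is an illustration only; parseGsmServer is a hypothetical helper name, not part of the product, and 1800 is the default GSM port cited above.

```javascript
// Parse a "host[:port]" GSM server setting. If the port is omitted,
// fall back to the default GSM port of 1800, as described above.
// parseGsmServer is a hypothetical name used only for illustration.
function parseGsmServer(setting) {
  const idx = setting.indexOf(":");
  if (idx === -1) {
    return { host: setting, port: 1800 }; // no port given: use default
  }
  return {
    host: setting.slice(0, idx),
    port: parseInt(setting.slice(idx + 1), 10),
  };
}

console.log(parseGsmServer("apollo:2220")); // { host: 'apollo', port: 2220 }
console.log(parseGsmServer("apollo"));      // { host: 'apollo', port: 1800 }
```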
Format times using – Servlets can display time fields in a 12-hour (AM/PM) format or in a 24-hour format; for example, in a 24-hour format, the servlets display 6:30 PM as 18:30
Date display order – Servlets can display dates in month day year order (for example, May 1 2004) or day month year order (for example, 1 May 2004)
Use locale-sensitive sort – Sorts names using the default locale. Locale-sensitive sorts are slightly slower but more user-intuitive; for example, A and a are sorted together in a locale-sensitive sort, but not in a lexicographical sort. If no locale-sensitive sort is defined, the servlets use a lexicographical sort.
Default local language code – Lowercase, two-letter code for the language most commonly used by servlet end users (for example, en for English or fr for French). For a complete list of codes, go to:
http://www.ics.uci.edu/pub/ietf/http/related/iso639.txt
Users can use the servlets in the language of their choice (if templates exist in that language) by setting their browser language option. (In Internet Explorer, select Tools > Internet Options, click the General tab, and then click the Languages button. In Firefox, select Tools > Options and click the Language button.)
This property is used in conjunction with country codes and local variants to determine (1) the set of templates the servlet reads upon startup, and (2) in what language to display pages.
The system checks for localization settings in this order (until a non-default value is found):
a. User browser
b. Localization properties for the servlet (iHTML or Data Access)
c. Default localization properties for Workspace servlets
d. Default locale specified on the Web server
Localization settings found are used in this order (until a default value is found):
a. Language code
b. Country code
c. Local variant
For example, Viewer checks the user browser first. If the browser has no language setting, then Viewer, which does not have its own localization settings, checks the default localization settings. This check begins with Default local language code. If that setting is specified (is not Default), Viewer checks Default local country code to refine localization. If it too is specified, Viewer checks Default local variant. If, on the other hand, Default local language code is set to Default, Viewer skips the default localization settings and checks the locale for which the servlet's host is configured.
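The fallback order above can be sketched as follows. This is a generic illustration of the lookup logic, not the servlets' actual code; all function and field names are hypothetical.

```javascript
// Sketch of the localization lookup described above: consult the browser,
// then the servlet-specific properties, then the Workspace defaults, and
// finally the Web server host locale. Within one source, the country code
// is used only if the language code is set, and the variant only if the
// country code is set. All names here are illustrative.
function resolveLocale(browser, servletProps, workspaceDefaults, hostLocale) {
  const sources = [browser, servletProps, workspaceDefaults];
  for (const src of sources) {
    if (src && src.language && src.language !== "Default") {
      const locale = { language: src.language };
      if (src.country && src.country !== "Default") {
        locale.country = src.country;
        if (src.variant && src.variant !== "Default") {
          locale.variant = src.variant;
        }
      }
      return locale;
    }
  }
  return hostLocale; // every source was Default: use the host's locale
}
```

For example, with no browser or servlet settings and defaults of fr/CA, the resolved locale is { language: 'fr', country: 'CA' }; if even the defaults are Default, the Web server host locale wins.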
Default local country code – Uppercase, two-letter code for the country (for example, US for United States or CA for Canada). Used in conjunction with the language code and local variant parameters to obtain and display user data.
Used only if Default local language code is specified (is not set to Default); if the country code is set to Default, the iHTML servlet uses the language code value to determine user locales.
Default local variant – Optional localization property used for a finer granularity of localization in messages for a user audience with matching language and country codes; for example, if you specify a variant of WEST_COAST, the system uses it to deliver specialized data, such as the time for the local time zone. Used only if Default local country code is not set to Default; if Default local variant is set to Default, the servlet uses the Default local language code and Default local country code values to determine the user locales.
Display HTML icon when displaying Production Reporting job output in listing pages
Display SPF icon when displaying Production Reporting job output in listing pages
Output format to display after a Production Reporting job is run
General Properties
Main frame: Background color – Background color of the main frame (or pane). Does not apply to Personal Pages. If you leave this option blank, your platform's default background color is used.
Personal Page wizard: Background color – The Personal Page wizard is the sequence of pages displayed after a user chooses New Personal Page. Wizard pages have two colors: a main background color and the color of the top and bottom borders.
Personal Page wizard: Border color – See the preceding paragraph.
Title Property
Sets the underline color when titles are underlined.
Text Properties
Regular text color – Regular text is most of the text on servlet pages. If you leave this option blank, the browser default is used.
Link text color – Color of links that the user has not (recently) chosen.
Personal Pages: General Properties on page 213
Personal Pages: Publish Properties on page 214
Personal Pages: Generated Properties on page 214
Personal Pages: Syndicated Content Property on page 214
Personal Pages: Color Scheme Properties on page 214
Max Personal Pages per user – Set to 20 or less; default is 5
Max initial published Personal Pages – Maximum number of Personal Pages to be copied from published Personal Pages when a user first logs on; set to at least 1 less than the value of Max Personal Pages per user; default is 2
Users can choose default Personal Page – Default is enabled
Users change their default by putting the desired default Personal Page at the top of the list on the My Personal Pages page in the servlets
When this option is disabled, users cannot delete or reorder the default Personal Page
To ensure that users see the Personal Page containing the Broadcast Messages every time they log on, disable this option
Show headings of Content Windows on Personal Pages – Content windows are displayed with headings (title bars); enabled by default
Location – Folder path and name that contains published Personal Pages; must be located in the /Broadcast Messages folder. The default value is /Broadcast Messages/Personal Page Content, which is not browsable by default.
Show publisher's groups – Enables end users to give permissions to their own groups; enabled by default
Allow publisher to enter group name – Enables end users to give permission to a specified group; enabled by default
Allow publishing to all users – Enables end users to give permissions to all users; enabled by default
Show My Bookmarks – The generated Personal Page includes the My Bookmarks content window; enabled by default
Show Exceptions Dashboard – The generated Personal Page includes the Exceptions Dashboard; enabled by default
Number of folders – Number of pre-configured folders (subfolders of the /Broadcast Messages folder) that are displayed on the generated Personal Page; default is 3
Number of File Content Windows – Number of displayable items in pre-configured folders (subfolders of the /Broadcast Messages folder) that are displayed as content windows on the generated Personal Page; default is 1
Default color scheme – Default color scheme for the generated Personal Page and the Edit Personal Page page
Name – Required
Headings color – Background color of the heading (title bar) of each content window
Background color – Background color of content windows in the main (wide) column
Text color – Color of servlet-generated text on Personal Pages, such as the names of content windows
Link color – Color of the text of servlet-generated links on a Personal Page, such as bookmarks in My Bookmarks
Broadcast Messages color – Color of the heading of each Broadcast Messages content window
Header background color – Background color of content windows in the optional header area at the top of a Personal Page
Footer background color – Background color of content windows in the optional footer area at the bottom of the page
Left column background color – Background color of content windows in the optional narrow column on the left side of a Personal Page
Right column background color – Background color of content windows in the optional narrow column on the right side of a Personal Page
Internal Properties
Internal properties control how the servlets and the Workspace server work:
Internal: Redirect Property on page 215
Internal: Cookies Properties on page 215
Internal: Transfer Property on page 216
Internal: Jobs Property on page 216
Internal: Upload Property on page 216
Internal: Temp Property on page 216
Note: The session time-out value is configured on the servlet engine. For example, on JRun, the HTTP session time-out value can be modified for the JVM. All Hyperion System 9 BI+ Web applications should have session time-outs set to greater than 10 minutes.
Keep cookies between browser sessions – Saves information between browser sessions. The user name last used to log on is saved and used for subsequent logon instances.
Encrypt cookies – Encrypts saved cookies.
Cache Properties
Cache properties set limits on how long the servlets can cache various data. These properties affect the responsiveness of the user interface, so setting them involves a trade-off between performance and the freshness of displayed data. The Cache folders for property can be described in three ways: (1) the maximum time to cache folders, in seconds; (2) the maximum delay between when a modification is made to a folder in the repository and when the user sees the change in Viewer; (3) the maximum time interval during which users see old folder contents.
Increasing the value of Cache folders for makes pages display more quickly, but increases the length of time that the user sees stale folder contents. Decreasing the value reduces that duration but slows the display of pages. Topics that describe Cache properties:
Cache: Objects Properties on page 217
Cache: System Property on page 218
Cache: Templates Property on page 218
Cache: Notification Property on page 218
Cache: Browser Property on page 218
Number of folders cached – Size of the cache for folders; default is 200
Cache folders for – Maximum time in seconds to cache folders (that is, the limit on the delay between changes to a folder's contents and Viewer's display of those changes); set to zero or greater; default is 3600. The user sees old folder contents for no more than the number of seconds specified here.
Cache browse queries for – Maximum time in seconds for changes to browse queries in the Workspace servers to be reflected in the servlets; set to zero or greater; default is 60
Cache jobs for – Maximum time in seconds for changes to jobs in the Workspace servers to be reflected in the servlets; set to zero or greater; default is 60
Cache parameter lists for – Maximum time in seconds that the servlets cache job parameter lists; default is 60
Cache published Personal Pages for – Maximum time in seconds that the servlets cache the content of the /Personal Page Content folder; must be greater than zero; default is 60. Note that this cache is refreshed whenever a Personal Page is published using the Personal Pages servlet.
Cache Content Windows on Personal Pages for – Maximum time in seconds for changes to Broadcast Messages on a Personal Page to be reflected in the Personal Pages servlet; must be greater than zero; default is 60
Cache Content Windows being modified for – Maximum time in seconds that the Viewer or Administer module caches content while it is being modified; default is 180
Cache list items for – Maximum time in seconds that item or resource lists are cached; default is 900
Max items to cache for listing – Maximum number of items in a listing that are cached; default is 100
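The trade-off behind these settings can be illustrated with a minimal time-bounded cache: a value is reused until it is older than the configured number of seconds, after which it is fetched again. This is a generic sketch under stated assumptions, not the servlets' actual cache implementation; makeTtlCache is a hypothetical name.

```javascript
// Generic TTL-cache sketch illustrating the "Cache ... for" properties:
// within ttlSeconds of being fetched, a value is served from cache (so
// the user may briefly see stale contents); afterward it is refetched.
// Not the product's actual implementation.
function makeTtlCache(ttlSeconds, fetchFn, now = () => Date.now()) {
  const entries = new Map();
  return function get(key) {
    const hit = entries.get(key);
    if (hit && now() - hit.at < ttlSeconds * 1000) {
      return hit.value; // still fresh: reuse cached value
    }
    const value = fetchFn(key); // missing or expired: fetch again
    entries.set(key, { at: now(), value });
    return value;
  };
}
```

With a TTL of 3600 (the Cache folders for default), a folder changed in the repository can appear unchanged in Viewer for up to an hour; lowering the TTL shortens that window at the cost of more fetches.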
Diagnostics Properties
Configuration Log properties are used for diagnostic purposes:
Logging Service Server – Host name of the server on which Logging Service resides
Configuration – Path of the Servlet Configurator log configuration file, servletLog4jConfig.xml (the default can be used)
Applications Properties
Applications: URL Properties on page 219
Applications: iHTML Properties on page 219
6x Server URL Mapping on page 220
Clear disk cache after – Maximum time interval between clearings of the disk cache, in seconds; default is 300
Terminate idle iHTML session after – Number of seconds for the iHTML servlet to wait for a response from Hyperion Interactive Reporting Service before timing out
Default is 1800
Changes the BQServiceResponseTimeout property in ws.conf
If exceeded, Hyperion Interactive Reporting Service does not respond
DAS response timeout – Number of seconds that the Data Access servlet should wait for a response before timing out:
Hyperion Intelligence Backward Compatibility Support – Enables Hyperion Intelligence clients of prior releases (8.2.1 and earlier) to communicate with Workspace:
Enable backward compatibility only for testing or diagnostic purposes; it is not recommended for production environments.
Enable Zero Administration – Identifies the release number of the most up-to-date version of Interactive Reporting on the server and triggers downloading of the Interactive Reporting Web Client when a user selects a link to an Interactive Reporting document
These mappings are made by adding calls to the Map6xUrlTo8() method; the calls should be added to the CustomizeInstallForIE(insight) function.
The Map6xUrlTo8(Old_URL, New_URL) method establishes a URL mapping. Passing an empty string as New_URL cancels the URL redirection. The Clear6xUrlMap() function removes all URL redirections established so far. The CustomizeInstallForIE(insight) function runs only when Interactive Reporting is downloaded. Mappings are saved in the Windows registry for use with locally saved documents. If the mappings are to be updated dynamically (once per session), then the call to the CustomizeInstallForIE(insight) function should also be made from the Zero Administration main function.

Example:

function CustomizeInstallForIE(insight) {
    insight.Map6xUrlTo8("http://<brio6x_host>:<brio6x_web_port>/odsisapi/ods.ods",
        "http://<hyperion9x_host>:<hyperion9x_web_port>/workspace/dataaccess/Browse")
}
Client Processing
When an Interactive Reporting document is opened in Viewer, the Web browser retrieves and parses the HTML documents from the Web server. The JSP logic for Zero Administration, which is included in these HTML files, runs in the client's Web browser. The zeroadmin.jsp file is retrieved from the Web server, and release numbers from that file are compared to release numbers on the client computer. There are three possible outcomes:
If no release number is found on the client, the user is prompted to install.
If the numbers are equal (the client release number matches the zeroadmin.jsp file), or if the client release is greater than the zeroadmin.jsp version, the Interactive Reporting document is opened using the previously installed Interactive Reporting release.
If the release number on the client is less than that in zeroadmin.jsp, the user is prompted to upgrade their client product.
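The three outcomes can be sketched as a comparison of dotted release numbers. This is an illustration of the decision logic only; compareReleases and decideAction are hypothetical names, not the actual zeroadmin.jsp code.

```javascript
// Sketch of the Zero Administration decision described above. Release
// numbers are compared component by component (e.g. "9.2.0" vs "9.1.1").
// Names are illustrative only.
function compareReleases(a, b) {
  const pa = a.split(".").map(Number);
  const pb = b.split(".").map(Number);
  for (let i = 0; i < Math.max(pa.length, pb.length); i++) {
    const d = (pa[i] || 0) - (pb[i] || 0);
    if (d !== 0) return Math.sign(d);
  }
  return 0;
}

function decideAction(clientRelease, serverRelease) {
  if (clientRelease === null) return "install";  // nothing on the client
  const cmp = compareReleases(clientRelease, serverRelease);
  if (cmp < 0) return "upgrade";                 // client is older
  return "open";                                 // equal or newer: open document
}
```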
Web browsers can interrogate Interactive Reporting to find out the release number. You can view this information by locating the DLL files (for example, axbqs32.dll under Internet Explorer, or npbqs32.dll under Firefox) and displaying their file properties. Most popular Web browsers allow automatic download and installation and provide a digital certificate for an extra layer of security. The JSP automatically provides the correct application (plug-ins for Windows in a browser-compatible file format).
where:
DateTimeStamp is the date and time stamp parameter type, with format %Y%m%d%H%M%S
UserRuntimeID is a virtual user ID parameter type, with format %03s
2 Enable static key encryption for recording the scripts and running the scripts within Workspace.
This setting is not recommended for production environments.
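The two parameter formats above can be sketched as follows; formatDateTimeStamp and formatRuntimeId are hypothetical helpers that mimic the %Y%m%d%H%M%S and %03s formats for illustration.

```javascript
// Illustrative sketch of the load-testing parameter formats described
// above: %Y%m%d%H%M%S yields a 14-digit date-time stamp, and %03s a
// zero-padded three-character user ID. Helper names are hypothetical.
function formatDateTimeStamp(d) {
  const pad = (n, w = 2) => String(n).padStart(w, "0");
  return (
    pad(d.getFullYear(), 4) + pad(d.getMonth() + 1) + pad(d.getDate()) +
    pad(d.getHours()) + pad(d.getMinutes()) + pad(d.getSeconds())
  );
}

function formatRuntimeId(n) {
  return String(n).padStart(3, "0"); // %03s-style zero padding
}

console.log(formatDateTimeStamp(new Date(2006, 0, 31, 9, 5, 7))); // 20060131090507
console.log(formatRuntimeId(7)); // 007
```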
Data Access Servlet Property on page 222
Hyperion Interactive Reporting Data Access Service Property on page 222
Hyperion Interactive Reporting Service Property on page 222
Note: Setting only one of these properties can cause processing (running of Interactive Reporting jobs, querying from Interactive Reporting, querying from the Workspace) to fail, because the source and target encryption schemes do not match.
Make sure the property is defined inside the <properties> subnode of the <service type="DataAccess"> node and outside of this node:

<propertylist defid="0ad70321-0002-08aa-000000e738090110"
    name="DAS_EVENT_MONITOR_PROPERTY_LIST">
2 Restart the Hyperion Interactive Reporting Data Access Service for all Install Homes.
The property must be inside the <properties> subnode of the <service type="BrioQuery"> node and outside of this node:

<propertylist defid="0ad70321-0002-08aa-000000e738090110"
    name="BQ_EVENT_MONITOR_PROPERTY_LIST">
Chapter 11
Troubleshooting
Administrators can generate log files throughout Workspace to help technicians identify system or environmental problems or to help developers debug reports or API programs.
In This Chapter Logging Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224 Log File Basics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225 Configuring Log Properties for Troubleshooting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228 Analyzing Log Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233 Information Needed by Customer Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
Logging Architecture
All log messages are routed through Logging Service and stored in one location. Logging Service writes log messages to one or more files, which can be read using a viewer. Log4j (version 1.2) is used as the basis for the logging framework and configuration files. Log Management Helper is used by C++ services (Hyperion Interactive Reporting Service and Hyperion Interactive Reporting Data Access Service) in conjunction with the log4j framework and Logging Service. Workspace comes with preconfigured loggers and appenders. Loggers correspond to areas in code (class) where log messages originated. Appenders correspond to output destinations of log messages. You can troubleshoot system components by setting the logging level of loggers.
Log4j
The log4j package enables logging statements to remain in shipped code without incurring heavy performance costs. As part of the Jakarta project, log4j is distributed under the Apache Software License, a popular open source license certified by the Open Source Initiative. Logging behavior is controlled through XML configuration files at runtime. In configuration files, log statements can be turned on and off per service or class (through the loggers) and logging levels for each logger can be set, which provide the ability to diagnose problems down to the class level. Multiple destinations can be configured for each logger. Main components of log4j:
Loggers – Control which logging statements are enabled or disabled. Loggers may be assigned the levels ALL, DEBUG, INFO, WARN, ERROR, FATAL, or INHERIT.
Appenders – Send formatted output to their destinations.
Go to www.apache.org or see The Complete Log4j Manual by Ceki Gülcü (QOS.ch, 2003).
Logging Service
Logging Service stores all log files in one location. If Logging Service is unavailable, log messages are sent to backup log files. When Logging Service is restored, messages in backup files are automatically sent to Logging Service, which stores them in log files and deletes the backup files. Logging Service cannot be replicated.
One Log Management Helper (LMH) process exists for each Hyperion Interactive Reporting Data Access Service and for each Hyperion Interactive Reporting Service per Install Home. Logging Service consolidates all log messages in separate log files for Hyperion Interactive Reporting Data Access Service and Hyperion Interactive Reporting Service per Workspace.
Server Synchronization
Because log files are time-stamped and written in chronological order, time synchronization between servers, which is the responsibility of the administrator, is important. Many products, free and commercial, are available to manage server clock synchronization.
Log File Location on page 225
Log File Naming Convention on page 226
Log Message File Format on page 227
Hyperion Interactive Reporting Service and Hyperion Interactive Reporting Data Access Service Local Log Files
Hyperion Interactive Reporting Service and Hyperion Interactive Reporting Data Access Service have additional log files, stored in the directory where the services run, that collect log messages before these services connect to Logging Service. Log messages in these files are not routed to Logging Service log files. Start-up problems are collected in BIstartup.log and DASstartup.log. Other log messages generated when Logging Service is unavailable are collected in these log files:
If you change the name or location of these files, you must change the entry in server.xml that points to them. server.xml resides in \BIPlus\common\config.
Servlets
Services:
AnalyticBridgeService, AuthenticationService, AuthorizationService, CommonServices, DataAccessService, EventService, GSM, HarvesterService, IntelligenceService, JobService, LSM, NameService, PublisherService, RepositoryService, SessionManager, ServiceBroker, TransformerService, Usage Service
Miscellaneous
BIProcessMonitor
Contains logging messages when Logging Service is unavailable (for example, BI_PM_sla1_backupMessages_10_215_34_160_1800.log).
Logger – Name of the logger that generated the logging message
Time stamp – Time stamp in coordinated universal time (UTC); ensures that messages from differing time zones can be correlated. The administrator is responsible for time synchronization between servers.
Level – Logging level
Thread – Thread name
Sequence number – Unique number to identify messages with matching time stamps
Time – Time the log message was generated
Context – Information about which component generated the log message
Subject – User name
Session ID – UUID of the session
Originator Type – Component type name
Originator Name – Component name
Host – Host name
The format of backup log files matches the format of regular log files.
Log File Basics
Configuration Log
Basic configuration information is logged to configuration_messages.log in BIPlus/logs. The file format matches service and servlet log file formats. This log file contains Java system property information, JAR file version information, and database information.
Loggers, logging levels, and appenders are configured in XML files. The log rotation property is a Java system property and is configured in startcommonservices.bat. Logging levels for LSC services, RSC services, and the root logger are configured using LSC and RSC. All other configuration changes are made by editing XML files.
Configuration Files
Configuration file types are main and imported: imported files are used by main files and organize the loggers and appenders into separate XML files. The main configuration files:
serviceLog4jConfig.xml – Main configuration file for services; in \BIPlus\common\config\log4j
remoteServiceLog4jConfig.xml – Main configuration file for Hyperion Interactive Reporting Service and Hyperion Interactive Reporting Data Access Service, and for RSC services when started remotely; in \BIPlus\common\config\log4j
adminLog4jConfig.xml – Main configuration file for LSC, RSC, and Calendar Manager
servletLog4JConfig.xml – Main configuration file for the servlets; in \WEB-INF\config of the servlet engine deployment
Note: If you change the location of serviceLog4jConfig.xml or remoteServiceLog4jConfig.xml, you must update the path information stored in server.xml. If you change the location of servletLog4jConfig.xml, you must update the path information in ws.conf.
Appenders can be added by referencing them in <logger> and <root> elements using <appender-ref> elements.
serviceLoggers.xml – Imported by serviceLog4jConfig.xml and remoteServiceLog4jConfig.xml; configure through LSC
debugLoggers.xml – Contains definitions for loggers that can be enabled to debug problems in the services; imported by serviceLog4jConfig.xml and remoteServiceLog4jConfig.xml; in \BIPlus\common\config\log4j
debugLoggers.xml – Contains definitions for loggers that can be enabled to debug problems in the servlets; imported by servletLog4jConfig.xml; in the \WEB-INF\config folder of your servlet engine deployment
Logging levels:
INHERIT – Uses the logging level set at its closest ancestor with an assigned level; not available at the root level
ALL – All message levels
DEBUG – Minor and frequently occurring normal events; use only when troubleshooting
INFO – Normal significant events of the application
WARN – Minor problems caused by factors external to the application
ERROR – Usually, Java exceptions that do not necessarily cause the application to crash; the application may continue to service subsequent requests
FATAL – Implies the imminent crash of the application or the relevant sub-component; rarely used
Configuring Loggers
Use RSC to configure RSC service logging levels, which are stored in the database (see Advanced RSC Properties on page 176). Use LSC to configure LSC service logging levels (stored in serviceLoggers.xml) and the root logger (see Host General Properties on page 203). Configure the servlet root logger level in servletLog4JConfig.xml. Configure other servlet loggers in the servlet debug configuration file (debugLoggers.xml).
Configuring Appenders
You can send log messages to multiple destinations by adding appenders, defined in appenders.xml, to loggers.
For example:
<appender-ref ref="LOG_LOCALLY_BY_LOGGING_SERVICE"/>
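A logger entry that routes its output through such an appender reference might look like the following sketch (log4j 1.2 XML syntax). The logger name and level here are placeholders, not actual product loggers; only the appender-ref value comes from the example above.

```xml
<!-- Hypothetical logger entry; the name and level are placeholders.
     The appender-ref attaches an appender defined in appenders.xml
     to this logger. -->
<logger name="com.example.hypothetical.logger">
  <level value="DEBUG"/>
  <appender-ref ref="LOG_LOCALLY_BY_LOGGING_SERVICE"/>
</logger>
```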
All appenders in XML configuration files are configured to use default values for CompositeRollingAppender. You can configure CompositeRollingAppender properties for each appender separately.
Note: If you want all log files to rotate using matching criteria, change the configuration for each CompositeRollingAppender defined in both appenders.xml files.
RollingStyle – Specifies how logs are rolled:
1 – Roll the logs by size
2 – Roll the logs by time
3 – Roll the logs by size and time
RollingStyle 3 could provide confusing results because naming conventions for logs rolled by time and by size differ, and deletion counters do not count logs rolled differently together.
DatePattern – If RollingStyle=2 or 3, sets the time interval at which log messages are written to another log file. Set the DatePattern value using the string yyyy-MM-dd-mm; for example, yyyy-MM-dd-mm means every 60 minutes, yyyy-MM-dd-a means every 12 hours, and yyyy-MM-dd means every 24 hours. Default is every 12 hours.
MaxFileSize – If RollingStyle=1 or 3, when the maximum file size is reached, the system writes log messages to another file. Default is 5MB. You can use KB (kilobyte), MB (megabyte), or GB (gigabyte).
MaxSizeRollBackups – If RollingStyle=1 or 3, when the maximum number of log files per originator type (plus one for the current file) is reached, the system deletes the oldest file. Default is 5. Log files rolled by time are not affected by this setting.
The appenders.xml files for the server and servlets tell the server when to create another log file, using two parameters. The best-practice rolling style is 3, which rolls log files by time or size. The default log file size of 5 MB is the default used by software packages such as e-mail and Web servers.
Note: Best practices recommend that RollingStyle for all entries be set to 3, and that default log file size be set to 1 MB. Log files that exceed 1 MB may slow down the server, with possible outages (the service crashes or needs to be restarted) occurring after the log exceeds 25 MB. Large log files can be problematic to open in a text editor such as Notepad or vi.
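Applying that best practice, the relevant parameters in each appender definition would be changed as follows (a sketch using the param syntax from the appenders.xml excerpt in this chapter; the other parameters keep their defaults):

```
<param name="RollingStyle" value="3"/>
<param name="MaxFileSize" value="1MB"/>
```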
<!-- Select rolling style (default is 2): 1=rolling by size, 2=rolling by time, 3=rolling by size and time. -->
<param name="RollingStyle" value="1"/>
<!-- If rolling style is set to 2 then by default log file will be rolled every 12 hours. -->
<param name="DatePattern" value="'.'yyyy-MM-dd-a"/>
<!-- If rolling style is set to 1 then by default log file will be rolled when it reaches size of 5MB. -->
<param name="MaxFileSize" value="5MB"/>
<!-- This is log file rotation number. This only works for log files rolled by size -->
<param name="MaxSizeRollBackups" value="5"/>
<layout class="com.brio.one.mgmt.logging.xml.XMLFileLayout">
</layout>
</appender>
To use LogFactor5:
1 Copy the name of the LogFactor5 appender, <appender-ref ref="LF5APPENDER"/>.
2 Paste the copied code line under the logger in which to use LogFactor5.
<root>
  <level value="WARN"/>
  <appender-ref ref="LF5APPENDER"/>
  <appender-ref ref="LOG_REMOTELY"/>
</root>
LogFactor5 starts automatically when the component to which you added the appender is started. If the component is already running, LogFactor5 starts within 30 seconds. The LogFactor5 screen is displayed when logging initializes, and log messages are displayed as they are posted.
Server logs
Client logs
server_messages_BrowseServlet.log
BI1_hostname.log DAS1_hostname.log 0_DAS_hostname.log (when using a process monitor) 0_BI_hostname.log (when using a process monitor)
Client logs
server_messages_DataAccessServlet.log server_messages_iHTMLServlet.log
Server logs
server_messages_DataAccessService.log server_messages_IntelligenceService.log
DAS1_hostname.log BI1_hostname.log
Client logs
server_messages_BrowseServlet.log server_messages_JobManager.log
Server logs
server_messages_Authorization.log
Part II
Chapter 12, Understanding Enterprise Metrics
Chapter 13, Enterprise Metrics Security
Chapter 14, Supporting Clips in Enterprise Metrics
Chapter 15, Enterprise Metrics Server Administration
Chapter 16, Enterprise Metrics Load Support Programs
Chapter 17, Troubleshooting Enterprise Metrics
Chapter 18, Evaluating Enterprise Metrics Performance
Chapter 19, Enterprise Metrics Preference File Settings
Chapter 12 Understanding Enterprise Metrics
In This Chapter
This chapter provides an overview of the components of Enterprise Metrics. It introduces the major components that are installed and configured, and it explains what functions are available in the resulting environments.
Metrics and Configuration Environments, page 240
Database Overview, page 241
Enterprise Metrics Servers, page 242
Clients and Tools, page 244
Implementation and Administration Process Overview, page 245
Database Overview
Enterprise Metrics uses four sets of database tables. Each set is described in these sections:
You can manually install the catalogs and a small number of required system tables after you complete the Enterprise Metrics installation. The Application Data and the staging database are defined by the customer as part of the application development process or by installation of a preconfigured Enterprise Metrics Solution. Depending on the data extraction strategy, the staging database tables can be distributed in a separate database instance or reside in the same instance as the Application Data and libraries. The database tables are easily distinguishable because each set includes a different prefix in the table names. For example, all tables in the Configuration Catalog have the prefix PUB_ and all tables in the Metrics Catalog have the prefix PRD_.
Application Data
The Application Data contains the data that is viewed and analyzed by general end-users of Enterprise Metrics. The data for a customer's application may reside in relational star schema tables, Analytic Services cubes, or both, as defined by the customer during the application development process. The Application Data also must include the relational system tables (such as BAP_PERIOD and BAP_LOAD) that are created during the Enterprise Metrics installation.
Catalogs
The catalogs each contain a set of relational tables. The tables share names (except for prefixes) and columns. The tables in the catalogs contain configuration information that controls many aspects of the Enterprise Metrics application. These aspects include the following:
- The definition of metrics, measures, pages in the Monitor Section, pages in the Investigate Section, Report pages (in the Pinpoint Section), enrichment rules, and much more, including behaviors associated with those objects (such as the link from a chart on a page)
- The appearance of the charts (such as chart colors, scaling, and number formatting)
- The layout and format of mini reports
The Metrics Catalog represents the production metadata. The Metrics Catalog tables affect what end-users see and are handled by the Server. This set of tables duplicates those in the Configuration Catalog, except that the table names are prefixed with PRD_ (such as PRD_STAR_HIERARCHY).

The Configuration Catalog represents the publishing metadata. The Configuration Catalog tables are handled by the Configuration Server and can be viewed only by those with publishing privileges, such as the Editor. This set of tables allows Editors to make configuration changes to the application, then view those changes without affecting the production application (Personalization Workspace) that end-users access. The tables in the Configuration Catalog are prefixed with PUB_ (such as PUB_METRIC).
The servers are designed to run continuously, without intervention. They poll the database to detect database and network outages, and they close and re-establish database connections as necessary. More importantly, they monitor a group of flags in the BAP_LOAD table (in the Application Data area) to detect when new data is being loaded, or that metadata publishing is in progress, and automatically re-initialize when these processes complete.

The Application Data, and most of the catalog tables, are treated as read-only by the servers. Each time a server initializes, it reads in all of the metadata, performs various consistency checks, and then permits clients to connect. In the case of the Enterprise Metrics Server, before accepting client connections, the server preloads the system cache with some of the pages most likely to be accessed by users. Typical initialization time for the Enterprise Metrics Server is from two to seven minutes, depending on the amount of data to preload.

The primary function of the server is to act as the metrics engine, using the catalog definitions to convert a client request for a complex set of metrics into an optimal set of generated SQL or MDX queries, to access the required columns in the Application Data. The results are then used to calculate the desired metrics, return them to the client, and cache them for possible future use. The server also implements a variety of functions related to performance, scalability, and security, including:
- Aggregate navigation and query consolidation, to minimize query times
- Management of a dynamically adjusted connection pool
- Enforcement of data-level (row and column) security restrictions, on a per-user basis
- Client authentication, authorization, and idle session timeouts
- Personalization functions, allowing users to customize their pages and links in the Personalization Workspace (this information is also stored in the catalog, not on the client machines)
- Activity logging and statistics collection for use in performance tuning
The Enterprise Metrics Server is remarkably efficient and does not require a large investment in CPU, memory, or network resources. After the databases have been created, all it takes to bring up a server is to define the database connections, assign a port number, and invoke the startup script.

When the Workspace, the Personalization Workspace, or one of the Enterprise Metrics servers is started, it reads a preference file to determine its settings. Many preference settings (prefs) are available for fine-tuning the installation; see Chapter 19, Enterprise Metrics Preference File Settings. Preference settings include information such as the user ID and password that the server uses when connecting to the database, and the page that appears initially when a user starts the Workspace.
Servlets
Servlets are used to support three functions for Enterprise Metrics.
- Launcher Servlets: Enterprise Metrics has two Launcher Servlets, one for the Configuration environment and one for the Metrics environment. These servlets are responsible for handling the login process, authentication, and single sign-on across Enterprise Metrics clients and with other single sign-on applications.
- Thin Client Servlet: The Thin Client Servlet handles dynamic HTML and image generation for running the Workspace.

The Enterprise Metrics servlets are designed to run in a dedicated JVM without other servlets.
- Enterprise Metrics Workspace: The Workspace uses pure HTML.
- Enterprise Metrics Personalization Workspace: The Personalization Workspace uses a Java applet and requires a one-time setup of the Java plug-in. The Personalization Workspace allows end-users to create personal pages in the Monitor Section and to customize pages in the Investigate Section.
- Enterprise Metrics Studio: The Studio is used by the Editor to configure pages in the Monitor and Investigate Sections.
- Server Consoles: Allow an Administrator to manually restart the associated server, adjust various preference settings dynamically for tuning or monitoring purposes, and view server logs remotely.
- Studio Utilities: A collection of functions used by the Editor to edit the definitions in the Configuration Catalog, and eventually publish them to the Metrics Catalog. Although a few of these functions require some database knowledge, the majority are designed to be used by a business analyst rather than an information services staff member.
- Log files: Track activity on the Workspace, Personalization Workspace, Enterprise Metrics Studio, and Servers, as well as the Studio Utilities. Data in the log files is used for troubleshooting.
- Technical Utilities: A set of tools that includes a Calendar Utility to generate the time (or period) dimension table, a Performance Statistics Utility to gather statistics, and a Metadata Export Utility to extract metadata for troubleshooting purposes.
With the exception of the Enterprise Metrics Workspace and Technical Utilities, all front-end components run as Java applets. There is a one-time setup process to install the Java plug-in.
Installation on page 245
Implementation on page 245
Administration on page 246
Troubleshooting on page 246
Installation
As the administrator, you may be involved in the installation and initial configuration of Enterprise Metrics. Accordingly, there are a number of prerequisites that must be addressed before installing the software. After you install the software, there are manual configuration steps that you may need to perform depending upon your system configuration. In addition, there are steps that you should follow to verify the installation. See the Hyperion System 9 BI+ Enterprise Metrics Installation Guide for additional information.
Implementation
After completing the installation process (including verification and testing of installed components) there are certain tasks that must be performed to complete the initial implementation of Enterprise Metrics. These tasks include:
- Setting up the Technical Utilities (Hyperion System 9 BI+ Enterprise Metrics Users Guide)
- Provisioning users and groups to access Enterprise Metrics (Chapter 13, Enterprise Metrics Security)
- Meeting the requirements to support Analytic Services, if you plan to use Analytic Services as a data source (Hyperion System 9 BI+ Enterprise Metrics Users Guide)
- Generating the period table information using the Calendar Utility (required) (Hyperion System 9 BI+ Enterprise Metrics Users Guide)
- Configuring Enterprise Metrics to support clips on Interactive Reporting Studio Dashboards, if desired (Chapter 14, Supporting Clips in Enterprise Metrics)
- Performing load balancing for the Enterprise Metrics Workspace
Administration
Typically, there are two types of administration you will perform on the Enterprise Metrics system: daily administration and periodic maintenance. Daily administration may include:
- Starting and stopping the Enterprise Metrics Servers
- Starting and stopping the dedicated servlet JVM
- Scheduling ETL jobs
- Enrichment job processing
- Standard and enrichment publishing
Periodic maintenance may include:

- Adding new Enterprise Metrics users
- Updating calendar information using the Calendar Utility (for example, adding an additional five years of calendar data)
- Assisting the Editor with enrichment functions, such as adding new tables or columns to the Application Data area to be used for data enrichment, or modifying the ETL jobs as necessary
- Modifying the server preferences, if necessary for troubleshooting or other purposes
- Monitoring performance statistics
Troubleshooting
The Enterprise Metrics log files are the primary source of information if you need to troubleshoot problems in Enterprise Metrics. These log files provide detailed information regarding the activity for the Workspace or Studio, Servers, and Studio Utilities. In addition, there is a set of log files for the dedicated servlet JVMs that can be used to troubleshoot problems with the Enterprise Metrics servlets.
Chapter 13 Enterprise Metrics Security
This chapter provides information on Enterprise Metrics authentication and security. It also includes information on how to use Analytic Services security.
In This Chapter
Provisioning Users and Groups to Access Enterprise Metrics, page 248
Using Analytic Services Security, page 248
About Database Security, page 250
About Application-Level Security, page 251
- Metrics Viewer: Review Enterprise Metrics content. This role allows a Hyperion System 9 BI+ user to view Enterprise Metrics content within the BI+ Workspace. If the user is not granted this role, he does not see the option to launch Enterprise Metrics from his Workspace.
- Metrics Analyst: Personalize the Enterprise Metrics Workspace. This role allows a Hyperion System 9 BI+ user to launch the Enterprise Metrics Personalization Workspace. This role internally contains the Metrics Viewer role, by definition.
- Metrics Editor: Create and distribute Enterprise Metrics. Generate the content used to create Enterprise Metrics. Assign data security to users. This role must be assigned only to Enterprise Metrics administrators and Editors and includes the Metrics Analyst and Metrics Viewer roles.
Unless a user is granted one of these roles, she is not able to access any of the Enterprise Metrics Clients.
If you plan to use Analytic Services security, review the detailed guidelines in the following sections:
- Supported Security Rule Sets in Enterprise Metrics
- Provisioning Users and Groups to Access Enterprise Metrics
- Enabling Analytic Services Data Security
You must grant data security, using the Enterprise Metrics Studio Utilities Security tool, to each user in Analytic Services. Within Analytic Services (using an Analytic Services administration tool such as EAS or MaxL), the user must be assigned a minimum access level of read for all of the cubes in the configuration. When you grant data security to a user, you must place the user in the UNRESTRICTED rule set. Although you are placing users in the UNRESTRICTED rule set in Enterprise Metrics, each user's security restrictions are determined by Analytic Services. Do not assign a user to more than one hierarchical rule set.
For information on creating users, see the Hyperion System 9 BI+ Enterprise Metrics Users Guide.
4 Verify that AUTH_METHOD=CSS. (You must use external authentication in order to use Analytic Services data security.)
5 Save and close the file.
6 Repeat the above steps for the Metrics_server.prefs file.
Users must still be authorized to use Enterprise Metrics by defining rule sets in the Security tool. All users are treated as if they were in the UNRESTRICTED rule set, and data security of all types is provided solely by Analytic Services. In the Personalization Workspace, the security restriction display shows Using Analytic Services security. See the Hyperion System 9 BI+ Enterprise Metrics Users Guide for information on the Security tool.
Note: You must not enable this feature in a mixed relational data mart and cube environment. If you request a relational query with this option configured, no data security would be applied.
- CDB_USER: Used for a pool of read-only connections to the Application Data tables and views. These connections are used for generating SQL queries against the Application Data that gather the data to display pages in the Monitor, Investigate, and Pinpoint Sections, and possibly to create views in the Application Data for building reports.
- DB_USER: Used for two connections to the Application Data database. One connection is used only during server initialization for the purpose of reading constraint values, reading values from BAP_PERIOD or BAP_PERIOD_TIME, checking column names in tables, and so on. One connection remains open continuously for polling the BAP_LOAD table.
- MDB_USER: Used for a single connection to read the data in the catalog tables during server initialization, which is closed at the end of initialization.
- UMDB_USER: Used for a single connection for access to the catalog tables. This connection is used during server initialization to write report constraints, and during normal operation for saving user changes to the pages in the Monitor and Investigate Sections, saving preferences, and accumulating some statistics. Although this connection is open continuously, there are rare cases that require updates to catalog tables by more than one user, in which case one or more additional connections may be acquired briefly, then released.
The standard installation uses only two database user IDs: one for the Application Data, and another for the Metrics and Configuration Catalog tables. CDB_USER and DB_USER are set as the user ID for the Application Data, and MDB_USER and UMDB_USER are set as the user ID for the Metrics and Configuration Catalog tables. These two database IDs are a prerequisite to the installation of Enterprise Metrics.
Detailed information on application-level security is provided in the Hyperion System 9 BI+ Enterprise Metrics Users Guide.
Authorization
In a standard configuration, the Editor defines rule sets using the Security tool in the Enterprise Metrics Studio Utilities. The access rights associated with general users differ from those associated with the Editor. The two special predefined rule sets in Enterprise Metrics are: Reported Periods Only and Unrestricted. See the Hyperion System 9 BI+ Enterprise Metrics Users Guide.
- Hierarchical security: Row-level security that restricts users to specific members of a particular hierarchy level. For example, product managers can be restricted to seeing data only for those product families they manage.
- Time restriction on unreported periods: A special type of row-level security that limits users' access to unreported fiscal periods. For example, you can prevent non-insiders from seeing certain data for unreported quarters.
- Fact security: Column-level security that restricts users from seeing certain factual data. For example, you might want all cost and revenue figures to be accessible only to upper management.
Chapter 14 Supporting Clips in Enterprise Metrics
Enterprise Metrics clips are Enterprise Metrics charts or mini reports that are defined and invoked via a URL. Enterprise Metrics allows users to copy the URL of charts and mini reports from the Monitor Section in Enterprise Metrics Personalization Workspace to external Web pages or Interactive Reporting Studio dashboards. This chapter provides important requirements that are necessary to support Enterprise Metrics clips with Interactive Reporting Studio.
In This Chapter
Overview, page 254
Authentication and Authorization Requirement, page 254
Preference Settings Requirement, page 255
Overview
Enterprise Metrics clips allow end-users to copy URLs of charts or mini reports in the Monitor Section and paste the URLs to an external Web page or Interactive Reporting Studio dashboard. Enterprise Metrics clips:
- Present live, current data when viewed.
- Apply the user security rules of the currently logged-in user when viewed.
- Launch Enterprise Metrics with the context of that object, taking end-users directly to the specified target page. The clips specify whether to launch Enterprise Metrics within Hyperion System 9 BI+ Workspace or Enterprise Metrics Personalization Workspace.
- Can target a page in the Monitor, Investigate, or Pinpoint Section.
- Can target several reports, with different constraints applied.
All clips to Enterprise Metrics display a Tooltip when the end-user positions the mouse pointer over the clip. The Tooltip includes important information on security restrictions and where the clip links to in Enterprise Metrics. The following sections describe requirements that are necessary for end-users to use Enterprise Metrics clips with Interactive Reporting Studio.
- Authentication and Provisioning: Enterprise Metrics must use external authentication using the same Shared Services instance that is used by Hyperion System 9 BI+ Workspace. This is configured automatically (by default) when you run the Configuration Utility after installing Enterprise Metrics.
- Hyperion System 9 Roles: Users must be granted adequate roles and access control to view Interactive Reporting documents. In addition, they must be granted at least one of the following roles: Metrics Viewer, Metrics Analyst, or Metrics Editor.
- Enterprise Metrics Security: The users must be assigned adequate data security in the Enterprise Metrics Security tool.
For additional information on assigning rule sets to provisioned users and groups in the Data Security tool, see the Hyperion System 9 BI+ Enterprise Metrics Users Guide.
When an end-user clicks an Enterprise Metrics clip on an Interactive Reporting document, the link either opens a new Enterprise Metrics tab within the Workspace or opens a new browser window and starts a session of Enterprise Metrics Personalization Workspace, depending on the options used when generating the clip URL. In addition, when the Enterprise Metrics clients are launched, the context of the clip is automatically displayed in a Monitor, Pinpoint, or Investigate Section. The user is not prompted to log in.
Note: Enterprise Metrics clips do not contain the User ID or any data security restrictions. The User ID and corresponding data security restrictions are applied for the logged-in user, when the clip is viewed.
Figure 29
Essentially, two Client preference settings determine what options you see in the Clip Generation Options dialog box. They are:
CLIP.URL_TYPE: Controls the options displayed in the Clip Generation Options dialog box. This preference setting has one of the following values:
- GENERAL: The default option. Allows the user to generate URLs for clips in the standard format. You can use these URLs to embed clips in a single sign-on Web environment other than Hyperion System 9.
- PREFIX: Enables clip URLs to be generated in the required format for Interactive Reporting. In this mode, you must also set the CLIP.URL_PREFIX value. Essentially, the standard URL is URL-encoded and appended to the prefix for these options.
- BOTH: Enables the user to generate clip URLs in any of the above formats. In this mode, you must also set the CLIP.URL_PREFIX value.
CLIP.URL_PREFIX: Contains the prefix to use for the clip URL when the URLs are generated for the clip.
Note: The Metrics Server, however, has the ability to automatically derive the values for these preference settings if the AV_URL Server preference setting is set in the Server preference file. In a typical Enterprise Metrics installation, the AV_URL preference setting is set when the server setup is completed using the Configuration Utility. In the default scenario, you do not need to update the preference settings listed above. However, for your installation you may choose to suppress the last two options, which can be done by explicitly setting the values for CLIP.URL_TYPE and CLIP.URL_PREFIX.
For additional information, see Chapter 19, Enterprise Metrics Preference File Settings.
a. Add the CLIP.URL_TYPE= setting to the file.
b. Indicate the value BOTH or PREFIX. Because BOTH is what the Metrics Server defaults to if AV_URL is specified, you may want to set this to PREFIX to reduce the options to only the first two.
c. Add the CLIP.URL_PREFIX= setting to the file to use a custom prefix on the URL. For example:
CLIP.URL_PREFIX=http://<System 9 BI+web server:port>/workspace/Hyperion/browse/extRedirect?extUrl=
You must specify the <System 9 BI+web server> exactly as you expect end-users to enter it when launching the Hyperion System 9 BI+ Workspace.
Note: If these two preference settings are already present in the Client.prefs, modify the existing settings.
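Taken together, the resulting entries in Client.prefs would look like the following sketch (the server and port in the prefix are placeholders that you must replace with your own values):

```
CLIP.URL_TYPE=PREFIX
CLIP.URL_PREFIX=http://<System 9 BI+web server:port>/workspace/Hyperion/browse/extRedirect?extUrl=
```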
Chapter 17 Troubleshooting Enterprise Metrics
This chapter describes how you can troubleshoot problems using the Enterprise Metrics log files. In addition, it provides information on how to use the Metadata Export Utility if a Hyperion Solutions Customer Support staff member requests to view your catalog (metadata).
In This Chapter
Using Log Files for Tuning and Troubleshooting, page 288
Locating and Viewing the Logs, page 288
Understanding Which Logs to View, page 290
Reading Log Files, page 291
Using the Deployment Logs, page 301
Using the Metadata Export Utility, page 301
- Where the logs are stored, and how to identify them
- The best ways for locating and viewing them
- Which log(s) are most likely to contain relevant information
The three types of logs are: server, client, and tools. Each log is identified with a prefix and each server maintains its own logs. Server logs are distinguished by the port number in the third segment of the filename. To manage disk space effectively, a log rotation scheme is used. Each component is configured to maintain some number of log files (two or three); when the current log file exceeds a configurable size limit, the file is closed, a new one is started, and older ones are removed. For this purpose, each log filename includes a date and timestamp indicating when the log was created (first written to). Sample log names are:
- Workspace and Personalization Workspace log: mb.client.20020130.042005.log
- Server log: mb.server.2005.20020130.041841.log
- Studio Utilities log: mb.tools.20020130.114913.log
- Studio log: mb.client.20020301.035903.log
- Configuration Server log: mb.server.2006.20020130.112647.log
If an end-user runs Personalization Workspace, the log file is stored on their computer. The exact location is browser-dependent, so the simplest way to find and view these is to use the View Log link on the Login page. This function locates the current log (provided the client has been run at least once in the current browser session). For example, on Windows 2000, the log is stored in the temporary directory:
C:\Documents and Settings\<userid>\Local Settings\Temp. Note: If an end-user runs the Workspace using Tomcat, the activity is written to a central log file on the machine where the Enterprise Metrics Web components are installed in the <Hyperion_Home>\AppServer\InstalledApps\Tomcat\5.0.28\EnterpriseMetrics \server folder.
Servlet Logs
The location of the servlet log, mb.servlets.log, depends on your Web environment.
Tomcat
<Hyperion_Home>\<EM_Home>\AppServer\InstalledApps\Tomcat\5.0.28\EnterpriseMetrics\server
For WebLogic, the log is written in the directory containing the WebLogic startup script
<BEA_HOME>/user_projects/domains/HMB
For WebSphere, the log is written in the WebSphere Application Server home directory <WAS_HOME>.
- If the server does not fully initialize, view the server.log; all of the information you might need should be available there.
- For performance tuning, use the server logs to review the issued database queries and aggregate table usage.
- If an end-user complains of slow response time, first review the user's client.log to determine which page or item is causing the delay, and then match it to the entries in the server.log for further analysis (further details are explained below).
- If an end-user has a chart or report that will not display, first review the client.log to identify which specific chart or report is causing the problem, and then trace it back to the server.log (where you might find that a query was failing, or perhaps the metadata was configured improperly). In such cases, it usually helps to have the user begin a new session and recreate the problem in the most direct manner possible, to simplify your search.
- If the tools are misbehaving, view the tools.log (unless you have a problem with authentication or authorization). With the exception of the initial login authentication, the tools interact with the database directly, so it is unnecessary to view the server.log.
- If the problem is launching the clients or using the Thin Client, check the mb.servlets.log.
All Enterprise Metrics applets display error dialog boxes if they have issues starting. For example, the server cannot be located, the database is down, or the server is still initializing. However, in rare cases the applet may not start, which means that no information appears in the log file. Typically this is due to browser or Web server configuration issues. To investigate these, you must enable and open the Java Console Window (in the browser), and watch for messages while the browser is connecting and downloading the applet from the Web server.
Always start from the bottom and work your way back up to the point of interest, and carefully check the date/time stamps to ensure that you are not reading old data. If the client is in a different time zone than the server, look for an entry at the beginning of the client.log that notes the corresponding time on the server. Requests from the client to the server are identified by a user ID and a request ID within a client session; this enables you to match client and server activity. If performance is slow, compare the time stamps on consecutive entries to see if you can determine where the time was spent. Read carefully, and do not be intimidated: at first the amount of information may seem overwhelming, but with a little practice you may be surprised at how much you can determine on your own. Most importantly, if it seems that you may need assistance from Hyperion Solutions Customer Support, save the relevant logs before they are overwritten. When reviewing the Enterprise Metrics log files, be aware of the following terminology:
The term "dash" corresponds to a page in the Investigate Section.
The term "graph" corresponds to a chart.
The term "database measure" corresponds to a measure.
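The advice above about comparing time stamps on consecutive entries can be sketched as a small script. This assumes the MM/DD HH:MM:SS stamp format shown in the log excerpts later in this chapter; it is an illustration, not a shipped utility.

```python
from datetime import datetime

def intervals_ms(stamps):
    """Milliseconds elapsed between consecutive 'MM/DD HH:MM:SS' log stamps.

    Assumes all stamps fall within the same year, since Enterprise Metrics
    log entries omit the year from the date stamp.
    """
    parsed = [datetime.strptime(s, "%m/%d %H:%M:%S") for s in stamps]
    return [int((b - a).total_seconds() * 1000) for a, b in zip(parsed, parsed[1:])]

# A slow step stands out as a large gap between consecutive entries:
intervals_ms(["01/04 09:09:49", "01/04 09:09:52", "01/04 09:12:29"])
```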
Log Formats
All three types of logs begin with a standard set of information about the system environment and the current prefs settings. The majority of the activity log entries follow a standard format that includes the date, time, severity code, user ID (or function name in some cases), and message text.
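As an illustration of that format, a short script can split an activity entry into its fields. This is a sketch based on the description above; real logs also contain continuation lines without a date stamp, which this pattern deliberately does not match.

```python
import re

# "03/13 07:43:04 I ashah <9> Requesting reportData"
#  date   time    sev user  <req> message
ENTRY = re.compile(
    r"(?P<date>\d{2}/\d{2}) (?P<time>\d{2}:\d{2}:\d{2}) "
    r"(?P<severity>[IWE]) (?P<user>\S+)(?: <(?P<request>\d+)>)? (?P<message>.+)"
)

def parse_entry(line):
    m = ENTRY.match(line)
    return m.groupdict() if m else None  # None for continuation lines

entry = parse_entry("03/13 07:43:04 I ashah <9> Requesting reportData")
```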
The log file excerpt shows information pertaining to the Java software installation, the user name and home directory, the Enterprise Metrics installation directory, and other environment variables, such as the operating system (os.name) and operating system version (os.version).
Prefs Settings
The next section of the log file shows information about the current prefs settings. The following lines show an excerpt from the server.log.
Current preferences are:
AUTH_AUTO_REGISTER=TRUE
AUTH_AV_PROD_ID=
AUTH_DEF_FILTER_CRIT=*
AUTH_DEF_FILTER_TYPE=GROUPS
AUTH_METHOD=CSS
AUTH_METHOD_CLASS=
AUTH_PROVISIONED=FALSE
BALLPARK=DUMPTOFILE
BILLIONS_SYSTEM=AMERICAN
CACHE_DEBUG_USER=
CACHE_ENABLE_PURGE=FALSE
CACHE_PRELOAD_LIMIT=0
CACHE_SIZE_METRICS=50
CACHE_SIZE_REPORT=50
CACHE_SYS_METRICS_MAX=200
CACHE_SYS_METRICS_MIN=150
CDB_PASS=
CDB_USER=smpl
CHECK_PERIOD_TABLE=FALSE
CLEANUP_WAIT=1800
CLIENT_PREFS=Client.prefs
CONFIG_PORT_NUMBER=2006
CONFIG_SERVER=TRUE
As you scroll down the server.prefs settings, you will find the following prefs settings relating to the logs. The LOG_LEVEL is typically set to 3, which is the recommended setting.
LOG_FILE_MAX=3000000
LOG_LEVEL=3
LOG_SAVE_COUNT=3
Detailed Information
After the environment and prefs settings information, an initializing message appears. The following lines show an excerpt from the client.log.
06/08 00:08:15 I *** Server is Initializing, Code Level 90J8, Version 9.0.0.0.0.08
06/08 00:08:15 W *** LICENSING DISABLED ***
Read prefs from C:\Hyperion\EnterpriseMetrics\Server\Client.prefs
DashServer.main: creating registry
DashServer.main: binding server as //carson.hyperion.com:2006
DashServer.main: initializing LocalServer
Connecting to database: jdbc:hyperion:oracle://carson:1521;SID=orcl, for polling loads table
...using driver: hyperion.jdbc.oracle.OracleDriver, as user: smpl
Connecting to database: jdbc:hyperion:oracle://carson:1521;SID=orcl, for data access
...using driver: hyperion.jdbc.oracle.OracleDriver, as user: smpl
Connecting to database: jdbc:hyperion:oracle://carson:1521;SID=orcl, for metadata access
...using driver: hyperion.jdbc.oracle.OracleDriver, as user: smpl
Connecting to database: jdbc:hyperion:oracle://carson:1521;SID=orcl, for metadata update
...using driver: hyperion.jdbc.oracle.OracleDriver, as user: smpl
06/08 00:08:19 I TABLEMAP Reading DB_MAP_TABLE named <pub_map_table> for entries tagged as DB_MAP_NAME <pub>
06/08 00:08:19 I TABLEMAP Schema = SMPL, Catalog = null, finding columns using SELECT *
06/08 00:08:20 I TABLEMAP Mapping turned on, using <pub>
06/08 00:08:20 I 'loads' table name is <bap_load>
06/08 00:08:20 I SERVER Database connections established, and load complete - (re)starting Connection Pool
06/08 00:08:22 I ADM Initializing multidimensional application info
06/08 00:08:25 I ADM Established connections to 0 multidimensional application(s)
06/08 00:08:25 W AUTH Pref Setting AUTH_MODE is CSS. Resetting server prefs USER_NAME_POLICY to CUSTOM_LOGIN and CUSTOM_LOGIN_CLASS to launcher.LoginCSSImpl.
Reading trusted password...
06/08 00:08:25 W AUTH Using default trusted password. TP from database is null.
Each line in the detailed area of the log is typically in the following format:
date  time  severity code  user ID  message text
Keep in mind that not every line item has a date stamp. Table 23 shows some hints and tips that might help you interpret information displayed in the log.
Table 23  Hints and Tips for Reading Log Lines

Item/Symbol   Description
I             The I appearing after the date and time stamp represents an informational message; an example is:
              03/15 07:04:01 I QUERY
E             An error message.
W             A warning message.
Table 23  Hints and Tips for Reading Log Lines (Continued)

Item/Symbol   Description
<1>           The first and second sets of angle brackets indicate the request for and return of data. The following lines show an excerpt from the client.log; in this case the number is 9, meaning this is the ninth time the user requested data in this session (each request is assigned a number sequentially):
              03/13 07:43:04 I ashah <9> Requesting reportData
              03/13 07:43:04 I ashah <9> Returning reportData
              When a user views a page in the Monitor Section and drills down to a chart or mini report, a request and return data message shows for each individual chart and mini report that the user requests. Any requests for reports and metrics in the Investigate and Pinpoint Sections show a single request and return message.
***           Marks a warning, an error, or invalid constraints. Below are excerpts from the log representing a warning, an error, and invalid constraints:
              *** Warning, no constraint_name specified for clickable <3> in mini <105>, hopefully this is a crosstab data cell...
              *** invalid constraints specified for [page: test, position(380, 515), size(427, 113), MINI, mini_id <50> common.NewsObj@497934]
              *** Error, invalidating graph_template <3> due to missing measures in metric <Bookings ASP Qago>
Obviously, if it really is a programming error, you are not going to fix it, and the method names and line numbers will not be useful to you (though they help Hyperion Solutions tremendously). However, if you scroll up and read several lines before the exception, you might get a very good idea of what the problem is. For example, in the Studio Utilities you may see an exception error if a SQL query or update fails, and it might be as simple as the database being down. It might also be related to a metadata problem, in which case you may be able to determine which particular item (metric, chart template, and so on) was involved. On the client, you might be able to determine that a particular mini report caused the problem, see what constraint settings were being used, and then investigate the mini report further in the Studio Utilities, or check for corresponding SQL errors in the server.log. Stack traces are used as gross indicators: they are easy to spot, almost always indicate a problem, and might give you a clue about what is happening based on the surrounding context. If the problem persists and is not obvious, please forward the logs and associated details to Hyperion Solutions Customer Support for investigation.
The log should at least give you a good idea of what it was trying to do at the time it failed. So if it returned the error while Reading hierarchies, for example, and you know that you did something a bit unusual with hierarchies yesterday, you might review that and try restoring the hierarchies to the way they were. Usually, this type of error will not result from a database or network connectivity problem. The server expects to run continuously for weeks or months, and is quite robust about handling these conditions.
Tip: When you first install the application, only the Configuration Catalog is populated with metadata, and you must publish (using the Publishing Control tool) to migrate the information to the Metrics Catalog before attempting to start the Server.
Starting at the bottom, you see that the page named Contribution (owned by user #admin, meaning it is visible to all users) has an invalid chart in some column, because one or more of its metrics was invalid. Working up, you find that chart <83> had two missing metrics, in turn
due to the missing measure Revenue $. Finally, you see that this measure was discarded because none of the associated stars were able to access the required column (because it had been renamed). At this point, you would want to review the definition of the measure to determine whether it was a problem with the fact snippet, or the star/StarGroup definitions. Note that in this case, it is not a problem with the application; you have a problem with how the catalog has been configured, and you also have enough information to track it down. A similar sequence is used with reports: a report will be ignored if it contains an invalid mini report, which might occur because the SQL was deemed invalid, possibly because it used a constraint that was not properly associated with a hierarchy or was missing a parent constraint. The report simply will not appear in the menu on the client, but the server.log will give you a very clear indication of why it was rejected.
Tip: Periodically, review the initialization sequence in the server.log and clean up any errors (you may have some errors without realizing it, since the server can apply temporary corrections in some cases). This will make it much easier to spot real problems, should they occur.
Performance is Poor
Generally speaking, the server is sophisticated enough that it does not have to do very much work. If performance seems slow, it is usually because the Application Data area is taking a long time to execute a query. You need to determine which query is involved, and why, and then find a way to make it faster.

For example, suppose an end-user complains that a page takes too long to display. This is a fairly complicated problem, because the page contains multiple charts, each using multiple metrics/measures, and the server is actually combining queries to the Application Data across all the different measures involved, wherever possible. Also, it must mean that the page has not already been cached, so either it is a private page belonging to that end-user, or they have drilled further than anyone else, or perhaps they have some unusual security restrictions.

It usually helps to isolate the problem as much as possible before looking at specific database query timings. You can look at what slicing/drilling constraints were being used, and try creating a series of pages each containing a single chart from the original page. If you can narrow it down to a particular column, that will make the rest of the analysis much easier, because you will be able to quickly pinpoint the relevant SQL statement without being distracted by server optimizations.

The following example is a bit more complex. In this case, the server is preloading the cache (hence the user ID PRELOAD), but the processing is mostly the same as if some individual client had made a request for the same page, with the same constraints.
Note: The following example shows a sample log; however, you may notice minor wording differences in your log.
After you locate the query that is causing the problem, you then have to consider the possible need for more aggregate tables, restricting unreasonable drilling levels, creating more indexes, and so forth. But the first step is understanding what the problem is, and the log sample below includes comments along the way.
01/04 09:09:49 I PRELOAD <2> Metrics data requested for #admin/Opportunity-Qtr/Opportunity-Qtr
Metrics data requested... appears at the start of the process, and at the very end there is a corresponding Returning metrics data... (or sometimes Returning cached data...).
Notice the Request ID in angle brackets <2>: this is used to tag many of the entries. Request IDs are unique only within a single client's login session, so you often need to consider both the user ID and the Request ID to match things up. Also note the string at the end, #admin/Opportunity-Qtr/Opportunity-Qtr. This identifies the page, as <userid>/<metrics page name>/<metrics page title>.
#admin indicates that it is a system, or Editor page, so <metrics page title> would appear
This is the start of a timer (**** START QUERY), which will cover the entire page process. The total accumulates, while the interval shows just the time since the previous timing entry. All times are in milliseconds (60,000 = 1 minute).
--> timing metrics query <2> for user PRELOAD, page #admin/OpportunityQtr/Opportunity-Qtr, total: 143, interval: 143, all SQL generated for PRELOAD:Opportunity-Qtr
The server has now generated all required SQL for the entire page and has done the carpooling function of combining multiple select items into a single query wherever possible or reusing an item that has already been selected for some other chart on the page.
01/04 09:09:49 I PRELOAD Final SQL for DETAILS (using star Opp Rev Line):
SELECT SUM(F1.opp_actual_amt), COUNT(distinct(F1.opp_key)), P.QTR_OVERALL_NO, D1.CUST_SITE_COUNTRY_NAME
FROM bap_opportunity_revenue_line F1, brio_mart.bap_fiscal_period P, bap_customer D1
WHERE F1.Period_Key=P.Period_Key AND F1.CUST_KEY=D1.CUST_KEY AND P.QTR_OVERALL_NO IN (11,12,13,14,15)
GROUP BY P.QTR_OVERALL_NO, D1.CUST_SITE_COUNTRY_NAME
01/04 09:09:49 I PRELOAD Connection requested
01/04 09:09:49 I PRELOAD Connection obtained
01/04 09:09:52 I PRELOAD Connection returned, idle conns=5 numConns=5
--> timing metrics query <2> for user PRELOAD, page #admin/Opportunity-Qtr/Opportunity-Qtr, total: 2651, interval: 2508, finished query 0, results saved
This group of entries covers the execution of the first query for the page. It begins by showing the SQL that will be executed, and also notes which star was selected. A connection was obtained from the pool, the query executed, and it took 2.508 seconds. This includes the time for the SELECT clause, and the time to retrieve all of the result rows. Several more queries for the same page follow.
01/04 09:09:52 I PRELOAD Final SQL for DETAILS (using star Opp Rev Line):
SELECT COUNT(DISTINCT F1.cust_key), P.QTR_OVERALL_NO, D1.CUST_SITE_COUNTRY_NAME
FROM bap_opportunity_revenue_line F1, brio_mart.bap_fiscal_period P, bap_customer D1
WHERE F1.Period_Key=P.Period_Key AND F1.CUST_KEY=D1.CUST_KEY AND P.Day_Last_Of_Qtr_Ind = 1 AND P.QTR_OVERALL_NO IN (11,12,13,14,15)
GROUP BY P.QTR_OVERALL_NO, D1.CUST_SITE_COUNTRY_NAME
01/04 09:09:52 I PRELOAD Connection requested
01/04 09:09:52 I PRELOAD Connection obtained
01/04 09:09:54 I PRELOAD Connection returned, idle conns=5 numConns=5
--> timing metrics query <2> for user PRELOAD, page #admin/Opportunity-Qtr/Opportunity-Qtr, total: 4995, interval: 2344, finished query 1, results saved
01/04 09:09:54 I PRELOAD Final SQL for TOTALS (using star Opp Rev Line):
SELECT COUNT(DISTINCT F1.cust_key), P.QTR_OVERALL_NO
FROM bap_opportunity_revenue_line F1, brio_mart.bap_fiscal_period P
WHERE F1.Period_Key=P.Period_Key AND P.Day_Last_Of_Qtr_Ind = 1 AND P.QTR_OVERALL_NO IN (11,12,13,14,15)
GROUP BY P.QTR_OVERALL_NO
01/04 09:09:54 I PRELOAD Connection requested
01/04 09:09:54 I PRELOAD Connection obtained
01/04 09:09:55 I PRELOAD Connection returned, idle conns=5 numConns=5
--> timing metrics query <2> for user PRELOAD, page #admin/Opportunity-Qtr/Opportunity-Qtr, total: 6286, interval: 1291, finished query 2, results saved
01/04 09:09:55 I PRELOAD Final SQL for DETAILS (using star Booking Header):
SELECT SUM(F1.order_ext_actual_amt), P.QTR_OVERALL_NO, D1.CUST_SITE_COUNTRY_NAME
FROM bap_order_header_fact F1, brio_mart.bap_fiscal_period P, bap_customer D1
WHERE F1.Period_Key=P.Period_Key AND F1.CUST_KEY=D1.CUST_KEY AND P.QTR_OVERALL_NO IN (12,13,14,15)
GROUP BY P.QTR_OVERALL_NO, D1.CUST_SITE_COUNTRY_NAME
The SQL generation above is the typical case, where three items appear in the SELECT. With rare exceptions, the last item is the current hierarchy or slice level (sliced by customer country), the second-to-last item is the time value (quarter number), and any items before that are the actual facts or measures. In this case, there is only one (sum of actual amt), but there may be several; for example, the first query at 09:09:49 has two measures being selected.
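That positional convention can be expressed as a tiny helper. This is a sketch of the convention exactly as described above, assuming the SELECT list is available as a simple list of item strings:

```python
def classify_select_items(items):
    """Split a generated SELECT list by the positional convention described above:
    last item = slice level, second-to-last = time value, the rest = measures."""
    *measures, time_value, slice_level = items
    return {"measures": measures, "time": time_value, "slice": slice_level}

classify_select_items([
    "SUM(F1.opp_actual_amt)",
    "COUNT(distinct(F1.opp_key))",
    "P.QTR_OVERALL_NO",
    "D1.CUST_SITE_COUNTRY_NAME",
])
```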
01/04 09:09:55 I PRELOAD Connection requested
01/04 09:09:55 I PRELOAD Connection obtained
01/04 09:09:59 I PRELOAD Connection returned, idle conns=5 numConns=5
--> timing metrics query <2> for user PRELOAD, page #admin/Opportunity-Qtr/Opportunity-Qtr, total: 9624, interval: 3338, finished query 3, results saved
01/04 09:09:59 I PRELOAD Final SQL for DETAILS (using star Billing Header):
SELECT SUM(F1.invoice_ext_actual_amt), P.QTR_OVERALL_NO, D1.CUST_SITE_COUNTRY_NAME
FROM bap_invoice_header_fact F1, brio_mart.bap_fiscal_period P, bap_customer D1
WHERE F1.Period_Key=P.Period_Key AND F1.CUST_KEY=D1.CUST_KEY AND P.QTR_OVERALL_NO IN (12,13,14,15)
GROUP BY P.QTR_OVERALL_NO, D1.CUST_SITE_COUNTRY_NAME
01/04 09:09:59 I PRELOAD Connection requested
01/04 09:09:59 I PRELOAD Connection obtained
01/04 09:12:29 I PRELOAD Connection returned, idle conns=5 numConns=5
--> timing metrics query <2> for user PRELOAD, page #admin/Opportunity-Qtr/Opportunity-Qtr, total: 159850, interval: 150226, finished query 4, results saved
Notice that the above excerpt shows 150 seconds for this one query, while the others totalled less than 10 seconds combined. Having retrieved all of the data, the server finally does any necessary calculations to construct metrics and so forth.
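Scanning the timing entries for the largest interval can be automated with a few lines. This sketch is keyed to the "--> timing ... finished query N" lines shown in the excerpts above:

```python
import re

# Matches the tail of each "--> timing ..." entry shown above.
TIMING = re.compile(r"total: (\d+), interval: (\d+), finished query (\d+)")

def slowest_query(log_text):
    """Return (query index, interval in ms) for the slowest finished query."""
    steps = [(int(interval), int(q)) for _total, interval, q in TIMING.findall(log_text)]
    interval, query = max(steps)
    return query, interval

log = (
    "--> timing metrics query <2> total: 2651, interval: 2508, finished query 0, results saved\n"
    "--> timing metrics query <2> total: 159850, interval: 150226, finished query 4, results saved\n"
)
slowest_query(log)
```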
01/04 09:12:29 I PRELOAD <2> Returning metrics data for Opportunity-Qtr, size 13280
That is the end of processing for this page, when the results would normally be returned to the client (but in this case are just going into the cache). The size refers to the storage, in bytes, required for the full set of results.
In addition, there are log files generated by your Web environment software that may also contain valuable information if you are experiencing a problem. The types and locations of these log files vary by vendor. Refer to your vendor documentation for specific details.
Metadata Export Utility Files
Configuring the Metadata Export Utility
Running the Metadata Export Utility
metadata_export_table_list.txt - Export table list file that defines the tables from which the records are extracted. The list may contain one or more tables.
metadata_export_presql.sql - Pre-SQL processing file that defines SQL that should be added to the beginning of the output file.
metadata_export_postsql.sql - Post-SQL processing file that defines SQL to be added to the end of the output file.
metadata_export.jar - The JAR file containing the Java code for the Metadata Export Utility.
run_metadata_export.bat - Runs the Metadata Export Utility. The BAT file contains the path to the JRE and the preference files. The Java plug-in available from the Enterprise Metrics launch pages is sufficient to run the Metadata Export Utility.
This folder also contains database drivers for each supported database. The following sections provide more detail on the files associated with the Metadata Export Utility.
Metadata_export.prefs File
There are preferences that you need to define to run the Metadata Export Utility. These preferences specify directories where the source files or log files are located:
These preferences pertain to the specifics of what table(s) you are exporting from and what database you will be importing to:
TABLE_PREFIX
UPDATE_USER_ID
For a complete list of preference settings and descriptions, see Chapter 19, Enterprise Metrics Preference File Settings.
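For illustration only, the export-related settings named above might appear in metadata_export.prefs as follows. The values are hypothetical placeholders; see Chapter 19 for the authoritative settings and their defaults.

```
TABLE_PREFIX=PUB_
UPDATE_USER_ID=admin
```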
Column one: Table NameLists the metadata table name without the normal prefix of PUB_ or PRD_. By default, the table prefix is set to PUB_. You can change the table prefix setting to PRD_ in the metadata_export.prefs file. The Metadata Export Utility concatenates the table prefix setting (from the preference file) to the name of each table listed in the export table list file. See Chapter 19, Enterprise Metrics Preference File Settings.
Column two: Action FlagContains an action flag that determines whether an insert (I) statement is to be generated or if the table should be skipped (S). The metadata tables that are core to Enterprise Metrics are marked with an I. The metadata tables that are separate and distinct are marked with an S.
Note: I and S are the only valid values for the action flag. If the action column is blank, the Metadata Export Utility logs an error and processing stops. You must enter a valid value before processing can continue.
If you want to use the insert statements to populate a complete set of metadata tables, the tables must be listed in foreign key order, so that the insert statements work if the foreign keys are enabled (the parent records are inserted before the child records).
Column three: Where ClauseDefines an optional SQL where clause that limits the records written to the output file for a table. This is useful when you are exporting constraint items and do not need to output generated items.
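A sketch of how those three columns drive the export follows. The table names and WHERE clause below are hypothetical examples, not the shipped contents of metadata_export_table_list.txt:

```python
# Each row: (table name without prefix, action flag, optional where clause).
rows = [
    ("EXAMPLE_PARENT", "I", ""),                           # hypothetical core table: insert
    ("EXAMPLE_EXTERNAL", "S", ""),                         # hypothetical separate table: skip
    ("EXAMPLE_CONSTRAINT", "I", "WHERE GEN_FLAG <> 'Y'"),  # hypothetical filter
]

def export_plan(rows, table_prefix="PUB_"):
    """Apply the documented rules: prefix each table name, honor the I/S flags,
    and stop with an error on any invalid action flag."""
    plan = []
    for name, flag, where in rows:
        if flag not in ("I", "S"):
            raise ValueError(f"invalid action flag {flag!r} for table {name}")
        if flag == "I":
            plan.append((table_prefix + name, where))
    return plan
```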
Please refer to the metadata_export_table_list.txt file for a complete list of metadata tables. The following tables only exist in the PUB environment; there is not a comparable table in the PRD environment:
Output File
Each time you run the Metadata Export Utility, it creates an output file named metadata_export.sql. The same file name is used each time the output file is generated; therefore, if you plan to run the Metadata Export Utility more than once, you should rename the output file after each run (before the next run).
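One way to avoid overwriting the fixed-name output file is to stamp it after each run. The following is a minimal sketch; the renaming scheme is our suggestion, not part of the utility:

```python
import os
import time

def rotate_export_output(path="metadata_export.sql"):
    """Rename metadata_export.sql to a time-stamped name so the next run
    of the Metadata Export Utility does not overwrite it."""
    stamped = time.strftime("metadata_export_%Y%m%d_%H%M%S.sql")
    os.rename(path, stamped)
    return stamped
```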
Log File
The Metadata Export Utility creates a log file, metadata_export.log, that supports three levels of detail. The default logging level is 5. Table 24 describes the detail associated with each logging level.
Table 24  Metadata Export Utility Logging Levels

Logging Level   Activity Logged
0               Minimal information
5               Intermediate
10              Detail
Reads the parameters from the preference file, the pre- and post-SQL files, and the export table list file.
Reads all rows from the specified tables, applying any filters specified in the preference files.
Appends all pre-SQL processing statements to the output file.
Creates and outputs insert statements to the output file.
Appends post-SQL processing statements to the output file.
After you run the Metadata Export Utility, open the metadata_export.log file and review the log activity, making sure there are no errors.
If you find errors in the log file, you generally need to address the error, and then rerun the tool and verify that data from all tables identified has been correctly exported.
Chapter 15

In This Chapter
Administration Overview .......... 258
Launching the Server Console .......... 258
Monitoring Server Statistics .......... 259
Shutting Down the Server .......... 260
Restarting the Server .......... 260
Viewing the Server Log .......... 261
Monitoring Server Settings .......... 263
Exporting Settings to Preference Files .......... 264
Monitoring Users .......... 265
Exiting the Server Console .......... 265
Administration Overview
You can use the Server Consoles to:
Monitor server statistics
Shut down the server
Restart the server
View the server log
Monitor server settings
Monitor user activity
The Metrics Server Console allows you to administer the Metrics Server, and the Configuration Server Console allows you to administer the Configuration Server. This section shows you how to work with the Configuration Server Console; the process to work with the Server Console is identical.
Note: Before you launch the Server Console, you must make sure that the server is running (Metrics or Configuration).
Server Statistics in the Server Console

Item                          Description
Server Host Name              The host name of the Configuration Server.
Port Number                   The port number of the Configuration Server. Typically, the Configuration Server uses port 2006 and the Metrics Server uses port 2005.
Version Number                The Enterprise Metrics version number.
Server Up Time                The amount of time that the server has been running, formatted as Hours:Minutes:Seconds.
Number of Users               The number of users currently logged in to the Configuration Server. Since there is typically only one Editor, this shows only one user.
Connection Pool Size          The current number of connections in the pool.
Idle Connections              The number of connections in the pool not currently being used.
Minimum Connections           The smallest size the pool has reached since the server was last started.
Maximum Connections           The largest size the pool has reached since the server was last started.
Sys Cache-Metrics Size (KB)   The current size of the system Metrics pages cache in kilobytes (KB).
Sys Cache-Metrics Pages       The current size of the system Metrics pages cache in pages.
Meta DB Name                  The name of the Metrics and Configuration Catalog database.
Data DB Name                  The name of the Application Data database.
Config Server                 If TRUE is displayed, the console is running against the Configuration Server. If FALSE is displayed, the console is running against the Server.
Server State                  Indicates whether the server is accepting connections. When you restart, the server state shows that the server is initializing. You must click Refresh to determine if the server has restarted and is accepting connections.
If you plan to shut down or restart the server, the Statistics tab shows the number of users currently connected. You can also click User to show the users currently using the application. Shutting down and restarting the server both drop all users that are currently logged in. When you click Shutdown or Restart, a dialog box indicates how many users are currently logged in. If you shut down or restart the Server, the dialog box shows how many users are currently logged in to Personalization Workspace. Similarly, if you shut down or restart the Configuration Server, the dialog box shows how many users are logged in to Studio Utilities or Enterprise Metrics Studio.
Use the Metrics or Configuration Server Console. If you want to restart the Configuration Server only, use the Studio Utilities Publishing tool. On the Config Server tab, you can click Restart or Restart Fast. Use Restart when hierarchy changes have been made in the metadata since the last restart. Otherwise, use Restart Fast.
Use a UNIX command. For example, to start the Enterprise Metrics Server on UNIX, type start_config (for the Configuration Server) or start_metrics (for the Server) and press [Enter].
Use the Metrics or Configuration Server Console.
Use a UNIX command. For example, to shut down the server on UNIX, type stop_config (for the Configuration Server) or stop_metrics (for the Server) and press [Enter].
To modify the size limit, specify a maximum size in kilobytes. To view a specific portion of the log based on a date and/or time stamp, click the option Specify Date and Time. By default, the Enter Earliest Date field is populated with yesterday's date and the Enter Earliest Time field is populated with the current time. You can change the settings by clicking the field and entering a new value. This is useful if you are troubleshooting a problem that can be isolated to a specific date or time. If you have previously viewed the server log through the console window and now want to view only new activity, click Since Last Viewed. Only the new activity appears in the log. This button is enabled for the duration of your logon to the Server Console. If you log out of the Console and log back in, you must view the server log through the console to enable this button.
3 Choose a location to save the log as a text file.
4 Click Save.
5 Using Windows Explorer, open the file from the saved location.
6 After you open server.log.txt, scroll to the end to view the most current information.
7 Then, scroll from the bottom up to locate the date stamp of the portion of the log you want to view.
An example of server.log.txt is shown in the following figure.
Setting Passwords
The Settings tab contains a Set Passwords button. This is used to administer the trusted password for embedded mode, or the LDAP Directory Manager password if you are using LDAP authentication in stand-alone mode.
Monitoring Users
The Users tab allows you to monitor the number of users logged in to Enterprise Metrics.
User Information

Column       Description
User ID      The user ID of the person logged in. You can sort the list of users by clicking the User ID column.
Duration     The amount of time the user has been logged in.
Login Host   The name of the server hosting the Configuration Server (or Server).
Idle Time    The amount of time since the server last heard from the client.
Met Qrys     The number of queries for pages in the Investigate Section and individual charts in the Monitor Section.
Met Hits     The number of metric queries that were satisfied by the server.
Rpt Qrys     The number of queries for (Pinpoint Section) pages and individual mini reports in the Monitor Section.
Rpt Hits     The number of report queries that were satisfied by the server.
Other Reqs   The number of other requests from the client to the server, not including metric or report requests.
Chapter 16
There are four programs delivered with Enterprise Metrics that support the load processes that maintain the data in the Application Data area: BeginLoad, FinishLoad, Publish, and Enrich. These load support programs must execute as part of the extract, transform, and load (ETL) process that moves data from your source system(s) to the Application Data area. The functionality of each of these load support programs is explained in this chapter.
In This Chapter
Load Process Overview .......... 268
Scheduling the Load Support Programs .......... 269
Preference File Settings .......... 269
BeginLoad Program .......... 271
FinishLoad Program .......... 271
Publish Program .......... 273
Processed Enrichment Overview .......... 273
Enrichment Versus ETL .......... 276
Enrich Program .......... 277
Failure During Enrichment Job Processing .......... 278
Studio Utilities in Stand-alone Mode .......... 279
Reviewing the Load Support Logs .......... 283
Table 20. Load Support Programs: Optional Preference Settings

LOADS.LOG_TO_FILE=TRUE
Defines whether the output from the BeginLoad and FinishLoad programs is written to a separate log file (mb.Loads.log) or to the same output stream as the calling program. If the setting is TRUE, output is written to the mb.Loads.log file. If the setting is FALSE, output is written to the system console.
LOADS.LOG_LEVEL=2
Defines whether the output from the BeginLoad and FinishLoad programs should include SQL statements, commit, and rollback points. If the setting is 1, it does not include SQL statements unless an error occurs. If the setting is 2, it does include SQL statements.
PUBLISH.LOG_TO_FILE=TRUE
Defines whether the output from the Publish program should be written to a separate log file (mb.Publish.log) or to the same output stream as the calling program. If the setting is TRUE, it writes output to the mb.Publish.log file. If the setting is FALSE, it writes output to the mb.Loads.log file.
PUBLISH.LOG_LEVEL=2
Defines whether the output from the Publish program should include SQL statements, commit, and rollback points. If the setting is 1, the Publish program does not include SQL statements unless an error occurs. If the setting is 2, the program includes SQL statements.
ENRICH.LOG_TO_FILE=TRUE
Defines whether the output from the Enrich program should be written to a separate log file (mb.Enrich.log) or to the same output stream as the calling program. If the setting is TRUE, it writes output to the mb.Enrich.log file. If the setting is FALSE, it writes output to the mb.Loads.log file.
ENRICH.LOG_LEVEL=2
Defines whether the output from the Enrich program should include SQL statements, commit, and rollback points. If the setting is 1, the Enrich program does not include SQL statements unless an error occurs. If the setting is 2, the Enrich program includes SQL statements.
If any of the preference settings are missing from the Metrics_server.prefs file, or if a value does not match one of the possible values shown in Table 20, the default value applies for that setting. The preference setting values are not case sensitive. By default, three separate logs are written that include SQL statements, commit points, and rollback points.

Any combination of the settings above is valid. For example, to create a separate log for Enrich but not for Publish, and to show SQL in the Enrich log but not in the others, you would use the following settings:

LOADS.LOG_TO_FILE=TRUE
LOADS.LOG_LEVEL=1
PUBLISH.LOG_TO_FILE=FALSE
PUBLISH.LOG_LEVEL=1
ENRICH.LOG_TO_FILE=TRUE
ENRICH.LOG_LEVEL=2
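A sketch of how such settings might be resolved, with the documented defaults (log to file, level 2) and case-insensitive value matching. The parsing helper is hypothetical, not Hyperion's code:

```python
# Hypothetical sketch: resolve the optional logging preferences.
# Missing or invalid values fall back to the documented defaults,
# and values are matched case-insensitively.
DEFAULTS = {
    "LOADS.LOG_TO_FILE": "TRUE",
    "LOADS.LOG_LEVEL": "2",
    "PUBLISH.LOG_TO_FILE": "TRUE",
    "PUBLISH.LOG_LEVEL": "2",
    "ENRICH.LOG_TO_FILE": "TRUE",
    "ENRICH.LOG_LEVEL": "2",
}
VALID = {
    "LOG_TO_FILE": {"TRUE", "FALSE"},
    "LOG_LEVEL": {"1", "2"},
}

def resolve_prefs(lines):
    """Parse KEY=VALUE lines; return the effective logging settings."""
    effective = dict(DEFAULTS)
    for line in lines:
        line = line.strip()
        if not line or "=" not in line:
            continue
        key, _, value = line.partition("=")
        key, value = key.strip().upper(), value.strip().upper()
        kind = key.rsplit(".", 1)[-1]
        if key in DEFAULTS and value in VALID.get(kind, set()):
            effective[key] = value
    return effective

prefs = resolve_prefs(
    ["ENRICH.LOG_LEVEL=1", "PUBLISH.LOG_TO_FILE=false", "LOADS.LOG_LEVEL=banana"])
# The invalid value "banana" keeps the default; "false" matches case-insensitively.
```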
BeginLoad Program
The BeginLoad program must execute just before any Application Data ETL loads run. When executed, the program reads the preference settings from the preferences file, sets up the metadata and Application Data connections, and defines the logging level and output stream according to the preference file settings. Then, the BeginLoad program sets the following flags in the BAP_LOAD system table, which cause the Enterprise Metrics Server to become inaccessible:
- loading_flag: Set to Y (loading)
- load_compl_flag: Set to N (not complete)
- load_error_flag: Set to N (no errors)
- last_load_event_name: Set to BeginLoad Succeeded
If the BeginLoad program runs successfully, the transaction is committed in the database. If any errors occur, all processing is immediately halted, any pending transactions are rolled back, error messages are written to the log, and the BAP_LOAD table is not modified.
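The commit-or-rollback behavior just described can be sketched as follows. This is an illustrative simulation using SQLite with simplified column types, not Hyperion's actual implementation:

```python
import sqlite3

# Illustrative sketch of the BeginLoad transaction: set the BAP_LOAD
# flags, then commit only if every statement succeeds; on any error,
# roll back so BAP_LOAD is left untouched.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE bap_load (
    loading_flag TEXT, load_compl_flag TEXT,
    load_error_flag TEXT, last_load_event_name TEXT)""")
con.execute("INSERT INTO bap_load VALUES ('N', 'Y', 'N', 'FinishLoad Succeeded')")
con.commit()

try:
    con.execute("""UPDATE bap_load SET
        loading_flag = 'Y',
        load_compl_flag = 'N',
        load_error_flag = 'N',
        last_load_event_name = 'BeginLoad Succeeded'""")
    con.commit()          # success: the server now sees a load in progress
except sqlite3.Error:
    con.rollback()        # failure: BAP_LOAD is not modified

row = con.execute(
    "SELECT loading_flag, last_load_event_name FROM bap_load").fetchone()
# row is now ('Y', 'BeginLoad Succeeded')
```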
FinishLoad Program
The FinishLoad program must execute just after the Application Data ETL load has completed, including any required aggregate loads. When executed, the FinishLoad program reads the preference settings from the preferences file and sets up the metadata and Application Data connections. In addition, it defines the logging level and output stream according to the preference file settings. Then, the FinishLoad program sets the following flags in the BAP_LOAD system table, which define the load time and prevent execution of the FinishLoad program if the program is already running:
- loading_flag: Set to Y (loading)
- load_compl_flag: Set to N (not complete)
- load_error_flag: Set to N (no errors)
- last_etl_load_time: Set to the current date and time
- last_load_event_name: Set to FinishLoad Started
If the FinishLoad program starts successfully, the transaction is committed in the database. If any errors occur up to this point in the processing, all processing immediately halts, any pending transactions are rolled back, error messages are written to the log, and the BAP_LOAD table is not modified.

The FinishLoad program then checks the optional argument that indicates whether the Application Data load succeeded. If the argument exists and is not Y, the Application Data load is assumed to have failed, which causes the FinishLoad program to fail. If the optional argument does not exist or is set to Y, the Application Data load is assumed to have succeeded, and the FinishLoad program continues.
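The optional-argument check can be sketched as a small helper. The function name and argument handling are hypothetical; only the Y semantics come from the text above:

```python
# Hypothetical sketch of FinishLoad's optional success-argument check:
# the load is treated as failed only when the argument is present and
# is not "Y". Argument passing details are assumptions.
def load_succeeded(args):
    """args: command-line arguments after the prefs file, possibly empty."""
    if not args:           # no argument: assume the ETL load succeeded
        return True
    return args[0] == "Y"

assert load_succeeded([]) is True      # absent argument: continue
assert load_succeeded(["Y"]) is True   # explicit success: continue
assert load_succeeded(["N"]) is False  # anything else: FinishLoad fails
```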
After the FinishLoad program checks the optional argument, it reads the publish_meta_flag and the publish_enrich_flag from the BAP_LOAD system table. These flags are set to Y by Enterprise Metrics to indicate that publishing is requested. If standard publishing has been requested, the FinishLoad program calls the Publish program twice in order to publish the standard metadata:
First, the Publish program is called to save any user-defined metadata (such as personal pages in the Monitor Section) from the Metrics Catalog to the Configuration Catalog. Then, the Publish program is called to copy all metadata (except enrichment metadata) from the Configuration Catalog to the Metrics Catalog.
If both of these processes succeed, the publish_meta_flag is reset to N and the transaction is committed in the database.

If enrichment publishing has been requested, the Publish program is called to copy the enrichment metadata from the Configuration Catalog to the Metrics Catalog. If this process succeeds, the publish_enrich_flag is reset to N and the transaction is committed in the database.

Next, the FinishLoad program calls the Enrich program to enrich the Application Data based on the enrichment job definitions in the catalog. The Enrich program runs every time the FinishLoad program is executed, regardless of whether enrichment publishing has been requested.

Finally, the FinishLoad program determines the as of date for the Application Data based on the VAP_LOAD_DONE view and updates the BAP_LOAD system table to set the period information for the as of date. In addition, the BAP_LOAD flags are updated and committed to indicate the success of the load.
If the FinishLoad program fails for any reason after the FinishLoad Started event has occurred, all processing is immediately halted, any pending transactions are rolled back to the last commit point, errors are logged, and the BAP_LOAD system table flags are updated and committed to indicate the failure of the load (load_error_flag is set to Y, loading_flag to N, and last_load_event_name to FinishLoad Failed).
Note: The failure of the Publish and Enrich programs automatically causes the failure of the FinishLoad program in the manner described above.
Publish Program
The Publish program is automatically called by the FinishLoad program when standard or enrichment publishing has been requested. When the FinishLoad program calls the Publish program, the preferences file and a publishing group code are passed as arguments. The publishing group code identifies which subset of the metadata tables to publish (standard versus enrichment).

To publish the standard metadata tables, the Publish program must execute twice: once to save the user-defined metadata (such as personal pages) from the Metrics Catalog to the Configuration Catalog, and once to copy all metadata (except enrichment metadata) from the Configuration Catalog to the Metrics Catalog. To publish the enrichment metadata tables, the Publish program is executed only once, to copy the enrichment metadata from the Configuration Catalog to the Metrics Catalog.

When executed, the Publish program reads the preference settings from the preferences file and sets up the metadata and Application Data connections. In addition, it defines the logging level and output stream according to the preference file settings. Then, the Publish program reads the list of metadata tables to be published (standard versus enrichment) based on the publish group code in the PUB_MAP_TABLE. Additional columns in the PUB_MAP_TABLE define the publish order and any filters that should be applied when the data is published. For each metadata table to be published, the Publish program deletes the metadata from the target table and then inserts the metadata into the target table from the source table.

When enrichment publishing is requested, the publishing process also updates a flag in the PUB_ENRICHMENT_JOB metadata table to indicate that the enrichment job definitions in the Configuration and Metrics Libraries are consistent with one another. In Enterprise Metrics' Processed Enrichment tool, this flag is used to set the Edited column on the Processed Enrichment Administration window.
The Edited column tells the Editor whether the enrichment job has been edited since it was last published.

If the publish process succeeds, the transactions are committed in the database. If any errors occur, all processing is immediately halted, all transactions initiated by the Publish program are rolled back, and error messages are written to the log. In effect, either the Publish program succeeds or all transactions are rolled back, leaving the metadata in the Libraries unchanged.
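The delete-then-insert publishing pass described above can be sketched as follows, using SQLite and a simplified, hypothetical PUB_MAP_TABLE schema (the real table also carries filters, which are omitted here):

```python
import sqlite3

# Illustrative sketch of one publish pass: for each table mapped to the
# requested publish group in PUB_MAP_TABLE (in publish order), delete the
# target rows and re-insert them from the source table, committing only
# if every table succeeds. Table and column names are simplified.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE pub_map_table (group_code TEXT, src TEXT, tgt TEXT, publish_order INT);
    CREATE TABLE cfg_metric (id INT, name TEXT);   -- Configuration Catalog side
    CREATE TABLE prd_metric (id INT, name TEXT);   -- Metrics Catalog side
    INSERT INTO pub_map_table VALUES ('P', 'cfg_metric', 'prd_metric', 1);
    INSERT INTO cfg_metric VALUES (1, 'Revenue'), (2, 'Margin');
    INSERT INTO prd_metric VALUES (1, 'Revenue (old)');
""")

def publish(con, group_code):
    rows = con.execute(
        "SELECT src, tgt FROM pub_map_table WHERE group_code = ? "
        "ORDER BY publish_order", (group_code,)).fetchall()
    try:
        for src, tgt in rows:
            con.execute(f"DELETE FROM {tgt}")
            con.execute(f"INSERT INTO {tgt} SELECT * FROM {src}")
        con.commit()      # all-or-nothing: commit only after every table publishes
    except sqlite3.Error:
        con.rollback()    # any failure leaves the target catalog unchanged
        raise

publish(con, "P")
count = con.execute("SELECT COUNT(*) FROM prd_metric").fetchone()[0]
```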
Processed Enrichment Overview

Roles

To fully understand processed enrichment, it is important to understand the roles of the database administrator, the business analyst, and the Enterprise Metrics Editor in the enrichment process.
Database Administrator
The database administrator prepares the Application Data to receive enriched data. For example, the database administrator may add a column to a table to receive the enriched data or add indexes to the database in order to improve performance.
Business Analyst
The business analyst may provide expert knowledge or specific business information that defines the data mappings.
Enrichment Process
The above roles are reflected in the enrichment process described in the following steps:

1. The Editor coordinates with the database administrator to add any new columns that are required to receive the enriched data and any new indexes that are required to support enrichment processing.

When choosing columns for data enrichment, consider that most databases restrict the use of very large data types in SQL subqueries and WHERE clauses. To enrich the data in the mart, the enrichment program generates UPDATE statements and executes them in the database. Any columns that cannot be used directly in UPDATE statements, subqueries, or WHERE clauses are not supported for enrichment.

The database administrator should also ensure that the columns update_time and update_user_id are included in all tables that are used in enrichment. The update_time column stores a timestamp for each row of data, identifying when the row was loaded or when changes were last made to the row. The update_user_id column stores the ID of the user that loaded or modified the row of data. The ETL should be designed to populate these two columns when data is loaded into the Application Data area; the columns are populated automatically when data is uploaded via manual enrichment. Processed enrichment does not modify the values in these columns; however, it does rely on update_time when determining which rows have been modified since the last successful enrichment processing.

To achieve optimal performance for processed enrichment, Hyperion Solutions recommends that the DBA create indexes on the update_time column for all source and target tables. In addition, for table-to-table enrichment, indexes should be created for all the columns in the source table that are used in joins. Often, primary or alternate key columns, which are already indexed, are used in the table-to-table joins.
2. The Editor defines the enrichment job definitions using the Processed Enrichment tool. See the Processed Enrichment chapter of the Hyperion System 9 BI+ Enterprise Metrics User's Guide for more information on defining enrichment jobs and the enrichment functionality that is available through Enterprise Metrics.

3. The Editor requests enrichment publishing through the Publishing Control tool.

4. The FinishLoad program is executed as part of the nightly load process, which:

a. Calls the Publish program to copy the enrichment job definitions from the Configuration Catalog to the Metrics Catalog.

b. Calls the Enrich program, which reads the enrichment job definitions from the catalog and builds SQL UPDATE statements to modify the data in the Application Data accordingly. For additional information, see Enrich Program on page 277.

5. The Editor reviews the output logs and handles any error conditions. At times, the database administrator may be asked to assist in this process. For additional information, see Reviewing the Load Support Logs on page 283.

Figure 30 shows the enrichment process.
Figure 30: Enrichment Process
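The database preparation in step 1 above can be sketched as follows; the table, column, and index names are illustrative, not from an actual Hyperion schema:

```python
import sqlite3

# Sketch of the DBA preparation described in step 1: the target table
# carries update_time and update_user_id (populated by the ETL), plus an
# index on update_time so "new rows" enrichment filters perform well.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE sales_fact (
        order_id        INTEGER PRIMARY KEY,
        product_code    TEXT,
        region          TEXT,          -- column added to receive enriched data
        update_time     TIMESTAMP,     -- set by the ETL on every load/change
        update_user_id  TEXT           -- set by the ETL to the loading user
    );
    CREATE INDEX ix_sales_fact_update_time ON sales_fact (update_time);
""")
cols = [r[1] for r in con.execute("PRAGMA table_info(sales_fact)")]
# cols lists the five column names, including update_time and update_user_id
```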
Enrichment Versus ETL

Table: ETL Tools versus Enrichment Functionality

Purpose
- ETL Tools: Automate the bulk loading of data (extract, transform, and load).
- Enrichment: Tweak the data, add small amounts of data, or inject expert knowledge into the data.

Source System
- ETL Tools: Information systems or warehouses, such as ERP, CRM, or SCM.
- Enrichment: Analyst desktop, such as forecasts stored in spreadsheets, dimensional attributes, or hierarchy mappings based on rules that only the analyst knows.

Volume of Data
- ETL Tools: Can be very large; thousands to millions of rows per night.
- Enrichment: Fairly small; typically tens or hundreds of rows of data or rules (occasionally thousands of rows, using manual enrichment).

Process
- ETL Tools: Longer process; more involved and more formal. Includes formal requirements, design, coding, testing, validating data, operations procedures, and migrating to production. Requires heavy involvement from the information technology, database, or systems administrator.
- Enrichment: Shorter process, driven by the business analyst. Involves obtaining the source data, possibly adding columns or indexes to the Application Data, and defining enrichment mappings in the Enrichment tool.

Functionality
- ETL Tools: Extensive extract, transform, and load functionality. Transformations may be complex, with coded transformations as well as the use of predefined functions.
- Enrichment: Limited to satisfying the main enrichment use cases. Transformations supported by enrichment are fairly straightforward. Furthermore, there are a few technical restrictions:
  1. The number of distinct target values for a column cannot exceed 999.
  2. The UPDATE statement from an enrichment job cannot exceed the maximum statement size allowed by the database.
  3. For UPDATE statements from an enrichment job, there must be sufficient rollback space in the database to perform the update as a single transaction.
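The first restriction can be enforced with a simple pre-flight check; the job representation (a dict of source-to-target value mappings) is hypothetical:

```python
# Sketch of a pre-flight check for the 999-distinct-target-values
# restriction on an enrichment job's column mappings.
MAX_DISTINCT_TARGET_VALUES = 999

def check_job(value_mappings):
    """value_mappings: dict of source value -> target value for one column."""
    distinct_targets = set(value_mappings.values())
    if len(distinct_targets) > MAX_DISTINCT_TARGET_VALUES:
        raise ValueError(
            f"{len(distinct_targets)} distinct target values exceeds the "
            f"limit of {MAX_DISTINCT_TARGET_VALUES}")
    return len(distinct_targets)

# 5000 source codes map onto only 10 categories, so this job passes.
n = check_job({f"SKU{i}": f"CAT{i % 10}" for i in range(5000)})
```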
Enrich Program
The FinishLoad program automatically calls the Enrich program, passing the preference settings as arguments. When executed, the Enrich program:

- Reads the preference settings and sets up the metadata and Application Data connections.
- Defines the logging level and output stream according to the preference file settings.
- Reads the active enrichment job definitions from the catalog and builds UPDATE statements to enrich the Application Data. These statements are executed in the Application Data area in the sequence specified by the Editor.
For Direct and Rule-Based enrichment jobs, the default value defined by the Editor is used whenever a row does not qualify for an explicit value assignment. For Table-to-Table enrichment jobs, only rows that join are enriched and all other rows are not modified.
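These semantics can be sketched in SQL. The following illustrative example (run against SQLite, with invented table and column names) shows a Direct-style job assigning explicit values, with the Editor-defined default applied to every other row; a Table-to-Table job would instead update only rows that join:

```python
import sqlite3

# Sketch of Direct enrichment semantics: rows matching an explicit
# mapping get that value, all other rows get the Editor-defined default.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales_fact (product_code TEXT, region TEXT)")
con.executemany("INSERT INTO sales_fact VALUES (?, NULL)",
                [("A1",), ("B2",), ("Z9",)])

# Generated UPDATE: explicit assignments first, default for the rest.
con.execute("""
    UPDATE sales_fact SET region =
        CASE product_code
            WHEN 'A1' THEN 'East'
            WHEN 'B2' THEN 'West'
            ELSE 'Unassigned'        -- default value defined by the Editor
        END
""")
con.commit()
regions = [r[0] for r in con.execute(
    "SELECT region FROM sales_fact ORDER BY product_code")]
# regions == ['East', 'West', 'Unassigned']
```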
When you modify an enrichment job definition, you can determine whether all rows should be processed or only new rows, meaning those rows that have been added to the Application Data area since the last successful processing for this job. If new-rows processing has been selected for an enrichment job, a filter is added to the enrichment UPDATE statement to select those target rows with an update_time greater than prd_enrichment_job.max_target_update_time. For table-to-table jobs, the filter also selects any target rows that join to a source table row with an update_time greater than prd_enrichment_job.max_source_update_time.

The prd_enrichment_job.max_target_update_time and prd_enrichment_job.max_source_update_time columns are maintained by the Enrich program. These columns are updated at the end of each successful enrichment job processing by selecting the max(update_time) from the source and target tables. As a special consideration, the database administrator can manually manipulate these dates to gain finer control over which rows of data get enriched (beyond the simple all-rows-versus-new-rows choice). By manipulating max_target_update_time and max_source_update_time in the PUB_ENRICHMENT_JOB and PRD_ENRICHMENT_JOB tables, rows can be skipped or reprocessed as desired (when the enrichment job is set to process new rows). For example:
Your Editor has created a new enrichment job for a table that contains a large amount of historical data. The Editor is only interested in enriching current and future data within this table, and you would like to eliminate unnecessary processing time that would be needed to enrich all of the historical data. In this case, you can set the max_source_update_time and max_target_update_time to a current timestamp (since the job has never been processed before), and make sure the Editor has set the enrichment job to process only new rows before requesting enrichment publishing.
You have an enrichment job that is based on a product category code, and this job has been used to successfully enrich data for many months. Now, your Editor learns of a new product category code that has been in use since the beginning of last monthyet the enrichment
job has not been using that code. The Editor could immediately modify the enrichment job to include the new product category code, request enrichment publishing, and reprocess all rows of data. Alternatively, the Editor could set the job to process only new rows and then ask the database administrator to manipulate the enrichment job processing so that only rows of data loaded since the beginning of last month are enriched. In this case, the database administrator would update the PUB_ENRICHMENT_JOB and PRD_ENRICHMENT_JOB tables to set max_source_update_time and max_target_update_time to the beginning of last month for the desired enrichment job.

Note that if you plan to manually manipulate the max_source_update_time and max_target_update_time columns, it is important to ensure that a very recent database backup exists.

If the enrichment processing succeeds for an enrichment job, the all_rows flag is reset to new, the max_source_update_time and max_target_update_time columns are updated, and all transactions for that job are committed in the database. If any errors occur, all processing is immediately halted, all transactions for the current job are rolled back, and error messages are written to the log. In effect, each enrichment job succeeds and is committed until a failure occurs; the failure causes the rollback of the current job, and no additional enrichment jobs are processed.
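The new-rows filter described above might be generated along these lines. The SQL shape and job structure are assumptions for illustration; only the column names (update_time, max_target_update_time, max_source_update_time) come from the text:

```python
# Sketch: when a job processes only new rows, restrict the generated
# UPDATE to target rows newer than the job's recorded high-water mark,
# and (for table-to-table jobs) to target rows joining newer source rows.
def new_rows_filter(job):
    clauses = [f"update_time > '{job['max_target_update_time']}'"]
    if job["type"] == "table_to_table":
        clauses.append(
            "EXISTS (SELECT 1 FROM {src} s WHERE s.{key} = {tgt}.{key} "
            "AND s.update_time > '{max_src}')".format(
                src=job["source_table"], tgt=job["target_table"],
                key=job["join_column"], max_src=job["max_source_update_time"]))
    return " OR ".join(clauses)

job = {
    "type": "table_to_table",
    "source_table": "product_dim", "target_table": "sales_fact",
    "join_column": "product_code",
    "max_target_update_time": "2003-01-23 22:06",
    "max_source_update_time": "2003-01-23 21:49",
}
where = new_rows_filter(job)
# A DBA rewinding the high-water marks (for example, setting
# max_target_update_time back to the start of last month in
# PRD_ENRICHMENT_JOB) widens this filter so older rows are reprocessed.
```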
Failure During Enrichment Job Processing

Enrichment Jobs After a Failure Example

Sequence | State of Data Relative to This Load | max_source_update_time | max_target_update_time
1        | Enriched                            |                        | 1/23/03 10:06 p.m.
2        | Enriched                            |                        | 1/23/03 9:49 p.m.
3        | Not Enriched                        |                        | 1/22/03 8:51 p.m.
4        | Not Enriched                        |                        |
5        | Not Enriched                        |                        |
Note: Jobs are either enriched or not enriched and are never partially enriched.
Once a failure occurs, the Editor is responsible for determining the best course of action.

One approach is to ignore the failure and mark the load process as done despite incomplete enrichment. This is appropriate when the Editor does not expect heavy use of the enriched data by the user community before the next load. The administrator and/or Editor would then fix the enrichment problem by the next load, at which point processing would begin again with the job sequenced first.

Another approach is to fix the enrichment problem and then re-run the enrichment jobs. If the failure occurred because of a problem with the underlying data, the administrator could fix the data manually and then simply run the FinishLoad program again. If the failure occurred because of the way the enrichment job was defined, the following steps could be used to fix the problem and complete the load process:

1. The Editor launches the Studio Utilities in stand-alone mode (since the server is down).
2. In the Studio Utilities, the Editor edits the problematic enrichment job and requests enrichment publishing.
3. The Editor re-runs the FinishLoad program.
4. The FinishLoad program publishes the new job definition(s) to the catalog.
5. The FinishLoad program processes the enrichment job. The enrichment timestamps for each job, along with the rows-to-enrich flag, ensure that the proper rows of data are processed for each job.
Studio Utilities in Stand-alone Mode

You may need to run the Studio Utilities in stand-alone mode in situations such as the following:

- The FinishLoad program failed due to faulty metadata; as a result, the Configuration Server will not start, but you need to modify that metadata through the Studio Utilities.
- You need to view the metadata in the Metrics Catalog using the Studio Utilities. For example:
  - You want to examine the enrichment jobs that are in the Metrics Catalog (and therefore running nightly) because you cannot remember how they were defined; that is, you have not yet published some recent changes to the enrichment job definitions, and you need to view the jobs that are currently being executed with each load.
  - You want to look up a standard metadata definition (metric, chart, and so on) that was unintentionally deleted or modified in the Configuration Catalog.

Each situation is described in more detail in the following sections, which include instructions on how to run the Studio Utilities in stand-alone mode:

- Responding to a Finish Load Failure
- Viewing Catalog Metadata
- Running the Studio Utilities in Stand-alone Mode

To view Metrics Catalog metadata, you can launch the Studio Utilities in stand-alone mode while pointing to the Metrics Catalog. Note that this is recommended solely for viewing metadata and should never be used as a normal means of creating or modifying metadata. The steps for launching the Studio Utilities in stand-alone mode are covered in the next section, Running the Studio Utilities in Stand-alone Mode.
Running the Studio Utilities in Stand-alone Mode

2. Go to the server that hosts the Enterprise Metrics Servers.
3. Copy the server startup script. The specific server startup script you should copy depends upon which metadata catalog you want to access with the Studio Utilities (determined in Step 1 above) and whether you are using Windows or UNIX. The following table identifies the startup script you should copy.
To use the Studio Utilities to access the Configuration Catalog, copy this menu entry or file:

Windows: Start > Programs > Hyperion Solutions > Enterprise Metrics > Start Configuration Server
UNIX: \Hyperion_Home\EnterpriseMetrics\Server\start_config.sh

To use the Studio Utilities to access the Metrics Catalog, copy this menu entry or file:

Windows: Start > Programs > Hyperion Solutions > Enterprise Metrics > Start Metrics Server
UNIX: \Hyperion_Home\EnterpriseMetrics\Server\start_metrics.sh
Note: Each startup script's java command line ends with the name of a prefs file: Metrics_server.prefs in the Metrics Server startup script, and Configuration_server.prefs in the Configuration Server startup script. Copying the correct server startup script ensures that you are referencing the correct prefs file when you launch the Studio Utilities in stand-alone mode. (The Studio Utilities examine settings within that prefs file to determine whether to point to the Metrics Catalog or the Configuration Catalog. In the prefs file, if either CONFIG_SERVER=FALSE, or DB_MAP_NAME includes the string prd as one of the values, then the Studio Utilities point to the Metrics Catalog.)
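The catalog-selection rule quoted in the note above can be sketched as follows; the parsing and the comma-separated treatment of DB_MAP_NAME are assumptions:

```python
# Sketch of the rule from the note: the Studio Utilities point to the
# Metrics (production) Catalog when the prefs file has
# CONFIG_SERVER=FALSE or DB_MAP_NAME contains "prd" among its values.
def points_to_metrics_catalog(prefs):
    if prefs.get("CONFIG_SERVER", "").upper() == "FALSE":
        return True
    db_map = prefs.get("DB_MAP_NAME", "")
    return any("prd" in part.lower() for part in db_map.split(","))

assert points_to_metrics_catalog({"CONFIG_SERVER": "FALSE"}) is True
assert points_to_metrics_catalog({"DB_MAP_NAME": "prd,cfg"}) is True
assert points_to_metrics_catalog(
    {"CONFIG_SERVER": "TRUE", "DB_MAP_NAME": "cfg"}) is False
```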
4. Name the copied script. Choose a name that identifies which catalog the script accesses. The examples in Step 8 below assume the names Start Config Utilities Analytic View (Windows) and start_config_tools_analytic_view.sh (UNIX) for the Configuration Catalog script.
5. Edit the java command line in the copied script to change the second-to-last item from DashServer to admin.DashAdmin, matching case exactly. (This change is what causes the Studio Utilities to launch instead of the server.)
6. Save the copied script.
7. If you need to run the tools in stand-alone mode from a different machine (that is, a machine other than the one that hosts the Enterprise Metrics Servers), copy the database driver jar, dashall.jar, the server prefs file, and the startup script created in Steps 4-6 to the target machine. You may also need to adjust the classpath setting in the startup script to indicate the location of these files.
8. Execute the script:
a. If you are using Windows, select the entry in the Start > Programs menu. For example, if you used the name suggested in Step 4 above and you are pointing the Studio Utilities to the Configuration Catalog, select Start > Programs > Hyperion System 9 BI+ Enterprise Metrics > Start Config Utilities Analytic View.
b. If you are using UNIX, run the file. Move to the \Hyperion_Home\EnterpriseMetrics\Server directory and, at the prompt, type a period, a forward slash, and the name of the script you created in Steps 4-6. For example, if you used the name suggested in Step 4 above and you are pointing the Studio Utilities to the Configuration Catalog, type ./start_config_tools_analytic_view.sh.
9. Use the Studio Utilities as needed. Upon executing the script, the Studio Utilities should launch and operate as documented in the Hyperion System 9 BI+ Enterprise Metrics User's Guide.

If you are running the tools against the Metrics Catalog, you will see several additional warnings as you enter the tools. First, a warning about connecting to production metadata appears, with an option to exit. If you click Yes to continue, a second dialog box indicates that you should not save any changes; that is, that you will simply be viewing the metadata. It is strongly encouraged that you click Yes, thereby disabling all Save functions. If you want to make changes to the catalog, do so using the standard method: use the Configuration environment to configure your metadata, and then request publishing. Assuming you click Yes, the main Studio Utilities window appears and you can proceed with viewing the metadata. All functionality of the Studio Utilities is available except saving changes.
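Step 5's one-word edit could be automated along these lines; the sample command line below is a simplified stand-in for the real script contents, not the actual startup script:

```python
import re

# Sketch of the edit in step 5: in the copied startup script, change the
# second-to-last item on the java command line from DashServer to
# admin.DashAdmin (case-sensitive). The command line here is invented.
line = "java -cp dashall.jar;db.jar DashServer Metrics_server.prefs"
edited = re.sub(r"\bDashServer\b", "admin.DashAdmin", line)
# edited == "java -cp dashall.jar;db.jar admin.DashAdmin Metrics_server.prefs"
```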
Note: If you do try to save a change, you will see two error dialogs: one stating that updates to production metadata are prohibited, and another indicating a database error (simply noting that the database update did not occur).
If you override the recommendation on the Prohibit Updates dialog box described above by clicking No, the Studio Utilities allow you to save changes directly to the catalog. This is strongly discouraged; therefore, a third dialog box appears that displays this message:
If you do make any changes, you must go back and make those same changes to the Configuration Catalog as soon as possible. Enterprise Metrics does not provide a reverse publishing mechanism, and you may encounter serious problems at a later time.
Reviewing the Load Support Logs

Each major event performed by the load support programs is logged to a log file along with the timestamp of the event. Depending on the preference file settings, SQL statements, commit and rollback statements, and row counts may also be included.
mb.Loads.log
By default, the mb.Loads.log file contains the output from the BeginLoad and FinishLoad programs. For the BeginLoad program, the following major events are logged:

Logging Starts
Begin setup of DB connection
Begin setup of MDB connection
Loading table map from table <pub_map_table>, selecting <prd>, using
Starting BeginLoad Processing
Setting load flags in table <bap_load>
BeginLoad Successfully Completed
For the FinishLoad program, the following major events are logged:

Logging Starts
Begin setup of DB connection
Begin setup of MDB connection
Loading table map from table <pub_map_table>, selecting <prd>, using
Starting FinishLoad Processing
Setting last load time in table <bap_load>
Reading publish flags from table <bap_load>
Calling Publish Program for user-defined metadata
Calling Publish Program to publish standard metadata
Setting publish_meta_flag to 'N'
Calling Publish Program to publish enrichment metadata
Setting publish_enrich_flag to 'N'
Calling Processed Enrichment Program
Reading the as of date using view <VAP_LOAD_DONE>
Begin reading period data for the as of date <1978-01-01>
Begin Reading period data for the year ago date <1977-01-01>
Updating period data in table <bap_load>
FinishLoad Successfully Completed
If an error occurs, events are logged as shown above to the point of the error, where an error message is included in the log and possibly a rollback statement as mentioned earlier in this chapter. In the case of the FinishLoad program, additional output is included in the log to indicate that the flags in the BAP_LOAD table are being updated to indicate the failure. As an example:
********** Error: No row found in <vap_fiscal_period> for the as of date at 2003-08-02 19:09:33 **********
Setting load error flag in table <bap_load> to 'Y'
UPDATE bap_load SET LOAD_ERROR_FLAG='Y', LOADING_FLAG='N', LAST_LOAD_EVENT_NAME = 'FinishLoad Failed', UPDATE_TIME= {ts '2003-08-02 19:05:37'}, UPDATE_USER_ID='FinishLoad'
<1> rows effected.
COMMIT
Completed update of load error flag
mb.Publish.log
By default, the mb.Publish.log file contains the output from the Publish program, including the following major events:

Logging Starts
Begin setup of DB connection
Begin setup of MDB connection
Loading table map from table <pub_map_table>, selecting <prd>, using
Begin Publishing for Publish Group <P>
Begin reading metadata table information for publish group <P>
Begin deleting metaData for publish group <P> [each metadata table will be listed as it is deleted]
Begin inserting metaData for publish group <P> [each metadata table will be listed as it is inserted]
Successfully Completed Publishing for Publish Group <P>
If an error occurs, events are logged as shown above to the point of the error, where an error message is included in the log, possibly with a rollback statement, as indicated earlier in this chapter.
mb.Enrich.log
By default, the mb.Enrich.log file contains the output from the Enrich program, including the following major events:

Logging Starts
Begin setup of DB connection
Begin setup of MDB connection
Loading table map from table <pub_map_table>, selecting <prd>, using
Begin Enrichment Processing
Begin Reading Enrichment Source Criteria
Begin Reading Enrichment Target Criteria
Begin Enriching Data [each enrichment job is listed as it is processed]
Enrichment Processing was Successfully Completed
If an error occurs, events are logged as shown above to the point of the error, where an error message is included in the log, possibly along with a rollback statement, as indicated earlier in this chapter.
Chapter
18
In This Chapter
This chapter describes how to use the Performance Statistics tool to tune Enterprise Metrics and gather performance statistics and use the information to identify performance problems, determine the causes, and design solutions.
Introduction . . . 308
Statistics Reporting Background . . . 308
Launching the Performance Statistics Utility . . . 309
Understanding the Enterprise Metrics Performance Statistics Utility . . . 310
Using the Performance Statistics Utility to Tune and Troubleshoot . . . 321
Preference File Settings . . . 327
Introduction
The most important factor for Enterprise Metrics users, besides accuracy, is database query response time. Data mart performance tuning is therefore one of the most important administration tasks. Enterprise Metrics provides a comprehensive tool to monitor Enterprise Metrics query performance, and to provide all necessary information for quick and effective database tuning. The Enterprise Metrics Server continuously records performance statistics in two metadata database tables and records the SQL and timing for every query in the server log (see Statistics Reporting Background on page 308). The Performance Statistics tool helps you organize and understand this historical data, isolate the root causes of problems, determine the tuning actions to take, and check the effectiveness of your actions. This tool requires database administration knowledge in common star schema tuning practices, such as aggregation and indexing techniques. You should be familiar with how Enterprise Metrics represents hierarchies and stars, and configures aggregate navigation using StarGroups. This chapter is organized into these topics:
Statistics Reporting Background: Explains how the tool gathers and presents information
Launching the Performance Statistics Utility: Describes how to run the tool
Understanding the Enterprise Metrics Performance Statistics Utility: Provides details on interpreting each pivot table
Using the Performance Statistics Utility to Tune and Troubleshoot: Shows how to use each pivot table to understand specific types of performance problems and scenarios
Preference File Settings: Describes how to configure Enterprise Metrics Server to record statistics
The server records statistics in two tables:
PRD_STAR_STATS_DETAIL: stores statistics at the query transaction level
PRD_STAR_STATS_SUMMARY: stores statistics at a summary level by star
Preference files are text files containing settings that affect the appearance and functionality of Enterprise Metrics. Specific settings in the Enterprise Metrics Server preference file are used to trigger Enterprise Metrics to collect performance statistics data and store the data in the Metrics Catalog tables. For information about settings that gather performance statistics, see Preference File Settings on page 327.
The Enterprise Metrics Server log file contains detailed information on each query that is executed by Enterprise Metrics. This information includes the time of the query execution, the SQL statement issued to the database, and the user that requested the query. For detailed information on reading log files, see Using Log Files for Tuning and Troubleshooting on page 288.

The Enterprise Metrics Technical Tools include a Performance Statistics tool that provides pivot tables which combine the performance statistics with basic hierarchy level and StarGroup information. These pivot tables are provided in the format of an Interactive Reporting document (.bqy extension). You can use the pivot tables to analyze the statistics. During your analysis, you may need to correlate information from multiple reports or use the query details from the Enterprise Metrics Server log file to determine the cause of a specific problem and resolve it. The following sections provide detailed information to guide you through this process and describe possible causes and solutions to typical problems.
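When correlating pivot-table entries with the server log, a small script can help pull out the lines for a single request. The sketch below assumes a hypothetical log-line layout (timestamp, user, request ID, then the message); the actual Enterprise Metrics Server log format may differ, so adjust the pattern to your installation.

```python
import re

# Hypothetical log-line layout; the real Enterprise Metrics Server log
# format may differ -- adjust this pattern to match your installation.
LINE = re.compile(
    r"(?P<time>\S+ \S+)\s+user=(?P<user>\S+)\s+request=(?P<req>\d+)\s+(?P<msg>.*)"
)

def lines_for_request(log_text, request_id):
    """Return the log messages belonging to one query request ID."""
    out = []
    for line in log_text.splitlines():
        m = LINE.match(line)
        if m and m.group("req") == request_id:
            out.append(m.group("msg"))
    return out

sample = (
    "2003-08-02 19:05:37 user=jdoe request=42 SELECT ... FROM sales_base\n"
    "2003-08-02 19:05:41 user=jdoe request=42 query completed in 4.2s\n"
    "2003-08-02 19:05:42 user=mroe request=43 SELECT ... FROM sales_agg"
)
print(lines_for_request(sample, "42"))
```

Filtering by request ID this way mirrors the manual correlation described above: the Request ID from a pivot table narrows the log down to the exact SQL and timing for one query.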
2 Follow the steps for the import process and click Finish.
3 Right-click the perf stats.bqy file and choose Open.
The document opens and displays one of the Query sections. Before you start analyzing and tuning your installation, Hyperion recommends that you make a backup copy of the Perf stats.bqy file in case you need to restore the original file.
Note: If you want to restore the Performance Statistics tool BQY file, open the Technical Tools Zip file from the Enterprise Metrics Editor launch page and extract the Perf Stats.bqy document.
In addition, as a precaution, copy the pivot tables before you drill down on any cells in the BQY file.
To make a copy of a pivot, duplicate the Pivot section in your Interactive Reporting document.
This creates a duplicate of the Pivot section with a numeric suffix appended to the original section label. For example, if you duplicate a section named SalesPivot once, the Section pane shows SalesPivot and SalesPivot2. You can use the duplicated section for analysis while ensuring that the original pivot table remains intact, delete it after you complete your analysis, and easily recreate it from the original. Also note that the STAR_STATS_DELETE_DAYS preference setting retains statistics for 14 days by default. You can change the setting to a maximum of 90 days, depending on how many days of history are needed for your tuning tasks. When you save the Interactive Reporting document, the data is saved with it, so you do not need to reprocess the Query section(s) unless you want more recent statistics.
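For reference, the retention window is controlled in the server preference file; the exact entry syntax is described under Preference File Settings on page 327. The fragment below is only an illustrative sketch, assuming a simple name=value format:

```
STAR_STATS_DELETE_DAYS=14
```

Raising this value (up to 90) keeps more history in the summary tables at the cost of larger Metrics Catalog tables and longer query-section processing.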
Note: If stars or hierarchies are deleted and the new metadata is published, the performance statistics history in the BQY file for those stars and hierarchies is orphaned, and some of the pivots that displayed data for those objects can no longer do so. This is by design. Accordingly, Hyperion recommends that you keep versions of your Perf stats.bqy file so that you can retrieve this history if necessary. The history does not remain in the performance tables in the database: the information becomes obsolete as the metadata changes, and detail records are deleted by default after 72 hours.
To reprocess a Query section, select the Query section label, right-click the Request line, and choose Process Query from the shortcut menu.
Note: If you want to restore the Performance Statistics tool file to its original state, open the Technical Utilities Zip file from the Enterprise Metrics Editor launch page and extract the Perf stats.bqy document. Before doing so, you should rename the previous version of the file.
Pivot Sections in the Performance Statistics Utility
Star Stats Summary Pivot on page 311: Contains performance information about each StarGroup broken down to the individual star level.
Query Performance Analysis Pivot on page 312: Shows the performance of queries related to each star and StarGroup.
Query Performance Analysis Over Time Pivot on page 313: Shows the performance of queries in relation to time.
Agg Usage Analysis Pivot on page 313: Shows the performance of queries, specifically how the aggregates are used, based on the needed versus supported levels.
Table 25: Pivot Sections in the Performance Statistics Utility (Continued)
User Performance Analysis Pivot on page 314: Shows the performance of queries based on each user.
Slowest Queries Pivot on page 315: Lists the slowest running queries for your Enterprise Metrics installation.
Query Performance Analysis Over Publish Time Pivot on page 316: Shows the performance of queries in relation to the last time publishing occurred.
Query Performance Analysis Using Max Start_Time Pivot on page 316: Shows the performance of queries only for the most recent set of statistics written.
Query Performance Using Parameter Pivot on page 317: Shows the performance of queries related to each star and StarGroup; accepts a parameter to filter (customize) the query.
Hierarchy Levels and Column Reference Pivot on page 317: Shows the hierarchy levels for each hierarchy in terms of level number and level name.
Star Supported Levels Reference Pivot on page 318: Shows the supported level number along with the supported level for each StarGroup broken down by aggregate rank and star.
Star Levels and Columns Reference Pivot on page 319: Shows the supported level number along with the supported level for each StarGroup broken down by aggregate rank and star; displays information vertically.
Reference of Bursted Supported Levels Pivot on page 319: Lists each of the supported level codes on the left (sorted), and lists the slices across the top in the same order they appear in the needed level code.
Query Performance with Reject Reason Pivot on page 320: Shows the status of each star in a StarGroup: whether it was picked or rejected.
The following sections provide specific information relating to each pivot table, including detailed column descriptions.
StarGroup Descr: Contains the name of the StarGroup.
Aggregate Rank: The rank of a specific star. Denotes the order that the star should be selected for use in a query in relation to other stars in the same StarGroup. (A StarGroup generally contains stars with the same facts but at different levels of grain.) Typically, this is used for the purpose of aggregate navigation. The higher the rank number, the better the choice, because the fact table contains the least number of rows. This means that a rank of 1 would be at the base level.
Star Name: The name of the star within a StarGroup. This is a unique identifier of the star schema (for example, the combination of a single fact table and all related dimension tables).
Supported Levels: The lowest supported level for each hierarchy of this star. You can refer to the Reference of Bursted Supported Levels pivot table to find the levels that correspond to each digit in the supported levels code.
Times Rejected: The number of times the star was rejected for use by queries.
Times Picked: The number of times the star was picked for use by queries.
Times Used: The number of times the star was used by a query after it was picked.
Percent of Usage: The percentage this star was used as compared to other stars.
Total Query Secs: Total number of seconds that the associated queries took to run.
Percent of Query Secs: The percentage of time (in seconds) the queries took for this star as compared to other stars.
Avg Query Secs: The average time (in seconds) the queries took for this star as compared with other stars.
StarGroup Descr: Contains the name of the StarGroup.
Aggregate Rank: The rank of a specific star. Denotes the order that the star should be selected for use in a query in relation to other stars in the same StarGroup. (A StarGroup generally contains stars with the same facts but at different levels of grain.) Typically, this is used for the purpose of aggregate navigation. The higher the rank number, the better the choice, because the fact table contains the least number of rows. This means that a rank of 1 would be at the base level.
Star Name: Name of the star within a StarGroup. This is a unique identifier of the star schema (for example, the combination of a single fact table and all related dimension tables).
Supported Levels: The lowest supported level for each hierarchy of this star. You can refer to the Reference of Bursted Supported Levels pivot table to find the levels that correspond to each digit in the supported levels code.
Total Query Duration: Total number of seconds of all queries combined for this star.
Percent of Query Duration: Percent of duration (in seconds) of all queries for this star as compared with other stars.
Slowest Query: Query time (in seconds) for a given star to help find the slowest running query.
Number of Queries: The total number of queries that were issued against this star.
Percent of Queries: The percentage of queries issued against this star as compared with other stars.
Avg Query Duration: The average time (in seconds) the queries ran for this star.
Start Time: The last time statistics were updated and a new set of statistics was gathered. This depends on the ETL load schedule.
StarGroup Descr: Contains the name of the StarGroup.
Percent of Query Duration: Percent of duration (in seconds) of all queries for this star as compared with other stars.
Slowest Query: Query time (in seconds) for a given star to help find the slowest running query.
StarGroup Descr: Contains the name of the StarGroup.
Aggregate Rank: The rank of a specific star. Denotes the order that the star should be selected for use in a query in relation to other stars in the same StarGroup. (A StarGroup generally contains stars with the same facts but at different levels of grain.) Typically, this is used for the purpose of aggregate navigation. The higher the rank number, the better the choice, because the fact table contains the least number of rows. This means that a rank of 1 would be at the base level.
Star Name: Name of the star within a StarGroup. This is a unique identifier of the star schema (for example, the combination of a single fact table and all related dimension tables).
Supported Levels: The lowest supported level for each hierarchy of this star. You can refer to the Reference of Bursted Supported Levels pivot table to find the levels that correspond to each digit in the supported levels code.
Needed Levels: The levels that were required based on the query issued. To find the levels that correspond to each digit in the needed levels code, see Hierarchy Levels and Column Reference Pivot on page 317.
Status: Indicates whether the star was rejected, picked, used, or offline.
Reject Cause: The reason the star was rejected. The possible values are: Not Applicable, Missing Needed Levels, Needed Ok Missing Cols, and Offline. The value Not Applicable appears when the star was either picked or used.
Rejected Star: The number of times the star was rejected because it was not suitable based on the request. The most likely cause of a rejected star is that the needed levels were not available in the supported levels.
Picked Star: The number of times the star was picked because it was a candidate star to satisfy the request. You may not see a record in Used Star for every entry in Picked Star, since a star could be picked but the query could then be carpooled, writing only one record for the used star.
Used Star: The number of times that the star was actually used to satisfy a query.
Slowest Query: Query time (in seconds) for a given star to help find the slowest running query.
Total Duration: Total number of seconds of all queries combined for this star.
Avg Query Duration: The average time (in seconds) the queries took to run for this star.
User ID: A unique identifier of a user of the application. A user ID of #admin indicates that this is the chart as defined by the page publisher. Any other user ID means that this definition is specific to the user.
Total Query Duration: Total number of seconds of all queries combined for this star.
Percent of Query Duration: Percent of duration (in seconds) of all queries for this star as compared with other stars.
Slowest Query: Query time (in seconds) for a given star to help find the slowest running query.
Number of Queries: The total number of queries that were issued against this star.
Percent of Queries: The percentage of queries issued against this star as compared with other stars.
Avg Query Duration: The average time (in seconds) the queries took to run for this star.
Query Duration: Total number of seconds for each query, sorted so you can see the slowest query at the top.
Query Time: The time the query was issued. This helps in correlating this information with the log file.
User ID: Unique identifier of a user of the application. A user ID of #admin indicates that this is the chart as defined by the page publisher. Any other user ID means that this definition is specific to the user.
Request ID: Unique identifier of a query request within a single client session. This information is important when you look at the log file and need to quickly locate the query.
Item ID: Unique identifier of a sub-activity within a single query. This information is also required when you look at the log file and need to locate the query and correlate the log with the pivot table.
Number of Queries: The total number of queries for this star.
Star Name: The name of the star within a StarGroup. This is a unique identifier of the star schema (for example, the combination of a single fact table and all related dimension tables).
Aggregate Rank: The rank of a specific star. Denotes the order that the star should be selected for use in a query in relation to other stars in the same StarGroup. (A StarGroup generally contains stars with the same facts but at different levels of grain.) Typically, this is used for the purpose of aggregate navigation. The higher the rank number, the better the choice, because the fact table contains the least number of rows. This means that a rank of 1 would be at the base level.
This Star Avg Query Secs: The average time (in seconds) the queries took to run for this star.
Last Publish Meta Time: The last time a publish occurred; shows the effect of modifying the Metrics Catalog (metadata) by adding or removing aggregates, indexes, and so on.
StarGroup Descr: Contains the name of the StarGroup.
Percent of Query Duration: Percent of duration (in seconds) of all queries for this star in comparison to other stars.
Slowest Query: Query time (in seconds) for a given star to help find the slowest running query.
Slice Ind: The first row, sorted so that the order matches the order of supported levels in the statistics. This forces the sliceable hierarchies to display first, followed by the nonsliceable hierarchies.
Slice Order: The order in which sliceable hierarchies appear on the pages in the Investigate Section, and the hierarchies that are available for constraining reports. The order is helpful in the pivot table so you can verify that the sliceable hierarchies are ordered correctly. A nonsliceable hierarchy value displays as -1.
Hier Name: The name of the hierarchy.
Default Alias: The alias used in the SQL that you will find in the log file. You can refer to this pivot table when you are looking at the log file to check the name of the hierarchy based on the alias.
Level Number: Each hierarchy contains at least one level, but generally two or more levels. The level number indicates the corresponding level for each item that comprises a hierarchy.
Level Name and Column: The business name for a level followed by the name of the database column for that level. This is helpful when you are reviewing the SQL in the log file.
StarGroup Descr: Contains the name of the StarGroup.
Aggregate Rank: The rank of a specific star. Denotes the order that the star should be selected for use in a query in relation to other stars in the same StarGroup. (A StarGroup generally contains stars with the same facts but at different levels of grain.) Typically, this is used for the purpose of aggregate navigation. The higher the rank number, the better the choice, because the fact table contains the least number of rows. This means that a rank of 1 would be at the base level.
Slice Ind: The first row, sorted so that the order matches the order of supported levels in the statistics. This forces the sliceable hierarchies to display first, followed by the nonsliceable hierarchies.
Slice Order: The order in which sliceable hierarchies appear on the pages in the Investigate Section, and the hierarchies that are available for constraining reports. The order is helpful in the pivot table so you can verify that the sliceable hierarchies are ordered correctly. A nonsliceable hierarchy value displays as -1.
Hier Name: The name of the hierarchy.
Star Name: Name of the star within a StarGroup. This is a unique identifier of the star schema (for example, the combination of a single fact table and all related dimension tables).
Supported Level-Level Name: The lowest supported level for each hierarchy of this star. This shows the level number with the level name.
StarGroup Descr: Contains the name of the StarGroup.
Aggregate Rank: The rank of a specific star. Denotes the order that the star should be selected for use in a query in relation to other stars in the same StarGroup. (A StarGroup generally contains stars with the same facts but at different levels of grain.) Typically, this is used for the purpose of aggregate navigation. The higher the rank number, the better the choice, because the fact table contains the least number of rows. This means that a rank of 1 would be at the base level.
Slice Ind: The first row, sorted so that the order matches the order of supported levels in the statistics. This forces the sliceable hierarchies to display first, followed by the nonsliceable hierarchies.
Slice Order: The order in which sliceable hierarchies appear on the pages in the Investigate Section, and the hierarchies that are available for constraining reports. The order is helpful in the pivot table so you can verify that the sliceable hierarchies are ordered correctly. A nonsliceable hierarchy value displays as -1.
Hier Name: The name of the hierarchy.
Star Name: Name of the star within a StarGroup. This is a unique identifier of the star schema (that is, the combination of a single fact table and all related dimension tables).
Supported Level-Level Name: The lowest supported level for each hierarchy of this star. This shows the level number with the level name.
Column Name: The physical name of the database column for each level-level name.
Slice Ind: The first row, sorted so that the order matches the order of supported levels in the statistics. This forces the sliceable hierarchies to display first, followed by the nonsliceable hierarchies.
Slice Order: The order in which sliceable hierarchies appear on the pages in the Investigate Section, and the hierarchies that are available for constraining reports. The order is helpful in the pivot table so you can verify that the sliceable hierarchies are ordered correctly. A nonsliceable hierarchy value displays as -1.
Hier Name: The name of the hierarchy.
Supported Levels: The lowest supported level for each hierarchy of this star.
Supported Level-Level Name: The level number followed by the level name. This information should be sorted to match the supported levels on the left column.
Stargroup Descr: The name of the StarGroup.
Aggregate Rank: The rank of the star. Denotes the order in which the star should be selected for use in a query in relation to other stars in the same StarGroup. (A StarGroup generally contains stars with the same facts but at different levels of grain.) Typically, it is used for the purpose of aggregate navigation. The higher the rank number, the better the choice, because the fact table contains the least number of rows. This means that a rank of 1 would be at the base level.
Star Name: Name of the star within a StarGroup. This is a unique identifier of the star schema (for example, the combination of a single fact table and all related dimension tables).
Supported Levels: The lowest supported level for each hierarchy of this star. You can refer to the Reference of Bursted Supported Levels pivot table to find the levels that correspond to each digit in the supported levels code.
Needed Levels: The levels that are required based on the query issued. You can refer to the Hierarchy Levels and Column Reference pivot table to find the levels that correspond to each digit in the needed levels code.
Status: Shows whether the star was rejected, picked, used, or offline.
Explain Reject: Shows the reason for the rejection of the star. There are four possible values: Not Applicable, Missing needed levels, Needed ok missing cols, or Offline. Not Applicable indicates that the star was used or picked, so the reject reason does not apply. Missing needed levels indicates that the star was missing some levels that were required by the query. Needed ok missing cols indicates that the star had the needed levels, but other columns were required in addition to the needed levels, possibly because of query carpooling, which this star could not satisfy; hence a different star was picked. Offline indicates that the star could still be loading or is offline for other reasons.
Rejected Star: The number of times the star was rejected because it was not suitable based on the request. The most likely cause is that the needed levels were not available in the supported levels.
Picked Star: The number of times the star was picked because it was a candidate star to satisfy the request. Keep in mind that you do not necessarily see a record in Used Star for every entry in Picked Star, because a star could be picked but the query could then be carpooled, so only one record is written for the used star.
Used Star: The number of times that the star was actually used to satisfy a query.
Slowest Query: Query time (in seconds) for a given star. This helps find the slowest running query.
Total Duration: Total number of seconds of all queries combined for this star.
Avg Query Duration: The average time (in seconds) the queries took to run for this star.
Star and Aggregate Performance on page 322
Slow Queries on page 322
Needed Versus Supported Levels on page 323
Carpooling on page 324
A Star is Picked but Not Used or Rejected on page 324
Needed Columns and Levels on page 324
Frequently Used Stars on page 325
User Complaints on page 326
Analyze the Performance After Tuning on page 326
Slow Queries
Some of the causes for slow running queries include:
Lack of indexes or unused indexes.
No aggregate table available.
The star does not support specific levels.
A page in the Investigate Section is carpooling queries, and some of the queries need to go to the base star.
Multiple queries are running at the same time, which slows down Enterprise Metrics.
Query design is not optimal. There could be an expensive join between a large dimension table and a large fact table that you could avoid by joining the fact to a different, smaller dimension. For example, if you need to select recently shipped orders from a large fact table, you could join the fact table to a large Order dimension table to obtain the ship date and use that column to filter the fact rows. A more optimal query design, however, would use a ship_period_key within the fact table to join to a small Period dimension and use the corresponding calendar date from the Period dimension table to filter the fact rows.
5 In the log file, locate the slow-running query to analyze the problem.
You can use the Query Time, User Id, and Request Id columns in the Slowest Queries pivot table, where you have narrowed down your search, and match the entries in the log file to find the query.
Once you have found the slow running query, there could be many reasons why the query is running slow as described earlier in this chapter. You can use the information in the following sections to further investigate the cause for the slow running query.
The query from Enterprise Metrics may require levels that are not supported by the aggregate star. In some cases, the star is rejected even if the needed and supported levels match; this information is shown in the Reject Cause column. You can determine whether a star supports the levels by reviewing the needed versus supported levels. The set of numbers that you see in the pivot table is a code in which each digit corresponds to the lowest supported level of a hierarchy for the given star, in hierarchy slice order. To decode the digits that display in the Agg Usage Analysis pivot table for the needed and supported levels, refer to the Star Supported Levels Reference pivot table.
If you review the Star Supported Levels Reference pivot table, the first column on the left is the StarGroup reference, where you locate the StarGroup that contains your star. The Star Name column shows the star. To the right, you can review the supported level for each hierarchy. Use the following hints to decode the lowest supported level number (53136501111011):
The first hierarchy listed is the hierarchy with slice order 0, which is the Period hierarchy. The pivot table shows that for this Period hierarchy the lowest supported level is level 5, which is the Day level. The 5 corresponds to the first digit in the supported level combination 53136501111011.
The second number displayed is 3, which in the Star Supported Levels Reference pivot table is the second hierarchy for the star that you are checking. This is the lowest supported level corresponding to slice order 1.
You can continue doing this for the remaining digits to determine the lowest supported level for all the hierarchies of the star. Now, you can look at the Needed Levels in the Agg Usage Analysis pivot table and perform the same exercise to determine the levels required by the query. As you traverse the digits from left to right, you may find that a given level was needed but was not supported in the star. This
means that the aggregate star is unable to satisfy the request and Enterprise Metrics must go against the base star to satisfy the query. This could give you a clue as to whether you need to build an aggregate star and add it to the StarGroup.
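The digit-by-digit comparison described above can be sketched in a few lines. This is an illustrative sketch only: the hierarchy names and level codes below are hypothetical, and a real supported-levels code (such as 53136501111011) simply has one digit per hierarchy in slice order.

```python
def unsupported_hierarchies(needed_code, supported_code, hierarchies):
    """Compare needed vs. supported level codes digit by digit (slice order).

    A star is rejected when a query needs a deeper (higher-numbered) level
    than the star's lowest supported level for that hierarchy.
    """
    return [hier for hier, n, s in zip(hierarchies, needed_code, supported_code)
            if int(n) > int(s)]

# Hypothetical three-hierarchy star: Period supported down to level 5 (Day),
# Product down to level 3, Geography down to level 1.
supported = "531"
needed = "541"  # the query needs Product level 4, deeper than the star's 3
print(unsupported_hierarchies(needed, supported, ["Period", "Product", "Geography"]))
# -> ['Product']
```

A non-empty result corresponds to the Missing needed levels reject cause: the star cannot satisfy the query, so Enterprise Metrics falls back to the base star.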
Carpooling
Typically, many of the metrics on a page in the Investigate Section require the use of the same StarGroup. For optimal performance, the server automatically combines the separate select items of each metric into a single query (if the query is against the same StarGroup), avoiding multiple round trips to the database. This is called carpooling. Because all the queries are against the same StarGroup, if some queries need to access a star at the base level whereas other queries could use an aggregated level, Enterprise Metrics chooses the base star: it must access the base star anyway, and accessing the other aggregate stars as well would be overhead.
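The base-star choice under carpooling can be sketched as follows. The star layout and level codes here are hypothetical, and the real server's selection logic is more involved (it also checks needed columns and offline status), but the effect is the same: one base-level query in the carpool forces the base star for everyone.

```python
def pick_star(stars, carpooled_needed_codes):
    """Among stars in one StarGroup, pick the highest-ranked star whose
    supported levels satisfy every carpooled query's needed levels."""
    def supports(star, needed_code):
        return all(int(n) <= int(s) for n, s in zip(needed_code, star["supported"]))
    candidates = [star for star in stars
                  if all(supports(star, code) for code in carpooled_needed_codes)]
    # Higher rank = smaller fact table = better choice when it qualifies
    return max(candidates, key=lambda star: star["rank"])

stars = [
    {"name": "sales_base", "rank": 1, "supported": "53"},  # Day, Product level 3
    {"name": "sales_agg",  "rank": 2, "supported": "33"},  # Month, Product level 3
]
# One carpooled query needs Day-level data ("53"), so even though the other
# query ("33") alone could use the aggregate, only the base star satisfies both.
print(pick_star(stars, ["33", "53"])["name"])  # -> sales_base
```

Run with only the aggregate-friendly query (["33"]), the same function picks sales_agg, which is the behavior you see when a page does not mix base-level and aggregate-level requests.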
To determine if the needed levels for a query can be satisfied by the supported levels of the star:
1 Look at the query for the mini report in the server log file and locate the query that was running slow. If it is a report query, identify the star against which the query was issued.
2 Launch the Enterprise Metrics Studio Utilities and click Mini Reports.
For detailed information on using the Studio Utilities, see the Hyperion System 9 BI+ Enterprise Metrics Users Guide.
3 Locate the mini report that you found in the log file, then click the Edit button to view the details of the mini report.
4 Click the View Constraints button to see which constraints are being used.
Constraints may be hierarchical or non-hierarchical.
5 To determine whether a constraint is hierarchical, close the Mini Report tool and return to the Studio Utilities main window.
7 For each constraint, check the level being used, then close the Constraints tool to return to the Studio Utilities main window.
8 Click the Stars button to launch the Star tool and click the aggregate star that you suspect should have been used.
9 Check the hierarchies of the star to find out whether the star supports the levels (columns) that the constraint was based on.
If the star does not support the hierarchy or the level for the constraint, the star is not used by Enterprise Metrics even though the supported levels matched the needed levels. If you find that the query performs slowly because the star is rejected due to your constraint, you can improve performance either by appropriately indexing the table used for the non-hierarchical constraint or by adding the constraint's hierarchy to the aggregate star.
9 After you have collected the details for the star, view the server log file and search for its queries by query time, request ID, and user ID.
10 Compare the queries issued for this star and look for hierarchy levels (columns) common across all the queries.
If some queries access only a subset of the levels that are common across all the queries, it may be beneficial to build additional aggregate stars, either on the base star or on top of an existing aggregate star.
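The comparison in the last step amounts to intersecting the sets of levels each query touches. A minimal sketch, with illustrative request IDs and level names (none taken from an actual log):

```python
def common_levels(queries):
    """Given the hierarchy levels (columns) each query accesses,
    return the levels common to all of them -- candidates for a
    new aggregate star. Input data here is purely illustrative."""
    level_sets = [set(levels) for levels in queries.values()]
    return set.intersection(*level_sets) if level_sets else set()

# Levels observed per request in the server log (hypothetical).
observed = {
    "req-101": {"Year", "Month", "Region"},
    "req-102": {"Year", "Month", "Region", "Product"},
    "req-103": {"Year", "Month", "Region", "Customer"},
}
# Every query touches Year/Month/Region, so an aggregate star built
# on just those levels could serve all three queries.
shared = common_levels(observed)
```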
User Complaints
At times, you may receive complaints from users about Enterprise Metrics performance. You can use the User Performance Analysis pivot table to analyze Enterprise Metrics performance by user. First, locate the user ID of the user experiencing performance problems, then compare that user's information with other users' to see whether the problem is widespread or isolated to one user. If it is isolated to one user, drill down on the StarGroup name, star name, request ID, or query time to narrow down the problem. You can also check the Star Stats Summary or Query Performance Analysis pivot tables to analyze the problem further. Typically, the reasons for slow performance are that no aggregate stars support the query or that the tables being accessed need to be indexed. To examine the query itself, search the server log file using the information gathered with the Performance Statistic tool.
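The widespread-versus-isolated check described above is essentially an average of query times per user ID. A rough sketch, assuming a simple record layout (the field names, threshold, and sample values are hypothetical, not taken from the product's statistics tables):

```python
from statistics import mean

def slow_users(records, threshold_secs=10.0):
    """Average query time per user ID and flag users whose average
    exceeds a threshold -- a rough stand-in for eyeballing the
    User Performance Analysis pivot table. Record layout is assumed."""
    by_user = {}
    for rec in records:
        by_user.setdefault(rec["user_id"], []).append(rec["query_secs"])
    return {
        user: mean(times)
        for user, times in by_user.items()
        if mean(times) > threshold_secs
    }

# Hypothetical statistics rows: one user slow, one user fine.
records = [
    {"user_id": "jdoe", "query_secs": 42.0},
    {"user_id": "jdoe", "query_secs": 38.0},
    {"user_id": "asmith", "query_secs": 1.5},
]
flagged = slow_users(records)
```

If only one user is flagged, drill into that user's StarGroup, star, and request IDs; if most users are flagged, look for missing aggregate stars or indexes instead.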
Query Performance using Parameter: Allows you to pick the date for which you want to see performance statistics. For detailed information on setting the start date, see Query Performance Using Parameter Pivot on page 317.
Query Performance Analysis over Time: Shows performance based on information captured at each ETL load job, reflecting changes according to your ETL schedule.
Query Performance Analysis over Publish Time: Shows performance statistics based on when you publish the catalog (metadata). Typically, you use this pivot table first, to determine the changes in performance as you add or remove aggregates, indexes, and so on.
STAR_STATS.COLLECT_DETAIL: Specifies whether collection of star usage statistics is enabled at the detail level. The default is TRUE.
STAR_STATS.COLLECT_SUMMARY: Specifies whether collection of star usage statistics is enabled at the summary level. The default is TRUE.
STAR_STATS.DELETE_DAYS: Whenever detail records are written to star usage statistics, existing records older than the number of days specified by this setting are deleted. The minimum setting is 0 (meaning never delete), which is not recommended; the maximum setting is 90. The default is 14 days.
STAR_STATS.DETAIL_WRITE_EVERY: If detail star usage statistics records are being collected, they are written to the catalog database each time this number of records has accumulated, or when the server is shut down or restarted. The default is 1000 records.
STAR_STATS.SUMMARY_INTERVAL_SECS: When summary star usage statistics collection is enabled, statistics accumulate in memory and are written to the catalog database only each time this specified interval (in seconds) expires, or when the server is shut down or restarted. The default is 1800 seconds.
For additional information regarding each of the above preference settings, see Chapter 19, Enterprise Metrics Preference File Settings. To begin the tuning process, verify that the STAR_STATS.COLLECT_DETAIL and STAR_STATS.COLLECT_SUMMARY preferences are set to TRUE. Because these are the default values, if you open the Metrics_server.prefs file and do not see an entry for either preference, the defaults are in use. You can confirm this by opening the Enterprise Metrics Server log and reviewing the preference settings recorded at the beginning of the log file. Then set STAR_STATS.DELETE_DAYS appropriately so that the records are retained for the period during which you intend to tune and troubleshoot Enterprise Metrics. Depending upon user load and activity, you may also want to change the defaults for STAR_STATS.DETAIL_WRITE_EVERY and STAR_STATS.SUMMARY_INTERVAL_SECS. After you verify or change the preference settings, statistics are captured in the metadata tables and server logs as the application is used. Ensure that these statistics are retained in the tables for the duration of the tuning process. It is also important to keep an archive of (and not purge) any Enterprise Metrics Server log files in the \Server folder, and to retain these archives for the duration of the tuning process.
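A Metrics_server.prefs excerpt for the tuning configuration described above might look like the following. The values are illustrative choices, not recommendations; the first two lines could be omitted entirely, since TRUE is the default.

```
# statistics collection (TRUE is the default for both)
STAR_STATS.COLLECT_DETAIL=TRUE
STAR_STATS.COLLECT_SUMMARY=TRUE
# retain detail records for a month-long tuning exercise (max 90)
STAR_STATS.DELETE_DAYS=30
# write detail records more often on a lightly loaded server
STAR_STATS.DETAIL_WRITE_EVERY=500
# keep the default summary flush interval
STAR_STATS.SUMMARY_INTERVAL_SECS=1800
```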
Chapter
19
Enterprise Metrics Preference File Settings
In This Chapter
This chapter describes preference settings for the Enterprise Metrics Server and Configuration Server, Workspace and Personalization Workspace, Studio Utilities, and Metadata Export Utility.
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330 Metrics_Server.prefs Settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331 Configuration_Server.prefs Settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346 Client.prefs Settings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347 Metadata_export.prefs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 352
Overview
Preference files are text files that contain settings that affect the appearance and functionality of Enterprise Metrics. Four preference files supply settings for Enterprise Metrics.
Table 26 Preference Files
Application/Server - Preference File
Metrics Server - Metrics_Server.prefs
Configuration Server - Configuration_Server.prefs
Workspace and Personalization Workspace - Client.prefs
Metadata Export Utility - metadata_export.prefs
When Personalization Workspace or an Enterprise Metrics server is started, it reads the preference file to determine the settings. If you change settings in a preference file, the changes take effect when the server is started or restarted. The following lines show an excerpt from the Configuration_Server.prefs file.
CONFIG_SERVER=TRUE
STAR_STATS.COLLECT_SUMMARY=TRUE
# logging is DUMPTOFILE by default
# uncomment to METABOLISM to write to console window
#BALLPARK=METABOLISM
# by default, save three logs up to 3 Meg each
#LOG_FILE_MAX=3000000
#LOG_SAVE_COUNT=3
SQL.PRINT_SQL=TRUE
SQL.TIME_MINIS=TRUE
SQL.TIME_QUERIES=TRUE
# for normal operation, default log level of 3 is best.
# for debugging purposes, up to a value of 6 may be useful.
#LOG_LEVEL=6
# for demo and personal systems, should probably change to TRUE
SERVER_WINDOW=TRUE
# Only allow login by userids in the security tables
REQUIRE_UTABLE=TRUE
CLIENT_PREFS=Client.prefs
DB_MAP_NAME=pub
DB_MAP_TABLE=pub_map_table
Note: Lines in a preference file that begin with the pound sign (#) are comments and are not read by the Enterprise Metrics Server; blank lines are also ignored.
Many of the settings are intended for use only by development or support and should not be altered; these are indicated by an X in the column labeled Do Not Edit. In the tables that follow, some settings are too long to fit without wrapping to the next line, whereas in the preference file each setting appears on a single line. Enter values immediately after the equal (=) sign, with no space between the equal sign and the value.
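The file-format rules above (pound-sign comments, ignored blank lines, value immediately after the equal sign) can be sketched as a minimal reader. This is an illustration of the documented rules, not the server's actual parser:

```python
def read_prefs(text):
    """Parse KEY=VALUE preference lines. Lines beginning with '#'
    are comments; blank lines are skipped; the value begins
    immediately after the first '=' (no surrounding spaces)."""
    settings = {}
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line.startswith("#"):
            continue                     # comments and blank lines are ignored
        key, sep, value = line.partition("=")
        if sep:                          # skip malformed lines with no '='
            settings[key] = value
    return settings

# A fragment in the style of the excerpt shown earlier.
excerpt = """\
# logging is DUMPTOFILE by default
CONFIG_SERVER=TRUE

LOG_SAVE_COUNT=3
"""
prefs = read_prefs(excerpt)
```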
Metrics_Server.prefs Settings
The Metrics_server.prefs file is stored in the directory where you installed the Metrics Server. Settings prefixed with TOOLS control the catalog (metadata) configuration functions of Enterprise Metrics. Settings prefixed with DB relate to the Application Data, and settings prefixed with MDB relate to the catalog (often referred to as metadata).
Table 27
AUTH_DEF_FILTER_TYPE=GROUPS: When you launch the Enterprise Metrics Security tool, this setting determines whether a list of group entries is displayed initially. If you set AUTH_DEF_FILTER_TYPE to USERS, the Security tool initially displays a list of users instead. This setting works in conjunction with the AUTH_DEF_FILTER_CRIT setting; if you click Reset Filter in the Security tool, it reverts to these defaults. The default is GROUPS.
AUTH_DEF_FILTER_CRIT=*: Use this setting to provide a different search string (zero or more characters, with an optional wildcard asterisk at the end) to work with AUTH_DEF_FILTER_TYPE. This affects only the list of users or groups initially displayed in the Security tool.
AUTH_MAX_DISPLAY_USERS=50: Limits the number of users displayed within a group when displaying the properties of a user.
AUTH_METHOD=CSS: Specifies that the server should use CSS authentication when logging on users. If you have special authentication requirements, contact Hyperion Customer Support.
AUTH_METHOD_CLASS: Provides the name of a custom authentication driver. This setting is ignored unless AUTH_METHOD is set to OTHER.
AUTH_PROVISIONED=TRUE: When set to TRUE, Enterprise Metrics synchronizes authentication and authorization (for roles) with the rest of the Hyperion System 9 BI+ modules, using Shared Services to obtain role information for a user logging in to metrics. Other related settings for provisioning configuration are AV_URL, AUTH_METHOD, AUTH_AV_PROD_ID, and CSS.CONFIG_FILE. The default is TRUE; setting it to FALSE is not recommended for a customer installation.
AUTH_AV_PROD_ID: The default is blank. This setting is the ID that was used to register the Hyperion System 9 BI+ instance; Enterprise Metrics uses it to access user provisioning information. The setting appears as <Product code>:<local_id> and is specified during registration. If AV_URL is provided, you do not need to set this; if both are supplied, the setting for AUTH_AV_PROD_ID overrides. We recommend that you specify AV_URL only and let the server derive this setting.
AV_URL=
Identifies the Hyperion System 9 BI+ URL, exactly as the end user is expected to use it to launch the system. For example, http://<machinename:19000>/workspace/. Normally, this value is set when you run the Hyperion System 9 Configuration Utility and configure Enterprise Metrics. You must do this whenever the System 9 BI+ URL changes, and then also repeat server setup and Shared Services registration. This setting is used to synchronize User Management/Provisioning settings with those used by the rest of the Hyperion System 9 BI+ modules. When set, Hyperion System 9 BI+ must be up and running at the time of the Enterprise Metrics server's startup. The value here is used to derive values for CSS.CONFIG_FILE, AUTH_AV_PROD_ID, and CLIP.URL_PREFIX, unless they have been set explicitly. The recommended approach is to use the AV_URL setting rather than set those preference settings explicitly.
BALLPARK=DUMPTOFILE: A value of METABOLISM causes logging to the console window for debug mode; DUMPTOFILE causes logging to a file for normal operation.
BILLIONS_SYSTEM=AMERICAN: Determines which system settings to apply for scale codes used in the Metrics tool. You can change the setting to BILLIONS_SYSTEM=BRITISH to apply European abbreviations for thousands and millions.
Specifies the user for which to trace cache activity, or * (asterisk) for tracing all users. For development purposes only; do not change this setting.
The default value is zero and should not be changed for normal operation. A non-zero value limits the cache preload on the Metrics Server to only the first <n> global Monitor Section pages.
CACHE_SIZE_METRICS=50: Specifies the number of private (Investigate Section) pages cached on the Server per user. A larger number can improve performance, at the cost of a larger memory footprint for the Server. The default is 50; the minimum value is 1 and the maximum value is 100.
CACHE_SIZE_REPORT=50: Specifies the number of (Pinpoint Section) pages cached on the Server per user. A larger number can improve performance, at the cost of a larger memory footprint for the Server. The default is 50; the minimum value is 1 and the maximum value is 100.
CACHE_SYS_METRICS_MAX=200: Specifies the maximum number of global (Investigate Section) pages to cache. When the maximum is reached, the server purges older pages down to the minimum setting (see CACHE_SYS_METRICS_MIN). The maximum is 1000.
CACHE_SYS_METRICS_MIN=150: Specifies the minimum number of global (Investigate Section) pages to cache. The maximum allowed is 900.
Specifies the corresponding password for the user ID set in CDB_USER.
CDB_USER: Specifies the user ID the servers use in establishing the connection pool for accessing the Application Data. This user ID requires only read-only database privileges, since it is used only for making SQL queries. Some IS groups may find it acceptable to use the same user ID for CDB_USER as for DB_USER.
CHECK_PERIOD_TABLE=FALSE: Normally set to FALSE; however, if you use the Calendar Utility to generate the bap_period table, set this preference to TRUE. If TRUE, the server performs a number of consistency checks on the BAP_PERIOD table during initialization. Note: Set this to TRUE the first time you start the Metrics or Configuration server after data has been loaded into or modified in the BAP_PERIOD (calendar) table. Also set it to TRUE if you make possibly related changes to the init scripts, including the population of period_trans, ago_code_lookup, epc_lookup, and so on.
CLEANUP_WAIT=1800
Specifies, in seconds, how long the sweeper should wait between periodic housekeeping checks. The default is a half hour. Sweeper functions include logging off idle users, writing summary statistics to the log, checking whether the log should roll over to a new file, and so forth.
Specifies the name of the client preference file. The standard installation sets this to Client.prefs. Note: This setting is case-sensitive.
Specifies the port number for the Configuration Server. This setting is used only when CONFIG_SERVER is set to TRUE.
Set to FALSE to cause the server to launch in Server mode. If set to TRUE, this server launches in Configuration Server mode, which allows the user to log in as Editor to define global pages for the Monitor and Investigate Sections.
Specifies the connection pool idle limit in seconds (that is, how long a connection to the Application Data is kept open if it is idle).
Specifies the initial number of connections to be created in the Application Data connection pool upon server initialization.
Specifies how many wait loops the server should run, when the connection pool is found to be empty, before extending the size of the pool. See CONNPOOL.WAIT_TIME below.
Specifies how many seconds the server should wait in each wait loop before checking whether a connection has become available in the pool.
Points to the XML file that contains configuration information for external authentication. It contains details of all the authentication sources and the order in which to access them for authentication.
CONNPOOL.WAIT_TIME=5
CSS.CONFIG_FILE
If the server or tools fail to connect to a cube using the native Analytic Services driver, they attempt to connect using the thin EDS driver. This setting allows you to specify where to attempt to contact the EDS server for this alternate connection. With the default setting of <nothing>, an alternate URL is constructed that assumes the EDS server is on the same machine as the Analytic Services host; alternatively, you may specify an explicit EDS server to use for all fallback connection attempts. (This option is intended primarily for the case where the Editor might be temporarily working from a machine that does not have the native Analytic Services driver installed, so that sources may still be configured to use the native driver.)
Determines which of the defined cube data sources the server and Studio Utilities connect to automatically when they initialize, in conjunction with the auto connect code setting specified in the Sources function of the Cube Tool. The default setting of 1 causes the server and tools to automatically attempt to connect to any cube source with an auto connect code greater than or equal to 1, while a zero or negative setting means that all source definitions are ignored (not connected). A value greater than 1 connects only sources with an identical auto-connect code.
CUBE.AUTO_CONNECT_CODE=1
For development purposes only.
For development use only. A value of TRUE dumps detailed information about how metrics query result cubes are constructed.
At various times, member names are retrieved from a cube for some dimension (mostly in the tools, for display or selection purposes). This setting limits the number of child nodes retrieved beneath any particular node, to minimize problems with extremely large dimensions. When appropriate, the tools provide an option to override this setting. This setting does not affect server behavior or limit data query results.
These are suggested values, to help the Cube Wizard pre-select the Use As settings when creating a cube-star. If a dimension in the cube has one of the names in this list, it is pre-selected as the dimension containing measures, provided no dimension is explicitly flagged as Accounts. Suggested names must be separated by commas, without extraneous spaces.
CUBE.MEASURE_DIMS=Accounts,Measures
This string indicates the value returned from cube data queries that should be interpreted as no data.
Similar to CUBE.MEASURE_DIMS; this identifies likely names for the scenario dimension.
Identifies the name of the Analytic Services alias table to be used, only in the case where CUBE.USE_SLICE_ALIAS_TABLE=TRUE. Note that the same alias table name is used for all cubes in the configuration, and the aliases in those cubes must be consistently defined.
Setting this to TRUE enables member name transformations using either the CUBE_NAME_TRANSFORM or CUBE_TRANSFORM_MAP tables. Defaults to FALSE. This setting is useful only in a mixed environment with cubes and a data mart, where member name transformations were applied in the process of loading the cubes.
CUBE.SLICE_NAME_TRANSFORM=FALSE
Similar to CUBE.MEASURE_DIMS; this identifies likely names for the detail time dimension.
Used internally; should not be changed.
Configures the Enterprise Metrics Server(s) to use Analytic Services data security, rather than the security group definitions offered by the Security tool. To use Analytic Services security for all metrics queries, in addition to setting this to TRUE, you must also set AUTH_METHOD=CSS, and you must not be using any relational stars (other than perhaps the initial Days star). All system caching and preload is disabled, and each user effectively has a private connection pool for each cube accessed.
CUBE.USE_SLICE_ALIAS_TABLE=FALSE
Specifies the display of alias names for cube members. This setting is used in conjunction with CUBE.SLICE_ALIAS_TABLE, which defaults to the value DEFAULT. You have the option to display alias names for cube members, rather than the member names, if all of the following conditions are satisfied:
You are not using member name transformations (relational-cube).
All cube sources are actually Analytic Services cubes.
All cube sources have an alias table of the same name (such as 'Default').
All alias tables contain consistent values across cubes.
If all these conditions are met and you wish to see aliases (when they exist), set CUBE.USE_SLICE_ALIAS_TABLE to TRUE, make sure the setting of CUBE.SLICE_ALIAS_TABLE is accurate, and be sure that CUBE.SLICE_NAME_TRANSFORM is set to FALSE. Note that this setting applies to all cube queries issued by the server, and you can use only a single alias table.
CUBE.YEARS_DIMS=Years: Similar to CUBE.MEASURE_DIMS; this identifies likely names for the years dimension.
CUSTOM_POLICY_CLASS: Provides the name of a class file that implements a custom user policy. This setting is ignored unless USER_NAME_POLICY is set to CUSTOM_LOGIN. If you have special requirements, contact Hyperion Customer Support.
DATA_MGR_REGRESSION=FALSE: For development purposes only.
DATE_ORDER: Specifies the date formats available in the Personalization Workspace. You can set the format to MDY (month before the day) or DMY (day before the month). The default is MDY. If the setting is MDY, time settings appear with AM/PM indicators; DMY does not.
DB_CATALOG=NULL: (See the description of SQL.FIND_COLUMNS below.) The default setting is NULL, which is the only value that works with the DataDirect drivers.
Note: The DB_ prefix for this group of settings indicates that they all refer to the Application Data, not the catalog.
DB_DRIVER=$J(pbDbJdbcDriver): Specifies the driver used for accessing the Application Data. For example, DB_DRIVER=hyperion.jdbc.oracle.OracleDriver.
DB_MAP_NAME=dev: Specifies which entries in the catalog table named PUB_MAP_TABLE are read by this server, selecting only rows in which the column db_version contains this value. The standard settings are prd for the Server and pub for the Configuration Server. You can also specify multiple map table version names. For example, DB_MAP_NAME=pub,lq first reads all map table entries where db_version='pub', then applies any where the version is lq as overrides (adding or replacing). The overrides are listed in the log, and you can apply as many different versions as you like, separated by commas. This applies to both the tools and the servers.
Note: These settings are case-sensitive and must be lowercase.
DB_MAP_TABLE=pub_map_table
Specifies the name of the catalog table that provides a map between the catalog table names that the server's Java code uses and the physical names of the tables in the database. The default is pub_map_table.
Specifies the corresponding password for DB_USER below.
The default setting is set by the Enterprise Metrics installer. You should update the DB_SCHEMA setting with the Application Data user ID.
Specifies the user ID that the server uses to connect to the Application Data. This user ID is usually the Editor's database user ID. The DB_USER user ID requires database read-write privileges on the BAP_LOAD and BAP_DUMMY tables and read-only privileges for all other tables.
DEBUG_MOVING_AGGREGATES: Development use only.
DECIMAL_FORMAT=COMMA_PERIOD: Controls number formatting in the placement of the comma and decimal. The default is COMMA_PERIOD. You can also set this to PERIOD_COMMA (for example, 0.000,00), SPACE_COMMA (for example, 0 000,00), or APOSTROPHE_COMMA (for example, 0'000.00).
Causes the server to record all of these prefs settings in the log; a value of FALSE logs only settings with non-default values.
Used by the enrichment process and should not be changed.
Used by the enrichment process and should not be changed.
If TRUE, the server dumps the generated ending period numbers during initialization.
Specifies the file name used to save non-default settings when the Export function is used from the Server Console and Configuration Server Console. The default filename is saved.server.prefs.
Do not change this setting without assistance from Hyperion Solutions Customer Support.
Specifies whether to dynamically rebuild all the constraints that appear in the (Pinpoint Section) page menus when initializing the server. This should always be left as TRUE. Note: The Studio Utilities have a Restart Fast option, which temporarily overrides this setting for the Configuration Server only.
GEN_CONSTRAINTS_LIMIT=2000
If the number of constraint items to be generated for a single hierarchy (dimension) would exceed the specified limit, new constraint generation for that entire hierarchy is skipped. If constraints had been generated previously, the old constraint items remain in place and a warning appears in the log. The default is TRUE.
Tells the server to keep in memory any dimension member trees requested by Hyperion System 9 BI+ Scorecard. May be set to FALSE to release them upon return to Scorecard; has no effect unless Scorecard integration is in use.
Specifies the number of seconds a user may be idle before the server terminates the session. The default setting is 3600 seconds (one hour). This setting applies in all authentication modes; if the Enterprise Metrics Server has not seen a request from a given client in the specified time, the client's session is invalidated, and the next request results in the user being logged out with a timeout message.
Allows you to disable number scaling in charts, ZoomCharts, and reports. Setting this to TRUE causes the server to ignore all scaling codes, as if you had gone through all ZoomChart line and report definitions and selected NONE as the scaling code. This may be useful for data validation purposes, but not for normal operation.
The Enterprise Metrics installer populates this setting with the deployment ID.
The Enterprise Metrics installer populates this setting during installation (port@host).
This setting is used by the load process and should not be changed.
This setting is used by the load process and should not be changed.
A value of TRUE forces the server to disable all caching, so that every client request results in fresh queries to the Application Data to retrieve the most current data. This has severe performance implications and is intended only for very limited use.
This setting applies only when using AUTH_METHOD=DATABASE or LDAP, and provides a limited form of SSO when running in standalone mode. The default setting is 2 hours (7200 seconds). Once you supply your user ID and password to the Launcher Servlet in this mode, as long as you keep your browser window open, you may launch other Enterprise Metrics applets without re-entering your user ID and password for two hours (at which time the next launch prompts you again, and you are then logged in for another two hours).
HPS.SAVE_MEMBER_TREES=TRUE
IDLE_TIME_OUT=3600
IGNORE_SCALING=FALSE
LOGIN_REPROMPT=7200
Setting this to TRUE causes a log entry to be written for every single request from a client to the server. This is intended only for debugging.
By default, all logs include a MM/DD prefix before the timestamp on each log entry. This setting should not be changed, because changing it reduces the effectiveness of the log viewing utility.
Specifies the directory in which server logs should be stored. You can use forward slashes to separate elements in the path, even on Windows. If you prefer backslashes, you must double them in the prefs file setting, for example, C:\\Documents and Settings\\All Users. The doubling of backslashes is necessary because Java uses '\' as an escape character. The default value is empty, which causes logs to be written to the current directory (from which the server was launched).
Specifies the approximate maximum size of one log file in bytes, defaulting to 3 MB. When the server notices (at CLEANUP_WAIT intervals) that the current log exceeds this setting, it switches logging to a new file. This setting has no effect if LOG_SAVE_COUNT is set to 1.
LOG_DIRECTORY=
LOG_FILE_MAX=3000000
LOG_LEVEL=3 LOG_SAVE_COUNT=3
Specifies the level of logging detail for the server. The default is 3 and should not be changed unless recommended by development or support.
Specifies the maximum number of log files maintained by the server; older logs are erased when the count is exceeded. The default is 3. Changing it to 1 (not recommended) causes the server to write indefinitely to a single log with no timestamp in the name.
Determines the number of characters to use for the user ID field in server log entries.
Limits the number of items returned by the server when a user requests a tab-delimited export of a report.
This value provides a default limit on the number of cells (rows x columns) to be returned for a single mini report when the Row Limit for the mini report itself is left blank. An explicit setting for a mini report overrides this value.
Specifies the location of the catalog database. See DB_DATABASE for examples. Note: The MDB_ prefix for this group of settings indicates that they all refer to the catalog.
MDB_DATABASE=$J(pbMdbJdbcURL)
Specifies the driver to use for accessing the catalog database. See DB_DRIVER for examples.
Specifies the corresponding password for MDB_USER (see below).
Specifies the user ID that the server should use to access the catalog database. This user ID is usually the Editor's ID and must have read-write access to all catalog tables.
MODULE_ID is used as part of the interface and should not be changed.
MODULE_ID=HMB.send
Specifies how often, in seconds, the server should poll the Application Data. Polling checks the status of the database, so the server can take the right action when the database server is available, unavailable for loading, or unavailable due to network or system maintenance or failure conditions.
Specifies the polling interval for interrogating the BAP_TABLE_LIST table to determine whether fact tables should be considered available for use. The default setting is 900 seconds (15 minutes). This setting supports the delayed loading of aggregate tables. At the end of each interval, the server also attempts to connect to any cube data sources that were previously unavailable.
Specifies the polling interval to use during periods when connection to the database is unsuccessful, instead of the standard polling interval (POLL_DB=60) used while the database is responsive. The default is 300 seconds (5 minutes). This setting applies only to checking the flags in the BAP_LOAD table in the Application Data.
Specifies the port number the Server uses for accepting client connections. This setting is used when the CONFIG_SERVER setting is FALSE. The default is 2005. The client applets must be assigned to connect to this same port in their anchor Web pages.
POLL_DB_FOR_TABLES=900
POLL_DB_WHEN_DOWN=300
PORT_NUMBER=2005
This setting is used by the publish process and should not be changed.
This setting is used by the publish process and should not be changed.
This may be set to TRUE to place the server in read-only mode, in which case it does not update any catalog tables (as a result of user configuration changes, and so on). Note that you would typically use this when setting up a new server instance, for which you would likely want to change the port number; other configuration changes are also required.
READ_ONLY_DLG=TRUE
If READ_ONLY=TRUE, the default value of READ_ONLY_DLG causes a dialog box to be displayed to users at login, reminding them that any user configuration changes will be lost at logout.
Restricts the Enterprise Metrics Server to reading the BAP_PERIOD (calendar) table only down to the level of day, sampling data at finer levels, such as hour, if present. This setting should never be set to anything other than Day without guidance from Hyperion Solutions Customer Support.
Normally the server registers itself by host name. Setting this to TRUE causes it to register by IP address instead (and requires that the connection parameters in the Web site be changed accordingly). There is no good reason for doing this.
With the default setting of TRUE, all Enterprise Metrics users must be granted data security through the Security tool to be allowed to launch Metrics clients. Rule sets must be associated with the user or with at least one of the direct groups to which the user belongs (in addition to the user being authenticated and having valid Metrics roles). If the setting is changed to FALSE, users need only pass authentication, and are assumed to have unrestricted data access if they were not defined through the Security tool.
READ_PERIOD_LEVEL=Day
REGISTER_IP=FALSE
REQUIRE_UTABLE=TRUE
Metrics_Server.prefs Settings
Table 27
Controls how long the server remembers a user's role and data security information before acquiring them again from CSS and the security tables (minimum 0, maximum 1800).
Specifies whether to show the small server status window. This is typically set to TRUE on NT servers and FALSE on UNIX servers.
For development only. Should never be set to TRUE.
Specifies whether to allow users to cancel queries on client (Investigate Section) pages. To disable the Cancel feature, change this setting to FALSE.
SQL.CLOSE_IF_CANCELLED=FALSE
If TRUE, the connection used for a cancelled metrics query is closed and discarded, rather than returned to the connection pool. There is currently no known reason to do this, and it is highly recommended that the setting be left as FALSE.
For development and testing purposes only.
For development only. Should never be set to a non-zero value.
If the bap_period table represents a non-fiscal calendar (for example, a given week starts in one month and ends in the next), this setting should be set to TRUE to force an additional constraint in database queries that retrieve cumulative data, to ensure consistent results.
SQL.FIND_COLUMNS=SELECT
This setting determines the technique the Enterprise Metrics Server uses to find out which columns exist in the Application Data tables. The default method, SELECT, issues a SELECT * SQL statement and interrogates the result set. The other possible setting, TABLE, obtains the information by querying the database catalog; in that case it may also be necessary to identify the database catalog and schema names using DB_CATALOG and DB_SCHEMA. Not all database drivers support this function, so the recommended setting is SELECT.
For development only. Should never be set to TRUE.
Enables special support for ragged relational hierarchies; defaults to FALSE. To enable it, you must also have SQL.MAX_DASH_REQ_THREADS set to a value greater than 0 (default is 4) and a value set for SQL.SKIPPED_LEVEL_STRING.
SQL.FORCE_ALL_JOINS=FALSE
SQL.MART_HAS_RAGGED_HIERARCHY=FALSE
SQL.MAX_DASH_REQ_THREADS=4
This setting determines the number of parallel queries the server executes on behalf of a single client request for a page in the Monitor or Investigate Sections. If the database is running on a multiprocessor machine with sufficient resources, increasing this number typically improves client response time dramatically for these requests. This is, however, a tuning issue that must be carefully coordinated with settings for the connection pool, the database process limit, and various other settings. Note: The minimum value of 0 is not recommended.
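As a generic illustration (not the server's actual code), a bounded thread pool reproduces the behavior this setting controls: all queries for one page request run with at most N in flight at a time.

```python
from concurrent.futures import ThreadPoolExecutor

MAX_DASH_REQ_THREADS = 4  # corresponds to SQL.MAX_DASH_REQ_THREADS

def run_page_queries(queries):
    """Execute the queries for one page request, at most N at a time."""
    with ThreadPoolExecutor(max_workers=MAX_DASH_REQ_THREADS) as pool:
        # pool.map preserves the order of the submitted queries
        return list(pool.map(lambda q: f"result of {q}", queries))
```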
SQL.PRINT_SQL=TRUE
Specifies whether to include SQL statements in the log file. The logging level setting has no effect on this SQL logging. For typical installations, this should always be set to TRUE.
For development only. Should never be set to a non-zero value.
For development only. Should never be set to TRUE.
If using ragged relational hierarchies (SQL.MART_HAS_RAGGED_HIERARCHY=TRUE), this setting specifies the value that must be stored in the hierarchy level column(s) for the lowest <n> levels that do not exist in some particular path.
The default setting logs timing information for mini report query processing, which is recommended for typical installations.
Specifies whether to log timing information for query processing of Monitor and Investigate Section page requests, which is recommended for typical installations.
SQL.TIME_MINIS=TRUE
SQL.TIME_QUERIES=TRUE
SQL.TIME_QUERY_DETAILS=FALSE
SQL.USE_INS=TRUE
Specifies whether to log detailed timing information for query processing of pages in the Monitor and Investigate Sections.
This setting determines whether SQL query generation for pages in the Monitor and Investigate Sections constructs the time period constraint as a single IN (l, m, n) phrase, or as a series of greater-than and less-than comparisons. Based on experience to date with different databases, this should always be left TRUE.
For development only. Should never be set to TRUE.
For development only. Should never be set to TRUE.
Specifies whether collection of star usage statistics is enabled at the detail level. Results are stored in a catalog table and can be extremely useful for query tuning. There is, however, some overhead involved, and you may wish to change this to FALSE for normal operations. Note: Setting STAR_STATS.COLLECT_DETAIL to TRUE also forces STAR_STATS.COLLECT_SUMMARY to TRUE (see below).
STAR_STATS.COLLECT_SUMMARY=TRUE
Specifies whether collection of star usage statistics is enabled at the summary level. The overhead for this is minimal, and it should be left enabled.
Whenever star usage statistics collection is started (normally, during server restart), any summary and detail statistics older than the specified number of days are deleted.
If detail star usage statistics records are being collected, they are written to the catalog database each time this number of records has accumulated, or when the server is shut down or restarted.
When summary star usage statistics collection is enabled, statistics are accumulated in memory and written to the catalog database only when this specified interval (in seconds) expires, or when the server is shut down or restarted.
For development use only. Should never be set to TRUE.
STAR_STATS.DELETE_DAYS=14
STAR_STATS.DETAIL_WRITE_EVERY=1000
STAR_STATS.SUMMARY_INTERVAL_SECS=1800
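The detail-write threshold behaves like a simple buffered writer; the following sketch is a generic illustration (class and attribute names are hypothetical, and a list stands in for the catalog database):

```python
class DetailStatsBuffer:
    """Accumulate detail records, flushing every N records or on shutdown."""

    def __init__(self, write_every=1000):  # STAR_STATS.DETAIL_WRITE_EVERY
        self.write_every = write_every
        self.pending = []
        self.flushed = []  # stands in for writes to the catalog database

    def add(self, record):
        self.pending.append(record)
        if len(self.pending) >= self.write_every:
            self.flush()

    def flush(self):
        """Write out pending records (also called at shutdown/restart)."""
        self.flushed.extend(self.pending)
        self.pending.clear()
```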
TGC_DUMP=FALSE
Provided for backward compatibility only. Allows different names to be specified for the columns in the Application Data that the standard data model refers to as create_time.
Enables the Cubes tool in the Enterprise Metrics Studio Utilities. Defaults to TRUE, so that the Cubes icon appears in the Studio Utilities main window. May be set to FALSE if you are not using multidimensional data sources.
The Filter dialog box, used in both the Measures tool and Mini Report SQL Editor, supports both CASE and DECODE SQL syntax, but only one at a time. The default is CASE, which causes the tools to generate the so-called simple form of a standard CASE statement, which is supported as a native database function by Oracle, DB2, and SQL Server. DECODE and IIF are still supported, but unless you have a specific need to use one of these forms, you should remove any explicit setting you might have in your server prefs. Note that the generated syntax differs from earlier releases. The old form constructed by the Filter and Time dialogs, when set to CASE, was:
CASE a WHEN b THEN c ELSE 0 END
TOOLS.CUBE_TOOL=TRUE
TOOLS.FILTER_SYNTAX=$J(pbFilter)
Specifies the default chart color for the actual metric, when generating metrics and chart templates.
Specifies the default chart color when generating time offset metrics and chart templates.
Specifies the default chart color when generating comparison metrics and chart templates.
Specifies the default limit for generating cumulative charts, as a time grain code where 1=year, 2=quarter, 3=month, and so on. Used in the Cube, Measures, and Metrics tools to avoid generating week-to-date and day-to-date charts unless explicitly overridden (and the data supports it).
Offers flexibility in generating chart headers. If this setting is empty (nothing other than blanks or an immediate return after the equal sign), a two-metric chart uses the full metric labels for both metrics and does not insert a middle line (such as 'vs.'). Three-metric charts use the full metric names, adjusted for cumulative or time offsets as usual. If the value is non-blank, a middle header with this value is inserted for two-metric charts (always black), and simplified names are used for the comparison and/or time offset metric: for example, [Sales][vs. Budget][vs. Prev Year], where an empty setting would instead produce [Sales][Budget Sales][Sales Prev Year].
TOOLS.GEN_VERSUS=' vs.'
Provides the string used to separate metric names when generating the name of a chart template containing more than one metric. Note that single quotes are required if you wish to preserve leading and/or trailing blanks.
Identifies the image used for the background of the main configuration window. This image is used when TOOLS.CUBE_TOOL=TRUE.
TOOLS.IMAGE.MAIN_CUBE_BACKGROUND=config_tools_cube_bgnd.jpg
Identifies the image used for the background of the main configuration window. This setting must not be changed.
Used only in emergency situations, in which the Configuration Server is not available to authenticate login to the Studio Utilities. At installation, TOOLS.LOGON_PASS should be set to the same value as DB_PASS.
Used only in emergency situations, in which the Configuration Server is not available to authenticate login to the Studio Utilities. At installation, TOOLS.LOGON_USER should be set to the same value as DB_USER.
TOOLS.LOGON_USER=$J(pbDbOwner)
TOOLS.LOG_DATE=TRUE
TOOLS.LOG_FILE_MAX=500000
Causes MM/DD to be prefixed to log entries for the Studio Utilities.
Specifies an approximate size limit, in kilobytes (KB), for the log file written by the Studio Utilities. Each time the Studio Utilities are launched (when logging to a file), the size of the most recent log file is compared to this value, and a new file is started if the size exceeds it. This setting is ignored, however, if TOOLS.LOG_SAVE_COUNT is set to 1.
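The rotation rule described here can be sketched as follows (a generic illustration; the function name is hypothetical and the defaults come from the settings above):

```python
def should_start_new_log(last_log_size_kb, max_kb=500000, save_count=2):
    """Decide at launch whether to start a new log file.

    Mirrors the rule described above: the size limit is ignored when
    save_count is 1 (a single file is appended to indefinitely).
    """
    if save_count == 1:
        return False
    return last_log_size_kb > max_kb
```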
TOOLS.LOG_LEVEL=6
TOOLS.LOG_SAVE_COUNT=2
Determines the level of detail in the Studio log. Should not be changed without guidance from Hyperion Solutions Customer Support.
Specifies the maximum number of log files maintained by the Studio, erasing older logs when the count is exceeded. The default is 2. If set to 1, the tools log indefinitely to a single file that omits the date/time usually included in the log filename.
If changed to FALSE, log messages from the Studio Utilities are written to the Java console window instead of to a file. The recommended setting is TRUE.
Determines the number of characters available for displaying a user ID in the log entries for the Studio Utilities (longer IDs are truncated).
When using the Processed Enrichment tool for editing Direct jobs, the initial display includes all distinct values found in the source column if there are fewer than this setting. If more values exist, only the Show Used view is enabled.
Allows you to adjust the limit that determines whether a menu of values is shown for a selected column, or the user must simply type one in. For example, in the Enrichment tool, when you select a source column, you must also designate a source value. A menu of values is provided when the number of choices falls below this limit; by default, the limit is 100 distinct values. This limit applies to the Filter dialog box (used in the Measures tool and Mini Report SQL Editor), various menus showing column values in the Processed Enrichment tool, and the selection of comparison values in the Security tool.
TOOLS.MAX_VALUES=100
TOOLS.MIN_AUTOGEN_ITEM_ID=1000
The items that populate the menus on (Pinpoint Section) pages are, for the most part, automatically generated by the Enterprise Metrics Server. This setting establishes the minimum item ID that the server assigns, preserving lower ID values for use by manually-defined constraints (such as a date range). Do not change this value unless directed to do so by Hyperion Solutions Customer Support.
Used when generating metric and chart names for cumulative metrics. The default is PTD (period to date); however, you can change this setting to use a different suffix. If the specified suffix is used, Enterprise Metrics automatically converts (for example) PTD to YTD, QTD, and so forth for cumulative metrics. This suffix appears in metric names and chart template headers. This setting must be coordinated with the TOOLS.PTD.SUB_PATTERN setting for proper results. Note: Single quotes are necessary around the value in cases where either leading or trailing blanks must be preserved, as in the default shown here.
TOOLS.PTD.SUB_PATTERN=-*--
This setting is used in conjunction with TOOLS.PTD.DEFAULT_SUFFIX, and specifies which portion of that string should be substituted with a specific time grain value. Ignoring the surrounding single quotes (if any) on the TOOLS.PTD.DEFAULT_SUFFIX value, this substitution pattern should use asterisk characters to indicate the positions in the pattern to be replaced with a specific time grain name, and minus signs (hyphens) to fill all other positions. If only a single asterisk is used, the substitution is done using the first character of the time grain name (Y, Q, M, and so on); if more than one consecutive asterisk is present, the full name (for example, Year) is substituted. As an example, if you were to set TOOLS.PTD.DEFAULT_SUFFIX=' so far this CHUNK' and also set TOOLS.PTD.SUB_PATTERN=------------*****, the resulting names would be something like 'so far this Year'.
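A rough sketch of this substitution logic as described (the function name is hypothetical, and edge cases of the real implementation are unknown):

```python
def apply_sub_pattern(default_suffix, sub_pattern, grain):
    """Replace the asterisk run in the suffix with the time grain name.

    A single '*' substitutes the grain's first character (PTD -> YTD);
    a run of '*' substitutes the full grain name.
    """
    text = default_suffix.strip("'")          # surrounding quotes ignored
    start = sub_pattern.index("*")
    end = sub_pattern.rindex("*") + 1
    replacement = grain[0] if end - start == 1 else grain
    return text[:start] + replacement + text[end:]
```

With the defaults shown above, apply_sub_pattern("' PTD'", "-*--", "Year") yields " YTD".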
Determines the general appearance of the Studio Utilities.
Determines the general appearance of the Studio Utilities.
Provided for backward compatibility only. Allows a different name to be specified for the columns in the Application Data that the standard data model specifies as update_time.
Specifies the syntax to be used when generating case-insensitive report constraints.
The default is TRUE, which causes a warning dialog to be displayed if the Studio Utilities are connecting to the metrics catalog.
The password for the UMDB_USER below.
Specifies the user ID that the server should use when connecting to the catalog database for UPDATE purposes. This user ID must have write privileges to a number of catalog tables, unlike MDB_USER, which requires only read access to the catalog.
This is the default setting. Other valid options are HTTP_USER, REMOTE_USER, and CUSTOM_LOGIN. If you have special requirements, contact Hyperion Customer Support.
If set to TRUE, the Enterprise Metrics Server logs an overwhelming (but sometimes useful) number of details about the catalog information processed during initialization. This setting can be useful for viewing detail on stars, measures, and chart templates. It affects the logs in the following ways:
VERBOSE_INIT=FALSE
For each star, it tells which facts are used. For each star, it tells you the detail about the star. Shows which measure is used. Shows the fact snippet for the measure. For normal production use, this should be left as FALSE.
Configuration_Server.prefs Settings
The Configuration_Server.prefs file for the Configuration Server is analogous to Metrics_Server.prefs, but the settings noted in Table 28 have different values. Configuration_Server.prefs resides in the same directory as the Configuration Server.
Table 28
Specifies the port number for the Configuration Server. This setting is used only when CONFIG_SERVER is set to TRUE.
This setting causes the Enterprise Metrics Server to launch as a Configuration Server, rather than a Metrics Server. When a client connects to the Configuration Server, the Login dialog contains an extra check box, allowing a user to log in as the Editor and create or modify global pages in the Monitor and Investigate Sections. The Configuration Server also runs with caching disabled, to improve server restart performance, since the Studio Utilities are recycled every time metric and report metadata changes are to be tested. Note that the Configuration Server listens on the port specified by CONFIG_PORT_NUMBER.
Specifies which db_version names in PUB_MAP_TABLE to use for this Enterprise Metrics Server. The default is pub.
Points to the Configuration Catalog, which is used for editing. This allows the Editor to change and test the metrics and pages before migrating the Configuration Catalog to production, where all users are affected by the change.
DB_MAP_NAME=pub
MDB_DATABASE=
Client.prefs Settings
The settings in Client.prefs control Enterprise Metrics Workspace and Personalization Workspace. This file resides in the same directory as the Metrics and Configuration Servers.
Table 29
The value METABOLISM causes client logging to be directed to the Java Console window on the client machine for debug mode. The standard setting, DUMPTOFILE, creates a log file on the client system during the client session. The location of the client log file is browser dependent unless specified by LOG_DIRECTORY.
Specifies how many rows to move when using the Fast Scrolling feature on an (Investigate Section) page. Ignored if the value exceeds MAX_DIMENSION_ROWS; if set to a value one less, for example, fast scrolling displays one row from the previous block and 19 new rows.
Limits the amount of staggering that occurs with ZoomChart detail lines. This applies only to Monitor Section charts that have the x-axis set to display by point of view (time period labels are never truncated). The labels are not limited unless they would cause staggering; for example, with two very wide bars, the available space is used. The minimum setting is 50; the maximum is 200.
BLOCK_SCROLL_AMOUNT=20
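The overlap between scroll blocks can be illustrated with a small sketch (hypothetical names; not the client's actual logic):

```python
MAX_DIMENSION_ROWS = 20
BLOCK_SCROLL_AMOUNT = 19  # one less than the page size, for this example

def fast_scroll(top_row):
    """Advance the visible block by BLOCK_SCROLL_AMOUNT rows; the rows
    carried over from the previous block are the difference between the
    page size and the scroll amount."""
    new_top = top_row + BLOCK_SCROLL_AMOUNT
    carried_over = MAX_DIMENSION_ROWS - BLOCK_SCROLL_AMOUNT
    return new_top, carried_over
```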
BLOWUP.MAX_AXIS_LABEL_WIDTH=80
CHART.GRAY_BARS=TRUE
Specifies that any chart drawing bars for only a single metric draws the bars in gray. This is the default setting. If set to FALSE, the metric color specified in the Chart tool is always used for drawing bars, regardless of how many metrics are displayed that way.
When a ZoomChart displays color keys for more than the maximum possible colors (20), the label specified here identifies the remaining (black) area.
Specifies that the End of xxx labels in chart headers should be suppressed when they are not meaningful (that is, when the chart displays months and the current month is rightmost, so End of Month is unnecessary). This is the default. If set to FALSE, the End of xxx label is always displayed.
The maximum number of objects (charts and mini reports) to be cached by the clipping servlet.
For development/testing purposes only.
Contains the prefix used when generating the Clip URL. For example, if you generate clips to be used in a Hyperion System 9 BI+ Interactive Reporting dashboard environment, you would set this to CLIP.URL_PREFIX=
http://<HPSu web server:port>/Hyperion/browse/ extRedirect?extUrl=
CHART.KEYS.OTHER=(other)
CHART.LIMIT_ENDOF=TRUE
CLIP.CLIPS_CACHED=50
CLIP.DEBUG_CACHE=FALSE
CLIP.URL_PREFIX=
This setting is relevant only if CLIP.URL_TYPE is set to PREFIX or BOTH. If AV_URL is set, you do not need to set this. If this setting is supplied and AV_URL is supplied, the setting for CLIP.URL_PREFIX overrides. We recommend that you specify AV_URL only and let the server derive this setting.
Description Controls the options displayed on the Clip Generation options dialog box. This setting can have one of three possible values:
GENERAL is the default option. It allows the user to generate URLs for Clips in the standard format. These URLs can be used to embed Clips in a single sign-on web environment other than Hyperion Performance Suite.
PREFIX allows Clip URLs to be generated in the required format for the Interactive Reporting Studio. In this mode, you must also set the CLIP.URL_PREFIX value. Essentially, the standard URL is URL-encoded and appended to the prefix for these options.
BOTH allows the user to generate Clip URLs in any of the above formats. In this mode, you must also set the CLIP.URL_PREFIX value.
If AV_URL is set, you do not need to set this. If both this setting and AV_URL are supplied, CLIP.URL_TYPE overrides. We recommend that you specify AV_URL only and let the server derive this setting.
DEBUG_WIZ_GRAPHS=FALSE
The default is FALSE and should not be changed unless directed to do so by Hyperion Solutions Customer Support. When set to TRUE, additional debugging information is written to the log whenever a ZoomChart is displayed or a chart is previewed in the wizard.
The default directory on the user's computer for data that the user exports to a file from an (Investigate Section) or (Pinpoint Section) page. You can use forward slashes to separate elements in the path for Windows. If you use backslashes, you must double them in the prefs file setting, for example, C:\\Documents and Settings\\All Users; thus the default is C:\\. The doubling of backslashes is necessary because Java uses '\' as an escape character. Though the default setting may not be appropriate for all users, you can change the destination if desired.
EXPORT.JPEG_QUALITY=95
Controls JPEG quality for exported images (not thin client images). The default of 95 reduces the size of the exported file to roughly a third of the best-quality size, with no serious compromise of the image. This is a global setting that applies to all full clients.
The name of the JAR for the client applet, which contains default and required images.
The name of the JAR containing custom images.
The path to the directory containing the applet and image files, relative to the applet code base.
The path to the directory containing the applet and image files, relative to the Enterprise Metrics Web site URL.
By default, all logs include a MM/DD prefix before the timestamp on each log entry. The default is TRUE; you can turn this off by changing this setting to FALSE.
The default directory of the log file; defaults to the user's temporary directory.
DEFAULT_DIRECTORY=C:\\
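For example, either of these forms designates the same Windows directory in the prefs file (the path is illustrative):

```properties
DEFAULT_DIRECTORY=C:/Documents and Settings/All Users
DEFAULT_DIRECTORY=C:\\Documents and Settings\\All Users
```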
LOG_DIRECTORY=
Description Approximate size limit, in bytes, for the log file written by the Personalization Workspace. Each time the client is launched (when logging to a file), the size of the most recent log file is compared to this value, and a new file is started if the size exceeds this value. This setting is ignored, however, if LOG_SAVE_COUNT is set to 1.
LOG_LEVEL=4
LOG_SAVE_COUNT=2
The level of logging detail.
By default, the client saves a maximum of two log files, erasing older ones as necessary. If set to 1, the client writes indefinitely to a single file named mb.client.log, ignoring the value of LOG_FILE_MAX.
The number of characters to use for recording the user ID in log entries.
The number of detail rows displayed in an (Investigate Section) page. Setting this to a smaller value somewhat reduces client memory requirements; setting it higher is strongly discouraged.
The maximum number of columns permitted in an (Investigate Section) page. Changing this is discouraged.
The number of characters permitted in a page title in the Monitor Section, and in page names and titles in the Investigate Section. Must not exceed the field widths in the catalog tables. May be set to a smaller value at installation to prevent the page selector menu from becoming too wide, but should never be set to a value smaller than existing titles and names.
LOG_USERID_LENGTH=16
MAX_DIMENSION_ROWS=20
MAX_METRIC_COLUMNS=7
MAX_TITLE_LENGTH=480
PIXELS_FREE_EXTRA_HEIGHT=30
The PIXELS_FREE settings reserve some screen area around the client window, to avoid conflicts with items such as a Microsoft Office toolbar on Windows. If you prefer to have the client open with a maximized view, changing these settings helps, but may not be suitable for all users.
See the description for PIXELS_FREE_EXTRA_HEIGHT.
See the description for PIXELS_FREE_EXTRA_HEIGHT.
The font to use for the footer on printed pages.
The text specified appears at the bottom of all printed pages.
Limits the size of the preview image. It is highly recommended that you not increase this setting.
Limits the size of the preview image. It is highly recommended that you not increase this setting.
Specifies the filename suffix to be appended when exporting tab-delimited files from (Investigate Section) or (Pinpoint Section) pages. Note that the file is really just a text file, but the default extension of .xls simplifies opening the file in a spreadsheet.
This setting specifies (in pixels) the minimum width used for all printing. For example, if you print a mostly empty page with just an object in the top left corner, the width of the printed image is still guaranteed to be this value. This may be useful for ensuring that enough of the background/foreground images above the tabs appear. Note: The printed size is determined by the space required to show everything, not by the current size of the client window.
THIN.ANTI_ALIAS=TRUE
The default is TRUE. If set to FALSE, anti-aliasing is disabled when drawing lines on charts and for the pie chart. If the servlet is generating GIF-format images for charts, disabling anti-aliasing drastically reduces CPU utilization, because anti-aliasing pushes the image over the 256-color limit for a GIF, and the quantization code is extremely expensive. This applies only to the thin client servlet.
For development use only. Do not set this to TRUE without explicit direction to do so from development. The default is FALSE. If set to TRUE, the glass pane is 50% opaque; this is useful in debugging any problems with the glass pane.
For development use only.
When using JPEGs for thin client chart images, this setting controls the image quality, with permissible values of 50-100. We strongly advise against changing this setting, as 95 gives you most of the image size (and network traffic) reduction with minimal loss of quality.
This should not be changed without direction from Hyperion Customer Support.
The level of logging detail for the thin client servlet and the launcher servlets.
By default, the thin client servlet saves a maximum of two log files, erasing older ones as necessary. If set to 1, the servlet writes indefinitely to a single file, ignoring the value of THIN.LOG_FILE_MAX.
Default setting.
For development use only.
If set to TRUE, userid and password are read from the HTTP request rather than from cookies.
As the name implies, this is provided for performance testing only. If set to TRUE, only the current page is cached (effectively no caching). Note that in this case, Cancel (progress dialog) and Export are not expected to work.
Determines how often (in seconds) the thin servlet polls the Server to check for changes in state, such as restart after the nightly load, so that the servlet can determine when to reinitialize or prevent logins.
THIN.JPEG_QUALITY=95
Determines how often (in seconds) the thin servlet writes summary statistics (for example, number of users) to the servlet log file.
By default, this setting causes the thin servlet to generate JPEG images (rather than GIFs) for charts and major portions of the Investigate Section. If set to FALSE, images are generated as GIFs, which preserves the color accuracy of the full client, but at a tremendous CPU cost for the thin client servlet. In this case, it may be important to set THIN.ANTI_ALIAS to FALSE.
Metadata_export.prefs
Table 30 lists the valid preference settings for the Metadata Export Utility preference file.
Table 30
Metadata Export Utility Preference File Settings
Driver syntax (differs per database type). Note: The driver and URL information for each supported database are included in the file. Only one DRIVER and URL pair needs to be uncommented for a run of metadata export; the values should be based on the source database.
URL= jdbc:brio:oracle://<host>:<port>;SID=<sid>
JDBC URL reference (differs per database type). See note above.
Supports levels 0, 5, and 10.
Location of the log file. You can use forward slashes to separate elements in the path, even on Windows. If you use backslashes, you must double them in the prefs file setting, for example, C:\\Documents and Settings\\All Users; thus the default is "C:\\". The doubling of backslashes is necessary because Java uses '\' as an escape character.
Name of the log file.
Location of pre- and post-SQL files. The same path rules apply.
Name of the pre-SQL file.
Name of the post-SQL file.
Location of output file(s). The same path rules apply.
Output file name.
Location of the table list file. The same path rules apply.
Name of the extraction table list.
TABLES_FILE=metadata_export_tables.txt
Metadata Export Utility Preference File Settings (Continued) Description Valid values are PUB_, PRD_, or blank. The Metadata Export Utility concatenates the prefix to the name of each table listed in the table files. If the setting is blank, the tool reads the table names from the table list. If it cannot find the tables in the specified database, it generates errors. You can leave the prefix field blank, if the prefix of PUB or PRD is included in the name of each table in the export table file list.
TABLE_PREFIX=PUB_
UPDATE_USER_ID=
USER=
Specify the value for the update_user_id column to filter the records based on that column value. If set, the module list is ignored.
The database user who owns the database tables. Used with the user password (below) to connect to the database.
PWD=
Password for the user, used if USER is set. Used with the user ID (above) to connect to the database.
Source database type.
Target database type.
Number of rows after which commit text is added.
Commit text to be added after each interval.
Part
III
Chapter
20
In This Chapter
Deleting User POVs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 358 Report Server Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359 Analytic Services Ports. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367 Scheduler Command Line Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370 Batch Input File XML Tag Reference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 374 RMI Encryption Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 378
For example:
DeletePov admin pass localhost user1.* .* ShowOnly
DeletePov admin pass localhost user .*
Table 31
Delete User POV Parameters
System administrator's ID used at login.
System administrator's password used at login.
Name of the report server you are using.
Includes the user name.
Includes the data source user name, data source server name, application name, database name, and data source type name.
Note: You must use double quotation marks on the command line for data source names with spaces and special characters.
ShowOnly
Displays a list of matching user POVs without prompting to delete them. Result format: User, Db Connection Name, Server, App, Db, Type, ID
Note: See fr_global.properties for details on logging levels (FATAL, ERROR, WARN, INFO, DEBUG) and formatting options. The Financial Reporting logging settings can be changed without restarting the servers: the .properties files are monitored for changes to the logging setting every minute (the frequency is set in fr_global.properties). This is handy when troubleshooting a production environment, because you can briefly set the logging level to DEBUG and then change it back.
In addition to the default RollingFileAppender, there is a DailyRollingFileAppender option that makes periodic backups of the current log file. The DailyRollingFileAppender rolls the log file over at a user-chosen frequency: monthly, weekly, half-daily, daily, hourly, or every minute. The rolling schedule is specified by the DatePattern option. Here is an example of a schedule that rolls the log file on a daily basis:
log4j.rootLogger=ERROR,dest1
log4j.appender.dest1=org.apache.log4j.DailyRollingFileAppender
log4j.appender.dest1.ImmediateFlush=true
log4j.appender.dest1.File=d:\\Hyperion\\HR\\Logs\\Daily_HRReportSrv.log
log4j.appender.dest1.Append=true
log4j.appender.dest1.DatePattern='.'yyyy-MM-dd
log4j.appender.dest1.layout=org.apache.log4j.PatternLayout
log4j.appender.dest1.layout.ConversionPattern=%d{MM-dd HH:mm:ss} %-6p%c{1}\t%m%n
In the above example, at midnight on October 30, Daily_HRReportSrv.log is copied to Daily_HRReportSrv.log.2003-10-30. Logging for the current day continues in Daily_HRReportSrv.log until it rolls over the next day to Daily_HRReportSrv.log.2003-10-31.
Note: The current log file is not rolled to a daily backup file until a log entry needs to be written, so the rollover may not happen exactly at midnight, and if a day passes without log entries, no backup log file is created for that day. This is done for efficiency reasons. Regardless of the delay, all logging events are logged to the correct file.
Financial Reporting uses the standard Log4j package from the Apache group to handle logging duties. For more details on the DailyRollingFileAppender syntax and options, see:
http://jakarta.apache.org/log4j/docs/api/org/apache/log4j/DailyRollingFileAppender.html
While Financial Reporting logs nothing of interest to stdout or stderr, you can check this output in case of a JVM crash (blown heap or native thread dump).
Table 32    DatePattern Options

DatePattern: -yyyy-MM
Rollover Schedule: Rollover at the beginning of each month.
Example: At midnight on October 31, 2003, Daily_HRReportSrv.log is copied to Daily_HRReportSrv.log.2003-10. Logging for the month of November continues in Daily_HRReportSrv.log until it rolls over the next month to Daily_HRReportSrv.log.2003-11.

DatePattern: -yyyy-ww
Rollover Schedule: Rollover on the first day of each week. The first day of the week depends on the locale.
Example: Assuming the first day of the week is Sunday, at midnight on Saturday, October 9th, 2003, Daily_HRReportSrv.log is copied to Daily_HRReportSrv.log.2003-23. Logging for the 24th week of 2003 is output to Daily_HRReportSrv.log until it is rolled over the next week.

DatePattern: -yyyy-MM-dd
Rollover Schedule: Rollover at midnight each day.
Example: At midnight on October 30, 2003, Daily_HRReportSrv.log is copied to Daily_HRReportSrv.log.2003-10-30. Logging for the current day continues in Daily_HRReportSrv.log until it rolls over the next day to Daily_HRReportSrv.log.2003-10-31.

DatePattern: -yyyy-MM-dd-a
Rollover Schedule: Rollover at midnight and midday each day.
Example: At noon on October 9th, 2003, Daily_HRReportSrv.log is copied to Daily_HRReportSrv.log.2003-10-09-AM. Logging for the afternoon of the 9th is output to Daily_HRReportSrv.log until it is rolled over at midnight.

DatePattern: -yyyy-MM-dd-HH
Rollover Schedule: Rollover at the beginning of each hour.
Example: Logging for the 11th hour of the 9th of October is output to Daily_HRReportSrv.log until it is rolled over at the beginning of the next hour.

DatePattern: -yyyy-MM-dd-HH-mm
Rollover Schedule: Rollover every minute.
Example: Logging for the minute of 11:23 (9th of October) is output to Daily_HRReportSrv.log until it is rolled over the next minute.
Assigning Financial Reporting TCP Ports for Firewall Environments or Port Conflict Resolution
By default, Financial Reporting components communicate with each other through Remote Method Invocation (RMI) on dynamically assigned Transmission Control Protocol (TCP) ports. To communicate through a firewall, you must specify the port of each Financial Reporting component separated by the firewall in its .properties file and then open the necessary ports in your firewall. These .properties files are located in the Financial Reporting lib directory. In addition, you may need to open ports for the Report Server RDBMS, for the data sources that you report against, and for LDAP/NTLM external authentication.
Note: Ports should be opened in the firewall only for Financial Reporting components that must communicate across the firewall. If the Financial Reporting components are not separated by a firewall, they can use the default dynamic port setting.
You can change the port assignments for use in a firewall environment in the Financial Reporting .properties files described below.
The Communication Server runs on each computer running any of the Financial Reporting server components shown below and requires one port. By default, this is port 1099, but it can be specified in the fr_global.properties file using RMIPort=. The Report Server requires two ports, which are specified in the fr_repserver.properties file using HRRepSvrPort1= and HRRepSvrPort2=.
Workspace requires one port, which is specified in the fr_webapp.properties file using HRHtmlSvrPort=. The Scheduler Server requires one port, which is specified in the fr_scheduler.properties file using HRSchdSvrPort=. The Print Server requires one port, which is specified in the fr_printserver.properties file using HRPrintSvrPort=.
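Collected together, a static-port configuration across these files might look like the following sketch. The property names and file names are those listed above; the port numbers other than the default 1099 are illustrative assumptions, not product defaults:

```properties
# fr_global.properties (every machine running a Financial Reporting server component)
RMIPort=1099

# fr_repserver.properties (Report Server: two ports)
HRRepSvrPort1=8205
HRRepSvrPort2=8206

# fr_webapp.properties (Workspace)
HRHtmlSvrPort=8200

# fr_scheduler.properties (Scheduler Server)
HRSchdSvrPort=8210

# fr_printserver.properties (Print Server)
HRPrintSvrPort=8215
```

Each port assigned this way would then be opened in the firewall between the components that must communicate across it.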
Note: When assigning static ports for each Financial Reporting component, the typical values are between 1024 and 32767.
When the Financial Reporting server components are distributed among several machines, you may need to change the default RMIPort on one or more machines. For example, suppose you installed a Report Server on MachineA and left the default RMIPort configuration intact, but installed a Print Server on MachineB and had to change its RMIPort assignment to 1100 to resolve a conflict with another application. In this case, you would need to reference the Print Server using hostname:port nomenclature in all .properties files that refer to the Print Server; in this example, you would assign printserver1=machineB:1100 in fr_repserver.properties.
Note: If you change the RMIPort for the Report Server component, users logging on through the Reports Desktop should use the same hostname:port nomenclature.
The following is a list of properties file entries that require :port to be appended when the target hostname computer uses a different RMIPort. If all components define the same RMIPort, you need to supply only the hostname in all properties files.
Table 33
fr_webapp.properties: HRWebReportServer=
Note: If RMIPort is changed in fr_global.properties on the computer where the Report Server is running, and Planning Details is a valid data source, then ADM_RMI_PORT should also be changed in ADM.properties. For example: C:\Hyperion\common\ADM\9.0.0\lib\ADM.properties.
To add Java arguments to Windows systems for the Report Server, Print Server, and Scheduler Server:
1 Add two new String Values, JVMOptionx and JVMOptiony, where x and y are replaced with the next available numbers in the JVMOption series.
2 Assign the new entries these values:

JVMOptionx --> -Djava.rmi.server.hostname=<IP or hostname of NAT device>
JVMOptiony --> -Djava.rmi.server.useLocalHostname=false
1 If you chose to run Workspace as a Windows service, open the Windows registry and navigate to: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\beasvc hr_domain_HReports\Parameters.
2 Add two new String Values called JVM Option Number x and JVM Option Number y, where x and y are replaced with the next available numbers in the JVM Option Number series. Assign the new entries these values:

JVM Option Number x --> -Djava.rmi.server.hostname=<IP or hostname of NAT device>
JVM Option Number y --> -Djava.rmi.server.useLocalHostname=false
To add Java arguments to UNIX systems for the Report Server, Print Server, Scheduler Server, and Tomcat:

1 Open .../BIPlus/bin/freporting for editing.
2 In the start block for each installed component, add the two required entries

-Djava.rmi.server.hostname=<IP or hostname of NAT device>
-Djava.rmi.server.useLocalHostname=false

after the appropriate -c "${JAVA_HOME}/bin/java" line.
For example:
-c "${JAVA_HOME}/bin/java -Djava.rmi.server.hostname=<IP or hostname of NAT device> -Djava.rmi.server.useLocalHostname=false
WebLogic
For the WebLogic Web server:
1 Open .../HyperionReports/HRWeb/hr_domain/startWeblogic.sh in a text editor and add -Djava.rmi.server.hostname=<IP or hostname of NAT device> -Djava.rmi.server.useLocalHostname=false to the JAVA_OPTIONS variable declaration.
WebSphere
For the WebSphere Web server:
1 Start the WebSphere Administrator's Console.
2 Navigate to Servers > Application Servers and select your server.
3 In the Additional Properties section, select Process Definition.
4 In the Process Definition's Additional Properties section, select Java Virtual Machine.
5 Append the following to the Generic JVM Argument property:
-Djava.rmi.server.hostname=<IP or hostname of NAT device> -Djava.rmi.server.useLocalHostname=false
Analytic Services is licensed by ports. A 100-concurrent-user license for Analytic Services means that 100 Analytic Services ports are licensed, and an unlimited number of connections is allowed on each of those ports. The number of connections you open to Analytic Services is not relevant for licensing purposes. What matters is the number of Analytic Services ports.
When a user runs a report in Financial Reporting, connections are opened to Analytic Services. For performance optimization, these connections are cached. When the connections become idle, a process runs periodically to close them. The system administrator can modify the length of time before a connection is considered inactive (MinimumConnectionInactiveTime, default of 5 minutes) and the length of time before inactive connections are closed (CleanUpThreadDelay, default of 5 minutes) in the fr_global.properties file. The number of ports used by Financial Reporting varies, depending on the configuration, as follows:
If a Report Client such as the Windows UI runs a report, two Analytic Services connections are made: one for the Report Client and one for the Report Server. If the Report Client and Report Server are on the same computer, the two Analytic Services connections use one Analytic Services port.
The Report Client keeps its Analytic Services connection until the window with the report displayed is closed. The Report Server keeps its Analytic Services connection until the process is run to close idle open connections. When both connections are closed, the port is released.
If the Report Client and Report Server are on two different machines, two Analytic Services connections using two Analytic Services ports are made.
The Report Client keeps its Analytic Services connection until the window with the report displayed is closed. The Report Server keeps its Analytic Services connection until the process is run to close idle open connections. When the Report Client connection is closed, the corresponding port for that connection is released. When the Report Server connection is closed, the corresponding port for that connection is released.
When a user runs a report through a Web client such as the Browser UI, two Analytic Services connections are made: one for the Web server and one for the Report Server.
If the Web server and Report Server are on the same computer, two Analytic Services connections using one Analytic Services port are made.
The Web server keeps the Analytic Services connection until the process is run to close idle open connections. The Report Server keeps this Analytic Services connection until the process is run to close idle open connections. When both connections are closed, the port is released.
If the Web server and Report Server are on two different computers, two Analytic Services connections using two Analytic Services ports are made.
The Web server keeps its Analytic Services connection until the process is run to close idle open connections, and the Report Server likewise keeps its Analytic Services connection until that process runs. When the Web server connection is closed, the corresponding port for that connection is released. When the Report Server connection is closed, the corresponding port for that connection is released.
For example, suppose the Report Server and Web server are installed on the same computer and the Report Client is installed on several other computers. In this case, two Analytic Services ports are taken only for users working with the Report Client. All users connecting to view reports in Workspace take a single Analytic Services port per Analytic Services user, because the Web server and Report Server are on the same computer.
Add or increase the following Analytic Services client settings in the essbase.cfg file:
Calculating the Formula for the Maximum Number of Analytic Services Ports
The basic formulas for calculating the maximum number of Analytic Services ports you need for Financial Reporting are as follows:
If Workspace and the Report Server are on the same computer:

Number of Analytic Services ports = 2 x (number of Report Clients) + (number of Workspace users)

If Workspace and the Report Server are on different computers:

Number of Analytic Services ports = 2 x (number of Report Clients) + 2 x (number of Workspace users)
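The two formulas can be expressed as a small calculation. The sketch below is purely illustrative; the function and argument names are not part of the product:

```python
def max_analytic_services_ports(report_clients: int,
                                workspace_users: int,
                                same_computer: bool) -> int:
    """Maximum Analytic Services ports Financial Reporting needs.

    same_computer is True when Workspace and the Report Server
    run on the same computer.
    """
    if same_computer:
        # Each Report Client takes two ports; each Workspace user takes one.
        return 2 * report_clients + workspace_users
    # Each Report Client and each Workspace user takes two ports.
    return 2 * report_clients + 2 * workspace_users

# For example, with 10 Report Clients and 50 Workspace users:
print(max_analytic_services_ports(10, 50, same_computer=True))   # 70
print(max_analytic_services_ports(10, 50, same_computer=False))  # 120
```

Remember that this estimate covers Financial Reporting only; as noted below, other Analytic Services clients take ports separately.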
Note: This formula is for Financial Reporting and does not consider other ways users might be connecting to Analytic Services; for example, the Application Manager, Web Analysis, or the Excel Add-in. You must consider those potential port-takers separately. If they are used on the same computer as one of the Financial Reporting components, no extra ports are taken as long as the same Analytic Services user ID is being used.
If you run a report with two data sources, the number of connections doubles, but the number of ports remains the same as described previously. If you run a report with three data sources, the number of connections triples, but again the number of ports remains the same. If, after closing the report with two data sources, you run a report with a third data source, the number of connections increases again, but the number of ports does not change.
A user's connection is open for at least five minutes and remains open for up to ten minutes, assuming no new activity occurs during that time. If you have a limited number of Analytic Services ports and many users accessing Financial Reporting, you may want to lower both values to 30 seconds (30000).
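As a sketch of that adjustment, and assuming (as the value 30000 suggests) that both settings are expressed in milliseconds, the fr_global.properties entries might look like this:

```properties
# Consider an Analytic Services connection inactive after 30 seconds,
# and run the cleanup process that closes idle connections every 30 seconds.
MinimumConnectionInactiveTime=30000
CleanUpThreadDelay=30000
```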
2 Open mybatch.xml, where mybatch is the name of your batch input file.
3 Modify this file as needed by editing the values in the tags; see Modifying Attributes on page 372 for the commonly used attributes.
To launch a batch from a command line prompt in the BIPlus\bin directory, enter the command by specifying the fully qualified name of the batch input file and the computer name or IP address of the Scheduler Server on which to schedule the batch. For example:
ScheduleBatch c:\DailyReports\mybatch.xml MySchedulerServer
where mybatch.xml is the name of your batch input file and MySchedulerServer is the name or IP address of your Scheduler Server, which is typically located on the same computer as the Report Server. This launches a batch that runs immediately against the specified Scheduler Server.
Encoding Passwords
Your passwords are encoded when you export the batch input file. To specify another user ID or data source ID in the batch input file, you can use one of the following files to produce an encoded password for use in the batch input file:
Windows: EncodePassword.cmd
UNIX: EncodePassword
To encode passwords:
1 Open the batch input file to modify the data source and user ID passwords.
2 From the command line, run the EncodePassword.cmd file.
3 Type EncodePassword Password, where Password is the new password you want to use.
4 Place the encoded password produced in the batch input file.
Modifying Attributes
In a typical batch input file, there are very few attributes to modify. Most attributes are already set properly based on the originally scheduled batch. The following table lists attributes that you are most likely to modify for the associated XML tags.
Table 34
Commonly Used Attributes

Category: General

AUTHOR: Displays in the Batch Scheduler's User ID column and is a useful place to show a comment or the name of the XML file that generated the batch.

ATTACH_RESULTS: Enter a Yes or No value, depending on whether you want to attach the generated PDF or HTML files to the e-mail.

DS_USER_NAME: The data source user whose credentials are used for running the reports and books in the batch.

HR_PASSWD: The encrypted Financial Reporting user password, from an existing batch or generated using the command line utility.

HR_USER_NAME: The Financial Reporting user whose credentials are used for running the reports and books in the batch.

Additional attributes in this category specify: the e-mail recipients notified if the scheduled batch fails; the text of the e-mail sent if the scheduled batch fails; a comma-separated list of recipients' e-mail addresses; the sender's e-mail address; the subject of the e-mail; and the encrypted data source password, from an existing batch or generated using the command line utility.
Table 34    Commonly Used Attributes (Continued)

HTML VALUE: Enter a Yes or No value, depending on whether you want to generate HTML output for the batch.

PDF VALUE: Enter a Yes or No value, depending on whether you want to generate PDF output for the batch.

HTML EXPORT_HTML_FOLDER_LABEL: If exporting as HTML (VALUE=Yes), the path and folder of the external directory.

PDF EXPORT_HTML_FOLDER_LABEL: If exporting as PDF (VALUE=Yes), the path and folder of the external directory.

Category: Snapshot Output

SAVE_AS_SNAPSHOT VALUE: Enter a Yes or No value, depending on whether you want to save the snapshot output in the repository.

SAVE_NAME: The folder name where the snapshots are to be stored. This must be specified in ReportStore:\\ format. If SAVE_NAME is left blank, the snapshot output is saved to the same folder as the original object.

USER_NAMES: Comma-separated Financial Reporting user names that are granted access to the snapshot output.

GROUP_NAMES: Comma-separated Financial Reporting group names that are granted access to the snapshot output. A special system-defined group, called Everyone, includes all Financial Reporting users and can be used to ensure that all users have access to a snapshot output.

Category: Printed Output

PRINT NAME: The printer name, if the PRINT VALUE attribute is set to Yes. Note: You must make sure that this printer is available to the Scheduler Server.

PRINT VALUE: Enter a Yes or No value, depending on whether you want to generate printed output for the batch.
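As an illustrative sketch only, a fragment of a batch input file using some of these attributes might resemble the following. The element names and structure here are hypothetical placeholders (your exported batch input file defines the real tag names); only the attribute names come from Table 34:

```xml
<!-- Hypothetical sketch: element names are placeholders; only the
     attribute names (AUTHOR, VALUE, SAVE_NAME, ...) come from Table 34. -->
<BATCH AUTHOR="nightly_sales.xml">
  <HTML VALUE="Yes" EXPORT_HTML_FOLDER_LABEL="\\share\reports\html"/>
  <PDF VALUE="No"/>
  <PRINT VALUE="No"/>
  <SAVE_AS_SNAPSHOT VALUE="Yes"
                    SAVE_NAME="ReportStore:\\Sales\Snapshots"
                    GROUP_NAMES="Everyone"/>
</BATCH>
```

In practice, start from a batch input file exported by the Batch Scheduler and edit only the attribute values, as described above.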
Note: In the USER_POV section of the XML file, HIDDEN="0" indicates a dimension that is on the POV and is therefore a candidate for a value to be set in the XML file. The value to be changed is _ in this example.
Child Node - Save as Snapshot

SAVE_AS_SNAPSHOT VALUE: Enter a Yes or No value, depending on whether you want to save the snapshot output in the repository.

SAVE_NAME: The folder name where the snapshots are to be stored. This must be specified in ReportStore:\\ format. If SAVE_NAME is left blank, the snapshot output is saved to the same folder as the original object.

USER_NAMES: Comma-separated Financial Reporting user names that are granted access to the snapshot output.

GROUP_NAMES: Comma-separated Financial Reporting group names that are granted access to the snapshot output. A special system-defined group, called Everyone, includes all Financial Reporting users and can be used to ensure that all users have access to a snapshot output.

SUBJECT_TOKENS: This attribute can be left blank or removed from the text file. Note: This attribute is ignored if USER_NAMES or GROUP_NAMES is used.
Caution! This should be modified only by power users. Specifying a partial USER POV does not work.
Note: In the USER_POV section of the XML file, HIDDEN="0" indicates a dimension that is on the POV and is therefore a candidate for a value to be set in the XML file. The value to be changed is _ in this example.
After enabling or disabling encryption, you must restart all Financial Reporting services. The following text appears in the fr_repserver.properties file:
# Specify the class name of encryption algorithm to encrypt values
# passed in RMI calls.
#
# By default, no encryption is applied.
#
# To use the encryption provided with the product, set the value to
# com.hyperion.reporting.security.impl.HsRMICryptor
#
# To use any other custom encryption algorithm, extend your
# implementation from the
# com.hyperion.reporting.security.IHsRMICryptor interface.
# This interface defines two methods:
# public String encrypt(String value) throws HyperionReportException;
# public String decrypt(String value) throws HyperionReportException;
#
#RMI_Encryptor=com.hyperion.reporting.security.impl.HsRMICryptor
Part IV
In Administering Interactive Reporting Studio:

Chapter 20, Understanding Connectivity in Interactive Reporting Studio
Chapter 22, Using Metatopics and Metadata in Interactive Reporting Studio
Chapter 23, Data Modeling in Interactive Reporting Studio
Chapter 24, Managing the Interactive Reporting Studio Document Repository
Chapter 25, Auditing with Interactive Reporting Studio
Chapter 26, IBM Information Catalog and Interactive Reporting Studio
Chapter 27, Row-Level Security in Interactive Reporting Documents
Chapter 28, Troubleshooting Interactive Reporting Studio Connectivity
Chapter 29, Interactive Reporting Studio INI Files
Chapter 21
In This Chapter
This section describes how to connect to a relational database and a multidimensional database using connection files, including how to set up connection files and connection preferences, and how to manage connections.
About Connection Files
Working with Interactive Reporting Database Connections
Connecting to Databases
Using the Connections Manager
Working with an Interactive Reporting Document and Connecting to a Database
Connecting to Web Clients
Connecting to Workspace
Connection software
Database software
Database server hosts
Database user names (optional)
Note: For security reasons, user passwords are not saved with Interactive Reporting database connections.
Interactive Reporting database connections have significant advantages in network environments with many database users. One connection can be created for each database connection in the environment and shared with each end user. Interactive Reporting database connections simplify the connection process for company personnel by transparently handling host and configuration information. Each user can substitute his or her own database user name when using an Interactive Reporting database connection, which enforces security measures and privileges that are centralized at the database server. Because passwords are not saved with Interactive Reporting database connections, there is no danger that distribution will provide unauthorized access to any user who receives the wrong Interactive Reporting database connection or acquires it from other sources.

By default, no explicit access to an Interactive Reporting database connection is required to process Interactive Reporting documents or job outputs using Workspace or Interactive Reporting Web Client; that is, a user is not required to have specific access privileges to process an Interactive Reporting document. However, a control setting of an Interactive Reporting document or job access can be defined to require explicit access. For more information, see the Hyperion System 9 BI+ Workspace Administrator's Guide and the Hyperion System 9 BI+ Workspace User's Guide.
Note: It is to your advantage to create and distribute Interactive Reporting database connections to facilitate the logon process when storing Interactive Reporting Studio data models.
Connection API software and version (for example, Essbase, SQL*Net for Windows NT, and so on)
Database software and version (for example, MetaCube 4, Oracle 8, and so on)
IP address, database alias, or ODBC data source name for your database server
Database user name
2 In the What connection software do you want to use? field, select from the pull-down list the connection software that you want to use to connect to the database server.
3 In the What type of database do you want to connect to? field, select the database server that you want to use.
4 To configure metadata settings, select Show Meta Connection Wizard.
5 To configure advanced connection preferences, select Show advanced options.
6 Click Next.

The second dialog box of the wizard is displayed.
7 Depending on the database, enter your user name in the User Name field, your password in the Password field, and the IP address, ODBC data source, or server alias name in the Host field, and click Next.

If you selected to work with metadata settings, the Metadata Connection Wizard launches. See Accessing the Open Metadata Interpreter on page 406 for more information.

8 The wizard prompts you to save the connection file.
9 To save the connection file so that it can be reused or modified, click Yes.

The Save dialog box is displayed. Interactive Reporting Studio saves the connection file in the default Interactive Reporting database connection directory.

10 To save the connection file in a different directory, navigate to the desired directory and click Save.
Table 36    Database Connection Configuration Wizard Options

What connection software do you want to use?: Select from the pull-down list the connection software with which you want to connect to the database. Depending on the connection software you select, additional fields may be displayed in this dialog box. These fields enable you to customize the connection file, show metadata settings, and select ODBC logon dialogs.

What type of database do you want to connect to?: Select from the pull-down list the type of database to which you want to connect.

Show Metadata Connection Wizard?: To view and edit metadata settings, select this field. The Metadata Definitions dialog box is configured with specific SQL statements to read metadata on multiple databases.

Show advanced options?: To select advanced preferences for the connection file, select this field. Connection preferences enable you to select which instructions and protocols the database connection should observe. The preferences are saved with the connection file and applied each time you use the connection. For example, you can use connection preferences to filter extraneous tables from the Table Catalog or specify how the connection software should manage SQL statements. Connection preferences vary depending on the connection software and database.

Two further options are available, depending on the connection software: one lets you select the specific database name on the server; the other, shown when ODBC connection software is selected, lets you use the ODBC logon dialog boxes instead of the Interactive Reporting Studio dialog boxes (leave it unchecked to use the Interactive Reporting Studio connection dialog boxes).

User Name: Enter the name that you want to use to sign on to the database.

Password: Enter the password that you want to use to sign on to the database.

Host: Enter the IP address, database alias, or ODBC data source name.
Database Connection Configuration Wizard Options (Continued)

ALLOW SQL-92 Advanced Set Operations: Enables support for the Intersection and Difference operators in the Append Query option.

Apply Filters to restrict the tables that are displayed in the table catalog: Enables specification of table filter conditions for limiting or customizing the list of tables in the table catalog.

Exclude Hyperion Repository Tables: Specifies exclusion of all repository tables from the table catalog. Filter by and metadata definitions override this preference.

Allow Non-Joined Queries: Prohibits processing when topics are not joined in the Query Contents frame.

Use SQL to get Table Catalog: Specifies use of SQL to retrieve tables, instead of using the SQL Server sp_tables and sp_columns stored procedures. This option enables table filtering but may be slower than stored procedures. (Sybase and MS SQL Server)

Another setting specifies how the server returns data. In most cases, Retrieve data as Binary is the most appropriate and fastest method. Select Retrieve data as Strings if the connection API does not support native data type retrieval, or if queries return incorrect or unreadable data.

A further setting establishes an automatic disconnect from the database after a specified period of inactivity, and another sends a commit statement to the database server with each Interactive Reporting Studio SQL statement to unlock tables after they have been used; use the latter if tables are locked after use or users experience long waits for tables.

Save Interactive Reporting database connection Without User Name: Enables general distribution of an Interactive Reporting database connection by saving it generically, without a user name. Any user can then log on by typing his or her own user name.

Use Quoted Identifiers: Specifies that internal keywords, or table, column, or owner names with special characters, are enclosed in quotation marks when sent to the server. For example: SELECT SUM(AMOUNT), STORE_ID FROM HYPERION.PCS_SALES GROUP BY STORE_ID. The default value for new connections is off.

A final setting (Sybase and MS SQL Server) adds a Database field to the logon dialog box, enabling the user to select a specific database when logging on to the DBMS.
Table 37
Database Connection Configuration Wizard options (Continued) Descriptions Specifies a binding process to retrieve more records per fetch call. If the ODBC driver supports binding, use this option for faster retrieval. (ODBC only). If this feature is turned on, the ODBC Extended Fetch call requests data at 32k at a time. The Packet Size setting enables Sybases DB-Lib users to set up a large buffer retrieval from the database so that more data can be transferred at one time. If this feature is selected, you can specify a multiple of 512 bytes for the number of bytes that you want to transfer at one time. Before you specify a multiple of 512 bytes, the server must have enough memory to allocate for the transmission of the selected packet size. To check which packet size the Sybase server will support, run the isql command: sp_configure and type go. A list of parameters is returned. Find the parameter showing the Maximum Network Packet Size. If the packet size you entered exceeds the maximum packet size, you will have to reenter a smaller packet size. To change the packet size, issue the following command in isql: Sp_configure maximum network packet size. <new value> (where <new value> is the new size).
Determines the default buffer size when retrieving rows of data from an Oracle connection. The default size is 8000 bytes. A user can change this value to retrieve more rows per buffer, which may result in a performance improvement, but at the expense of additional memory requirements. The minimum size is 8000. If a user specifies a smaller value, no error is returned, but 8000 bytes is used. There is no hard-coded maximum size for this field. Turns off the ability to make simultaneous requests to the database server. This feature is available in Interactive Reporting Studio only.
Interactive Reporting Studio uses the default formats specified by the database server when handling date, time, and timestamp values. If the default formats of the server have been changed, you can retain or preserve these adjusted preferences to ensure Interactive Reporting Studio interprets date/time values correctly. Enables alteration of internal Interactive Reporting Studio date handling to match server default settings in case of a discrepancy. For more information on this feature, see Modifying Server Date Formats on page 389. On upload to the repository, Interactive Reporting Studio brackets SQL Insert statements with transaction statements. Disable Transaction Mode if the RDBMS does not support transactions. This feature is only available in Interactive Reporting Studio.
Server Dates
Enables you to save the connection file so that it can be reused at a later time.
Inserts an outer join operator (+) in the SQL on limits applied to the inner table for Oracle Net connection software to an Oracle database. By default this feature is enabled and recommended; it works around Oracle restrictions when using outer joins with certain limit conditions, such as when an OR expression is needed. An outer join operator enables Interactive Reporting Studio to retrieve all rows from the left or right table, matching joined column values where found, or retrieving nulls for non-matching values. If this feature is disabled, nulls for non-matching values are not retrieved. Use the Join Properties dialog box to determine which table is the left and which is the right. Oracle does not support full (left AND right) outer joins with the (+) operator. When an ODBC driver is used, this feature is greyed out.

When a limit has been applied to an inner table of an outer join, this feature enables the limit to be placed in the On clause of the SQL statement instead of the Where clause. The default setting for this feature is unchecked.

Inserts ODBC outer join escape syntax in the SQL statement.
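The difference between placing an inner-table limit in the On clause versus the Where clause can be sketched as follows. SQLite's LEFT OUTER JOIN is used here in place of Oracle's (+) operator, and the table and column names are illustrative.

```python
import sqlite3

# Illustrative outer/inner tables: every store, but only some have sales.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE stores (store_id INTEGER, name TEXT);
    CREATE TABLE sales  (store_id INTEGER, amount REAL);
    INSERT INTO stores VALUES (1, 'North'), (2, 'South');
    INSERT INTO sales  VALUES (1, 100.0);
""")

# Limit on the inner table in the ON clause: non-matching outer rows are
# kept, with NULLs for the inner-table columns.
on_clause = conn.execute("""
    SELECT s.name, x.amount FROM stores s
    LEFT OUTER JOIN sales x ON s.store_id = x.store_id AND x.amount > 50
    ORDER BY s.name
""").fetchall()

# The same limit in the WHERE clause discards the NULL rows, so the outer
# join silently degrades into an inner join.
where_clause = conn.execute("""
    SELECT s.name, x.amount FROM stores s
    LEFT OUTER JOIN sales x ON s.store_id = x.store_id
    WHERE x.amount > 50
""").fetchall()

print(on_clause)     # [('North', 100.0), ('South', None)]
print(where_clause)  # [('North', 100.0)]
```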
Filtering Tables
For databases with many tables, it can help to filter out tables you do not need from the Table catalog. The table filter enables you to specify filter conditions based on table name, owner name, or table type (table or virtual views).
Note: The table filter works with all database server connections except ODBC. If you are working with a Sybase or Microsoft SQL Server database, modify the connection and specify that Interactive Reporting Studio use SQL statements to retrieve the Table catalog before filtering tables.
Typically, you filter tables when creating a connection file, although you can modify an existing connection file later to filter tables.
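A rough analog of a Custom SQL table filter is a query against the server's system catalog that keeps only tables matching a condition. The sketch below uses SQLite's sqlite_master as a stand-in for that catalog; the table names and the sales prefix are assumptions for illustration.

```python
import sqlite3

# Illustrative catalog: two sales tables and one scratch table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales_2005 (id INTEGER);
    CREATE TABLE sales_2006 (id INTEGER);
    CREATE TABLE scratch_tmp (id INTEGER);
""")

# A filter condition on table name, applied directly to the catalog.
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master "
    "WHERE type = 'table' AND name LIKE 'sales%' "
    "ORDER BY name")]
print(tables)  # ['sales_2005', 'sales_2006']
```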
To filter tables from the Table Catalog when creating a connection file:
1 Select Tools > Connection > Create.
The Database Connection Wizard is displayed.
2 Select the connection software that you want to use to connect to the database server from the pull down
list in the What connection software do you want to use? field.
3 Select the database server that you want to use in the What type of database do you want to connect to?
field.
4 Select Show Advanced Options and click Next.
5 Connect to the data source and click Next.
The dialog box varies according to the connection software you are using. In most cases, you need to specify a user name, password and host name. Click Next.
6 Click Define next to a table name, table owner, or table type filter check box.
The Limit:Filter Table dialog box is displayed.
7 Select a comparison operator from the drop-down box. The filter constraints determine which tables are
included in the Table catalog.
Enter constraining values in the edit field and click the check mark, or click Show Values to display a list of potential database values and select values from the list. If you are comfortable writing your own SQL statements, click Custom SQL to code table filters directly, with greater flexibility and detail.
8 Click OK.
Interactive Reporting Studio prompts you to save the filter settings. Once saved, a check mark displays in the appropriate filter check box, which you can use to toggle the filter on and off.
Note: After you complete the Data Connection Wizard, verify that the filter conditions screen out the correct tables. In the Catalog frame, select Refresh on the pop-up menu.
To filter tables from the Table Catalog when modifying a connection file:
1 To filter tables for the current connection, select Tools > Connection > Modify.
The Meta Connections Wizard dialog box is displayed.
2 If you want to filter tables for another connection, select Tools > Connections Manager > Modify.
The Connections Manager dialog box is displayed. In the Document Connections frame, select the connection file that you want to modify and click Modify. The Meta Connections Wizard dialog box is displayed.
3 Configure the first Wizard as necessary, and then click Next to go to the second Meta Connections Wizard
dialog box.
4 Configure the second Wizard as necessary, and then click Next to go to the third Meta Connection Wizard
dialog box.
5 On the third Meta Connection Wizard dialog box, click Define next to an owner, table, or type filter check box.
A Filter dialog box is displayed. The Filter dialog boxes resemble and operate using the same principles as the Limit dialog box.
6 Select a comparison operator from the drop-down box. The filter constraints determine which tables are
included in the Table Catalog.
9 Click Next to continue through each dialog box, selecting any preferences for the connection file.
10 Click Finish.
11 In the Hyperion dialog box, click Yes to save the connection file.
12 In the Save Open Catalog dialog box, browse to a directory, enter the new connection name in the File
Name field, and then click Save.
13 In the Table Catalog of the Query section, select Refresh on the pop-up menu to verify that the filter
conditions screen out the correct tables.
2 Select Show Advanced Options and click Next.
3 Click Server Dates.
The Server Date Formats dialog box is displayed.
To Server Formats: Date and time formats submitted to the server (such as limit values for a date or time field).
From Server Formats: Formats Interactive Reporting Studio expects for date/time values retrieved from the server.
The default values displayed in the To and From areas are usually identical.
4 If the server defaults have changed, select the date, time, and timestamp formats that match the new
server defaults from the To and From format drop-down boxes.
If desired, click Default to restore all values to the server defaults stored in the connection file.
5 If you cannot find a format that matches the database format, click Custom.
The Custom Format dialog box is displayed.
6 Select a data type from the Type drop-down box.
7 Select a format from the Format drop-down box or type a custom format in the Format field.
8 Click OK.
The new format is displayed as a menu choice in the Server Date Formats dialog box.
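Why the client-side masks must match the server defaults can be sketched with ordinary date format strings. Python's strptime masks below stand in for the Server Date Formats entries; the date text is illustrative. The same text parses to different dates under different masks, which is exactly the corruption a mismatched setting causes.

```python
from datetime import datetime

# One piece of date text, two plausible server formats.
text = "03-04-2006"
us = datetime.strptime(text, "%m-%d-%Y")  # month-day-year mask
eu = datetime.strptime(text, "%d-%m-%Y")  # day-month-year mask

# A mask that disagrees with the server default silently swaps month and day.
print(us.month, us.day)  # 3 4
print(eu.month, eu.day)  # 4 3
```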
2 Select the connection software that you want to use to connect to the OLAP database server from the dropdown box.
3 Select the OLAP database server that you want to use from the drop-down box and click Next.
Depending on the database you select in this field, you may have to specify a password to connect to the database. Enter your name, password, and host address information. The sequence of dialog boxes that are displayed depend on the multidimensional database server to which you are connecting. The following sections provide connection information for these multidimensional databases:
2 Select the application/database name to which you want to connect and click Next.
This is the cube from which you want to retrieve values.
3 Select the measures dimension for the cube in the Dimension Name field and click Next.
This is the specific measure group from which you want to retrieve values.
Note: By default, Interactive Reporting Web Client users are prompted to enter their Windows credentials (user ID, password, and optionally Windows domain) when logging on to Microsoft OLAP databases. The domain can be specified in the login user ID prompt field, preceding the user ID text and delimited by a backslash (\); for example, if the domain is HyperionDomain and the user ID is user1, enter HyperionDomain\user1 in the user ID field. These changes are enforced to provide more secure access to these databases.

If prompted, the user must enter credentials that can be successfully authenticated by the Windows operating system at the database server. Failure to provide such credentials results in an error message being returned to the user and login to the database being denied. If the user's credentials are successfully authenticated, the database login proceeds, and any role-based security on cube data granted at the database level for the specified user ID is invoked and honored.

If no role-based security is implemented at the database level (the database cubes and their data are available to all users), the database administrator can choose to publish an Interactive Reporting database connection for the database with a pre-assigned system-administrator-level user ID and password. If users access the database using this Interactive Reporting database connection, they are not prompted to enter any login credentials; they are passed through to the database, where access to all cube data is allowed.

These statements also apply to Interactive Reporting Web Client users who access local cube files created from Microsoft OLAP or other OLE DB for OLAP databases (such as the sample cube files provided with the installation).
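The domain-qualified user ID convention described above can be sketched as a small parsing helper. The function name and its return shape are hypothetical, introduced here only to illustrate the backslash-delimited format.

```python
# Hypothetical helper illustrating the DOMAIN\user convention: an optional
# Windows domain precedes the user ID, delimited by a backslash.
def split_login(user_field: str):
    """Return (domain, user); domain is None when not supplied."""
    domain, sep, user = user_field.rpartition("\\")
    return (domain if sep else None, user)

print(split_login("HyperionDomain\\user1"))  # ('HyperionDomain', 'user1')
print(split_login("user1"))                  # (None, 'user1')
```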
3 If the OLE DB for OLAP database provides the ability to retrieve dimension properties and you want to work
with them, click Enable Retrieval Of Dimension Properties and click Next.
4 Select the name of the Provider from the drop-down box and click Next.
For more information about the remaining dialog boxes, consult the database documentation of the provider.
3 Select the connection file that you want to modify and click Open.
The Database Connection Wizard is displayed showing the information for the Interactive Reporting database connection you selected.
4 Make any desired changes and then save the Interactive Reporting database connection when prompted.
Connecting to Databases
In Interactive Reporting Studio, you use an Interactive Reporting database connection whenever you perform tasks that require you to connect to a database, such as:
Downloading a data model
Processing a query to retrieve a data set
Showing values for a server limit
Using server functions to create computed items
Scheduling an Interactive Reporting document
The way you select an Interactive Reporting database connection depends on which edition of Interactive Reporting Studio you are using and the data model or Interactive Reporting document with which you are working. If a data model is present in the Query section workspace, Interactive Reporting Studio automatically prompts you with the correct Interactive Reporting database connection when your actions require a database connection. When you open Interactive Reporting Studio to begin a work session (for example, by downloading a data model from an Interactive Reporting Studio repository, or creating a data model from scratch) you must select the correct Interactive Reporting database connection for the targeted database.
Monitoring Connections
Before you attempt to connect to a database, make sure you are not already connected. You can monitor the current connection status by observing the connection icon on the lower right side of the Status bar. An X over the icon indicates that there is no database connection.
To check the connection information, position the cursor over the connection icon. The Interactive Reporting database connection in use and the database name are displayed on the left side of the Status bar.
To select an Interactive Reporting database connection when you create a new Interactive
Reporting document:
1 Select File > New to display the New File dialog box.
2 Select the Recent Database Connection Files radio button and select a connection file from the list, then
click OK.
If the Interactive Reporting database connection that you want to use is not displayed, click Browse to display the Select Connection dialog box. Navigate to the connection file that you want to use and click Open. Interactive Reporting Studio prompts you for a user name and password.
Connecting to Databases
2 Click the File Locations tab.
3 Under Connections Directory, enter the default connection directory that contains the Interactive Reporting
database connection files you use to connect to different databases and click OK.
4 Under Default Connection, enter the full path and file name of the Interactive Reporting database
connection that you want to use as the default connection.
The next time you log on (and create a new Interactive Reporting document), the default connection is used automatically. Be sure to store your default Interactive Reporting database connection in your connections directory so that Interactive Reporting Studio can find it when you or users of your distributed Interactive Reporting documents attempt to log on.
Logging On Automatically
Interactive Reporting Studio provides an Auto Logon feature that maintains the current database connection when you create a new Interactive Reporting document. Auto Logon is enabled by default.
2 Click the General tab.
3 Select the Auto Logon check box and click OK.
Connection: Name of the selected Interactive Reporting database connection
Status: Connection status (connected or disconnected)
Used By: Name of the Interactive Reporting document section that accesses the database
Use the plus (+) and minus (-) signs to navigate through the tree structure.
Logging On to a Database
To log on to a database:
1 Select Tools > Connections Manager [F11].
The Connections Manager dialog box is displayed.
2 Select the Interactive Reporting database connection associated with the database that you want to use
and click Logon.
2 Select the Interactive Reporting database connection associated with the database that you want to log off
of and click Logoff.
2 Select the connection file that you want to modify and click Modify.
The Database Connection Wizard is displayed showing the information for the Interactive Reporting database connection you selected.
3 Make any desired changes and then save the Interactive Reporting database connection when prompted.
2 Select the connection file associated with the database whose passwords that you want to change and
click Change Database Password.
2 Select the Recent Connection Files field and select a connection file from the list.
3 If the connection file that you want to use is not displayed, click the Browse button to display the Select
Connection dialog box. Navigate to the connection file that you want to use and click Open.
4 Type your user name in the Host User field and your password in the Host Password field, and then click
OK.
If you do not have the right connection file to connect to a particular database, ask your administrator to provide or help you create a connection file.
To create a new Interactive Reporting document using a new database connection file:
1 Select File > New.
The New File dialog box is displayed.
2 Select A New Database Connection File field and then click OK.
The Database Connection Wizard is launched.
2 Navigate to the connection file that you want to use and click Open.
When querying the database, you first select the data items that interest you from a Data Model, Standard Query, or Standard Query with Reports. You can find a repository object to start with by selecting one from the Repository Catalog and downloading it to the desktop. When you download the object to the Contents frame, the object becomes the basis of a new Interactive Reporting document.
3 If you are not connected, log on to the database containing the document repository by selecting a
connection file from the Select Connection dialog box and entering your database user name and password.
The Open from Repository dialog box is displayed, showing the Repository Catalog in the left frame and description information in the right frame. The Repository Catalog is in directory tree format, which enables you to navigate through the repository structure. Repositories are organized into subdivisions, which, depending on the database, may include subdivisions called databases and will most likely include subdivisions called owners. Databases and owners can be departmental headings, people in your organization, or other criteria established by the administrator. You cannot access repository versions 4.0 and older.
4 Under each owner name in the repository, there are user groups.
User groups are established by an advanced user to categorize and store repository objects by content and access privileges. You have been granted access to only the items you see in the Repository Catalog.
5 Select the document icons in the directory tree to display profiles in the Model Info and Description Areas
to the right.
6 When you have navigated to the correct repository owner and user group and found the repository object
that you want, select the object in the directory tree and click Open.
Interactive Reporting Studio downloads the repository object to the appropriate section.
3 Select the connection method that you want to use for the web client:
Immediately connect to database: Select this method to immediately connect to a database using genuine database authentication. You are prompted for the logon credentials to the database being accessed. The value set here for the Interactive Reporting document in Interactive Reporting Studio cannot be changed in Interactive Reporting Web Client. This is the preferred method for Interactive Reporting documents created in Hyperion Intelligence version 8.2 and later.
Defer connection to database until used to process SQL: Select this method to defer making a connection to a database until the query is processed. You are prompted for logon credentials to the database without using genuine database authentication. That is, no actual database connection is attempted until the query is processed.
Connecting to Workspace
Use the Connect to Workspace dialog box to specify the Data Access Servlet URL required to launch Workspace. Workspace consists of services, applications, and tools for those users who need to find and view Interactive Reporting documents, and for users who need to import files, schedule jobs, and distribute the output. For more information on Workspace, see Hyperion System 9 BI+ Workspace Users Guide.
Note: To use the Connect to Workspace dialog box to connect to the repository (that is, for embedded browser/hyperlink content in Interactive Reporting Studio), see the Hyperion System 9 BI+ Interactive Reporting Studio Developers Guide.
To connect to Workspace:
1 Select Tools > Connect to Workspace.
The Connect to Server dialog box is displayed.
2 Specify the Data Access Servlet URL required to launch Workspace in the Server Address field.
Chapter
22
This section explains how to use metatopics and metadata to simplify data models for end users.
Note: Most of the information in this section is intended for Interactive Reporting Studio advanced users and does not apply to Interactive Reporting Web Client users.
In This Chapter
About Metatopics and Metadata . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402 Data Modeling with Metatopics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402 MetaData in Interactive Reporting Studio . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405 Using the Open Metadata Interpreter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
Interactive Reporting Studio provides two solutions to deal with each of these problems. These complementary solutions can be integrated to shield company personnel from the technical aspects of the query process and make end-user querying completely intuitive:
Metatopics: Topics created from items in other topics. Metatopics are higher-level, or virtual, topics that simplify the data model structure and make joins transparent. A metatopic looks and behaves like any other topic and can accept modifications and metadata.
Metadata: Data about data. Typically stored in database tables, and often associated with data warehousing, metadata describes the history, content, and function of database tables, columns, and joins in understandable business terms. Metadata is useful for overcoming the awkward names or ambiguous abbreviations often used in a database. For example, for a database table named CUST_OLD, metadata can substitute a descriptive business name for the table, such as Inactive Customers, when it is viewed by the end user. Metadata may also include longer comments. Because most businesses maintain their metadata on a database server, it is a potentially useful guide to the contents of the database, if it can be synchronized and used in conjunction with the data it describes.
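The business-name substitution described above amounts to a side table that maps physical names to display names. The sketch below shows the idea with SQLite; the metadata table and its columns are illustrative, not Hyperion's repository schema.

```python
import sqlite3

# Illustrative metadata side table: physical name -> business name.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE meta_tables (physical_name TEXT, business_name TEXT);
    INSERT INTO meta_tables VALUES ('CUST_OLD', 'Inactive Customers');
""")

# When building the end-user view, look up the friendly name first.
row = conn.execute(
    "SELECT business_name FROM meta_tables WHERE physical_name = 'CUST_OLD'"
).fetchone()
print(row[0])  # Inactive Customers
```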
Creating Metatopics
You can create a new, empty metatopic or copy an existing topic to use as the basis for a metatopic.
Caution! If a metatopic contains items copied from an original source topic, do not remove the original
topic from the workspace or use the icon view. Because metatopic items model data through the original source topics, removing the original source topics or using an icon view also removes the copied topic items from the metatopic.
3 Enter a descriptive item name in the Name field.
4 Type, or use the following buttons to create, the computed item expression:
Functions button: Applies scalar functions to data items.
Reference button: Adds Request Items to the expression.
Options button: Specifies a data type.
Operator buttons: Add logical and arithmetic operators to the expression.
Select the metatopic or topic item that you want to remove, and then select Remove on the pop-up menu, press [Del], or click the Delete button.
Caution! If you remove a metatopic item, it cannot be restored to the metatopic. You must copy the item from the original topic again.
Viewing Metatopics
There are a number of ways to view a data model. By default, database-derived source topics and any metatopics you have created are displayed together in the Content frame in Combined view.
Combined: Displays both original (database-derived) topics and metatopics in the Content frame.
Original: Displays only database-derived topics in the Content frame.
Meta: Displays only metatopics in the Content frame.
Caution! If an original topic contains items that have been copied to a metatopic, do not iconize or
remove the original topic from the Content frame in Combined view. Metatopic items are based on original items and remain linked to them. If an original topic is iconized or removed, any metatopic items based on its contents become inaccessible.
3 Select whether to run the Meta Connection Wizard on the current connection or on a different connection.
If you select a different connection, the Select Metadata Interactive Reporting database connection field becomes active.
a. Enter the full path and file name of the connection file that you want to use. You can also click Browse to navigate to the location of the connection file.
b. Click Next. The Password dialog box is displayed.
c. Enter the database name in the Host Name field and the database password in the Host Password field, and click OK.
d. Select the current database name and password to make the metadata connection, or specify an alternate name and password. If you specify an alternate user name and password, enter the name and password that you want to use for the metadata connection.
4 Click Next.
5 Select the metadata schema where the meta settings are stored from the drop-down box.
Metadata schema are provided by third party vendors and saved in the bqmeta0.ini file. When you select a metadata schema, the predefined schema populates the fields in the Metadata Definition dialog box and is saved to the connection file. If you select another schema, the metadata definitions are overwritten in the connection file. If you want to customize the metadata settings, select Custom from the drop-down box and click Edit. The Metadata Definition dialog box is displayed, which contains tabs for tables, columns, joins, lookup, and remarks. For detailed explanations of the metadata definitions, see Configuring the Open Metadata Interpreter on page 407.
6 Enter the schema name or owner of the metadata repository table (for custom settings) or click Next to
complete the Meta Connection Wizard and return to the Data Connection Wizard.
Select: Generates SQL Select statements and is divided into distinct fields that specify the columns that store the metadata. The columns are located in the database table described in the From field. If necessary, you can use aliases in the Select fields to distinguish between multiple tables.
From: Generates an SQL From clause and specifies the table(s) that contain the metadata that applies to the database item described by the tab. You can also enter SQL to access system tables when necessary. If you need to reference more than one table in the From field, you can use table aliases in the SQL.
Where: Generates SQL Where clauses and is used on the Columns and Joins pages to indicate which topic needs to be populated with item names or joined to another topic. It can also be used to establish relationships between multiple tables or to filter tables.
Entries are required in all From entry fields, and in all fields marked with an asterisk (*). Under default settings, Metadata Definition fields specify the system-managed directory tables (except when using ODBC). You cannot modify field values when the Default radio button is selected. Clicking Reset at any time when defining a custom source populates the entry fields with the database default values. It may be helpful to start with the defaults when setting up metadata definitions. You may sometimes use database variables when entering a Where clause. Interactive Reporting Studio provides :OWNER, :TABLE, :COLUMN, :LOOKUPID, :TABALIAS, and :COLALIAS variables which temporarily store a database owner, table, column, or domain ID number and aliases of the active topic or item. Each variable must be entered in all caps with a leading colon.
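The variable substitution described above can be sketched as simple template expansion. The variable names (:TABLE, :OWNER, and so on) come from the text; the expansion mechanics below are an illustration, not Hyperion's actual implementation.

```python
# Illustrative expansion of :NAME style database variables into a Where
# clause template. Longer names are replaced first so that a variable such
# as :TABALIAS is not clobbered by a shorter prefix like :TAB.
def expand(where_template: str, values: dict) -> str:
    out = where_template
    for name in sorted(values, key=len, reverse=True):
        out = out.replace(":" + name, "'" + values[name] + "'")
    return out

clause = expand("tbl_name=:TABLE and owner_name=:OWNER",
                {"TABLE": "CUST_OLD", "OWNER": "HYPERION"})
print(clause)  # tbl_name='CUST_OLD' and owner_name='HYPERION'
```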
2 In the Select fields, enter the appropriate column names as they are displayed in the alternate table of
tables.
Owner Name: Name of the owner column in the alternate table of tables
Physical Table Name: Name of the column of physical table names in the alternate table of tables
Table Alias: Name of the column of metadata table aliases in the alternate table of tables
Table Type: Name of the column of physical table descriptions in the alternate table of tables
3 In the From field, enter the physical name of the alternate table of tables. 4 Use the Where fields to filter selected topics (for example, to limit the metadata mapping to include only
certain owners).
Note: If multiple folders exist in the repository, the following modifications are necessary to the Interactive Reporting Studio bqmeta0.ini file in order to filter the list of tables by folder:
2 Change the ColumnWhere property as follows (do not include brackets):
ColumnWhere=table_name=':TABLE' and SUBJECT_AREA='<folder name>'
2 In the Select fields, enter the appropriate column names as they are displayed in the alternate table of
columns and/or system-managed table of columns.
Physical Column Name: Name of the column of physical column names in the alternate table of columns
Column Alias: Name of the column of metadata column aliases in the alternate table of columns
Column Type: Name of the column of column data types
Byte Length: Name of the column of column data lengths
Fraction: Name of the column of column data scales
Total Digits: Name of the column of column precision values
Null Values: Name of the column of column null indicators
If you use more than one table in the From field, enter the full column name preceded by a table name in the Select field.
table_name.column_name
3 In the From field, enter the physical names of the alternate table of columns (and system-managed table
of tables, if necessary).
If you are using both tables in the From field, you can simplify SQL entry by using table aliases.
4 Use the Where field to relate columns in the alternate and system-managed tables of tables to ensure
metadata is applied to the correct columns.
Use the following syntax in the Where field (do not include brackets):
<table of columns>.<tables column>=:TABLE and <table of columns>.<owners column>=:OWNER.
Interactive Reporting Studio automatically populates a topic added to the Content frame with the metadata item names when it finds rows in the alternate table of columns that match the names temporarily stored in :TABLE and :OWNER. Use also the variables :TABALIAS and :COLALIAS to specify table and column aliases in SQL.
Note: The database variables must be entered in upper case and preceded with a colon.
Best Guess: Automatically joins columns of similar name and data type.
Custom: Selects joins defined in a custom metadata source.
Server-Defined: Uses joins that have been established on the database server.
The Joins tab uses SQL instructions to employ a custom join strategy stored in metadata. Once Interactive Reporting Studio is directed to the metadata source, all data models using the connection apply specified join logic between topics.
2 In the Select fields, enter the appropriate column names as they are displayed in the alternate table of
joins. Interactive Reporting Studio requires data in the Primary Table and Primary Column fields to find the primary keys.
Primary Database Name: Sets the name of the column of databases for primary key tables in the alternate table of joins.
Primary Owner: Sets the name of the column of owners belonging to primary key tables in the table of joins.
Primary Table: Sets the name of the column of primary key tables in the table of joins.
Primary Column: Sets the name of the column of primary key items in the table of joins.
Foreign Database Name: Sets the name of the column of databases for foreign key tables in the alternate table of joins.
Foreign Owner: Sets the name of the column of owners belonging to foreign key tables in the table of joins.
Foreign Table: Sets the name of the column of foreign key tables in the table of joins.
Foreign Column: Sets the name of the column of foreign key items in the table of joins.
If you use more than one table in the From field, enter the full column name preceded by a table name in the Select fields.
table_name.column_name
3 In the From field, enter the physical name of the alternate table of joins. 4 Use the Where field to tell Interactive Reporting Studio which topics to auto-join.
Use the following syntax in the Where field (do not include brackets):
<owners column>=:OWNER and <tables column>=:TABLE
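For example, if the alternate table of joins stored its owner and table names in columns called JOIN_OWNER and JOIN_TABLE (hypothetical names for illustration only; your metadata table's column names will differ), the Where field entry would be:

```sql
JOIN_OWNER=:OWNER and JOIN_TABLE=:TABLE
```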
If Auto-Join is enabled, Interactive Reporting Studio automatically joins topics added to the Content frame when it finds rows in the alternate table of joins that match the names temporarily stored in :TABLE and :OWNER. You can also use the variables :TABALIAS and :COLALIAS to specify table and column aliases in the SQL.
Note: The database variables must be entered in upper case and preceded with a colon.
2 In the Select fields, enter the appropriate column names as they are displayed in the domain registry table.
The Lookup Table, Lookup Value Column, Lookup Description Column, and Lookup Domain ID Column are required for Interactive Reporting Studio to locate lookup values.
Lookup Database: Name of the column of databases in the domain registry table.
Lookup Owner: Name of the column of owners in the domain registry table.
Lookup Table: Name of the column of tables containing lookup domain description values in the domain registry table.
Lookup Description Column: Name of the column of columns containing descriptive lookup values in the domain registry table.
Lookup Value Column: Name of the column of columns of original column values in the domain registry table.
Lookup Domain ID Column: Name of the column of domain IDs in the domain registry table.
3 In the From field, enter the physical name of the domain registry table.
Interactive Reporting Studio first sends SQL to the domain registry table to see if Lookup values are available for a given item.
4 Use the Where field to identify which items have lookup values.
Use the following format (do not include brackets):
<tables column>=:TABLE and <columns column>=:COLUMN
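As an illustration, if the domain registry stored its table and column names in hypothetical columns named REG_TABLE and REG_COLUMN (these names are examples, not defaults), the Where field entry would be:

```sql
REG_TABLE=:TABLE and REG_COLUMN=:COLUMN
```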
When you limit an item and show values, Interactive Reporting Studio stores the physical table and column names of the item in the variables, :TABLE and :COLUMN. Interactive Reporting Studio searches the domain registry table for a row that matches the values temporarily stored in :TABLE and :COLUMN. When it finds a row that matches, it pulls lookup values from the specified columns in the domain descriptions table. You can also use the :LOOKUPID variable to store the lookup domain ID value.
Note: The database variables must be entered in upper case and preceded with a colon.
5 Use the Lookup Where field to sync the values in the domain registry and domain description tables.
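As a sketch only, assuming the domain description table carries its domain ID in a hypothetical column named DESC_DOMAIN_ID, a Lookup Where entry that keeps the two tables in sync might match that column against the stored lookup domain ID:

```sql
DESC_DOMAIN_ID=:LOOKUPID
```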
Click Clear to clear the entry fields if you make a mistake and want to start over.
2 In the Tab Name field, type the name of the tab that you want to be displayed in the Show Remarks dialog
box.
3 In the Select field, enter the name of the column of table or column remarks.
4 In the From field, enter the physical name of the table containing table or column remarks.
5 Use Where to link the selected topic to its corresponding remark.
Use the following syntax in the Where field:
Name of the Remarks Table =:TABLE
and
Name of the Remarks Column=:COLUMN
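For instance, if a hypothetical remarks table stored the table and column names in columns called REMARK_TABLE and REMARK_COLUMN (illustrative names only), the Where field would read:

```sql
REMARK_TABLE=:TABLE and REMARK_COLUMN=:COLUMN
```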
The dynamic variable automatically inserts the physical name of the object from which the user is requesting data in the application. Interactive Reporting Studio displays remarks when it finds rows in the remarks tables which match the names temporarily stored in :TABLE and :COLUMN. You can also use the variables :TABALIAS (displays name of a table) and :COLALIAS (displays name of a column) to specify table and column aliases in the SQL.
Note: The database variables must be entered in upper case and preceded with a colon.
Up: Moves a tab up one position (toward the front of the Show Remarks dialog box).
Down: Moves a tab down one position (toward the back of the Show Remarks dialog box).
2 Enter the desired changes in the Select, From, and Where fields, and then click Update.
Chapter 23
In This Chapter
This section describes how to create data models from database tables. It provides detailed information on joins, topics, and views, and on data model properties and options.
About Data Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416 Building a Data Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 417 Understanding Joins . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 418 Working with Topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427 Working with Data Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 431 Data Model Menu Command Reference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 438
They substitute descriptive names for arcane database table and column names, enabling users to concentrate on the information rather than on data retrieval.
They are customized for users' needs. Some data models include prebuilt queries that are ready to process, and may even include reports that are formatted and ready to use. Other data models may automatically deliver data to a user's computer.
They are standardized and up-to-date. A data model stored in the document repository can be used throughout the company and is easily updated by the database administrator to reflect changes in the database structure.
Note: You can create and modify Data Models only if you have the Data Model, Query, and Analyze adaptive state.
A Data Model displays database tables as topics in the Content frame. Topics are visually joined together like database tables and contain related items used to build a query. Multiple queries can be constructed against a single Data Model in the same Interactive Reporting document. If you modify the Data Model, any changes are automatically propagated to the corresponding queries. In addition to standard Data Models derived from database tables, you can create metatopics, virtual views independent of the actual database. You use metatopics to standardize complex calculations and to simplify views of the underlying data with intuitive topics customized for business needs. If you want to preserve a Data Model for future queries, you can promote it to a master data model and lock its basic property design. This feature enables you to generate future queries without having to recreate the Data Model. An Interactive Reporting document can contain any number of master data models, from which any number of queries can be generated.
Understanding Joins
Tables in relational databases share information through a conceptual link, or join, between related columns in different tables. These relationships are displayed in the data model through visual join lines between topic items. Joins enable you to connect or link records in two tables by way of a shared data field. Once a data field is shared, other data contained in the joined tables can be accessed. In this way, each record can share data with another record, but does not store and duplicate the same kind of information. Joins can be created automatically for you, or you can join topics manually. Suppose you queried only the Customers table to determine the number of customers. You would retrieve 32 records with the names of the stores that purchase products, because 32 is the exact number of stores that have made a purchase. But suppose you made the same query with the Customers table and Sales table joined. This time you would retrieve 1,000 records, because each store made multiple purchases. Figure 31 shows the intersection of all records in the Sales table that mention stores listed in the Customers table.
Figure 31
In other words, a database query returns the records at the intersection of joined tables. If one table mentions stores 1-32 and the other table mentions those same stores repeatedly, each of these records is returned. If you join a third table, such as Items, records are returned from the intersection of all three. Figure 32 shows the intersection of all records in the Sales table that have stores in the Customers table and items in the Items table.
Figure 32
The following sections discuss the types of joins available and how to use them:
Simple Joins on page 419
Cross Joins on page 419
Automatically Joining Topics on page 420
Specifying an Automatic Join Strategy on page 420
Manually Joining Topics on page 421
Showing Icon Joins on page 421
Specifying Join Types on page 422
Removing Joins on page 422
Using Defined Join Paths on page 423
Using Local Joins on page 423
Simple Joins
A simple join between topic items, shown in Figure 33, retrieves rows where the values in joined columns match.
Figure 33
Joins need to occur between items containing the same data. Often, item names are identical between two topics, which can indicate which items to join. When selecting items to join, be aware that two items may share the same name but refer to completely different data. For example, an item called Name in a Customer table and an item called Name in a Product table are probably unrelated.
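In the SQL that a query generates, a simple join appears as an equality test between the joined columns. The sketch below assumes hypothetical Customers and Sales tables that share a Store_Id column (illustrative names, not from a specific sample database):

```sql
-- Simple join: only rows whose Store_Id values match in both tables are returned
SELECT Customers.Store, Sales.Amount
FROM Customers, Sales
WHERE Customers.Store_Id = Sales.Store_Id
```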
Cross Joins
If topics are not joined, a database cannot correlate the information between the tables in the data model. This leads to invalid datasets and runaway queries. In this case, a database creates a cross join between non-joined tables, where every row in one table is joined to every row in another table.
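In SQL terms, a cross join is simply a query with no join condition. Using the hypothetical Customers and Sales tables discussed earlier in this section:

```sql
-- No join condition: every Customers row is paired with every Sales row,
-- so a 32-row table and a 1,000-row table would return 32,000 rows
SELECT *
FROM Customers, Sales
```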
2 Select the General tab.
3 Select the Auto Join Tables check box and then click OK.
When you add tables from the Table catalog to the Content frame, joins automatically display between topics. Clear the Auto Join Tables check box to turn off this feature and manually create joins yourself.
Note: Joins are not added for topics that are in the Content frame before you select the Auto Join Tables option.
3 Click Next.
The Meta Connection Wizard displays the repository where the meta settings are stored.
4 Click Edit.
The Metadata Definition dialog box is displayed.
Best Guess: Joins topics through two items that share the same name and data type.
Custom: Joins topics according to a specified schema coded in SQL in the Metadata Join Definitions area.
Server-Defined: Joins topics based on primary and foreign keys established in the underlying relational database.
Figure 34
Manually Created Join Between Two Related Data Items in Two Topics
To manually join two topics, select a topic item, drag it over a topic item in another topic, and
release. A join line is displayed, connecting the items in the different topics.
2 Select the General tab.
3 Select the Show Icon Joins check box and click OK.
Clear the Show Icon Joins check box to turn off this feature and hide joins of iconized topics.
Simple join (=, >, <, >=, <=): A simple (linear) join retrieves the records in both tables that have identical data in the joined columns. You can change the default setting for simple joins by choosing an operator from the drop-down box. The default setting, Equal, is preferred in most situations.
Left outer join (+=): A left join retrieves all rows from the topic on the left and any rows from the topic on the right that have matching values in the join column.
Right outer join (=+): A right join retrieves all rows from the topic on the right and any rows from the topic on the left that have matching values in the join column.
Outer or full outer join (+=+): An outer join combines the effects of a left and a right join. It retrieves all rows from both tables, matching joined column values where found and returning nulls for non-matching values. Every row represented in either topic is displayed at least once.
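Expressed as standard SQL (the += and =+ notation used in the dialog is shorthand, not SQL syntax, and exact outer-join support varies by database server), the join types look roughly like this for hypothetical Customers and Sales tables:

```sql
-- Simple (inner) join: only matching rows from both tables
SELECT * FROM Customers C
  JOIN Sales S ON C.Store_Id = S.Store_Id;

-- Left outer join (+=): all Customers rows, plus matching Sales rows
SELECT * FROM Customers C
  LEFT OUTER JOIN Sales S ON C.Store_Id = S.Store_Id;

-- Right outer join (=+): all Sales rows, plus matching Customers rows
SELECT * FROM Customers C
  RIGHT OUTER JOIN Sales S ON C.Store_Id = S.Store_Id;

-- Full outer join (+=+): every row from both tables, nulls where no match exists
SELECT * FROM Customers C
  FULL OUTER JOIN Sales S ON C.Store_Id = S.Store_Id;
```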
Note: A fifth join type, Local Join, is available for use with local Results sets. See Using Local Joins as Limits on page 425 for more information.
Caution! Not all database servers support all join types. If a join type is not available for the database to
which you are connected, it is unavailable for selection in the Join Properties dialog box.
Removing Joins
You can remove unwanted joins from the data model. Removing a join has no effect on the underlying database tables or any server-defined joins between them. A deleted join is removed from consideration only within the data model.
To remove a join from a data model, select the join and select Remove on the pop-up menu.
2 Select the Joins tab.
3 Select the Use Defined Join Paths option and click Configure.
The Define Join Paths dialog box is displayed.
4 In the Define Join Paths dialog box, click New Join Path to name and add a join path.
The New Join Path dialog box is displayed.
5 In the New Join Path dialog box, enter a descriptive name for the join path and click OK.
The join path name is highlighted in the Define Join Paths dialog box.
6 Select a topic in the Available Topics list and use the move right (-->) button to add it to the Topics in Join Path list.
7 To remove topics from the Topics in Join Path list, select them and use the move left (<--) button.
8 When join paths are completely defined for the data model, click OK.
Tip: Join paths are not additive; Interactive Reporting Studio cannot determine which tables are
common among several paths and link them on that basis. Join paths are not linear, and if selected, the simplest join between all tables in the path is included when processing a query.
For example, you might want to see budget figures drawn from a Microsoft SQL Server database and sales figures drawn from an Oracle database combined in one Results set.
Caution! Local joins are memory and CPU intensive operations. When using this feature, limit the size of the Results sets involved.
Creating Local Joins on page 424
Using Local Joins as Limits on page 425
Limitations of Local Results and Local Joins on page 426
a. Verify item data types and associated data values in source documents so you will know how to join them in the Interactive Reporting document.
b. Build the Request line, and add server and local limits, data functions, and computations to the query as needed.
c. Process the query, which will fill the Results section.
Tip: For consistent results, queries that use local joins should be placed after the queries that generate the Results sets they use.
2 Select Insert > Insert New Query to create the second query.
Add topics from the Table catalog to the Content frame, and build the Request line.
3 In the Table catalog, select Local Results on the pop-up menu.
4 In the Table catalog of the second query, select Local Results on the pop-up menu.
A Local Results icon is displayed in the Catalog frame.
5 Expand the Local Results icon to display the Results table icon.
6 Double-click a Results set or drag it to the Content frame.
The Results set from the first query that you built is displayed as a topic in the Content frame.
7 In the Content frame, manually create a join between the Results set and another topic.
8 Build the Request line and click Process.
Local joins are processed on the client machine. You can use Process All to process the queries, in which case the queries are processed in the order in which they are displayed in the Section catalog. For example, in an Interactive Reporting document with three queries, Query1, Query2, and Query3, the queries are executed in the order shown. If Query1 is a local join of the results of Query2 and Query3, it will still be processed first. If Query2 and Query3 have existing Results
sets, then the local join in Query1 will occur first, before processing Query2 or Query3. If the Results sets for either Query2 or Query3 are not available, then one or both of those queries will be processed first, in order to get the required results.
To use the values retrieved from one query as limit values for another query:
1 Build the first query that you want to include as a limit in the second query:
a. Verify item data types and associated data values in source documents so you will know how to join them in the second query.
b. Build the Request line, and add server limits, data functions, and computations to the query as needed.
c. Click Process.
2 Select Insert > Insert New Query.
3 Build the second query.
a. Verify item data types and associated data values in source documents so you will know how to join them to the first query.
b. Build the Request line, and add server and local limits, data functions, and computations to the query as needed.
4 In the Table catalog of the second query, select Local Results on the pop-up menu.
A Local Results icon is displayed in the Catalog frame.
5 Expand the Local Results icon to display the Results table icon.
6 Double-click the Results icon or drag it to the Content frame.
The Results set from the first query that you built is displayed as a topic in the Content frame.
Note: The purpose of embedding the Results set is to obtain a list of values. Do not include any Results set topic items on the Request line, and do not place any limits on topic items in this Results set; the Request line must not include any fields from the embedded Results section. If you do add a topic item from, or set a limit on, this Results set, you will not be able to set a Limit Local join.
7 In the Content frame, manually join the Results set to another topic in the second query.
A join line is displayed, connecting the different topics.
8 Double-click the join line that was created by joining the Results set and the other topic, or click the Properties icon.
10 Click Process to build the query and apply the limit constraint.
1. You cannot use the following query options with local results: Returning Unique Rows, Row limit, Time limit, Auto-Process, and Custom Group By.
2. You cannot have more than one local join per local results topic. When setting up a query using a local results topic, you cannot have more than one local join between the local results topic and another topic or local results topic.
3. You cannot set query limits on local results topic items. Limits must be set in the query/result sections of the query that produces the local results. Attempting to set a query limit on a local results topic item invokes the following error message: Unable to retrieve value list for a computed or aggregate request item.
4. You cannot aggregate local results tables.
5. You cannot process local results data to a table.
6. You cannot have more than one limit local join. A limit local join involves two topics, one of which is a local results topic; a local results item is used as a limit on the other topic. Attempting to define more than one limit local join invokes the following error message: This query contains a local results object involved in a join limit. It is not possible to have other local results objects when you have a local join limit.
7. You cannot combine limit local joins with local joins. Attempting to combine a limit local join and a local join invokes the same error message: This query contains a local results object involved in a join limit. It is not possible to have other local results objects when you have a local join limit.
8. You should expect compromised performance when a query is associated with large local results sets. This is expected behavior, since Interactive Reporting Studio is not a database.
9. You cannot use metatopics with local results. You cannot promote a local results topic to a metatopic or add a local results topic item as a metatopic item. The Promote To Metatopic and Add Metatopic Item Data Model menu options are not available for local results topics and topic items.
10. You cannot access or change properties for local results topic items. Properties include remarks, number formatting, aggregate/date/string functions, data types, and name.
11. You cannot create query Request line computed columns from local results topic items. The Add Computed Item menu option is not available for local results topic items.
12. You cannot use the Append Query features of unions or intersections with local results topic items. The Append Query menu option is not available when a local results topic is part of a query.
Changing Topic Views on page 428
Modifying Topic Properties on page 429
Modifying Topic Item Properties on page 430
Restricting Topic Views on page 430
Figure 35
Structure view: Displays a topic as a simple list of component data items. This is the default setting. Structure view enables you to view and select individual data items to include in a query. This is the easiest view to use if you are familiar with the information that a data model, topics, and topic items represent.
Detail View: Presents a topic in actual database view with a sample of the underlying data. When you change to Detail view, a small query is executed and a selection of data is loaded from the database server. The topic is displayed as a database table with each topic item displayed as a database column field. Detail view is useful when you are unfamiliar with a topic. You can browse the first few rows of data to see exactly what is available before adding a topic item to the query.
Note: Detail view is not available for special items such as metatopics or computed data items.
Icon View: Deactivates a topic and reduces it to an icon in the Content frame. When a topic is displayed in Icon View, associated items are removed from the Request and Limit lines. The topic is not recognized as being joined to other topics, and is temporarily removed from the data model and the SQL statement. If no items from a topic are needed for a particular query and the topic does not link together other topics which are in use, reduce the topic temporarily to Icon view to make large queries run faster and to consume fewer database resources.
Topic Name: The name of the topic as it is displayed in the Catalog frame. You can change this field to display a more user-friendly name in the Content frame.
Physical Name: Full name of the underlying database table.
Items To Display: The topic items available for the selected topic.
Hide/Show All: Hides or actively displays all topic items.
Up/Down: Moves the selected item up or down one space in the topic display.
Sort: Alphabetically sorts the listed items.
Set As Dimension: Defines the drill-down path or hierarchy for dimensional analysis as shown in the data model. This feature is used in conjunction with the Set As Fact field in the Topic Item Properties dialog box.
Allow Icon View: Enables the icon view option for the topic.
Allow Detail View: Enables the detail view option for the topic.
Cause Reload: Specifies automatic reloading of server values the next time Detail View is activated.
Rows to Load: Specifies the number of rows to be loaded and displayed in Detail View.
The Topic Item Properties dialog box is displayed, showing information about the source of the topic column in the database.
2 Change the topic item properties to the desired setting and click OK.
Available options include:
Item Name: Displays the name of the item.
Set As Fact: Eliminates items with integer or real values from a drill-down path. This feature is used in conjunction with the Set As Dimension field in the Topic Properties dialog box.
Information: Additional column information from the database. Information about keys is displayed only when server-defined joins are enabled.
Length: Enables you to change the string length of columns.
2 Select the Allow Icon View or Allow Detail View check boxes to toggle the availability of either view.
3 If necessary, select Cause Reload to specify reloading from the server when Detail View is selected.
New data is retrieved the next time Detail View is activated for the topic, after which Cause Reload will be toggled off automatically.
4 If desired in Detail View, enter the number of rows to be returned from the server for Detail View, and click
OK.
By default, the first ten rows of a table are retrieved for preview in Detail View.
Changing Data Model Views on page 431
Setting Data Model Options on page 432
Automatically Processing Queries on page 436
Promoting a Query to a Master Data Model on page 436
Synchronizing a Data Model on page 437
Combined: Displays both original (database-derived) topics and metatopics in the Content frame.
Original: Displays only database-derived topics in the Content frame.
Meta: Displays only metatopics in the Content frame.
Caution! If an original topic contains items that have been copied to a metatopic, do not iconize or
remove the original topic from the Content frame in Combined view. Metatopic items are based on original items and remain linked to them. If an original topic is iconized or removed, any metatopic items based on its contents become inaccessible.
2 Set the desired options for the data model and click OK.
Note: All users have access to the join preferences, but not to the limit, query governor, or auditing features, which are designed to customize data models stored for distribution.
One of the first three limit options (Show Values, Custom Values, or Custom SQL) must be enabled in order for users to apply limits in the Query section. Changing join usage usually changes the number of rows retrieved from the database. It also introduces the possibility that novice users may create improperly joined queries. If query governors are set as part of a data model, and end users set query governors on a query built from the data model, the more restrictive governor takes precedence.
The following sections provide additional information about data model options:
Saving Data Model Options as User Preferences on page 432
Saving Data Model Options as Profiles on page 433
Data Model Options: General on page 433
Data Model Options: Filters on page 434
Data Model Options: Auditing on page 436
To change the defaults without affecting any existing data models (including the current one),
click Save as Defaults and then click Cancel.
To change the defaults and apply them to the current data model, click Save as Defaults and
then click OK.
Note: The following data model options apply to the current data model only and cannot be saved as defaults: Topic Priority information and the Use Defined Join Paths option on the General tab.
Design Options
Auto Alias Tables: Enables the product to replace underscores with spaces and display item names in mixed upper/lower case when a table is added to the Content frame from the Table catalog.
Auto Join Tables: Instructs the product to automatically join database tables, based on one of three join strategies, as they are added to the Content frame if their names and data types are identical. If Auto Join Tables is not selected, you must manually create joins between topics in the Content frame.
Show Icon Joins: Shows topic joins when a topic is in icon view (minimized). It is recommended that you activate this feature.
Allow Drill Anywhere: Activates the drill anywhere menu item on the menus within the Pivot and Chart sections. This option enables users to drill to any field.
Allow Drill To Detail: Activates the drill to detail menu item on the menus within the Pivot and Chart sections. This option enables users to query the database again once they have reached the lowest level of detail; it works only if the Allow Drill Anywhere option is selected.
Return First ____ Rows: Specifies a cap on the number of rows retrieved by a query against the data model, regardless of the size of the potential Results set.
Note: All users can also set query governors, but data model options automatically override governors set at the query level. If row limits are also set at the query level, the lower number is enforced.
Time Limit ____ Minutes: Specifies a cap on the total processing time of a query against the data model. Seconds are entered as a decimal fraction of a minute. Available for asynchronous connection API software (for example, Open Client) that supports this feature.
Filter Options
Show Minimum Value Set: Displays only values that are applicable given all existing filters. This preference takes into account limits on all tables related through all joins in the data model (which could potentially be a very large and long-running query).
Show Values Within Topic: Displays values applicable given existing limits in the same topic. This preference does not take into account limits associated by joins in the data model.
Show All Values: Displays all values associated with an item, regardless of any established limits.
Tip: When setting these preferences for metatopics, be sure to display the data model in Original view.
Show Values: Globally restricts use of the Show Values command in the Limit dialog box, which is used to retrieve values from the server.
Custom Values: Globally restricts use of the Custom Values command in the Limit dialog box, which is used to access a custom values list saved with the Interactive Reporting document or in a flat file.
Custom SQL: Enables the user to code a limit directly using SQL.
Note: The Topic Priority dialog box is displayed only if you first select a join in the data model.
Note: Since most data models do not have the same set of topics, you cannot save changes to the topic priority as default user preferences. (For more information on default user preferences, see Saving Data Model Options as User Preferences on page 432.)
Use All Joined Topics: Specifies the use of all joined (non-iconized) topics in the data model.
Use The Minimum Number Of Topics: Specifies the use only of topics represented by items on the Request line.
Use All Referenced Topics: Specifies the use only of topics represented by items on the Request or Limit lines. Changing join usage usually changes the number of rows retrieved from the database. It also introduces the possibility that novice users may create improperly joined queries.
Use Defined Join Paths: Specifies the use of a predefined join path that groups the joins necessary to query from the data model. Click Configure to create a custom join path. Note that since most data models do not have the same predefined join paths, you cannot save the Use Defined Join Paths option as a default user preference. (For more information on default user preferences, see Saving Data Model Options as User Preferences on page 432.)
Use Automatic Join Path Generation: Instructs Hyperion Intelligence Clients to dynamically generate joins based on the context of user selections on the Request and Limit lines.
2 Click the Topic Priority tab.
Topics in the data model appear listed in the Tables list in the order they were placed in the Content pane.
3 Rank the topics in the desired order. Click the arrow to move selected topics up or down in the list.
4 Click Auto-Order to automatically detect the magnitude of each topic and rank them accordingly in
descending order.
Note: Since most data models do not have the same set of topics, you cannot save changes to the topic priority as default user preferences. (For more information on default user preferences, see Saving Data Model Options as User Preferences.)
To set Auto-Process:
1 Open a standard query Interactive Reporting document in the Content frame.
2 Select Query > Query Options.
The Query Properties dialog box is displayed.
3 Select the Auto-Process check box, and then click OK.
4 Select File > Save To Repository to upload the Interactive Reporting document to the repository.
The query automatically processes when a user opens the Interactive Reporting document from the repository.
The benefit is that any changes to the master data model are propagated to all dependent queries based on it. Each time a new query is inserted into an Interactive Reporting document that contains a master data model, you are prompted to link the new query to the master data model. When a query is promoted to a master data model, it is added to the Section frame as a new section. Once you promote a query to a master data model, you cannot undo the promotion.
automatically updated. The Sync With Database feature removes any altered items from metatopics, but preserves the remaining structure so that repairs are minor. Sync With Database works transparently with most other customized attributes of a data model.
Data Model Menu Commands

Table Catalog: Expands the Table catalog in the Catalog frame (keyboard shortcut: F9).
Data Model View: Enables you to select combined, original (database-derived), or metaviews of topics.
Topic View: Enables you to select structure, detail, or icon views of topics.
Promote to Metatopic: Creates a metatopic from an existing topic.
Add Metatopic: Adds a metatopic to the data model.
Add Metatopic Item: Enables you to add either a server or local metatopic item.
Sync With Database: Detects inconsistencies with the database, updates the data model, and provides an itemized list of the changes.
Promote To Master Data Model: Promotes the current query to a master data model.
Data Model Options: Enables you to specify options for General, Limits, Joins, Topic Priority, and Auditing.
Chapter 24
This section describes how to create and manage the document repository, including how to upload Interactive Reporting documents to, and open Interactive Reporting documents from, the repository, and how to control document versions. Note that most of the features described in this section are available only to advanced users of Interactive Reporting Studio.
In This Chapter
About the Document Repository. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 440 Administering a Document Repository . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 440 Working with Repository Objects. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 445 Document Repository Table Definitions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 448
Data model: A basic data model is a group of related topics designed as a starting point for building a query. A basic data model opens in the Content frame of the Query section, in which a group of joined topics is displayed.

Standard query: A data model with a query already assembled. After the query is downloaded, you simply process the query to retrieve data. Standard queries are ideal for users who use the same data on a regular basis; for example, to get inventory updates that fluctuate from day to day. A standard query opens in the Results section. If a standard query has the auto-process feature enabled, the query automatically runs when it is downloaded and populates the Results and report sections with data.
Standard query with reports: A standard query that includes preformatted reports, which enable you to process the query and view the data using customized report sections. A formatted standard query with reports is displayed in the Pivot, Chart, Dashboard, or Report sections.
The following sections describe the tasks associated with administering a document repository:
Creating Repository Tables on page 441
Confirming Repository Table Creation on page 442
Managing Repository Inventory on page 443
Managing Repository Groups on page 444
3 Click Create to open the Create Repository Tables dialog box.
4 Change the default configuration.
Owner Name: Enter the database and owner names (if applicable) under which you want to create the tables. If both database and owner are specified, separate them with a period (for example, Sales.GKL).

Grant Tables to Public: Check Grant Tables to Public to grant general access to the repository tables at the database level. You must grant access to the repository tables in order for users to download data models; otherwise, you need to grant access manually to all authorized users using a database administration tool. Do not grant tables to public if you need to maintain tight database security and upload privileges are permitted for only a small group of users.

Data Type Fields: Change the default data types for column fields to match the data types of the database server. If the DBMS and middleware support a large binary data type, use it for VarData columns. If not, use the largest character data type.
5 Click Create All to create the repository tables under the specified user.
The All Tables Created dialog box is displayed.
Note: If table creation fails, make sure the database logon ID of the server has been granted Table Create privileges.
6 Click OK, and then click Close to close the Create All dialog box.
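If Grant Tables to Public was left cleared in step 4, a database administrator can grant access manually later. A minimal sketch in Oracle-style SQL (the owner name repo_owner and user name report_user are illustrative; the table names are the repository tables described in this chapter):

```sql
-- Grant read access to the repository tables for one user.
GRANT SELECT ON repo_owner.BRIOCAT2 TO report_user;
GRANT SELECT ON repo_owner.BRIOOBJ2 TO report_user;
GRANT SELECT ON repo_owner.BRIOBRG2 TO report_user;
GRANT SELECT ON repo_owner.BRIOGRP2 TO report_user;

-- Users who upload documents also need write access, for example:
GRANT INSERT, UPDATE, DELETE ON repo_owner.BRIOCAT2 TO report_user;
```

Repeat the write grants for the other repository tables as your security policy allows.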
2 Select the Show Advanced Options check box, and then click Next.
3 Enter a user name and password to connect to the data source, and then click Next.
4 Clear the Exclude Hyperion Repository Tables check box and click Next.
5 Click Next through the rest of the wizard dialog boxes, and then click Finish.
6 Save the Interactive Reporting database connection file.
7 Select DataModel > Table Catalog or press F9 to view the Table catalog, including the document repository tables.
For detailed information on the document repository tables, see Document Repository Table Definitions on page 448.
2 Click the Inventory tab.
3 Select a model type from the Model Type drop-down box.
The Model Type drop-down box shows the model type folders that contain the repository objects. Interactive Reporting Studio supports three types of repository objects: Data Model, Standard Query and Standard Query with Reports. When you select a model type, the description for that model type becomes active.
Unique Name: Name of the object as it is displayed in the repository.
Creator: Name of the person who created the object.
Created: Date on which the object was saved to the repository (defaults to the current date).
Version: Version number of the object.
5 Click Update.
To modify the attributes of a document object itself, download the object, alter the document, and upload it to the repository again. For more information, see Modifying Repository Objects on page 446.
2 On the Inventory tab, select the model type of the object to be deleted from the Model Type drop-down box.
3 Select a repository object from the Model List and click Delete.
The object is deleted from the repository.
2 Select the Groups Setup tab to display it.
3 In the Groups field, enter the name of the group that you want to add to the repository structure, and click Add.
Tip: If you enabled Grant Tables To Public when creating the repository, the default group, Public, is created automatically.
4 Select the group with which you want to associate a user name or names.
5 Enter the user name(s) in the Users field, and click Add to add the names to the group.
Add multiple users by delimiting the names with commas in the edit field; for example: user1, user2, user3.
6 All users with access to the repository, regardless of other grouping affiliations, have default access to documents placed in the Public group.
To remove a user group or user, select the user name in the Users list and click Remove.
Uploading Interactive Reporting Documents to the Repository on page 445
Modifying Repository Objects on page 446
Controlling Document Versions in Interactive Reporting Studio on page 448
Controlling Document Versions in Interactive Reporting Web Client on page 450
If necessary, click Select to launch the Select Connection File dialog box, navigate to the connection file that you want to use, and click OK. The Save To Repository dialog box is displayed showing the Model tab.
2 In the Model Type area, select the type of object you are saving to the repository.
Select between Data Model, Standard Query, and Standard Query with Reports.
3 In the Model Info area, enter information about the repository object.
Unique Name: Name that you want to show for the object in the repository.
Creator: Name of the person who created the object. This information is useful in tracing the document source for updates.
Created: Date on which the object was saved to the repository (defaults to the current date).
Locked/Linked Object (Required For ADR): Toggles repository object locking. Previously, repository models were locked to maintain versions (see Controlling Document Versions in Interactive Reporting Studio on page 448) and could not be modified by the end user. Unlocked data models can be downloaded as usual and the query modified. However, once saved outside the repository, an unlocked model loses its automatic version control.
Prompt For Sync On Download: Prompts users with the request: "A newer version of the object exists in the repository; downloading the changes may overwrite changes you have made to the local file. Would you like to make a copy of the current document before proceeding?" If the user selects Yes, a copy of the locally saved object is made, Automatic Distributed Refresh is disabled for the copy, and the object is synchronized with the newer version.
Description: Enter a description of the repository object and what it can be used for. The maximum length is 255 characters.
5 Use the arrow buttons to grant access to repository groups by adding them from the Available Groups list to the Selected Groups list.
Available Groups: User groups to which access can be granted.
Selected Groups: Groups added to the granted-access list for the stored object.
Tip: You must move the PUBLIC group to the Selected Groups list if you want to provide general, public access to the object.
6 Click OK to save the object to the repository.
7 Distribute the connection file to end users as needed to access the object source database and, if necessary, the document repository used to store the object.
Caution! Modifications made to repository objects propagate throughout the user environment via Automatic Distributed Refresh (ADR), which tracks objects by unique ID and version number. Each time an object is uploaded to the repository, it is assigned a new version number. For ADR to work properly, you must upload a modified repository object with the same name as the original.
2 Select the connection file that you want to use and click OK.
3 In the Password dialog box, type your user name and password and click OK.
The Open From Repository dialog box is displayed.
4 Navigate through the repository tree and select the repository object that you want to use.
The Open From Repository dialog box displays information about the selected object.
Unique Name: Name of the repository object.
Creator: Creator of the repository object.
Created: Date on which the repository object was created.
Description: General description of the repository object, its contents, and the type of information that can be queried.
5 Click Open.
The repository object is downloaded to the appropriate section.
6 Make the desired changes to the object, and then select File > Save To Repository.
7 Select the correct Interactive Reporting database connection for the repository object, and enter the user name and password if prompted.
8 Select the Model tab and verify the correct document type in the Model Type field.
If the Model Type field is grayed out, the object has not been modified and cannot be saved to the repository at this time.
9 Add any object information in the Model Info area and then click OK.
You are asked whether you want to enter a unique name for the object. Click No to replace the current object with the object you just modified. Click Yes to save the modified object under a different name. For Automatic Distributed Refresh to work properly, you must save a modified object with the original object name and model type, and save it in the same user-owned repository.
10 If you assigned another name to the object, you are prompted to associate the modified object with a group. Click OK.
The Group tab is displayed automatically so that you can associate the object with a group.
11 Use the arrow buttons to grant access to repository groups by adding them from the Available Groups list to the Selected Groups list.
12 Click OK.
Each object in the BRIOOBJ2 table has a unique ID number, and each object is assigned an incremented version number each time it is altered and uploaded.
Data model objects are typically downloaded from the document repository into Interactive Reporting documents that are used to analyze data through pivots, charts, and other reports. When a user saves work to an Interactive Reporting document on disk (either a local hard disk or a file server), Interactive Reporting Studio stores both a link to the source object (which was downloaded from the document repository) and the connection information needed to reconnect to the repository.
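Because the BRIOCAT2 table records a descriptive name, file type, and latest version number for each stored object, an administrator can review repository contents directly with a query along these lines (a sketch; the column names come from the table definitions in this chapter, while the ordering is only illustrative):

```sql
-- List every stored repository object with its current ADR version.
SELECT FILE_NAME, FILE_TYPE, VERSION, CREATE_DATE
  FROM BRIOCAT2
 ORDER BY FILE_NAME;
```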
BRIOCAT2 Document Repository Table on page 449
BRIOOBJ2 Document Repository Table on page 449
BRIOBRG2 Document Repository Table on page 450
BRIOGRP2 Document Repository Table on page 450
Note: The following tables, which were created in Hyperion Intelligence version 6.6 and prior, are no longer used nor will they be referenced by any aspect of Interactive Reporting Studio: BRIOOCE2, BRIODMQ2, BRIOUSR2, BRIOSVR2, and BRIOUNIQ.
BRIOCAT2 Table

UNIQUE_ID (NUM): Unique identifier for a stored repository object.
OWNER (CHAR): Creator of the object.
APP_VERSION (CHAR): Version used to upload the object.
CREATE_DATE (DATE): Most recent date of upload for the object.
ROW_SIZE (NUM): Number of rows occupied by the stored object in the BRIOOBJ2 table.
READY (CHAR): Indicates whether the previous upload of the stored object completed successfully.
FILE_NAME (CHAR): Descriptive name of the stored object.
FILE_TYPE (CHAR): File type of the stored object, such as data model, locked query, locked report, LAN-based, or folder.
DESCRIPTION (CHAR): Description of the object.
VERSION (CHAR): Latest version number of the object, used for ADR.
TOTAL_SIZE (NUM): Total size of the stored object in bytes.
BRIOOBJ2 Table

NUM: Unique identifier for a stored repository object.
NUM: Sequence ID for a segment of the object.
BLOB or LONG RAW: Data model object in binary chunk format.
BRIOBRG2 Table

NUM: Unique identifier for a repository document.
CHAR: Name of a repository group.
BRIOGRP2 Table

CHAR: Name of a repository group.
CHAR: Name of a document repository user assigned to the group.
ADR Global Flag: This flag controls the availability of the ADR feature. For a new installation of Interactive Reporting Studio, this flag defaults to enabled; for an upgrade installation, it is disabled. Your system administrator can enable or disable this feature as needed.

ADR BQY Metadata: This flag is enabled or disabled when an Interactive Reporting document is published to the repository. If the flag is enabled, then only this particular document is allowed for ADR. For simple ADR, this flag is always enabled. ADR for job output defaults to a disabled flag when an Interactive Reporting document is published by a job action; in this case, a user can enable the flag by modifying the properties of the Interactive Reporting document. The flag is always disabled for a job output collection.
ADR Behavior
The following table shows how ADR behaves with documents in different scenarios.
Table 43  Simple ADR Behavior (each column refers to an Interactive Reporting document section)

Section in local document   Section in Repository version   Action in merged document
Does not exist              Does not exist                  No action
Exists                      Does not exist                  Add from local document
Does not exist              Exists                          Add from Repository document
Exists                      Exists                          Write Repository version
Chapter 25
This section provides information on the Interactive Reporting Studio auditing features, including how to track and log who uses data models, how database resources are allocated and consumed, and how to optimize the allocation and availability of data models. Note that most of the features described in this section are available only to advanced users of Interactive Reporting Studio.
In This Chapter
About Auditing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 454 Creating an Audit Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 455 Defining Audit Events. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 456 Auditing Keyword Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457 Sample Audit Events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 458
About Auditing
Auditing enables information to be collected about data models downloaded from the repository. You can use auditing features to track how long queries take to process, which tables and columns are used most often, and even record the full SQL statement that is sent to the database. Audit information can help the database administrator monitor not only the effectiveness of each distributed data model, but also the weaknesses and stress points within a database. The results are useful for performing impact analysis to better plan changes to the database. Auditing requires minimal additional setup and can be implemented entirely within Interactive Reporting Studio. The steps required for auditing data models are:
Create a document repository with an inventory of distributed data models.
Create a database table in which to log audit events.
Use data model options to define the events that you want to audit for each data model.
Save the audited data models to the document repository.
Use Interactive Reporting Studio to query the audit table and analyze the data it contains.
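The last of these steps can start from a simple query against the audit table, for example (a sketch; it assumes a BQAUDIT table populated by the Pre Process and Post Process events defined later in this chapter, and pairs the two timestamps only approximately, one query at a time per user):

```sql
-- List the Pre/Post Process timestamps per user to estimate query duration.
SELECT username, event_type, day_executed
  FROM bqaudit
 WHERE event_type IN ('Pre Process', 'Post Process')
 ORDER BY username, day_executed;
```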
Special Considerations
The audit log may fill up. Monitor it regularly and delete any entries that are no longer used.
Before uploading an audited data model to the document repository, log in as a user and test each auditing event to verify that your SQL statements are not generating any errors.
Auditing is not supported for the Process Results To Database Table feature, nor for Essbase data models. However, scheduled Interactive Reporting documents containing linked data models are audited normally.
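For example, on Oracle the periodic cleanup of the audit log might look like this (a sketch; the three-month retention window and the unqualified table name are assumptions to adapt to your site):

```sql
-- Delete audit rows older than three months (Oracle syntax).
DELETE FROM bqaudit
 WHERE day_executed < ADD_MONTHS(SYSDATE, -3);
COMMIT;
```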
Sample Structure for the BQAUDIT Table

EVENT_TYPE (data source: text): Events that occur within the context of a query session, such as Logon, Logoff, and Post Process.

USERNAME (data source: SQL function): Database user information returned by a database SQL function, such as user (Oracle), user_name (Sybase), or CURRENT_USER (Red Brick).

DAY_EXECUTED (data source: SQL function): Date, time, and duration information returned by a database SQL function, such as sysdate (Oracle), getdate (Sybase), or CURRENT_TIMESTAMP (Red Brick).

SQL_STMT (data source: Interactive Reporting Studio keyword): SQL statements generated by the user, captured from the Interactive Reporting Studio SQL log, and returned by the keyword variable :QUERYSQL.

DATAMODEL (data source: Interactive Reporting Studio keyword): Data models accessed by the user, returned by the keyword variable :REPOSITORYNAME.

NUM_ROWS (data source: Interactive Reporting Studio keyword): Query information returned by the keyword variable :ROWSRETRIEVED.
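A table matching the sample structure above could be created with DDL along these lines (an Oracle-style sketch; the column widths are assumptions, not requirements):

```sql
CREATE TABLE bqaudit (
    event_type    VARCHAR2(30),   -- Logon, Logoff, Pre Process, Post Process, ...
    username      VARCHAR2(30),   -- from a SQL function such as user or user_name
    day_executed  DATE,           -- from sysdate, getdate, or CURRENT_TIMESTAMP
    sql_stmt      VARCHAR2(2000), -- from :QUERYSQL (consider a SUBSTR limit)
    datamodel     VARCHAR2(100),  -- from :REPOSITORYNAME
    num_rows      NUMBER          -- from :ROWSRETRIEVED
);
```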
For more information about creating a new data model, see Building a Data Model on page 417.
5 Enter one or more SQL statements to update the audit table when the event occurs, and click OK.
A check mark is displayed next to the event on the Auditing tab in the Data Model Options dialog box. You can use the check box to enable or disable the event definition without reentering the SQL statement. You can also click Define again at any time to modify the SQL statement.
6 Select File > Save to Repository to save the audited data model to the document repository.
The SQL statement is sent to the database whenever a user triggers the event while using the data model.
keyword text in uppercase. Other items in the SQL statement may also be case sensitive, depending on your database software.
Table 45  Auditing Keywords

:ROWSRETRIEVED: Number of rows retrieved by the most recently executed query.

:REPOSITORYNAME: Name of the repository object in use (data model or standard query with reports).

:QUERYSQL: (Pre Process, Limit: Show Values, and Detail View only) Complete SQL text of the most recently executed query statement. Tip: Consider the maximum column length when using :QUERYSQL. You may want to use a substring function to limit the length of the SQL being logged; for example: SUBSTR(:QUERYSQL, 1, 200).

:SILENT: Restricts display of the audit-generated SQL statement within the user's SQL Log. When the :SILENT keyword variable is included in the audit statement, the SQL Log output reads "Silent SQL sent to server" instead of the SQL statement. This keyword variable provides a security feature when the triggered SQL statement is sensitive or should remain undetected.
Sample Audit Events

Logon
Executed each time a successful logon occurs.
insert into <owner>.bqaudit (username, day_executed, event_type) values (user, sysdate, 'Logon')
Note: The logon audit event fires for each action when used with the Data Access Service. Because of connection pooling, it is not possible, at the Data Model level, to determine when an actual logon event is required.
Logoff
Note: The logoff audit event fires for each action when used with the Data Access Service. A logoff event does not happen until the connection reaches the configured idle time.
Pre Process
Executed after Process is selected, but before the query is processed. It is useful to track the date and time of both Pre Process and Post Process in order to determine how long a query takes to process.
insert into <owner>.bqaudit (username, day_executed, event_type) values (user, sysdate, 'Pre Process')
Post Process
Executed after the final row in the result set is retrieved at the user's workstation. It is useful to track the date and time of both Pre Process and Post Process in order to determine how long a query takes to process.
insert into <owner>.bqaudit (username, day_executed, event_type, num_rows, sql_stmt) values (user, sysdate, 'Post Process', :ROWSRETRIEVED, SUBSTR(:QUERYSQL, 1, 200))
Limit: Show Values
Executed after selecting the Show Values button when setting a Limit.
insert into <owner>.bqaudit (username, day_executed, event_type, datamodel, sql_stmt) values (user, sysdate, 'Show Values', :REPOSITORYNAME, :QUERYSQL)
Detail View
This statement is executed when a user toggles a topic to Detail View and a sampling of data from the database is loaded. Remember that values are only loaded when you first toggle to Detail View, or when Cause Reload is selected in the Topic Properties dialog box. This statement is executed when the Data Model is downloaded from the document repository into a
Chapter 26
This section provides instructions for registering and managing client objects in the IBM Visual Warehouse Information Catalog.
Note: The information in this section applies only to Interactive Reporting Studio.
In This Chapter
About the IBM Information Catalog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 460 Registering Documents to the IBM Information Catalog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 460 Administering the IBM Information Catalog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 461
Visual Warehouse must already be installed before you can register or administer this feature. Also, the client document object types must already exist before completing the following steps. For more information, see Creating Object Type Properties on page 462.
3 Type the name of the Interactive Reporting document in the File Name field.
4 In the Save As Type field, leave the default .bqy file type and click Save.
The Connect To Information Catalog Repository dialog box is displayed.
5 Type your user identification in the User field.
6 Type your password in the Password field.
7 Type the ODBC data source name in the Database Alias field if it is different from the default database alias value.
The Register To Information Catalog dialog box is displayed, showing the Properties and Subject Area tabs. Use these corresponding pages to describe the properties and subject matter of the Interactive Reporting documents.
9 In the Available Properties list, select a property of the Interactive Reporting document to which you want to add a value.
10 In the Enter Value for Selected Property edit box, type a value for the property.
11 Repeat Step 8 through Step 9 for all properties.
12 Click the Subject Areas tab.
13 In the Specify The Subject Area list, use the plus (+) and minus (-) signs to navigate through the subject area structure (Grouping Category) and select the subject area folder to which you want to add the Interactive Reporting document.
The Subject Area displays a tree view of eligible subject area folders in which you can add the Interactive Reporting document.
14 Click Add to add the Interactive Reporting document or instance to the subject area specified in Step 12.
15 Click OK.
Defining Properties
You can define the values of selected properties for a document when registering to the catalog. Use the Properties tab to show and edit properties, data types, and lengths:
Available Properties: Displays a list of available properties that you can specify.
Enter Value: Edit any available value by typing the information in this edit box. For a description of eligible values for the properties, see the Description field.
Specify The Subject Area: Displays a tree view of eligible subject area folders to which you can add the document. Use the plus (+) and minus (-) signs to navigate through the folders. To add a document to a folder, select the subject area folder and click Add.
Subject Areas Containing: Displays the subject area folder to which the document has been added.
Creating Object Type Properties on page 462
Deleting Object Types and Properties on page 462
Administering Documents on page 463
Setting Up Object Types on page 464
2 Type your user identification in the User field.
3 Type your password in the Password field.
4 Type the ODBC data source name in the Database Alias field if it is different from the default database alias value.
5 Click OK.
The Administer Information Catalog dialog box is displayed.
6 Click the Setup Object Types tab.
7 In the Object Type drop-down box, select Interactive Reporting document.
8 In the Name field, type the name of the property that you want to associate with the object type.
9 In the Short Name field, type an abbreviated version of the property name.
10 In the Datatype drop-down list box, select the data type classification of the property (for example, character-based).
11 In the Length field, type the maximum character length of the property.
12 To require that the property be completed when a user registers a document, select the Entry Required check box.
13 To add the object type property to the Properties for Object Type list box, click Set.
14 Repeat Step 8 through Step 12 for each property that you want to associate with the selected object type.
15 To create the object type, click Create Object Type.
2 Type your user identification in the User field.
3 Type your password in the Password field.
4 Type the ODBC data source name in the Database Alias field if it is different from the default database alias value.
5 Click OK.
The Administer Information Catalog dialog box is displayed.
6 Click the Setup Object Types tab.
7 In the Object Type drop-down list box, select Interactive Reporting document.
8 Click Delete Object Type.
Administering Documents
Use the Administer Documents tab to search for a specific document based on an object type, property, and other selected criteria (see Table 47). After the document has been located, you can either delete or edit the associated properties.
Table 47  Object Type Search Criteria

Object Type: Interactive Reporting document object type.
Select Property: Property by which you want to search on the document, chosen from the pull-down list. Complete the search condition by specifying a value in the Search Criterion field. For example, if you specify the Name property, type the name of the document in the Search Criterion field.
Search Criterion: Use this field in conjunction with the Select Property field. Once you have selected a property, complete the search condition by specifying the value of the property. For example, if you selected the Order Type property, you might type Interactive Reporting document in this field.
Case Sensitive: If you want the search engine to distinguish between uppercase and lowercase letters when determining which documents to retrieve, click this field.
Wildcard: A wildcard is a special symbol that represents one or more characters and expands the range of your searching capabilities. You can use the % wildcard symbol to match any value of zero or more characters. For example, to find documents whose properties contain 1997 sales, type 1997 Sales % in the Search Criterion field.
Search: Retrieves the search results.
Clear Search: Clears the results of the current search.
Table 47  Object Type Search Criteria (Continued)

Search Results: Results of the search.
Delete: Deletes a selected document from the repository.
Edit: Enables you to edit the value properties of a document through the Properties tab of the Register To IBM Information Catalog option.
Table 48  Object Types and Properties

Object Type: Interactive Reporting document object types.
Name: Name of the property that you want to associate with the object type.
Short Name: Short name of the property that you want to associate with the object type.
Datatype: Data type of the property.
Length: Length of the property.
Entry Required: Requires a user to select a property when registering a document to the DataGuide repository.
Set: Adds a new object type property to the Properties for Object Type list. If an object type has already been created, this button is unavailable.
Remove: Removes a new object type property from the Properties For Object Type list. If an object type has already been created, this button is unavailable. Once an object type has been created, you cannot remove its properties; the entire object type must be deleted.
Properties For Object Type: Properties defined for the object type. To show the entire definition for a property, click a property in the list.
Create Object Type: Creates an Interactive Reporting document (.bqy) object type. Once an object type has been created, you cannot modify its existing properties or add new properties.
Delete Object Type: Deletes an Interactive Reporting document (.bqy) object type. You cannot delete the individual properties of a selected object type.
Clear: Clears the definition fields of a property.
Chapter 27
This appendix explains the row-level security feature: what it is and how to implement it for Interactive Reporting documents.

In This Chapter
About Row-Level Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 466 Row-Level Security Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 468 Row-Level Security Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 474 Other Important Facts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 479
To effectively control access, the servers key off the user's identification when the user connects. This is the user's logon name, used to establish a session with the Hyperion System 9 BI+ services. Beyond this user name, the servers make no assumptions about the user's place within the organization. A security system can be built entirely independent of any existing grouping of users, and new groupings can be defined in lieu of existing ones. This is especially important where groups were not defined with data security as a primary goal. Row-level security can also take full advantage of existing structures where data security was built into the user and group structure. In many cases, row-level security works within existing database role definitions and third-party software security systems.
Performance Issues
The system is designed not to impose any significant performance penalty. The security information is collected at the time the user opens an Interactive Reporting document from the server's repository, and only then if the server knows the security controls are enabled. When a user opens a locally saved Interactive Reporting document from a previous session with the Hyperion System 9 BI+ services, the security information is recollected when reconnecting to the server, in case it has changed.
- Publish without the detailed results of the queries, leaving only the summary charts and pivots for the general audience. If users need to drill into the summary data, they must rerun the queries, at which time their particular security restrictions are applied. (Even some charts and pivots can reveal too much, so there is still a need for prudence when publishing these Interactive Reporting documents.)
- Create the Interactive Reporting documents with OnStartup scripts that reprocess queries as the Interactive Reporting document is opened. This always gives users only the data to which they are entitled.
All users should take similar precautions when sharing information generated from Interactive Reporting. This includes exchanging the Interactive Reporting documents (.bqy extensions) themselves by e-mail or shared network directories, exporting the data as HTML files and publishing them to a web site, posting the data on FTP servers as the result of a job action, and creating PDF files from the reports.
When the repository connection is made as brioserver, the sample WHERE clause on CREATE VIEW varies by database:

WHERE USER = BRIOSERVER
WHERE USER = BRIOSERVER
WHERE USER = brioserver
Note: Be aware of case sensitivity with the user name, and allow for the fact that, on SQL Server, the user might be dbo.
Each view has the same name as its underlying table, and all available columns from that table would be selected.
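As a minimal sketch of this pattern, assuming a hypothetical SALES table owned by APP_OWNER (the table, owner, and column names here are illustrative, not from the product):

```sql
-- Hypothetical Oracle-style view: same name as the underlying table,
-- selecting all of that table's columns, restricted by the connecting
-- user name. Match the literal's case to your database's behavior.
CREATE VIEW SALES AS
  SELECT STORE_ID, TRANSACTION_DATE, AMOUNT  -- all columns of the table
  FROM APP_OWNER.SALES
  WHERE USER = 'BRIOSERVER';
```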
Implementing a secure data access environment using row-level security requires an understanding of SQL. First, knowing how the database relationships are defined is critical. Second, the restrictions you specify are translated directly into the SQL that is ultimately processed at the database.
This table is theoretically optional. Without it, however, all users exist as single individuals; they cannot be grouped to apply a single set of restrictions to all members. For example, Vidhya and Chi are members of the PAYROLL group. If this relationship is not defined in BRIOSECG, then any restrictions that apply to Vidhya and should also apply to Chi have to be defined twice. By defining the PAYROLL group and its members, Vidhya and Chi, the restrictions can be defined once and applied to the PAYROLL group. A group name cannot be used in BUSER; that is, groups cannot be members of other groups. Users, of course, can be members of multiple groups, and this can effectively set up a group/subgroup hierarchy. For example, a PAYROLL group might contain users Sally, Michael, Kathy, David, Bill, Paul, and Dan. Sally, Dan, and Michael are managers, and so they can be made members of a PAYROLL MANAGER group. Certain restrictions on the PAYROLL group can be overridden for the PAYROLL MANAGER group, and Dan, to whom Sally and Michael report, can have specific overrides to the restrictions placed explicitly on the PAYROLL MANAGER group. Where the database supports it, and if the user's authentication name in Hyperion System 9 BI+ corresponds, this table can be a view created from the roles the user has in the database. For example, in Oracle:
CREATE VIEW BRIOSECG (BGROUP, BUSER) AS SELECT GRANTED_ROLE, GRANTEE FROM DBA_ROLE_PRIVS
DBA_ROLE_PRIVS is a restricted table. Because the server reads the view using a configured database logon, it would not be appropriate to use USER_ROLE_PRIVS instead of DBA_ROLE_PRIVS: that user view reflects only the server's own roles, not those of the user on whose behalf the server is operating. Again, this is an Oracle example; other RDBMSs may or may not provide a similar mechanism. In some cases, depending on the database, a stored procedure could collect the role information for the users and populate a BRIOSECG table if a simple SELECT is inadequate to collect the information. This would require some means to invoke the procedure each time role definitions were changed. When using the database's catalog or some other means to populate BRIOSECG, the sample Interactive Reporting document, row_level_security.bqy, cannot be used to maintain user and group information. A special group, PUBLIC, exists. It does not need to be explicitly defined in BRIOSECG; all users are members of the PUBLIC group. Any data access restriction defined against the PUBLIC group applies to every user unless explicitly overridden, as described later. All users can be made part of a group at once by inserting a row where BUSER is PUBLIC and BGROUP is that group name. While this may seem redundant, given the existence of the PUBLIC group, it offers some benefits:
- It allows the database catalog technique described above to work. For example, in Oracle, a role can be granted to PUBLIC.
- It allows restrictions for a group other than PUBLIC to be applied to or removed from everyone in an instant.
- It provides more flexibility when using override specifications, as described later.
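As a sketch, assuming BRIOSECG has the two columns shown in the CREATE VIEW example above (BGROUP, BUSER), group membership rows might be inserted like this (the user and group names are illustrative):

```sql
-- Vidhya and Chi become members of the PAYROLL group
INSERT INTO BRIOSECG (BGROUP, BUSER) VALUES ('PAYROLL', 'VIDHYA');
INSERT INTO BRIOSECG (BGROUP, BUSER) VALUES ('PAYROLL', 'CHI');

-- Make every user a member of a group at once by using PUBLIC as BUSER
INSERT INTO BRIOSECG (BGROUP, BUSER) VALUES ('ALLSTAFF', 'PUBLIC');
```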
Note: Restrictions are never applied against a user named PUBLIC, but only the group PUBLIC. For this reason, do not use PUBLIC as a user name. Similarly, to avoid problems, do not name a group the same as a user name.
Columns in the BRIOSECR Table

Numeric ID column (INT): Contains an arbitrary numeric value. It should be unique, and it is useful for maintaining the table by whatever means the customer chooses. The servers do not rely upon this column and never access it; to that extent it is optional, but recommended. (It is required when using the sample Interactive Reporting document, row_level_security.bqy.) When the RDBMS supports it, a unique constraint or unique index should be applied to the table on this column.

USER_GRP: The name of the user, or the name of a group to which a user belongs. If PUBLIC, the restrictions are applied to all users.

SRCDB: Used to identify a topic in the Data Model. (In Interactive Reporting, a topic typically corresponds to a table in the database, but it could be a view in the database.) If the physical name property of the topic is of the form name1.name2.name3, this column represents name1. Most often, this represents the database in which the topic exists. This field is optional unless required by the connection in use. The most likely circumstance in which to encounter this requirement is with Sybase or Microsoft SQL Server, where the Interactive Reporting database connection (the connection definition file) is set for access to multiple databases.
Table 50 Columns in the BRIOSECR Table (Continued)

SRCOWNER (VARCHAR, can be null): Identifies the owner/schema of the topic in the Data Model. This is name2 in the three-part naming scheme shown above. If the topic's physical name property contains an owner, it must be used here as well.

SRCTBL (VARCHAR): Identifies the table/relation identified by the topic in the Data Model. This is name3 in the three-part naming scheme.

SRCCOL (VARCHAR): Identifies a column in SRCTBL. This is a topic item in Data Model terminology, an item that might appear on the Request line in a query built from the Data Model. In the context of the security implementation, the item named here is the object of the restrictions being defined by this row of the security table BRIOSECR. If this column contains an asterisk (*), all columns in SRCTBL are restricted.

Join database qualifier (VARCHAR, can be null): If present, defines the database name qualifier of a table/relation that must be joined to SRCTBL.

Join owner qualifier (VARCHAR, can be null): If present, defines the schema/owner name qualifier of the table/relation that must be joined to SRCTBL.

JOINTBL (VARCHAR, can be null): If present, names the table/relation that must be joined to SRCTBL.

JOINCOLS (VARCHAR, can be null): If present, names the column from SRCTBL to be joined to a column from JOINTBL.

Join column in JOINTBL (VARCHAR, can be null): If present, names the column in JOINTBL that is joined (always an equal join) to the column named in JOINCOLS.

CONSTRTT (CHAR(1), can be null): If present, identifies a table/relation to be used for applying a constraint (limit). This is a coded value. If the value is S, the column to be limited is in SRCTBL. If J, a column in JOINTBL is to be limited. If O, then for the current user/group, the restriction on the source column for the group/user named in column OVRRIDEG is lifted, rendering it ineffective. If this value is NULL, no additional restriction is defined; if the JOIN* columns are also all NULL, the column is not accessible at all to the user/group. This implements column-level security. See the functional use description of CONSTRTV for more information on column-level security.

CONSTRTC: The column in the table/relation identified by CONSTRTT to which a limit is applied.

CONSTRTO: The constraint operator, such as = or <> (not equal). BETWEEN and IN are valid operators; basically, any valid operator for the database can be supplied.
Table 50 Columns in the BRIOSECR Table (Continued)

CONSTRTV (VARCHAR, can be null): The value(s) to be used as a limit. Together with the contents of the CONSTRTC and CONSTRTO columns, the value(s) must form valid SQL syntax for a condition in a WHERE clause; subquery expressions, therefore, are allowed. Literal values should be enclosed in single quotes, or whatever delimiter is needed by the database for the type of literal being defined. If the operator is BETWEEN, the AND keyword separates the values. If :USER is used in the value, the user name is the limit value. If :GROUP is used, all groups of which the user is a member are used as the limiting values. Both :USER and :GROUP can be specified, separated by commas. The PUBLIC group must be named explicitly; it is not supplied by reference to :GROUP. When applying column-level security, CONSTRTV provides the SQL expression that effectively replaces the column on the Request line. For example, the value zero (0) might replace a numeric value that is used in the Interactive Reporting document but should not be accessible by the specified user/group. While any valid SQL expression that can be used in a SELECT list is permitted, pick a value that is acceptable for the likely use. For example, the word NULL is permitted, but in some cases it might not be the appropriate choice, because it could also end up in a GROUP BY clause.

OVRRIDEG: The name of a group or user. Used when CONSTRTT is set to O. If the group named in OVRRIDEG has a restriction on the source element, that restriction is effectively ignored for the user/group named in USER_GRP. SRCDB, SRCOWNER, SRCTBL, and SRCCOL as a collection must be equal between the row specifying the override and the row specifying the conditions to be overridden. (See examples.)
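A hedged sketch of one restriction row, using only the column names documented in Table 50. The numeric ID, the join database/owner qualifiers, and the JOINTBL-side join column are omitted here because their names are not shown in the table; the table and value names are illustrative:

```sql
-- For AMERICAS members, SALES.STORE_ID is usable only through a join
-- back to STORES, limited to rows where STORES.STATE = 'OH'.
-- CONSTRTT = 'J' means the constrained column lives in JOINTBL.
-- CONSTRTV stores the literal with its own quotes, hence the doubling.
INSERT INTO BRIOSECR
  (USER_GRP, SRCTBL, SRCCOL, JOINTBL, JOINCOLS,
   CONSTRTT, CONSTRTC, CONSTRTO, CONSTRTV)
VALUES
  ('AMERICAS', 'SALES', 'STORE_ID', 'STORES', 'STORE_ID',
   'J', 'STATE', '=', '''OH''');
```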
Figure 36
Figure 37 shows the data model in the published Interactive Reporting document.
Figure 37
In the sample Interactive Reporting document for maintaining row-level security information, once the information has been added, it would look something like Figure 38 when the AMERICAS group is selected.
Figure 38
Figure 39
(This is an example of column-level security. All values from this table, if they appear on the Request line, are substituted with NULL.) Where there are no extranet concerns, and it might be appropriate for all employees to know how the company is doing overall, such a blanket restriction is not recommended. Instead, restrict the use of the STORE_ID column, the only means by which the sales information can be tied back to any particular store, country, or region. This looks identical to the case above, except that STORE_ID is specified instead of an asterisk for the Source Column Name.
Overriding Constraints
Obviously, members of the AMERICAS group are also members of PUBLIC. So, regardless of the way the PUBLIC group was restricted, those restrictions are not to be applied to the AMERICAS group for the sales information. That group might be restricted in different ways, or not at all, and the same mechanism ensures that happens while the PUBLIC restrictions remain in place. Figure 40 shows this when using the sample Interactive Reporting document, row_level_security.bqy.
Figure 40
This overrides PUBLIC constraints only for this particular column. Restrictions on PUBLIC against other columns are still enforced against members of the AMERICAS group. If the restriction is on all columns of a table, designated by an asterisk, the override must also be specified with an asterisk, and then specific column constraints reapplied to groups as needed.
Cascading Restrictions
In order to give the members of the AMERICAS group access to sales information only for the appropriate region, the query must include references to columns in other tables that are not necessarily part of the existing data model. Row-level security functions the same whether or not the tables already exist in the data model. As seen in the table relationships pictured above, the region information is bridged to the sales information by the STORES table. Implementing a constraint that makes only sales information for a particular region available requires two entries in the BRIOSECR table: one to join sales to stores, and one to join stores to regions. The latter entry also requires a limit value for the region name. (A limit on REGION_ID could accomplish the same goal, but it is not as readable, especially in an example. See the discussion of subqueries that follows for another perspective on limits on ID-type columns.) The first restriction required for this example is on the STORE_ID column. In order to use that column, a join must be made back to the STORES table. Figure 41 shows how this join is specified.
Figure 41
Now, the join to the Regions table is added, with the appropriate constraining value, as shown in Figure 42.
Figure 42
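With those two entries in place, the SQL generated for an AMERICAS member querying sales might look roughly like the following sketch. The alias numbers and the REGIONS column names are illustrative assumptions, not taken from the product:

```sql
-- Roughly the effect of the two BRIOSECR entries: SALES joined to
-- STORES, STORES joined to REGIONS, REGIONS limited to the Americas.
SELECT AL1.STORE_ID, AL1.AMOUNT
FROM SALES AL1, STORES AL2, REGIONS AL3
WHERE AL1.STORE_ID = AL2.STORE_ID
  AND AL2.REGION_ID = AL3.REGION_ID
  AND AL3.REGION_NAME = 'AMERICAS';
```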
The only remaining part of the example is letting user BRIO, also a member of the AMERICAS group, see the data in an unrestricted way. Handling this case is left as an exercise for the reader.
Custom SQL
Custom SQL is used to provide special SQL syntax that the software does not generate. In the absence of row-level security, users with proper permissions on the Interactive Reporting document can modify custom SQL to produce ad hoc results. When row-level security is in place, Custom SQL is affected in two ways:
- If the published Interactive Reporting document contains an open Custom SQL window, the SQL is used as is when the user processes a query; no restrictions are applied to it. However, the user cannot modify the SQL. While this can be a handy feature, take care when publishing Interactive Reporting documents that require custom SQL so that they do not compromise the security requirements.
- If the user chooses the Reset button on the Custom SQL window, the SQL shown includes the data restrictions. The original intent of the Custom SQL is lost, and the user cannot get it back except by requesting the Interactive Reporting document from the server again.
Limits
The row-level security feature affects limits in three ways.

First, if a user is restricted from accessing the content of certain columns, and the user attempts to show values when setting a limit on a restricted column, the restrictions are applied to the SQL used to get the show-values list. That way, the user cannot see and specify a value they would not otherwise be permitted to access.

Second, setting limits can result in some perhaps unexpected behavior when coupled with row-level security restrictions. This is best explained by example. In order to read the amount of sales, the user is restricted to a join on the STORE_ID column back to the STORES table; in addition, the user can see information for a STORE_ID only when the state is Ohio. This user tries to set a limit on the unrestricted column STATE and chooses something other than Ohio, thinking this a way to subvert the data restrictions. Unfortunately for that user, no sales amount information is returned at all in this case. The SQL specifies where state = user-selected value AND state = OH. Obviously, the state cannot be two different values at the same time, so no data is returned. Of course, a user may try to set a limit on the CITY column instead of the STATE column, thinking the city name might exist in multiple states. As long as the need exists to access the amount-of-sales column in the SALES table with identifying store information, though, the state limit is still applied, and no data the user should not be able to see is accessible to that user. It just will not prevent a user from getting a list of stores when sales data is not
part of that list. Generally speaking, restricting access to facts based on the foreign key in the fact table(s) works best. If it is necessary to restrict the user's access to a list of stores, these dimension restrictions work best when applied to all columns in the dimension table, with a limit on the source table. For example, using the requirements described above to restrict the amount-of-sales information to Ohio only, with the same restriction on the dimension-only queries, do not apply any limit on access to the amount-of-sales information except that it must be joined back to the STORES table on STORE_ID. Then, add a restriction for all columns in the STORES table, limiting it to only stores in Ohio. This limits access to both fact and dimension data.

Third, when setting a limit using Show Values, it has already been noted that any restrictions on the column to be limited are applied to the SQL that generates the show-values list. For example, using the restrictions described in the previous paragraph, attempting to show the values list for the CITY column would be constrained to those cities in Ohio.

Now, consider the following scenario. The SALES fact table also has a TRANSACTION DATE and a PRODUCT_ID column. The transaction date column is tied back to a PERIODS table, where dates are broken down into quarters, fiscal years, months, and so on. In this somewhat contrived example, a restriction is placed on the PERIODS table, where values there are joined back to the SALES transaction table and restricted by PRODUCT_ID values in a certain range. The user sets a limit on fiscal year in the PERIODS table and invokes Show Values in the Limit dialog box to pick the range. Because of the restrictions in place, only one fiscal year is available, and the user picks it. Now, the user builds a query that does not request the FISCAL YEAR column itself but does reference the PRODUCT_ID field, and processes it. This query returns, for the sake of argument, 100 rows.
Now the user decides there is a need to see the fiscal year value and adds it to the Request line. Reprocessing the query only returns 50 rows. Why? In the first case, PRODUCT_ID values outside of the range allowed when querying the FISCAL YEAR column will appear in the results. In the second case, the query will cause the restriction on PRODUCT_ID range to be included. Restrictions are only applied when a user requests to see data. There was no request to see the FISCAL YEAR column in the first case, except while setting the limit. There is no restriction on seeing PRODUCT_ID values. This example is contrived because restricting access to a dimension based on data in a fact table would be extremely unusual. Nevertheless, it illustrates a behavior that should be kept in mind when implementing restrictions.
Naming
Another way to set the restrictions described above is by a subquery. Instead of directly setting the limit on the STATE column, limit the values in the STORE_ID column in the STORES table. The constraint operator would be IN, and the constraint values field might look something like this:
(SELECT S.STORE_ID FROM STORES S WHERE S.STATE = 'OH')
Now, no matter what limit the user sets in the STORES table, they are always constrained to the set of store IDs allowed based on their group memberships and their own user name. Even if a city outside the allowed state is chosen, such as a city name that exists in more than one state, any stores in that other city will not show up in the results.
Using a subquery can be useful when incorporating existing security systems into the row-level security feature of Interactive Reporting. When constructing constraints of this type, it is especially important to know SQL. For example, to specify a subquery, it helps to know that a subquery is always enclosed in parentheses. It is also important to know how the Workspace generates SQL and to follow its naming conventions to make sure the syntax generated is appropriate.
- For table references in the FROM clause, use From.tablename, where tablename is the display name seen in the Interactive Reporting document's data model. If the display name contains a space, use an underscore to represent the space.
- For column names, use tablename.columnname, following the same rule as above, except that the From. prefix is not used.
Alias Names
By default, when processing user queries, table references in the SQL are always given alias names. Alias names are convenient shorthand for long table references, and they are required when building correlated subqueries. These alias names take the form ALn, where n is replaced by an arbitrary number. These numbers are usually based on the topic priority properties of the data model and can easily change based on several factors; for example, a user with the proper permissions can rearrange the topics, thus giving them different priorities. Because these numbers are dynamic, constraint specifications should never rely on them. Instead, by using the naming scheme above, the appropriate alias is added to the constraints. So, if the requirement is a correlated subquery, the appropriate name is given to the column in the outer query when referenced by the correlated subquery. In the example above, using a subquery to restrict STORE_ID values to those in a specific state, it was neither necessary nor desirable to use the Hyperion Solutions naming conventions: there, the set of values was derived in a subquery that operated independently of the main query. Consequently, From. was not used in the FROM clause of the subquery, and the alias names were chosen so as not to conflict with the alias names generated automatically by the software. To use a correlated subquery, then, consider syntax like the following:
FROM STORES S WHERE S.STORE_ID = Stores.Store_Id
The reference to the right of the equal sign will pick up the alias name from the outer query and thus provide the correct correlation requirements.
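Putting the pieces together, a correlated constraint value might be sketched as follows. This is an illustrative assumption built from the earlier STORES example, not product-supplied SQL; the outer-query reference Stores.Store_Id follows the naming convention described above so that the software substitutes the generated alias:

```sql
(SELECT S.STORE_ID
 FROM STORES S                       -- independent alias, no From. prefix
 WHERE S.STORE_ID = Stores.Store_Id  -- correlated to the outer query
   AND S.STATE = 'OH')
```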
Chapter 28
In This Chapter
This section describes how to use the dbgprint tool to diagnose connectivity problems in the Interactive Reporting products.
Connectivity Troubleshooting with dbgprint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 484 dbgprint and Interactive Reporting Studio. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 484 dbgprint and the Interactive Reporting Web Client. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 485
If you are using Notepad, you first have to type a space or character before you can save the file. Do not save the file with a file extension. In the UNIX environment, you need to create a file named DbgPrint (note the capitalization). This file is placed in the bin directory for Interactive Reporting Studio. If you are operating in a Windows environment, make sure that no extension is appended to the end of the file name. If you are using Notepad as the text editor, the .txt extension is automatically appended to the saved file; remove any extension before you proceed to the next step.
4 Close the text editor and start Interactive Reporting Studio by opening the actual application file.
In some instances, dbgprint does not log information if Interactive Reporting Studio was started through an alias or shortcut. Instead, start Interactive Reporting Studio using the Finder (Macintosh) or Windows Explorer. Clicking a shortcut works only if the Start In field in the Properties dialog box for the shortcut shows the path to the brioqry.exe file.
5 Once Interactive Reporting Studio is running, recreate the steps that resulted in the previous error or problem, or follow any instructions given to you by a Hyperion Solutions customer support representative.
- Connect to the database
- Retrieve a list of tables
- Add tables to the workspace
- Create and process a query
- Set a limit
6 Once you have completed the above tasks, quit Interactive Reporting Studio and open the dbgprint file.
7 View the contents of the dbgprint file.
The file should contain status information detailing the Interactive Reporting Studio logon session. You will probably be asked to either fax or email the contents of the dbgprint file to Hyperion Solutions. If the file is blank, review the previous steps and repeat the process.
Note: If you need to run another dbgprint file, save the contents of the file with a unique name. Each time you run the brioqry.exe file, the existing dbgprint file is overwritten.
If you are using Notepad, you first have to type a space or character before you can save the file. If you are operating in a Windows environment, make sure that no extensions are appended to the end of the file name. If you are using Notepad as the text editor, the .txt extension is automatically appended to the saved file. Make sure you remove any extension before you proceed to the next step.
Chapter 29
Table 52
The Interactive Reporting INI files are simple text files that are used to store system and application settings. Table 52 shows each INI file used by each application and the type of information it contains.
Table 52 INI Files Used in Interactive Reporting

Interactive Reporting Studio:
BQFORMAT.INI: Stores locale and custom numeric formats.
BQMETA0.INI: Stores OMI metadata settings for supported metadata sources.
BQTOOLS.INI: Stores custom menu definitions (present only if custom menus are defined).
BRIOQPLG.INI

Servers (all server INI files are stored in subdirectories off the HYPERION_HOME directory, not in the Windows OS directory; internationalized versions of the BQFORMAT.INI and BQMETA0.INI files are also present):
BQFORMAT.INI: Stores locale and custom numeric formats.
BQTOOLS.INI: Stores custom menu definitions (present only if custom menus are defined).

Workspace:
BQFORMAT.INI: Stores locale and custom numeric formats for use with the Workspace.
INTELLIGENCE.INI: Stores default configuration settings for the UI.
SQR.INI
Chapter 1
In This Chapter
Administrators use the WebAnalysis.properties file and related Web Analysis utilities to configure, maintain, and optimize Web Analysis behavior in BI+.
Web Analysis Configuration Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 492 Web Analysis Utilities. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 497 Changing Web Analysis Ports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 499
Controlling Result Sets on page 493
Configuring Java Plug-in Versions on page 493
Configuring the Repository on page 494
Configuring Hyperion System 9 BI+ Analytic Deployment Services on page 494
Considerations for Configuring Analytic Deployment Services on page 495
Resolving Analytic Services Subscriptions in Web Analysis on page 496
Configuring a Web Analysis Mail Server on page 496
Formatting Data Value Tool Tips on page 496
Setting Web Analysis to Log Queries on page 496
Exporting Raw Data Values to Excel on page 497
These settings control the maximum number of result set rows:

MaxDataCellLimit: OLAP database connection query result set size; default is 50000
MaxJdbcCellCount: Relational database connection query result set size; default is 50000
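For example, a WebAnalysis.properties fragment raising both limits might look like the following sketch (the values shown are illustrative, not recommended settings):

```
MaxDataCellLimit=100000
MaxJdbcCellCount=100000
```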
3 Edit these values and remove the preceding underscore to change the Sun Java Plug-in version.
Alphabetic characters at the beginning and end of the string are set by Sun. Do not change these characters. The first two sets of numeric digits indicate the Sun Java plug-in version. The third set of numeric digits indicates the patch number. For example, this is the class identifier (CLSID) value for Sun Java plug-in 1.3.1_10:
clsid:CAFEEFAC-0013-0001-0010-ABCDEFFEDCBA
Valid alternate values for each variable are commented out on the line after the variable. To use an alternate value, remove the pound sign (#) and place it before the old value. When moving, migrating, or upgrading the repository, you may need to edit these variables and re-encrypt the password (see Repository Password Encryption Utility on page 497).
Hyperion System 9 BI+ Analytic Deployment Services provides Analytic Services connection alternatives for administrators running Web Analysis on Solaris operating systems. Review Considerations for Configuring Analytic Deployment Services on page 495 before configuring Analytic Deployment Services. See the Hyperion System 9 BI+ Analytic Deployment Services Installation Guide for complete information on installing, configuring, and using this service.
Table 53
EDS Variables Description A value of true prompts ADM to use Analytic Deployment Services to access Analytic Services; a value of false enables ADM to use the default JNDI driver Analytic Deployment Services driver to ADM; do not modify Server running Analytic Deployment Services Locale for Analytic Deployment Services Domain for Analytic Deployment Services; do not modify this variable ORB type for Analytic Deployment Services; only TCP/IP is supported Analytic Deployment Services communication port
Table 53
EDS Variables (Continued): The method ADM uses for Analytic Deployment Services connection pooling. A connection pool is a set of login sessions from Analytic Deployment Services to an Analytic Services server. Analytic Deployment Services uses a connection pool to process requests for Analytic Services services. There are three valid combinations of these properties:
- EESUseConnPool=false: EESConnPerOp is ignored; connection pooling is not used.
- EESUseConnPool=true, EESConnPerOp=false: Connection pooling is used; a connection is held from when the cube view is opened until it is closed.
- EESUseConnPool=true, EESConnPerOp=true: Connection pooling is used; the connection is released immediately after each operation.
EESUseReportOption
The user name and password used by the ADM Analytic Deployment Services driver must be valid on both the Analytic Deployment Services server and the Analytic Services server. The ChangePassword and SetPassword server actions attempt to modify both the Analytic Deployment Services and Analytic Services OLAP server passwords. To succeed, olap.server.autoChangePassword must be set to true, and the administrator user ID specified in the EDS_ES_HOME/bin directory (olap.server.admin.name=admin) must differ from the user ID being passed by the action. Two archives installed with Analytic Deployment Services must be defined in the Web Analysis classpath: ess_es_server.jar and ess_japi.jar. Hyperion does not recommend implementing Analytic Deployment Services on AIX platforms.
3 Scroll to the end of the file.
4 For the MailServer=<localhost> line, remove the pound signs (#) and enter a value for localhost.
5 Save the file.
6 Restart the application server.
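After step 4, the entry might look like this. This is a sketch; the SMTP host name shown is illustrative, not taken from the guide.

```properties
# Before: the entry ships commented out
#MailServer=<localhost>
# After: pound sign removed and an SMTP host filled in (host name illustrative)
MailServer=mail.example.com
```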
which display as small boxes over data cells when the cursor triggers a float-over event. When the variable FormatToolTips=true, tooltips display data values unformatted, in scientific notation up to 1E7. When FormatToolTips=false, or when the variable is not specified, tooltips display data values in a format that matches the spreadsheet grid.
- Repository Password Encryption Utility on page 497
- Web Analysis Configuration Test Servlet on page 498
5 Change the db.password-encrypted value to false.
6 Save your changes.
7 Navigate to \\WebAnalysis\conf\ and run EncryptUtil.bat or EncryptUtil.sh.
You may use alternative methods to execute this file. EncryptUtil locates the user ID, password, and encryption variable; encrypts the password; and resets db.password-encrypted to true. To review the changes, open WebAnalysis.properties.
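The effect on WebAnalysis.properties might look like the following sketch. The db.password key name and the encrypted value shown are assumptions for illustration; only db.password-encrypted is named in the text above.

```properties
# Before running EncryptUtil (key name and value illustrative):
db.password=myPlainTextPassword
db.password-encrypted=false

# After running EncryptUtil (encrypted value invented for illustration):
db.password=9A7F3C1B20E4
db.password-encrypted=true
```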
To launch Configuration Test Servlet, open a Web browser and type this URL:
http://<hostname>/WebAnalysis/Config
Configuration Test Servlet provides links to configuration information as discussed in these topics:
- List Environment Variables on page 498
- View Web Analysis Property Files on page 498
- Services Framework Test on page 498
- Test Pages for Analytic Services, Financial Management, and SAP BW ODBO on page 499
Tip: Use the browser's Back button or the Available Tests link at the bottom of the page to return to the
Test Pages for Analytic Services, Financial Management, and SAP BW ODBO
The test pages for Analytic Services, Financial Management, and SAP BW ODBO provide the following configuration information:
- ADM Environment Variables
- ADM Property File Locations (click a link to view the property file)
- ADM Jar Locations
- Version Information
You use these pages to test your connectivity (using ADM) to Analytic Services, Financial Management, and SAP BW ODBO.
b. Change the values for the shutdown and service connector ports:
   At the top: Server port=port_number
   At the bottom: Connector port=port_number
c. Save and close Server.xml.
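In a Tomcat-style Server.xml, the two ports appear roughly as follows. This is a sketch; the port numbers and attribute layout are illustrative, not taken from the guide.

```xml
<!-- At the top: the shutdown (server) port -->
<Server port="8007" shutdown="SHUTDOWN">
  <!-- ... other elements ... -->
  <!-- At the bottom: the service connector (AJP) port -->
  <Connector port="8009" protocol="AJP/1.3" />
</Server>
```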
2 Update the HTTP Server with the Web Analysis port number:
a. In Windows Explorer, navigate to
%HYPERION_HOME%\common\httpServers\Apache\2.052\conf and open HYSLWorkers.properties for editing.
b. Change the Web Analysis port number to match the AJP port specified by the Service Connector port parameter in Server.xml (Connector port=).
c. Save and close HYSLWorkers.properties.
3 Update the Hyperion Apache HTTP Server with the Web Analysis port number:
a. In Windows Explorer, navigate to
%HYPERION_HOME%\common\httpServers\Apache\2.052\conf and open HTTP.conf for editing.
b. Change the Listen port number to match the port specified by the Service Connector port parameter in Server.xml (Connector port=).
c. Save and close HTTP.conf.
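In HTTP.conf, the directive in question is Apache's standard Listen directive. With an illustrative port number, the edited line might read:

```apache
# Listen on the Web Analysis port
# (number illustrative; it must match the Connector port in Server.xml)
Listen 8009
```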
4 Restart Hyperion Apache HTTP Server and any applications using the HTTP Server.
APPENDIX A

Backup Strategies
Standard data center policies for database backups include incremental daily backups and weekly full backups with off-site storage to protect an organization's investment. When you back up Workspace, plan the backup in the same way that you plan other database backups.
In This Chapter
- What to Backup, page 502
- General Backup Procedure, page 502
- Backing Up the Workspace File System, page 502
- Sample Backup Script, page 505
- Backing Up the Repository Database, page 506
- Backing Up Clients, page 506
What to Backup
You must back up the following items in your system:
- File system, which contains Workspace content and other system information (including files in other directories and on other hosts)
- Repository database, which contains user and item metadata
- Report registry keys from the same point in time (Windows only)
- Shared Services
Note: For information about backing up Shared Services, see the Hyperion System 9 Shared Services Installation Guide.
Workspace maintains an item repository in the native file system and stores metadata, or descriptive information, about each user and object in an RDBMS.
Note: To recover data, restore the database and file system backups (and registry if required), and restart the services.
- Complete: Backs up the entire system. Your organization's policies and procedures determine whether and how often you perform a complete backup.
- Post-installation: Backs up certain directories; performed after completing an installation and before using the system.
- Daily incremental: Backs up only files that are new or modified since the previous day. Daily incremental backups involve directories that contain frequently changing information, such as repository content and log files.
- Weekly full: Backs up all files in the directories for which you do incremental backups on a daily basis.
- As needed: Backs up data only after changes are made, rather than on a regular schedule. As-needed backups involve directories containing files that are customizable but are not modified regularly.
The Hyperion Home directory contains the Workspace products you installed on the host. Subdirectories of Hyperion Home include \BIPlus and \common, among others.
Complete Backup
To back up your system comprehensively, back up the Hyperion Home directory. This is the default installation directory for all Hyperion Solutions products on a given host.
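A complete backup of the Hyperion Home directory can be sketched as a dated tar archive. This is a minimal sketch, not the guide's own Sample Backup Script; the function name, paths, and archive naming are illustrative assumptions.

```shell
#!/bin/sh
# Sketch of a complete (full) backup of the Hyperion Home directory.
# Paths and names are illustrative, not taken from the guide.
backup_full() {
    src=$1; dest=$2
    stamp=$(date +%Y%m%d)
    mkdir -p "$dest"
    out="$dest/hyperion_full_$stamp.tar.gz"
    # -C archives the directory relative to its parent
    tar -czf "$out" -C "$(dirname "$src")" "$(basename "$src")"
    echo "$out"
}

# Demonstration on a throwaway directory standing in for Hyperion Home:
demo_home=$(mktemp -d)/hyperion
mkdir -p "$demo_home/BIPlus/common/config"
echo "demo" > "$demo_home/BIPlus/common/config/demo.properties"
archive=$(backup_full "$demo_home" "$(mktemp -d)")
tar -tzf "$archive" | grep -q "BIPlus/common/config/demo.properties" && echo "backup ok"
```

In production, the source would be the real Hyperion Home path and the destination a backup volume covered by your off-site rotation policy.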
Post-Installation
Immediately after installing, back up these directories:
- BIPlus\Install: All configuration information defined during installation; back up on all hosts and compress each backup
- BIPlus\bin: Start batch scripts for each service, and the ConfigFileAdmin utility used by the administrator to decode and change passwords (typically, the only password of interest is the RDBMS login password)
- BIPlus\common\config: Service configuration files used at service startup
- JAR files required by one or more Workspace components and library files for Job Utilities, LSC, and RSC
- JDBC drivers required to run the Workspace services
- Required ODBC drivers
- Files necessary to manipulate the metadata for versions of Production Reporting
As Needed
Back up the following directories as needed:
- BIPlus\bin: Start batch scripts for each service, and the ConfigFileAdmin utility used by the administrator to decode and change passwords (typically, the only password of interest is the RDBMS login password)
- BIPlus\common\config: Service configuration files used at service startup, server.xml, and config.dat
- BIPlus\data: Directories associated with services
Table 54  Backup Reference

Contents:
- Library files for Job Utilities
- Workspace startup batch scripts for each service, and the ConfigFileAdmin utility used by the administrator to decode and change passwords (typically, the only password of interest is the RDBMS login password)
- On Windows systems: the Setup.exe program file, used to create or delete services running as Windows Services, and to update the Windows Registry information

Backup Requirements:
- After initial installation
- After initial installation and after any changes are made to start scripts
Table 54  Backup Reference (Continued)

- Content (repository files): Daily incremental, weekly full (consistent with company backup policy)
- Service configuration files used at service startup, server.xml, and config.dat: After initial installation, before and after subsequent service configuration changes that focus on adding and removing services to a given domain
- Configuration information defined during installation: Perform after initial installation on each host; back up on each host and compress each backup
- JAR files required by Hyperion components: After initial installation
- Log files for services operating on a computer: Daily incrementals, weekly fulls (consistent with company backup policy)
- Files necessary to manipulate the metadata for versions of Production Reporting: After initial installation
BIPlus\Install
Backing Up Clients
The backup needs of Workspace client installations are minimal. You should perform a standard post-installation full backup according to your company policy. Thereafter, the only files you need to back up are these servlet files:
- Servlet configuration file, ws.conf on Windows or wsrun_platform on UNIX, located in the /WEB-INF/config directory of your servlet engine deployment (under BIPLUS/Appserver)
- /WEB-INF/conf/BpmServer.properties
Glossary
access control  A security mechanism that manages a user's privileges or permissions for viewing, modifying, and importing files or system resources.
access privileges  The level of access (for example, view, modify, run, full control) that the importer of an item grants to others.
accountability map  A visual, hierarchical representation of the responsibility, reporting, and dependency structure of your organization. An accountability map depicts how each accountability team in your organization interacts to achieve strategic goals. An accountability team is also known as a critical business area (team, department, office, and so on).
action  A task or group of tasks executed to achieve one or more strategic objectives. In a Hyperion Performance Scorecard application, each action box represents an activity or task that helps to accomplish a strategic objective. Each action is usually assigned measures.
actions  Job output definitions; the output of an Interactive Reporting job is defined in terms of a series of actions.
active group  A group that is entitled to access the system.
active service  A service whose Run Type is set to Start rather than Hold.
active user  A user who is entitled to access the system.
active user/user group  The user or user group identified as the current user by user preferences. Determines default user preferences, dynamic options, access, and file permissions. You can set the active user to your user ID or any user group to which you belong.
adaptive states  Interactive Reporting levels of permission. There are six levels of permission: view only, view and process, analyze, analyze and process, query and process, and datamodel and analyze.
aggregate cell  A cell comprising several cells. For example, a data cell that uses Children(Year) expands to four cells containing Quarter 1, Quarter 2, Quarter 3, and Quarter 4 data.
aggregate limit  A limit placed on an aggregated request line item or aggregated metatopic item.
alias  An alternative name.
Analysis Server  Web Analysis Server. An application server program that distributes report information and enables Web client communication with data sources.
Analyze  The main Web Analysis interface for analysis, presentation, and reporting.
appender  A Log4j term for destination.
application  A program running within a system.
application server  A middle-tier server that is used to deploy and run Web-based application processes.
asymmetric analysis  A report characterized by groups of members that differ by at least one member across groups. The number and names of members can differ.
attribute  Characteristics of dimension members that are not stored in the data source but calculated on demand. You can select, group, or calculate members that have a specified attribute. For example, an Employee Number dimension member may have attributes of Name, Age, or Address.
attribute dimension  A type of dimension that enables analysis based on the attributes or qualities of dimension members.
authentication service  A core service that manages one authentication system.
authentication service repository (ASR)  A database that contains a complete model of users/groups in an external system.
authentication system  A security measure designed to validate and manage users and groups.
axis  A two-dimensional report aspect used to arrange and relate multidimensional data, such as filters, pages, rows, and columns.
bar chart  A chart that can consist of one to 50 data sets, with any number of values assigned to each data set. Data sets are displayed as groups of corresponding bars, stacked bars, or individual bars in separate rows.
batch POV  A collection of all the dimensions on the user POV of every report and book in the batch. While scheduling the batch, you can set the members selected on the batch POV.
book  A container that holds a group of similar Financial Reporting documents. Books may specify dimension sections or dimension changes.
book POV  The dimension members for which a book is run. A book is a collection of Financial Reporting documents that may have dimensions on the user POV. Any dimension on a report's user POV is added to the book POV and defined there. The member for a dimension on the book POV can be one of the following items: (a) User POV, meaning the member is set by the end user just before the book is run. (b) A specific member; if a specific member is chosen, the selection is stored in the book definition and can be altered only in the Book Editor. (c) A set of member selections; a dimension left on the user POV of a report may be iterated over within the book. For example, a report may be run for four entities within one book.
bookmark  A link to a reporting document or a Web site, displayed on a personal page of a user. The two types of bookmarks are My Bookmarks and image bookmarks.
bounding rectangle  The perimeter that encapsulates the Interactive Reporting document content when embedding Interactive Reporting document sections in a personal page. It is required by Interactive Reporting to generate HTML and is specified in pixels for height and width or rows per page.
calculation  The process of aggregating data, or of running a calculation script on a database.
calculation script  A set of instructions telling Hyperion Essbase how to aggregate and extrapolate the values of a database.
Catalog pane  A pane displaying a list of elements available to the active section. For example, if Query is the active section, the Catalog pane displays a list of database tables. If Pivot is the active section, the Catalog pane displays a list of results columns. If Dashboard is the active section, the Catalog pane displays a list of embeddable sections, graphic tools, and control tools.
categories  Groupings by which data is organized (for example, month).
cause and effect map  A map that depicts how the elements that form your corporate strategy are interrelated and how they work together to meet your organization's strategic goals. A Cause and Effect map tab is automatically created for each of your Strategy maps.
cell  A unit of data representing the intersection of dimensions in a multidimensional database; the intersection of a row and a column in a worksheet.
chart  A graphical representation of spreadsheet data. The visual nature of charts expedites analysis, color-coding, and visual cues that aid comparisons. There are many different chart types.
chart cell value  Appears in the lower right corner of a chart on pages in the Monitor and Investigate sections. The Editor defines the chart cell value that you see in Enterprise Metrics. The chart cell value might display a metric on the chart, such as Booking $, or a calculation based on the metrics displayed on the chart, such as the ratio of Booking $ to Forecast $.
chart column  Enterprise Metrics Detail charts are displayed in columns below each Summary chart.
Chart section  With a varied selection of chart types and a complete arsenal of OLAP tools, such as group and drill-down, the Chart section is built to support simultaneous graphic reporting and ad hoc analysis.
Chart Spotlighter  A feature that enables you to color-code charts based on some condition in Interactive Reporting Studio.
chart template  A template that defines the metrics to display in Workspace charts.
child  A member that has a parent above it in the database outline.
choice list  A list of members that a report designer can specify for each dimension when defining the report's point of view. A user who wants to change the point of view for a dimension that uses a choice list can select only the members specified in that defined member list or those members that meet the criteria defined in the function for the dynamic list.
client  A client interface, such as Web Analysis Studio, or a workstation on a local area network.
clustered bar charts  Charts in which categories are viewed side by side within a given category; useful for side-by-side category analysis. Clustering is done only with vertical bar charts.
column  A vertical display of information in a grid or table. A column can contain data from a single field, derived data from a calculation, or textual information.
column heading  A part of a report that lists members across a page. When columns are defined that report on data from more than one dimension, nested column headings are produced. A member that is listed in a column heading is an attribute of all data values in its column.
computed item  A virtual column (as opposed to a column that is physically stored in the database or cube) that can be calculated by the database during a query, or by Interactive Reporting Studio in the Results section. Computed items are calculations of new data based on functions, data items, and operators provided in the dialog box and can be included in reports or reused to calculate other data.
connection file  A file used to connect to a data source.
console  The console is displayed on the left side of the Enterprise Metrics workspace. The console is context sensitive, depending on the page displayed.
content  Information stored in the repository for any type of file.
content area  The Contents pane appears on the right side of the Workspace and provides specific information for the page that you are using.
cookie  A small piece of information placed on your computer by a Web site.
correlated subqueries  Subqueries that are evaluated once for every row in the parent query. A correlated subquery is created by joining a topic item in the subquery with one of the topic items in the parent query.
critical business area (CBA)  An individual or a group organized into a division, region, plant, cost center, profit center, project team, or process; also called accountability team or business area.
critical success factor (CSF)  A capability that must be established and sustained to achieve a strategic objective. A CSF is owned by a strategic objective or a critical process and is a parent to one or more actions.
cube  The query result set from a multidimensional (OLAP) data source; a logically organized subset of OLAP database dimensions and members.
custom calendar  Any calendar created by an administrator.
custom report  A complex report from the Design Report module, composed of any combination of components.
cycle  An Interactive Reporting job parameter that is used when scheduled Interactive Reporting jobs need to process and produce different job output with one job run.
Dashboard  A collection of metrics and indicators that provide an interactive summary of your business. Dashboards enable you to build and deploy analytic applications.
Dashboard Home  A button that returns you to the Dashboard section designated as the Dashboard Home section. If you have only one Dashboard section, Dashboard Home returns to that section. If you have several Dashboard sections, the default Dashboard Home is the top Dashboard section in the Catalog pane. In Design mode, you can specify another Dashboard section to be the Dashboard Home section.
data  The values (monetary or non-monetary) associated with the query intersection.
data function  A function that computes aggregate values, including averages, maximums, counts, and other statistics that summarize groupings of data. You can use data functions to aggregate and compute data from the server before it reaches the Results section, or to compute different statistics for aggregated totals and items in the other analysis sections.
data layout  The data layout interface is used to edit a query, arrange dimensions, make alternative dimension member selections, or specify query options for the current section or data object.
data model  Any method of visualizing the informational needs of a system.
data object  A report component that displays the query result set. The display type of a single conventional data object can be set to spreadsheet, chart, or pinboard, and it displays OLAP query result sets. A SQL spreadsheet data object displays the result set of a SQL query, and the freeform grid data object displays the result set of any data source included in it.
data source  1. A data storage application. Varieties include multidimensional databases, relational databases, and files. 2. A named client-side object connecting report components to databases. Data source properties include database connections and queries.
database  A repository within Essbase Analytics that contains a multidimensional data storage array. Each database consists of a storage structure definition (outline), data, security definitions, and optional scripts.
database connection  A file that stores definitions and properties used to connect to data sources. Database connections enable database references to be portable and widely used.
database function  A predefined formula in a database.
default folder  A user's home folder.
descendant  Any member below a parent in the database outline. For example, in a dimension that includes years, quarters, and months, the members Qtr2 and April are descendants of the member Year.
Design Report  An interface in Web Analysis Studio for designing custom reports from a library of components.
Desktop  An interface that presents the icons to open items.
detail chart  A chart that provides the detailed information that you see in a Summary chart. Detail charts appear in the Investigate section in columns below the Summary charts. For example, if the Summary chart shows a pie chart, the Detail charts below represent each piece of the pie.
dimension  A data category used to organize business data for retrieval and preservation of values. Each dimension usually contains a hierarchy of related members grouped within it. For example, a Year dimension often includes members for each time period, such as quarters and months.
dimension tab  In the Pivot section, the tab that enables you to pivot data between rows and columns.
dimension table  1. A table that includes numerous attributes about a specific business process. 2. In Enterprise Metrics, a table in a star schema with a single-part primary key.
display type  One of three Web Analysis formats saved to the repository: spreadsheet, chart, and pinboard.
dog-ear  The flipped page corner in the upper right corner of the chart header area. You can click the dog-ear to display a shortcut menu. The dog-ear is displayed only on charts in the Investigate section.
drill  Allows you to investigate results reflected by a chart in the Investigate section. You can click a chart that hyperlinks to a lower (more detailed) level in the Investigate section. This concept is called drilling.
drill anywhere  A feature that enables you to drill into and add items to pivot reports residing in the Results section without returning to the Query section or trying to locate the item in the Catalog pane. Drill Anywhere items are broken out as new pivot label items.
drill target  The data to which you are drilling. Specifying a drill target automatically creates a hyperlink enabling you to click the chart to obtain additional detail.
drill to detail  A feature that enables you to retrieve items from a data model that are not in the Results section without rerunning the original query. This feature provides the ability to query the database interactively and filter the data that is returned. Drill-to-detail sets a limit on the query based on your selection and adds the returned value as a new pivot label item automatically.
drill-down  Navigation through the query result set using the organization of the dimensional hierarchy. Drilling down moves the user perspective from general aggregated data to more detailed data. While default drill-down typically refers to parent-child navigation, drilling can be customized to use other dimension member relationships. For example, drilling down can reveal the hierarchical relationships between year and quarters or between quarter and months.
drill-through  The navigation from a data value in one cube to corresponding data in another cube. For example, you can access context-sensitive transactional data. Drill-through usually occurs from the lowest point of atomicity in a database (detail) to a next level of detail in an external data source.
dynamic report  A report containing current data. A report becomes a dynamic report when you run it.
Edit Data  An interface for changing data values and sending edits back to Essbase Analytics.
employee  Users responsible for, or associated with, specific business objects. Employees do not necessarily work for an organization; for example, an analyst or consultant. An employee must be associated with a user account for authorization purposes.
ending period  The ending chart period allows you to adjust the date range shown in the chart. For example, an ending period of month produces a chart that shows information through the end of the current month.
exceptions  Values that satisfy predefined conditions. You can define formatting indicators or notify subscribing users when an exception has been generated.
external authentication  Logging on to Hyperion applications by means of user information stored outside the application, typically in a corporate authentication provider such as LDAP or Microsoft Windows NTLM.
externally triggered events  Non-time-based events that are used to schedule job runs.
Extract, Transform, and Load  Data source-specific programs that are used to extract and migrate data to an application.
extrapolation  A means of showing projected figures. Extrapolation from the current date to the end of the current period is displayed on Enterprise Metrics charts with a white area of the bar. If a line chart shows extrapolation, the line that is extrapolated is dotted.
fact table  The central table in a star join schema, characterized by a foreign key and elements drawn from a dimension table. This table typically contains numeric data that can be related to all other tables in the schema.
filter  A filter is used to limit data. While every dimension in the cube must participate in every intersection, you can make filter selections that focus the intersections on a smaller portion of the cube. For example, in Interactive Reporting Studio, use a filter to exclude certain tables or data values. In Enterprise Metrics Studio, implement a filter by adding a where clause on a join statement.
folder  A file that contains other files for the purpose of ordering and structuring a hierarchy.
footer  The text or images that are displayed at the bottom of each page in a report. A footer can contain a page number, date, company logo, document title or file name, author name, and so on. Footers can contain dynamic functions as well as static text.
format  The visual characteristics of a document or a report object.
free-form grid  A data object that presents OLAP, relational, and manually entered data together and enables you to leverage all these data sources in integrated dynamic calculations.
generic jobs  Jobs that are neither Production Reporting nor Interactive Reporting jobs.
grid POV  A means for specifying members for a dimension on a grid without placing the dimension on the row, column, or page intersection. A report designer can set the POV values at the grid level, preventing the user POV from affecting that particular grid. If a dimension has only one value for the entire grid, the dimension should be put into the grid POV instead of the row, column, or page.
group  A construct that enables the assignment of users with similar system access requirements.
grouping columns A feature in the Results and Table sections that creates a new column in a dataset by grouping data from an already existing column. Grouping columns consolidate nonnumeric data values into more general group values and map the group values to a new column in the dataset. header The text or images that are displayed at the top of each page in a report. A header can contain a page number, date, company logo, document title or file name, author name, and so on. Headers can contain dynamic functions as well as static text. highlighting Depending on your configuration, you may see highlighting applied to a chart cell value or ZoomChart detail values. A value can be highlighted in red (indicating the value is bad), yellow (indicating that the value is a warning), or green (indicating the value is good). host A server on which applications and services are installed. host properties Properties pertaining to a host, or if the host has multiple Install_Homes, to an Install_Home. The host properties are configured from LSC. hyperlink A link to a file, Web page, or an HTML page on an intranet. Hypertext Markup Language A programming language of tags that specify how Web browsers display data. image bookmarks Graphic links to Web personal pages or repository items. implied share A member with only one child, or a member with multiple children of which only one child is consolidated. For this reason the parent and child share the same value. inactive group A group that cannot access the system because an administrator has inactivated it. inactive service A service that has been placed on hold or excluded from the list of services to be started. inactive user A user who cannot access the system because an administrator has inactivated the user account. Install_Home A variable name for the path and directory where Hyperion applications are installed. 
Refers to a single instance of a Hyperion application when multiple applications have been installed on the same machine.
Interactive Reporting document sections Divisions of a Interactive Reporting document that are used to display and analyze information in different formats (such as Chart section and Pivot section). Interactive Reporting files or jobs Files created by Interactive Reporting and published into the repository as files or as jobs. Files and jobs have different capabilities. intersection A unit of data representing the intersection of dimensions in a multidimensional database; also, a worksheet cell. Java Database Connectivity A client-server communication protocol used by Java based clients and relational databases. The JDBC interface provides a calllevel API for SQL-based database access. job output Files or reports produced from running a job. job parameters The compile time and runtime values necessary to run a job. job parameters Reusable, named job parameters that are accessible only to the user who created them. jobs A collection of documents that have special properties and can be executed to generate output. A job can contain Interactive Reporting documents, Production Reporting documents or generic documents. join A link between two relational database tables based on common content in a column or record or a relational database concept indicating a link between two topics. A join typically occurs between identical or similar items within different topics. Joins enable row records in different tables to be linked on the basis of shared information in a column field. For example, a row record in the Customer table is joined to a related record in the Orders table when the Customer ID value for the record is the same in each table. This enables the order record to be linked with the record of the customer who placed the order. If you request items from unjoined topics, the database server has no way to correlate the information between the two tables and leads to awkward datasets and run-on queries. join path A predetermined join configuration for a data model. 
Administrators create join paths so that users can select the type of data model needed from a user-friendly prompt when processing a query. Join paths ensure that the correct tables in a complex data model are used in a query.
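The Customer/Orders join described in the join entry above can be sketched in SQL. The following minimal example uses Python's built-in sqlite3 module; the table names, column names, and sample rows (Customer, Orders, CustomerID) are hypothetical illustrations, not objects from a Hyperion repository:

```python
import sqlite3

# Build a throwaway in-memory database with the two example tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Customer (CustomerID INTEGER, Name TEXT);
    CREATE TABLE Orders   (OrderID INTEGER, CustomerID INTEGER, Amount REAL);
    INSERT INTO Customer VALUES (1, 'Acme Corp'), (2, 'Widget Inc');
    INSERT INTO Orders   VALUES (100, 1, 250.0), (101, 2, 75.5);
""")

# The join links each order row to the customer row that shares the
# same CustomerID value, exactly as the glossary entry describes.
rows = conn.execute("""
    SELECT c.Name, o.OrderID, o.Amount
    FROM Customer c
    JOIN Orders o ON o.CustomerID = c.CustomerID
    ORDER BY o.OrderID
""").fetchall()

print(rows)  # [('Acme Corp', 100, 250.0), ('Widget Inc', 101, 75.5)]
```

Without the `ON` clause correlating the two tables, the server would return the cross product of all rows, which is the "run-on query" behavior the entry warns about.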
512
Glossary
JSP Java Server Pages. layer To stack a single object in a relative position (send to back or front, or bring forward or backward) to other objects. legend box An informative box containing color-keyed labels to identify the data categories of a given dimension. level A hierarchical layer within the database outline or tree structure. line chart A chart that displays one to 50 data sets, with automatic, uniform spacing along the X-axis. Each data set is rendered by a line. A line chart can optionally show each line set stacked on the preceding ones, using either the absolute value or a normalized value from 0 to 100 percent. link Link files are fixed references to a specific object in the repository. Links can reference folders, files, shortcuts, and other links using unique identifiers. Links present their targets in the current folder, regardless of where the targets are located or how the targets are renamed. linked data model Documents that are linked to a master copy in a repository. When changes are made to the master, users are automatically updated with the changes when they connect their duplicate copy to the database. linked reporting object A cell-based link to an external file in the Analytic Services database. Linked reporting objects can be cell notes, URLs, or files that contain text, audio, video, or pictures. Note that support of Analytic Services LROs in Financial Reporting applies only to cell notes at this time (by way of Cell Text functions). local report object A report object that is not linked to a Financial Reporting report object in Explorer. local results Results of other queries within the same data model. These results can be dragged into the data model to be used in local joins. Local results are displayed in the catalog when requested. locked data model A data model that cannot be modified by a user. logger Log4j term for where the logging message originates; the class or component of the system in which a log message originated.
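The logger concept above is not specific to Log4j; Python's standard logging module follows the same pattern, where a logger's name records the component in which a message originated. This is a cross-language sketch of the idea, not Hyperion's actual logging configuration, and the component name used here is invented:

```python
import logging

# Configure a simple format that prints the logger name first, so each
# message carries the component it originated from.
logging.basicConfig(format="%(name)s %(levelname)s %(message)s")

# Each component obtains a logger named after itself; that name is the
# "where the message originated" that the glossary entry describes.
logger = logging.getLogger("repository.service")
logger.warning("connection pool exhausted")
# emits: repository.service WARNING connection pool exhausted
```

Because loggers are looked up by name, a later call to `logging.getLogger("repository.service")` anywhere in the process returns the same logger object, so messages from one component can be filtered or redirected as a group.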
LSC services The services that are configured with the Local Service Configurator. They include Global Services Manager (GSM), Local Services Manager (LSM), Session Manager, Authentication Service, Authorization Service, Publisher Service, and, in some contexts, Data Access Service (DAS) and Interactive Reporting Service. Map Navigator A feature that displays your current position on a Strategy, Accountability, or Cause and Effect map. Your current position is indicated by a red outline on the Map Navigator. master data model A data model that exists independently and has multiple queries that reference it as a source. When you use a master data model, the text Locked Data Model is displayed in the Content pane of the Query section. This means that the data model is linked to the master data model displayed in the Data Model section, which may be hidden by an administrator. MDX (Multidimensional Expressions) The language used to give instructions to OLE DB for OLAP-compliant databases (MS Plato), as SQL is the language used for relational databases. When you build the OLAPQuery section's Outliner, Intelligence Clients translate your requests into MDX instructions. When you process the query, MDX is sent to the database server. The server returns a collection of records to your desktop that answer your query. measures Numeric values in an OLAP database cube that are available for analysis. Measures may be margin, cost of goods sold, unit sales, budget amount, and so on. member A discrete component within a dimension. A member identifies and differentiates the organization of similar units. For example, a time dimension might include such members as Jan, Feb, and Qtr1. member list A named group that references members, functions, or other member lists within a dimension. A member list can be system- or user-defined. metadata A set of data that defines and describes the properties and attributes of the data stored in a database or used by an application. 
Examples of metadata are dimension names, member names, properties, time periods, and security.
metric A numeric measurement computed from your business data. Metrics help you assess the performance of your business and analyze trends in your company. For immediate and intuitive understanding, Enterprise Metrics metrics display visually in charts. MIME Type (Multipurpose Internet Mail Extension) An attribute that describes the format of data in an item, so that the system knows which application to launch to open the object. A file's MIME type is determined either by the file extension or the HTTP header. Plug-ins tell browsers which MIME types they support and which file extensions correspond to each MIME type. minireport A component of a report that includes layout, content, hyperlinks, and the query or queries that load the report. Each report can include one or more minireports. missing data A marker indicating that data in the labeled location either does not exist, contains no meaningful value, or was never entered. model In Shared Services, a file or string of content containing an application-specific representation of data. Models are the basic data managed by Shared Services. Models are of two types: dimensional hierarchies and nondimensional application objects. Dimensional hierarchies include information such as entities and accounts. Nondimensional application objects include security files, member lists, calculation scripts, and web forms. multidimensional database A method of organizing, storing, and referencing data through three or more dimensions. An individual value is the intersection of a point for a set of dimensions. multithreading A client-server process that enables multiple users to work on the same applications without interfering with each other. native authentication The process of authenticating a user ID and password from within the server or application. note Additional information associated with a box, measure, scorecard, or map element. null value A value that is absent of data. 
Null values are not equal to zero.
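The file-extension half of MIME type determination, described in the MIME Type entry above, can be illustrated with Python's standard mimetypes module; the file name used is an arbitrary example:

```python
import mimetypes

# Guess a MIME type from the file extension alone. A server or browser
# uses this kind of mapping to decide which application opens the object.
mime, _encoding = mimetypes.guess_type("report.html")
print(mime)  # text/html
```

When the extension is unknown, `guess_type` returns `None`, which is why the HTTP `Content-Type` header exists as the second, authoritative determination path mentioned in the entry.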
OLAPQuery section A document section that analyzes and interacts with data stored in an OLAP cube. When you use Intelligence Clients to connect to an OLAP cube, the document immediately opens an OLAPQuery section. The OLAPQuery section displays the structure of the cube as a hierarchical tree in the Catalog pane. online analytical processing (OLAP) A multidimensional, multiuser, client-server computing environment for users who analyze consolidated enterprise data in real time. OLAP systems feature drilldown, data pivoting, complex calculations, trend analysis, and modeling. Open Catalog Extension (OCE) files Files that encapsulate database connection information. OCE files specify the database API (ODBC, SQL*Net, and so on), database software, the network address of the database server, and your database user name. Administrators create and publish OCE files. origin The intersection of two axes. page A display of information in a grid or table often represented by the Z-axis. A page can contain data from a single field, derived data from a calculation, or text. page member A member that is displayed on the page axis. palette A JASC-compliant file with an extension of PAL. Each palette contains 16 colors that complement each other and can be used to set the color elements of a dashboard. performance indicator An image file used to represent measure and scorecard performance based on a range you specify; also called a status symbol. You can use the default performance indicators or create an unlimited number of your own. period A time interval that is displayed along the X-axis of a chart. Periods might be days, weeks, months, quarters, or years. personal pages Your personal window to information in the repository. You select what information to display, as well as its layout and colors. personal recurring time events Reusable time events that are accessible only to the user who created them. personal variable A named selection statement of complex member selections.
perspective A category used to group measures on a scorecard or strategic objectives within an application. A perspective can represent a key stakeholder (such as a customer, employee, or shareholder/financial) or a key competency area (such as time, cost, or quality). pie chart A chart that shows one data set segmented in a pie formation. pinboard One of the three data object display types. Pinboards are graphics, composed of backgrounds and interactive icons called pins. Pinboards require traffic lighting definitions. pins Interactive icons placed on graphic reports called pinboards. Pins are dynamic. They can change images and traffic lighting color based on the underlying data values and analysis tools criteria. plot area The area bounded by the X, Y, and Z axes; for pie charts, the rectangular area immediately surrounding the pie. predefined drill paths Paths that enable you to drill directly to the next level of detail, as defined in the data model. presentation A playlist of Web Analysis documents. Playlists enable reports to be grouped, organized, ordered, distributed, and reviewed. Presentations are not reports copied into a set. A presentation is a list of pointers referencing reports in the repository. primary measure A high-priority measure that is more important to your company and business needs than many other measures. Primary measures are displayed in the Contents frame and have Performance reports. private application An application for the exclusive use of a product to store and manage Shared Services models. A private application is created for a product during the registration process. Production Reporting A specialized programming language for data access, data manipulation, and creating Production Reporting documents. property A characteristic of an object, such as size, color, or type. proxy server A server that acts as an intermediary between a workstation user and the Internet to ensure security.
public job parameters Reusable, named job parameters created by an administrator and accessible to users who have the requisite access privileges. public recurring time events Reusable time events created by an administrator and accessible through the access control system. range A set of values that includes an upper and lower limit, and the values that fall between the limits. A range can consist of numbers, amounts, or dates. reconfigure URL A URL used to reload servlet configuration settings dynamically when a user is already logged in to the Workspace. recurring time event An event that specifies a starting point and the frequency for running a job. relational database A database that stores its information in tables related or joined to each other by common pieces of information called keys. Tables are subdivided into column fields that contain related information. Column fields have parents and children. For example, the Customer table may have columns including Name, Address, and ID number. Each table contains row records that describe information about a singular entity, object, or event, such as a person, product, or transaction. Row records are segmented by column fields. Rows contain the data that you retrieve from the database. Database tables are linked by joins. (See also join.) report footer See footer. report header See header. report object A basic element in report designs. Report objects have specific properties that define their behavior or appearance. Report objects include text boxes, grids, images, and charts. Reports section A dynamic, analytical report writer that provides users with complex report layouts and easy-to-use report-building tools. Pivot tables and charts can be embedded in a report. The report structure is divided into group headers and body areas, with each body area containing a table of data. Tables are created with dimension columns and fact columns. These tables are elastic structures. 
Multiple tables can be ported into each band, each originating from the same or different result sets.
request line A line that holds the list of items requested from the database server and that will appear in the user's results. request line items Columns listed in the request line. resources Objects or services that the system manages. Examples of a resource include a role, user, group, file, job, publisher service, and so on. result A value that an application collects for measures. If you have the required permissions, you can use the Result Collection report to enter or modify measure results. result frequency The algorithm used to create a set of dates for either the collection of data (collection frequency) or the display of data (result frequency). The result frequency's algorithm is defined by a major type (for example, weekly or monthly), a minor type (for example, first, last, last Friday, or 5th day of period), and an interval (for example, every one, every two, or every five). Results section A section in an Interactive Reporting document that contains the dataset derived from a query. Data is massaged in the Results section for use in the report sections. role A construct that defines the access privileges granted in order to perform a business function; for example, the job publisher role grants the privilege to run or import a job. row heading A report heading that lists members down a report page. The members are listed under their respective row names. RSC services The services that are configured with the Remote Service Configurator. They include Repository Service, Service Broker, Name Service, Event Service, and Job Service. scale The range of values on the Y-axis of a chart. scale code A specification of how an individual metric or minireport field is scaled. It may be displayed in thousands, or multiplied by 100 in conjunction with a percent format. schedule A specification of the job that you want to run, as well as the time and job parameter list for running the job.
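The major-type/interval scheme in the result frequency entry above amounts to generating a series of dates from a rule. The following is a loose sketch for the "weekly, every N weeks" case only, not the product's actual algorithm; the date range is an arbitrary example:

```python
from datetime import date, timedelta

def weekly_dates(start, end, interval=1):
    """Generate collection dates: major type 'weekly', every `interval` weeks.
    Illustrative only; minor types (first, last, last Friday, ...) are omitted."""
    current = start
    while current <= end:
        yield current
        current += timedelta(weeks=interval)

# Weekly, every two weeks, across January 2006.
dates = list(weekly_dates(date(2006, 1, 2), date(2006, 1, 31), interval=2))
print(dates)  # [2006-01-02, 2006-01-16, 2006-01-30]
```

A monthly major type or a "last Friday" minor type would substitute a different stepping rule, but the overall shape (rule in, date set out) stays the same.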
score The level at which specified targets are being achieved. It is usually expressed as a percentage of the target for a given time period. scorecard A business object used to represent the progress of an employee, strategy element, or accountability element toward specific goals. Scorecards ascertain this progress based on the data collected for each measure and child scorecard you add to the scorecard. scorecard report A report that presents the results and detailed information about scorecards attached to employees, strategy elements, and accountability elements. secondary measure A low-priority measure that is less important to you than primary measures. Secondary measures do not have Performance reports but can be used on scorecards and to create dimension measure templates. Section pane Lists all the sections that are available in the current Intelligence Client document. security agent A Web access management solutions provider employed by companies to protect Web resources; also known as a Web security agent. The Netegrity SiteMinder product is an example of a security agent. security platform A framework enabling Hyperion applications to use external authentication and single sign-on using the security platform driver. security rights Rights defined by a user's data access permissions and activity-level privileges, as explicitly defined for a user and as inherited from other user groups. services Resources that provide the ability to retrieve, modify, add, or delete business items. Examples of services are Authorization, Authentication, and Global Service Manager (GSM). servlet A piece of compiled code executable by a Web server. Servlet Configurator A software utility for configuring all locally installed servlets. shortcut A pointer to an actual program or file that is located elsewhere. You can open the program or file through the shortcut, if you have permission.
shortcut menu A menu that is displayed when you right-click a selection, an object, or a toolbar. A shortcut menu lists commands pertaining only to that screen region or selection. sibling A child member at the same generation as another child member and having the same immediate parent. For example, the members Florida and New York are both children of East and siblings of each other. Single Sign-On A feature that enables you to access multiple Hyperion products after logging on just once using external credentials. SmartCut A link to an item in the repository in the form of a special URL. snapshot Read-only data from a specific point in time. See snapshot report. snapshot report A report that has been generated and that stores static data. Any subsequent change of the data in the data source does not affect the report content. A snapshot report is portable and can be stored on the network, locally, or e-mailed. See snapshot. sort To reorder or rank result sets in ascending or descending order. sort order An indicator specifying the method by which you want your data to be presented. Data is typically shown in one of two sort orders. Ascending sort order presents data from lowest to highest, earliest to latest, first to last, A to Z, and so on. Descending sort order presents data from highest to lowest, latest to earliest, last to first, Z to A, and so on. SPF files Printer-independent files created by a Production Reporting server that contain a representation of the actual formatted report output, including fonts, spacing, headers, footers, and so on. spreadsheet One of the three data object display types. Spreadsheets are tabular reports of rows, columns, and pages. SQL spreadsheet A data object that displays the result set of a SQL query.
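The ascending and descending behavior described in the sort order entry above maps directly onto ordinary sorting primitives. A minimal Python illustration, with invented section names as sample data:

```python
# Ascending: lowest to highest, A to Z. Descending: the reverse.
values = ["Chart", "Pivot", "Query", "Dashboard"]

ascending = sorted(values)                 # A to Z
descending = sorted(values, reverse=True)  # Z to A

print(ascending)   # ['Chart', 'Dashboard', 'Pivot', 'Query']
print(descending)  # ['Query', 'Pivot', 'Dashboard', 'Chart']
```

Numeric and date values follow the same two orders (lowest to highest, earliest to latest) because the comparison, not the data type, defines the sort.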
stacked charts A chart where the categories are viewed on top of one another for visual comparison. This type of chart is useful for subcategorizing within the current category. Stacking can be used from the Y and Z axis in all chart types except pie and line. When stacking charts, the Z-axis is used as the Fact/Values axis. Start in Play The quickest method for creating a Web Analysis document. The Start in Play process requires you to specify a database connection, then assumes the use of a spreadsheet data object. Start in Play uses the highest aggregate members of the time and measures dimensions to automatically populate the rows and columns axes of the spreadsheet. strategic objective (SO) A long-term goal defined for an organization, stated in concrete terms, whose progress is determined by measuring results. Each strategic objective is associated with one perspective in your application, has one parent, the entity, and is a parent to critical success factors or other strategic objectives. It also has measures associated with it. Strategy map A detailed representation of how your organization translates its high-level mission and vision statements into lower-level, constituent strategic goals and objectives. structure view A view that displays a topic as a list of component items, allowing users to see and quickly select individual data items. Structure view is the default view setting. Structured Query Language The language used to give instructions to relational databases. When you build the Query section's Request, Limit, and Sort lines, Interactive Reporting translates your requests into SQL instructions. subscribe To register an interest in an item or folder, in order to receive automatic notification whenever the item or folder is updated. subset A group of members selected by specific criteria. substitution variable A variable that acts as a global placeholder for information that changes regularly. 
You set the variable and a corresponding string value; the value can be changed at any time.
Summary chart A chart that is displayed at the top of each chart column in the Investigate Section and plots metrics at the summary level, meaning that it rolls up all Detail charts shown below in the same column. All colors shown in a stacked bar, pie, or line Summary chart also appear above each Drill button of the Detail charts and extend across the row, acting as the key. super service A special service used by the startCommonServices script to start RSC services. table The basic unit of data storage in a database. Database tables hold all user-accessible data. Table data is stored in rows and columns. Table catalog A display of the tables, views, and synonyms to which users have access. Users drag tables from the Table catalog to the Content pane to create data models in the Query section. Table section The section used to create tabular-style reports. It is identical in functionality to the Results section, including grain level (table reports are not aggregated). Other reports can stem from a Table section. target The expected result for a measure for a specified period of time, such as a day, quarter, or month. You can define multiple targets for a single measure. time events Triggers for the execution of jobs. time scale A scale that enables you to see the metrics by a specific period in time, such as monthly or quarterly. token An encrypted identification of one valid user or group existing on an external authentication system. toolbar A series of shortcut buttons providing quick access to the most frequently used commands. top and side labels In the Pivot section, the column and row headings on the top and sides of the pivot. These define the categories by which the numeric values are organized. top-level member A dimension member at the top of the tree in a dimension outline hierarchy, or the first member of the dimension in sort order if there is no hierarchical relationship among dimension members. 
The top-level member name is generally the same name as the dimension name if a hierarchical relationship exists. trace level A means of defining the level of detail captured in the log file.
traffic lighting Color-coding of report cells or pins, based on a comparison of two dimension members or on fixed limits. Traffic lighting definitions are created using the Web Analysis Traffic Light Analysis Tool. transparent login A mechanism that enables users who have been previously authenticated by external security criteria to log in to a Hyperion application, bypassing the login screen. trend How the performance of a measure or scorecard has changed since the last reporting period or a date that you specify. trusted password A password that enables users who have been previously authenticated in another system to have access to other applications without reentering their passwords. trusted user A user authenticated by some mechanism in the environment. Uniform Resource Locator The address of a resource on the Internet or an intranet. variable A value that can be modified when you run a report. String variables are useful for concatenating two or more database columns. Numeric variables can calculate values based on other values in the database. Encode variables are string variables that contain nondisplay and other special characters. variable limits Limits that prompt users to enter or select limit values before the queries are processed on the database. Web server Software or hardware hosting intranet or Internet Web pages or Web applications. This term often refers to the Interactive Reporting servlets' host, because in many installations, the servlets and the Web server software reside on a common host. This configuration is not required, however; the servlets and the Web server software may reside on different hosts. weight A value assigned to an item on a scorecard that indicates the relative importance of that item in the calculation of the overall scorecard score. The weighting of all items on a scorecard accumulates to 100%. 
For example, to recognize the importance of developing new features for a product, the measure for New Features Coded on a developer's scorecard would be assigned a higher weighting than a measure for Number of Minor Defect Fixes.
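Fixed-limit traffic lighting, described in the traffic lighting entry above, amounts to mapping a value against thresholds. This is a hedged sketch of the idea only; the function name and threshold values are invented for illustration and are not Hyperion defaults:

```python
def traffic_light(value, red_below=50.0, yellow_below=80.0):
    """Map a numeric result to a traffic-light color using fixed limits.
    Thresholds are illustrative, not product defaults."""
    if value < red_below:
        return "red"
    if value < yellow_below:
        return "yellow"
    return "green"

print(traffic_light(42))   # red
print(traffic_light(75))   # yellow
print(traffic_light(95))   # green
```

The comparison-based variant mentioned in the entry works the same way, except the limits come from a second dimension member (for example, Actual versus Budget) instead of fixed numbers.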
ws.conf A configuration file for Windows platforms. wsconf_platform A configuration file for UNIX platforms. Y axis scale The range of values on the Y-axis of the charts displayed in the Investigate Section. You can use a unique Y-axis scale for each chart, the same Y-axis scale for all Detail charts, or the same Y-axis scale for all charts in the column. Often, using a common Y-axis improves your ability to compare charts at a glance. Zero Administration A software tool that identifies the version number of the most up-to-date plug-in on the server. zoom A feature that sets the magnification of a report. The report can be magnified to fit the whole page, page width, or a percentage of magnification based on 100%. ZoomChart A feature that makes it easy to view detailed information by enlarging a chart displayed on a page in the Monitor or Investigate Section. Zooming in on a chart enables you to see detailed numeric information on the metric that is displayed in the chart. You can click the + (plus sign) in the lower right corner of the chart or right-click anywhere on the chart to enlarge it.
Index
Symbols
.bqy, 460 :COLALIAS, 408, 410 to 411, 413 :COLUMN, 408 :LOOKUPID, 408, 412 :OWNER, 408, 410 to 411, 413 :QUERYSQL, 457 :REPOSITORYNAME, 457 :ROWSRETRIEVED, 457 :SILENT, 457 :TABALIAS, 408, 410 to 411, 413 :TABLE, 408, 410 to 411, 413
process, 245 server, 258 administration tasks, 246 administrator password, 192 Agg Usage Analysis pivot table, 313 Aggregating Local Results tables, 426 aliases, specifying table and column in SQL, 408, 410 to 411, 413 Allow Drill Anywhere option, 433 Allow Drill To Detail option, 433 Analytic Bridge Service, 196 Analyzer.properties file editing, 492 maximum query result set size settings, 493 overview, 492 analyzing performance, 326 Apache, 224 API software, 383 APIs exceptions, 62 APIs, triggering an event with, 159 Append Query command, local results and, 427 appenders, 224 Application Data tables, 241 application management. See Shared Services applications. application-level security, 251 applications command strings, 181 enterprise-reporting applications, 189 running jobs against enterprise applications, 190 URL property, 219 Applications properties (of servlets), 218 applying metadata to limit values, 411 metadata names to data model topic items, 409 metadata names to data model topics, 408
A
access, topic view, setting, 430 accessing Open Metadata Interpreter, 406 updated documents, 78 activating or inactivating services, 198 Add Meta Topic Item command, 438 adding metadata definitions, 407 remarks from stored metadata, 412 topics to data models, 417 Administer module, 53 common tasks, 51 to 52 introduced, 38 Administer Repository dialog box, 440 administering documents, in IBM Information Catalog, 463 IBM Information Catalog, 461 Interactive Reporting repositories, 440 public job parameters, 159 repository groups, 444 administration
Index Symbols
521
metadata to limit lookup values, 411 archives. See backing up. ASMTP, 63 Assessment Service, 196 associating interactive reports with Interactive Reporting database connections, 79 attachments enabling, 62 maximum size, 62 attributes, modifying, 372 audit events defining, 456 examples, 458 samples, 457 to 458 testing, 454 audit events, defining, 456 audit log, monitoring, 454 audit table creating, 455 sample structure, 455 auditing keyword variables, 457 where not supported, 454 authorization, 251 Auto Alias Tables option, 433 Auto Join Tables option, 420, 433 Auto Logon command, 394 Auto-Process command, 436
Best Guess join strategy, 421 blank documents, 393 bookmarks generated Personal Pages, 165 setting up graphics for, 168 BQAUDIT table, sample structure, 455 bqmeta0.ini, 407 to 408 bridge tables, 423 brioqry.exe, installation location, 484 BRIOSECP table, 471 Broadcast Messages changing default Personal Pages and, 166 generated Personal Pages and, 165 to 166 overview, 166 push content, 167 renaming folder, 55, 168 specifying categories for, 55 subfolders, 166 understanding, 166 Browse servlet Personal Page preconfiguration, 165 to 166 Web application deployment name, 64 Browser properties, 218 building queries, confusing aspects, 402 business analysis, 274
C
Cache properties group Browser subgroup, 218 described, 216 Notification subgroup, 218 Objects subgroup, 217 System subgroup, 218 caches of content listings, 203 for services, 199 of user interface elements, 203 calculation iterations, default, 359 calendars creating, 154 default name, 206 deleting, 155 end dates, 157 managing, 154 modifying, 155
B
backing up clients, 506 overview, 502 procedures, 502 Repository database, 506 restoring data, 502 scripts, 505 servlet configuration files, 506 servlets, 506 batch files, and application command strings, 181 batch input file creating, 370 launching, 371 BeginLoad program, 271 benefits of data models, 416
non-working days, 156 periods and years, 156 properties of, 155 user-defined weeks, 155 week start, 155 carpooling, 324 CDB_USER, 250 changing data model views, 431 database passwords, 396 join types, 422 server settings, 264 topic views, 428 charts, missing in client, 297 classpath, 177, 204 client, tools, 244 client.prefs settings, 347 clips authentication, 254 external authentication, 254 overview, 254 preference settings, 255 requirements, 253 security, 254 COLALIAS, 408, 410 to 411, 413 color properties, described, 212 color schemes customizing on Personal Pages, 169 properties, 214 COLUMN, 408 column data type changes, 96 delete, 90 rename, 84 column aliases, specifying in SQL, 408, 410 to 411, 413 columns, usage statistics, 454 combined view, of data models, 431 combining limit local joins with local joins, 427 Command Line Scheduler XML tags, 374 command strings for applications described, 181 example, 181 commands, DataModel menu, 438 commands, Help menu, xx components, Enterprise Metrics, 240
computed items, and local results, 427 computed metatopic items, creating, 404 config.dat file distributing services and, 191 editing, 172, 192 encryption and, 191 services startup and, 191 startup process, 47 sync host properties, 204 ConfigFileAdmin utility, 191 to 192 configfileadmin.bat. See ConfigFileAdmin utility. ConfigFileAdmin.sh. See ConfigFileAdmin utility. configuration ConfigFileAdmin utility, 192 information in startup process, 47 Configuration file Hyperion Analytic Services, 369 configuration files See also notification.properties, services.properties, output.properties, config.dat file backing up, 506 server.xml, 204 ws.conf, 208 Configuration Test Servlet, 498 configuration_server.prefs settings, 346 configuring Essbase XTD Deployment Services, 496 for Microsoft Excel, 496 Metadata Export tool, 305 Shared Services, 492 confirming repository table creation, 442 connecting databases, 392 Essbase or DB2 OLAP, 390 OLE DB Provider, 390 with data model, 393 without data model, 393 connection files creating OLAP, 390 default directory, 383 definition, 382 modifying, 396 connection information, 383 connection parameters, 382 to 383 connection preferences modifying, 391
setting, 385 connections directory, accessing, 394 monitoring, 392 Connections Manager, 395 connections pool, 202 connectivity issues, diagnosing and resolving, 498 connectivity type, 201 connectivity, defining for a database server, 185 connectivity-related problems, troubleshooting, 484 consulting services, xxii content windows, headings, 169 content, providing optional Personal Page to users, 168 controlling document versions, 448, 450 conventions, naming, 176 copying Personal Pages, 169 topic items to metatopics, 403 crashes, troubleshooting in Enterprise Metrics, 295 creating data models, 84 Interactive Reporting database connections, 78, 383 log tables, 455 metatopics, 403 object type properties, 462 OLAP connection files, 390 repository objects, 445 repository tables, 441 credentials, user, 205 CSS Config File URL, 205 custom formats, server date, 389 custom join strategy, 421 custom login implementation, 210 Custom Values limit option, 434 customizing metatopics, 404
D
daily administration tasks, 246 DAS response timeout, 219 DAT files. See services.dat, 43 Data Access Service configuring, 78 data sources, adding, 202 OCE properties, 203 starting with process monitor, 48
data integrations. See Shared Services data integrations. Data Model menu commands, 384 to 385, 438 data model options auditing, 436 design, 433 general, 433 joins, 435 limits, 434 topic priority, 435 Data Model Refresh audit event, 458 Data Model Synchronization dialog box, 437 data models adding topics to, 417 automatically processing, 436 benefits, 416 BRIOCAT2 table, 449 BRIOOBJ2 table, 449 changing topic views, 428 connecting with or without, 393 creating, 84 definition, 440 ensuring integrity, 437 governors, 433 joins, 418 looking up metadata definitions, 411 master, 436 normalized and denormalized, 88 removing topics from, 417 simplifying, 403 synchronizing, 437 topic priority options, 435 uploading to repository, 445 version-controlled, 440 viewing at metatopic level, 403 data sources connectivity type, 201 listing, 201 maximum connections, 202 name, 65, 202 ODBC, 383 properties of DAS, 201 data type column changes, 96 database administrator, 274 Database Connection Wizard, 383
database joins, 418 database properties (of host or Install_Home), 204 database security, 250 database servers adding, 184 associating with a Job Service, 185 changing driver, 187 deleting, 186 environment variables for Production Reporting, 185 managing, 184 database tables in data models, 417 metadata definitions, 408 database tables and columns, usage statistics, 454 database variables, 408, 410 to 413 databases aliases, 383 changing password, 187, 396 connecting, 392 connectivity, 65 logging off, 396 logging on, 395 overview, 241 planning changes, 454 type, 65, 201 user IDs, 250 user name, 383 using joins in, 418 data-level security, 251 DATAMODEL column, in sample BQAUDIT table, 455 DAY_EXECUTED column, in sample BQAUDIT table, 455 DB_USER, 250 DB2 OLAP, connecting to, 390 dbgprint connectivity troubleshooting with, 484 Insight and, 485 Intelligence Clients and, 484 overwriting files, 485 default Interactive Reporting database connections, setting, 394 default Personal Pages, changing, 166 default settings, simple joins, 422 defined join paths, using, 423 defining
audit events, 456 metadata, 407 properties in IBM Information Catalog, 461 deleting joins, 422 MIME types, 60 object types and properties in IBM Information Catalog, 462 Remarks tabs, 413 repository objects, 443 services, 174 design options, data model, 433 Detail view audit event, 458 changing topic views, 428 diagnostics properties, 218 differences between Hyperion Analytic Services ports and connections, 367 dimension name, Essbase, 390 dimensions, setting topics as, 429 directories, naming conventions for, 176 directories. See output directories. displayable items. See file content windows. displaying HTML file on a Personal Page, 168 icon joins, 421 document versions, controlling, 448 documents accessing Hyperion Download Center, xix Hyperion Solutions Web site, xix Information Map, xix online help, xix administering, 463 blank, 393 BRIOBRG2 table, 450 conventions used, xxi feedback, xxii registering to the IBM information catalog, 460 structure of, xviii tracking, 67 uploading to repository, 445 drill anywhere, allowing, 433 drill to detail, allowing, 433 drill-down paths, defining, 429 drivers, database, 177, 187, 204
E
Education Services, xxii enabling users to apply limits, 432 encoding of URLs, 64 UTF-8, 64 enrich program, 277 enrichment restrictions, 274 versus ETL, 276 enrichment jobs after a failure, example, 278 enrichment process, 274 to 275 ensuring data model integrity, 437 Enterprise Metrics components, 240 database overview, 241 Editor, 274 environment variables, 498 environments, system information, 292 Essbase auditing, 454 connecting to, 390 subscriptions, resolving, 496 Essbase XTD Deployment Services, configuring, 496 ETL tools, versus enrichment functionality, 276 Event Service event log, 55 service type in server.dat file, 43 EVENT_TYPE column, in sample BQAUDIT table, 455 events creating externally triggered event, 158 defining audit, 456 time events, managing, 158 tracking, 66 triggering, 159 examples, enrichment jobs after a failure, 278 exceptions described, 62 Exceptions Dashboard described, 62 Exceptions Dashboard, generated Personal Pages and, 165 exceptions, configuring, 169 exiting, Server Console, 265 expiration times, 63 export table list file, 303 exporting
registry keys, 502 settings, 264 Extended Access for Hyperion Interactive Reporting Service, 196 externally triggered events creating, 158 polling for, 206
F
fact security, 252 facts, setting topic items as, 430 failures during enrichment job processing, 278 initialization, 295 file content window, 168 file size, of attachments, 62 file systems backing up, 502, 505 restoring data, 502 files adding to folders, 167 creating OLAP connection, 390 in e-mail attachments, 62 modifying connection, 396 filtering Informatica tables, 409 tables, 387 FinishLoad program, 271 firewalls, 216 folders administrator-only System folder, 164 Broadcast Messages, 55 importing items in, 167 organizing, 164 pre-configured, 167 foreign key tables, in table of joins, 410 formulas calculating for Hyperion Analytic Services ports, 369 formulas, calculating for Hyperion Analytic Services ports, 369 frequently used stars, 325 From Server data formats, 389
G
Generated Personal Page properties (of servlets), 214
generating automatic join paths, 435 global limit options, 434 Global Service Manager (GSM), 55, 203 to 204 governors data model, 433 in local results, 426 Grant Tables To Public option, 441 graphics. See images. groups, administering repository, 444 groups, repository, BRIOBRG2 table, 450
H
hangs, 295 Harvester Service. See Assessment Service, 196 headings (within Personal Pages), 169 Help menu commands, xx hiding icon joins, 421 hierarchical security, 252 Hierarchy Levels and Column Reference pivot table, 317 hints, for reading log lines, 294 hosts adding, 182 deleting, 183 managing, 182 to 183 modifying, 183 properties of, 198, 203 HTML files and customizing generated Personal Page, 166 displaying on Personal Pages, 168 HTTP protocol, SmartCut for e-mail notification, 63 HTTPS protocol, 63 Hyperion Analytic Services reducing connection time outs, 369 Hyperion Analytic Services, ports and connections, 369 Hyperion Consulting Services, xxii Hyperion Download Center, accessing documents, xix Hyperion Education Services, xxii Hyperion Hub applications. See Shared Services applications. Hyperion Hub data integrations. See Shared Services data integrations. Hyperion Hub models. See Shared Services models. Hyperion product information, xxii Hyperion System 9 BI+, assigning default preferences for application users and groups, 55 Hyperion Technical Support, xxii
I
IBM Information Catalog administering documents, 463 creating an object type, 462 definition, 460 registering documents to, 460 setting up object types, 464 icon joins, showing, 421 Icon view definition, 429 metatopics and, 403, 405 icons See also toolbars. DBCS, 168 files, 164 for HTML output of Production Reporting, 212 LSC, 197 on Exceptions Dashboard, 59, 62 on RSC toolbar, 197 RSC, 172, 197 in Servlet Configurator, 209 on Servlet Configurator toolbar, 208 View Job Execution Log Entries, 156 images, for bookmarks, setting up, 168 impact analysis, data, 454 impact management assessment services, 70 impact management metadata, 70 impact management metadata service, 70 impact management services about, 70 accessing, 73 impact management update services, 71 to 72 implementation process, Enterprise Metrics, 245 implementation tasks, Enterprise Metrics, 245 importing Interactive Reporting database connections, 79 models. See Shared Services models. inactivating, MIME types, 60 Informatica tables, filtering, 409 initialization failures, 295 Install, 43 Install Home See also hosts. described, 38, 40
Install Home directory, 203, 503 installation, 245 backing up immediately after, 502 to 503, 506 config.dat file, 192 installed services, 40 installed servlets, 208, 506 installed system, 55 location of components, 172, 197, 201, 208 location of installed files, 206 new host, 182 Servlet Configurator, 208 of Zero Administration, 221 installation directory, 203 installation program, 52, 174 installed services deleting a host with, 183 Install Home, 196, 203 Interactive Reporting services, 44 LSC displays, 197 recommendation for Job Service, 189 replicate job service, 189 RSC toolbar, 173, 197 installed servlets, 194 integrating data. See Shared Services data integrations. Interactive Reporting load testing, 221 zero administration, 220 Interactive Reporting database connections associate with interactive reports, 79 choosing, 392 creating, 78, 393 default directory, 383 explicit access property, 159 importing, 79 managing, 159 modifying, 391 options, 385 setting default, 394 Interactive Reporting documents, changing user ID and password in, 97 Interactive Reporting Service physical resources and, 57 starting with process monitor, 48 Interactive Reporting Studio dbgprint and, 484
troubleshooting connectivity, 484 Interactive Reporting Studio repository administering, 440 administering groups, 444 uploading documents to, 445 Interactive Reporting Web Client, dbgprint and, 485 interactive reports, connecting, 78 Internal properties (in Servlet Configurator) Job subgroup, 216 Transfer subgroup, 216 Upload subgroup, 216 interpreter, open metadata, 411 IP addresses, 383 IP ports. See ports. items creating computed metatopic, 404 and generated Personal Page, 165 headings on, 169 organizing in folders, 164, 167
J
JDBC driver, 177, 204 JDBC URL, 177, 188, 204 job execution, job process explained, 190 Job log columns in, 157 dates, 157 deleting entries, 157 marking entries for deletion, 157 sorting, 157 start dates and times, 157 suppressing, 189 user displays for, 157 job output property, 212 job parameters, administering, 159 Job Service application, 179 to 180 applications configuring, 179 executable of application, 180 properties Application, 179 database, 178 Executable, 182 Production Reporting, 179
running jobs against enterprise applications, 190 service type in server.dat file, 43 shutting down, 46 user name for running Production Reporting jobs, 65 jobs e-mail output attachments, 62 Job Log, 156 jobs property, 216 join paths, using defined, 423 join strategies, 420 join types, specifying, 422 joining topics automatically, 420 manually, 421 using metadata join information, 410 joins definition, 418 hiding from users, 402 limit local, 425 limitations of local, 426 local, 423 manual, 421 metadata definitions, 410 removing, 422 showing in icon view, 421 specifying strategies, 420 usage preferences, 435 using defined paths, 423
K
keys, modifier, 403 kill commands, 46
L
launching Performance Statistics Utility, 309 Server Console, 258 left joins, 422 limit browse level preferences, 434 limit local joins combining with local joins, 427 number allowed, 427 limit lookup values, applying metadata to, 411 limit options, 434
Limit Show Values audit event, 458 limitations of local results and local joins, 426 limiting values, 434 Limits tab, 434 limits, enabling users to apply, 432 linear joins, 422 List Environment Variables page, 498 load process, overview, 268 load support logs, reviewing, 283 load support programs, optional preference settings, 270 local joins, 423, 426 Local Service Configurator. See LSC. Local Service Manager (LSM), 203 to 204 localization properties, 211 locating logs, 288 log files analyzing, 233 configuration, 228 configuration log, 228 Enterprise Metrics, 304 file formats, 227 for Interactive Reporting troubleshooting, 484 for Interactive Reporting document output, 206 location, 225 naming convention, 226 notification log, 63 log formats, 291 Log Management Helper, 224 log tables, 455 log4j, 224 loggers, 224 logging on and off databases, 396 on to databases, troubleshooting difficulties, 484 logging events, 156 logging levels, 229 Logging Service configuration, 228 usage, 224 Logoff audit event, 458 Logon audit event, 458 logs format, 291 locating, 288 reading, 291 reading tips, 294
system environment information, 292 viewing, 288 LOOKUPID, 408, 412 LSC described list of services, 196 modify host properties, 203 server.xml file, 196, 204 starting, 197
M
magnifying glass icon, 209 mail server host names, 63 maintenance tasks, 246 managing Interactive Reporting repositories, 439 time events, 158 managing Interactive Reporting database connections, 159 manually joining topics, 421 master data models, promoting to, 436 mb.Enrich.log, 285 mb.Loads.log, 283 mb.Publish.log, 284 MDB_USER, 250 menu commands, Data Model, 438 Meta Connection Wizard, automatic join strategies and, 420 meta view, of data models, 405, 431 metadata adding remarks, 412 applying, 411 applying to limit lookup values, 411 defining, 407 definition, 402 in Interactive Reporting, 405 SQL entry fields, 407 Metadata Definition dialog box, 406 metadata definitions adding, 407 columns, 409 joins, 411 limit lookup values, 411 remarks, 412 tables, 408 Metadata Export tool
configuring, 305 files, 302 log levels, 304 preference file settings, 352 metadata initialization failed message, 296 metadata interpreter, open, 411 metadata join information, joining topics using, 410 metadata names applying to data model topic items, 409 applying to data model topics, 408 Metadata_export.prefs file, 302, 352 metatopics copying items to, 403 creating, 403 creating items, 404 definition, 402 in local results, 427 viewing, 405 metrics_server.prefs settings, 331 Microsoft Excel, configuring Hyperion Analyzer for, 496 Microsoft Windows. See Windows. MIME types creating, 59 deleting, 60 inactivating or re-activating, 60 modifying, 59 working with, 59 missing charts or reports, in Enterprise Metrics client, 297 model management. See Shared Services models. modifier keys, 403 modifying connection files, 396 connection preferences, 391 join types, 422 metatopics, 403 OCEs, 391 OCEs with Connections Manager, 396 repository objects, 446 request dialog, 436 server date formats, 389 topic item properties, 430 topic properties, 429 modules, Administer, 53 monitoring
connections, 392 server settings, 263 server statistics, 259 users, 265
N
Name Service config.dat file and, 191 service type in server.dat file, 43 in startup process, 192 to 193 startup process, 47 naming conventions, directories, 176 naming topics using stored metadata, 408 New Data Model audit event, 458 notification property, servlets, 218 notifications See also subscriptions ASMTP, 63 attachments, 62 e-mail account for sending, 63 enabling attachments, 62 server host name, 63 events that trigger, 61 other, 61 subscriptions and, 61 types of, 62 NUM_ROWS column, in sample BQAUDIT table, 455
O
object descriptions, updating repository, 443 object properties, 217 object type properties, creating, 462 object types, setting up, 464 objects deleting repository, 443 modifying repository, 446 OCEs. See Interactive Reporting database connections. ODBC data sources, 383 table filters and, 387 OLAP connection file, creating, 390
OLE DB provider, connecting, 390 Open Catalog Extension. See Interactive Reporting database connections. Open Metadata Interpreter, 411 options data model, 431 Interactive Reporting database connections, 385 OR Logic Between Groups, 473 Oracle Reports, command string example, 181 Organization tab, accessing, 66 organizing items and folders, 164 original view, of data models, 405, 431 outer joins, 422 output directories adding, 57 deleting, 58 modifying, 57 purpose, 56 output file, 304 output.properties, 62 OWNER, 408, 410 to 411, 413
P
parameters, job administering public, 159 pass-through, 205 credentials, 160 definition, 160 password, Interactive Reporting, changing, 97 passwords database, changing, 187, 396 encrypted, 191 Interactive Reporting database connections and, 382 of Job Service, for running Production Reporting jobs, 65 RDBMS password, 193 Repository Service, 187 service, modifying, 192 of services for database access, 177, 204 of services, 191 setting, 264 system, 191 passwords, Interactive Reporting, changing, 97 paths, using defined join, 423 performance poor, 298
troubleshooting, 298 Performance Statistics Utility, 310 periodic maintenance tasks, 246 Personal Pages Broadcast Messages on, 166 configuration tool, 169 customized graphics, 168 default Personal Pages, 166 generated customizing, 165 properties, 214 setting up, 165 graphic files on, 168 importing, 169 importing other pages, 169 multiple, 166 optional content, providing to users, 168 properties, configuring, 169 setting up items in folders, 167 viewing new users, 169 Personal Pages properties Color Scheme, 214 Publish, 214 physical resources See also printers, output directories. access control on, 57 adding, 57 deleting, 58 modifying, 57 pinging services, 175 pivot tables, in Performance Statistics Utility, 310 portal.properties file, 206 ports and connections, differences, 367 to 369 ports, Browse servlet, 64 Post Process audit event, 458 Pre Process audit event, 458 pre-configured folders, setting up, 167 preference file settings client_server.prefs, 347 configuration_server.prefs, 346 load support, 269 metrics_server.prefs, 331 performance and, 327 troubleshooting, 293
preference files Enterprise Metrics, 329 exporting settings, 264 preferences connection, setting, 385 join usage, 435 limit browse level, 434 pre-SQL and post-SQL files, 304 primary key items and tables, in table of joins, 410 printers adding, 57 deleting, 58 modifying, 57 properties of, 58 purpose of, 56 priorities, topics, 435 priority setting in administration module, 55 private applications. See Shared Services applications. process monitors, 48 process overview administration, 245 implementation, 245 processed enrichment, overview, 273 processing queries, automatically, 436 Production Reporting jobs, data sources for, 179 Production Reporting servers, properties of, 179 Production Reporting, environment variables for, 185 programs, running jobs against enterprise applications, 190 Promote To Master Data Model command, 436 Promote To Meta Topic command, 438 promoting queries to master data models, 436 topics to metatopics, 403 properties configuring Personal Pages, 169 creating object type, 462 defining, 461 generated Personal Page, 166 Job Service, 178 Servlet Configurator, 209 Shared Services, 205 standard, of LSC service, 198 topic, 429 viewing in the Servlet Configurator, 209 protocols for SmartCuts, 64 publish program, 273
publish properties, 214 publishing Personal Pages, 169 pushed content, 164, 167 pushing content. See Broadcast Messages.
Q
queries automatically processing, 436 maximum cells in results, 201 maximum rows in results, 201 promoting to master data models, 436 standard with reports, definition, 440 standard, definition, 440 tracking processing time, 454 query building, confusing aspects of, 402 query limits, local results and, 426 Query Performance Analysis Over Publish Time pivot table, 316 Query Performance Analysis Over Time pivot table, 313 Query Performance Analysis pivot table, 312 Query Performance Using Parameter pivot table, 317 query result set, setting maximum size in the Analyzer.properties file, 493 querying databases, troubleshooting difficulties, 484 query-processing time, tracking, 454 QUERYSQL, 457
R
ranking topics, 435 RDBMS passwords, 193 starting, 41 reactivating MIME types, 60 reading log files, 291 log lines, 294 rebooting machines, 46 recovering data, 502 reducing available values, 434 Reference of Bursted Supported Levels Pivot pivot table, 319 registering documents to IBM Information Catalog, 460 registry keys, 502 rejected stars, 324 relational database management system. See RDBMS. remarks
adding from stored metadata, 412 showing in Query section, 412 Remarks tabs, reordering, 413 replicating servlets, 208 report registry keys, 502 reports, missing in client, 297 repositories administering, 440 definition, 441 uploading documents, 445 repository database, backing up, 506 repository objects creating, 445 deleting, 443 modifying, 446 updating descriptions, 443 Repository Service service type in server.dat file, 43 stopping, 46 repository tables confirming creation, 442 creating, 440 creation failure, 441 granting access to, 441 repository, BRIOGRP2 table, 450 REPOSITORYNAME, 457 request lines, in master data models, 436 responding, to a finish load failure, 280 restarting servers, 260 restoring data, 502 restricting topic views, 430 result set size, setting maximum size in the Analyzer.properties file, 492 results, limitations of local, 426 Return First __Rows governor, 433 reviewing, load support logs, 283 right joins, 422 roles, 273 root directories, 64
row-level security, 64 ROWSRETRIEVED keyword variable, 457 RSC config.dat file and database password, 187 described, 39 pinging a service, 175 Storage tab, 187 toolbar icons, 197 what it does, 172 RSC services configuring, 39 setting properties, 204 super service, 198 run now, synchronizing, 74 Run Type property, 198 Run Type property of services, 42 running Configuration Utilities in stand-alone mode, 281 Metadata Export tool, 305
S
samples audit events, 458 audit log structure for BQAUDIT table, 455 Save To Repository dialog box, 445 schedules, synchronizing, 74 scheduling, load support programs, 269 scripts extension, 44 startCommonServices, 41 stop scripts, 46 security data level, 251 data-level, 251 hierarchical, 252 selecting subject areas, in IBM Information Catalog, 461 Server Console exiting, 265 launching, 258 Settings tab, 263 server statistics, in Server Console, 259 server.dat file, 42 to 43 server.xml file, 204 Server-Defined join strategy, 421
servers log, 261 mail, 63 restarting, 260 settings, monitoring, 263 shutting down, 260 statistics, monitoring, 259 server-side software components. See services. Service Broker, service type in server.dat file, 43 service configuration parameters, 98 services See also specific service names. adding, 196 BP_host.dat file, 43 common tasks, 51 to 52 deleting, 174 deleting message during deletion, 174 modifying properties, 175 names in BrioPlatformxxx.dat, 43 names in server.dat, 43 pinging, 175 properties Advanced, 176 General, 176 removing one or more, 172, 196 Run type property, 42 running as separate processes, 46 starting individually, 43, 45 starting Intelligence and DAS services with process monitor, 48 starting subset of, 42 starting under UNIX, 41 starting under Windows, 41 stopping, with scripts, 46 types of, 43 user name for database access, 177, 204 viewing properties, 175, 198 Servlet Configurator defined, 208 described, 40 making new settings effective, 209 starting, 208 toolbar, 208 servlet engine, session timeout value, 215 servlets
backing up, 506 configuring, 40 Enterprise Metrics, 244 replicating, 208 session time-out value, 215 setting connection preferences, 385 data model options, 432 default OCEs, 394 object types up in IBM Information Catalog, 464 passwords, 264 setting topic priorities, 435 settings exporting, 264 preference file, 293 shared models. See Shared Services models. Shared Services, configuring, 492 Shared Services applications common shared application, 104 creating, 104 deleting, 105 naming restrictions, 105 overview, 102 overview of private applications, 102 overview of shared applications, 103 process for sharing, 103 sharing, 105 stopping sharing, 106 Shared Services data integrations accessing, 134 accessing functions, 134 assigning access, 102, 134 Create Integrations user role, 102 creating, 137 deleting, 144 described, 101 editing, 137 filtering integration lists, 135 overview, 134 prerequisites, 134 Run Integrations user role, 102 scheduling group integrations, 151 user roles, 102 viewing integrations, 134 Shared Services models
access permissions, 127 application system members, 118 assigning permissions, 128 compare operations, 113 comparing, 112 configuring for external authentication, 100 deleting, 120 deleting permissions, 131 described, 100 dimensional hierarchies, 100 editing content, 115 editing member properties, 117 editing permissions, 130 filtering content, 122 Manage Models user role, 101 managing permissions, 126 naming restrictions, 112 non-dimensional hierarchies, 100 overview, 100 permissions, 126 to 127 private, 103 properties, viewing and setting, 132 registering applications, 100 renaming, 119 setting properties, 132 shared, 103 shared applications, 101 sharing, 105, 120 sharing data, 101 sharing metadata, 101 sync operations, 110 synchronizing, 108 system members, 118 tracking version history, 125 types of permission, 127 user authentication, 126 user roles, 101 versioning, 125 viewing, 106 viewing properties, 132 Shared Services properties, 205 shell scripts. See scripts. Show All Values limit option, 434 Show Icon Joins option, 433 show impact of change, interactive report, 82
Show Minimum Value Set limit option, 434 show task status, interactive report, 80 Show Values limit option, 434 Show Values Within Topic limit option, 434 showing icon joins, 421 remarks in Query section, 412 shutting down, servers, 260 shutting down. See services. SILENT keyword variable, 457 simple joins, 422 slow queries, 322 Slowest Queries pivot table, 315 SmartCuts e-mail notifications, 54, 63 in notifications, 62 to 63 servlet property, 212 system properties, 63 Solaris (Sun), 505 See also UNIX systems. specifying automatic join strategies, 420 join strategies, 420 join types, 422 join usage preferences, 435 SQL coding limits with Custom SQL limit option, 434 database variables in Where clauses, 408, 410 to 413 default values in metadata, 408 entering, 408 From clauses in metadata, 407 functions in audit log, 455 recording statements, 454 Select statements in metadata, 407 specifying table and column aliases, 408, 410 to 411, 413 table filters and, 387 testing for errors in statements, 454 topic priorities and, 435 Where clauses in metadata, 407 SQL_STMT column, in sample BQAUDIT table, 455 staging, database tables, 242 standard query with reports, definition, 440 standard query, definition, 440 star and aggregate performance, 322 Star Levels and Columns Reference pivot table, 319
Star Stats Summary pivot table, 311 Star Supported Levels Reference pivot table, 318 stars frequently used, 325 picked but not used or rejected, 324 start scripts, 41, 204 startCommonServices script, 41, 204 starting RDBMS, 41 starting system or components administrative tools Local Service Configurator, 197 Remote Service Configurator, 172 Servlet Configurator, 208 services config.dat file and, 191 RSC services, 198 starting individually, 43, 45 starting subset of, 42 starting under UNIX, 41 starting under Windows, 41 servlets, 47 starting under UNIX, 41 statistics reporting, background, 308 stored metadata, 411 strategies, join, 420 Structure view, of topics, 428 structure, BQAUDIT sample audit log, 455 stylesheets, 212 subject areas, selecting, 461 Subscribe page, 62 subscription property, 212 subscriptions, 61 See also notifications Sun ONE servlet engine, 64 super service, 198 Sybase, table filters and, 387 Sync With Database command, 437 to 438 synchronize metadata, 73 run now, 74 schedule, 74 synchronizing data models, 437 system administration tasks, 358 system environment information, 292 System folder, described, 164 System folder, viewing, 164 system properties
T
TABALIAS, 408, 410 to 411, 413 TABLE, 408, 410 to 411, 413 table aliases, specifying in SQL, 408, 410 to 411, 413 Table catalog definition, 417 filtering tables from, 387 refreshing, 388 repository tables in, 442 Table Catalog command, 438 table, rename, 84 tables bridge, 423 filtering, 387 filtering Informatica, 409 log, 455 metadata definitions, 408 OnDemand Server tables BRIOBRG2 table, 450 BRIOCAT2 table, 449 BRIOGRP2 table, 450 BRIOOBJ2 table, 449 usage statistics, 454 tasks, system administration, 358 technical support, xxii Technical Utilities, Metadata Export tool, 301 testing auditing events, 454 time events, managing, 158 time formats, 389 Time Limit ___Minutes governor, 433 times, notification expiration, 63 timestamp formats, 389 tips, for reading log lines, 294 titles on items, 169 To Server date formats, 389 tool tips, data value, formatting, 496 toolbars Service Configurator, 172, 197 Servlet Configurator, 208 tools, Metadata Export, 305 topic items
applying metadata names to, 409 modifying, 430 topic priorities, 435 topic properties in local results, 427 modifying, 429 Topic View command, 438 topic views changing, 428 restricting, 430 topics adding to data models, 417 applying metadata names to, 408 joining automatically, 420 manually, 421 promoting to metatopics, 403 ranking, 435 removing from data models, 417 specifying join strategies for, 420 tracking documents, 67 events, 66 tracking query-processing time, 454 transfer property, 216 Transformer Service. See Update Service, 196 triggering events, 159 troubleshooting crashes, 295 hangs, 295 initialization failures, 295 Interactive Reporting Studio, 484 missing charts, 297 performance, 298 Workspace, 223 trusted password, 205 trusted password, configuring, 210 types, setting up object, 464
U
UMDB_USER, 251 UNIX systems backing up the clients, 506 backup procedure, 502 kill command shutdowns, 46
maximum number of file descriptors, 176 start Servlet Configurator, 172, 197, 208 startCommonServices method, 41 system backup procedures, 502 terminate the Job Service, 46 using kill command, 46 unknown file type message, 60 unreported periods, security, 252 update data model link data models and queries, 72 specify data model, 75 transformation, 72 view candidates to update, 76 Update Service, 196 updated documents, accessing, 78 updating distributed data models, 437 Remarks tabs, 413 repository object descriptions, 443 updating, data models, 75 upload property, 216 uploading documents to the repository, 445 URL properties, 219 Usage Service defined, 65 managing, 66 reports, 67 usage statistics, tables and columns, 454 usage tracking, properties, 66 Use All Joined Topics option, 435 Use All Referenced Topics join option, 435 Use Automatic Join Path Generation option, 435 Use Defined Join Paths option, 435 Use The Minimum Number Of Topics join option, 435 User, 55 user complaints, 326 user IDs, Interactive Reporting, changing, 97 user information, 265 user name, 383 User Performance Analysis pivot table, 314 USERNAME column, in sample BQAUDIT table, 455 users common tasks, 51 to 52 Job Log displays for, 157 using Connections Manager, 395
dedicated servlet JVM logs, 301 defined join paths, 423 local joins, 423 local joins as limits, 425 log files, for tuning and troubleshooting, 288 Metadata Export tool, 301 metatopics and metadata, 401 Open Metadata Interpreter, 406 Performance Statistics tool, 321 UTF-8 encoding, 64 utilities Configuration, 279 Performance Statistics, 310
V
values limiting, 434 variables, database, 408, 410 to 413 version-controlled data models, 440 versioning. See Shared Services models. versions, controlling document, 448 View Manager, pushed content, 164 viewing metrics library metadata, 280 server logs, 261 viewing metatopics, 405 views restricting topic, 430 topic, 428 to 429 virus protection, 164 Visual Warehouse, IBM, 460
W
Web modules. See servlets. WebAnalysis.Properties ExportDataFullPrecision, 497 WebSphere, 64 Where clauses, SQL, 408, 410 to 413 Windows (Microsoft) backup procedure, 502 exporting Registry key, 502 plugins, 221 Task Manager shutdowns, 46 terminating a process, 46 Windows 2000, 504
Z
Zero Administration client processing, 221 server processing, 220 starting download of, 220