Hyperion FDM 9.2.0 Architecture Analysis Guide


Hyperion

System 9
Financial Data Quality Management

Architecture Analysis Guide

Hyperion FDM 9.2.0
Table of Contents

Purpose
Audience
Document Structure
Overview
    Additional Architecture Considerations
Test Lab Environment
Server Hardware
    Web Server (DellTest2)
    Application Server (DellTest1)
    Application Server (DellTest3)
    Application Server (DellTest4)
    Data Server (DellTest5)
    Data Server (DellTest6)
Data Viewing Concurrency Test (Test #1)
    Methodology
    Results
Data Loading by Database Type Test (Test #2)
    Test Methodology
    Test Results
Data Load Performance over Time (Test #3)
    Test Methodology
    Test Results
Data Loading Concurrency Test (Test #4)
    Test Methodology
    Test Results
System Recommendations
    Database Server
    Application Server
    Web Server
    Application Settings
    File Sharing Ports
    Firewall / DCOM Settings
    TCP Keep Alive Settings
    Oracle Initialization Parameters
Purpose
The purpose of this document is to provide assistance in determining the required infrastructure and optimal
deployment architecture for a given number of concurrent Hyperion System 9 Financial Data Quality Management
(Hyperion FDM) users.
Audience
This guide is intended for IT system designers and business decision makers who are responsible for determining
application scalability and functionality.
Document Structure
This document contains the following major topics of information:
Overview
Introduces the product architecture considerations.
Test Lab Environment
Details the Hyperion test lab environment that was used during the stress testing detailed in this document.
Data Viewing Concurrency Test
Provides testing results for a simulation of a high volume of concurrent users viewing data in the
Hyperion FDM application and the resulting impact on the web server.
Data Loading by Database Type Test
Provides testing results for a simulation of loading increasingly large files into each of the supported
databases and the resulting impact on performance.
Data Load Performance Over Time Test
Provides testing results from a simulation of high-volume loads over several consecutive months and
the resulting impact on database size and processing time.
Data Loading Concurrency Test
Provides testing results for a simulation of an increasing number of concurrent users loading data
and the resulting impact on the application and data servers.
System Recommendations
Provides users with recommendations for designing and optimizing their own Hyperion FDM application.
Overview
You can use the test results described in this document to assist in designing and optimizing your Hyperion System
9 Financial Data Quality Management hardware architecture.
Additional Architecture Considerations
Hyperion FDM can be deployed over multiple architectural environments. The most common architecture consists
of one web server, one database server, and one or more application servers. The optimal architecture depends
on the types of processing conducted in your Hyperion FDM application. Because each type of processing taxes
a different system component, you must determine which types of processing your system will use most
extensively in order to choose the optimal configuration. For example, you may need more web servers if you
have a large volume of users viewing data, or you may need to upgrade your database server if you use
extensive wildcards in your mapping tables. Below are the major Hyperion FDM processes to consider when
designing your system architecture.
Types of System Processes in Hyperion FDM:
Database server intensive tasks: deleting previously imported Hyperion FDM data prior to a load,
executing bulk inserts during a load, running mapping calculations after a load, and report querying.
Application server intensive tasks: parsing and reading source files during the load (import text
process) and serving Hyperion FDM reports.
Hyperion FDM web server intensive tasks: serving web pages.
Test Lab Environment
The test environment architecture consists of a single web server, three application servers, and one data server.
All servers are dual processor machines with a minimum of 2 GB of RAM.


Server Hardware
Note: All servers were Dell PowerEdge 2650 machines.

Web Server (DellTest2)
OS: Windows Server 2003
Processor: Dual 2.8 GHz Intel Xeon
Memory: 2 GB RAM
HDD: 2 x 30 GB SCSI

Application Server (DellTest1)
OS: Windows Server 2003
Processor: Dual 2.8 GHz Intel Xeon
Memory: 2 GB RAM
HDD: 2 x 30 GB SCSI

Application Server (DellTest3)
OS: Windows Server 2003
Processor: Dual 2.8 GHz Intel Xeon
Memory: 2 GB RAM
HDD: 2 x 30 GB SCSI

Application Server (DellTest4)
OS: Windows Server 2003
Processor: Dual 2.8 GHz Intel Xeon
Memory: 2 GB RAM
HDD: 2 x 30 GB SCSI

Data Server (DellTest5)
OS: Windows Server 2003
Processor: Dual 3.2 GHz Intel Xeon
Memory: 4 GB RAM
HDD: 5 x 30 GB SCSI
Database: SQL Server 2000, Oracle 9i

Data Server (DellTest6)
OS: Windows Server 2003
Processor: Dual 3.2 GHz Intel Xeon
Memory: 4 GB RAM
HDD: 5 x 30 GB SCSI
Database: SQL Server 2005, Oracle 10g
Data Viewing Concurrency Test (Test #1)
Methodology
1. This testing method focuses on web server processing for an increasing number of concurrent users.
2. The test demonstrates the Web Server's ability to interpret and send a high volume of requests and the
Application and Data Servers' ability to respond to these requests.
3. The browser requests for this test focus on data viewing/retrieval, not on data updating/inserting.
4. The Hyperion FDM Concurrency test comprises the following operations:
a. Log into application
b. Select Maps link
c. Remain logged onto the application
Results
Test #1, Concurrent Data Viewing: open 500 data-consumer connections simultaneously.
This test produced 500 connections during the test run. The first login occurred at 4:21:20 PM and the last
login occurred at 4:25:08 PM. The entire test took 5 minutes and 52 seconds to complete.
[Chart: Web Server Connection Results - Number of Users Connected over Time. X-axis: Time Interval (sec); Y-axis: Number of Users.]
Interpreting the Results
Although the test is meant to represent 500 concurrent users, not all of the users begin the login
process at the same instant. A random delay of approximately 0.5 to 2 seconds was inserted between requests
sent to the Hyperion FDM application, to more accurately simulate real-world concurrency. The
single web server was the bottleneck in this testing scenario: the web server processor averaged 90%
utilization during the test runs. Adding another web server would have improved overall performance,
as would deploying a more powerful web server.
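For readers who want to reproduce a similar ramp-up without Microsoft Application Center Test, below is a minimal Python sketch of the same idea: simulated users start with a random 0.5 to 2 second stagger, request a page, and hold their session. The URL is a placeholder, not the actual test harness used here.

```python
import random
import threading
import time
import urllib.request

APP_URL = "http://fdmweb/HyperionFDM/LogonPage.aspx"  # hypothetical application URL
NUM_USERS = 500

def simulate_user(user_id: int) -> None:
    """Issue one data-viewing request, standing in for login + Select Maps."""
    try:
        with urllib.request.urlopen(APP_URL, timeout=30) as resp:
            print(f"user {user_id}: HTTP {resp.status}")
    except OSError as exc:
        print(f"user {user_id}: failed ({exc})")

threads = []
for i in range(NUM_USERS):
    t = threading.Thread(target=simulate_user, args=(i,), daemon=True)
    t.start()
    threads.append(t)
    time.sleep(random.uniform(0.5, 2.0))  # randomized ramp-up, as in the test

for t in threads:
    t.join()
```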

Microsoft Application Center Test Results
Test Name: Web Server Concurrency Test Run
Test Started: 3/9/2006 4:21:14 PM
Test Duration: 00:00:05:52
Test Iterations: 500
Properties
Simultaneous browser connections: 500
Test duration: 00:00:05:52
Summary
Total number of requests: 42,500
Total number of connections: 42,500
Average requests per second: 120.74
Average time to first byte (msecs): 1,110.59
Average time to last byte (msecs): 1,111.21
Average time to last byte per iteration (msecs): 94,452.44
Error Counts
HTTP: 0
DNS: 0
Socket: 0
Additional Network Statistics
Average bandwidth (bytes/sec): 879,063.63
Number of bytes sent (bytes): 21,398,412
Number of bytes received (bytes): 288,031,986
Average rate of sent bytes (bytes/sec): 60,790.94
Average rate of received bytes (bytes/sec): 818,272.69
Data Loading by Database Type Test (Test #2)
Test Methodology
1. This testing method focuses on application and data server update/insert processing as the size of
the loaded file increases.
2. This test demonstrates the performance of each supported database as file sizes increase.
3. The Hyperion FDM Import Workfow task comprises the following operations:
a. Login to application
b. Delete existing data records
c. Delete existing data archive records and fles
d. Read and parse the specified import text file
e. Write the cleansed import file to the database
f. Add the new file to the Hyperion FDM data archive
g. Execute Bulk Load
h. Execute Hyperion FDM Mapping Rules
i. Update Workfow Status
j. Logoff application
4. This test process is repeated using SQL 2000, SQL 2005, Oracle 9i, and Oracle 10g.
Test Results
Data Loading by Database Type: files ranging from 5,000 records to 2 million records were loaded into each
of the supported databases. Load times for the 1 million record loads were as follows: SQL 2005, 6.18 minutes;
SQL 2000, 6.90 minutes; Oracle 10g, 11.15 minutes; Oracle 9i, 13.43 minutes.
Data Load Duration Results (seconds)
Records | SQL 2005 | SQL 2000 | Oracle 10g | Oracle 9i
200K | 74 | 88 | 123 | 127
500K | 174 | 209 | 299 | 343
1000K | 371 | 414 | 669 | 806
2000K | 808 | 909 | 1693 | 3122
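As a quick sanity check on the table above, the durations convert directly into throughput. A small sketch using the 1,000,000-record row (the figures are taken from the table; only the arithmetic is new):

```python
# Load durations in seconds for the 1,000,000-record file, from the table above.
durations = {"SQL 2005": 371, "SQL 2000": 414, "Oracle 10g": 669, "Oracle 9i": 806}

for db, secs in durations.items():
    # e.g. SQL 2005: 371 s is about 6.18 min and roughly 2,700 records/sec
    print(f"{db}: {secs / 60:.2f} min, {1_000_000 / secs:,.0f} records/sec")
```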
The first graph below shows database import times as file sizes grow. For file sizes of 200,000 records
and above, both SQL Server databases perform significantly better than the Oracle databases, and the
performance differences become more evident as the file sizes grow. For file sizes below 200,000 records,
the performance differences between the database types were minimal.
The second graph depicts the breakdown of the individual steps of the import process. Although most of the
individual steps have comparable times for Oracle and SQL Server, Oracle mapping takes significantly longer
than the same mapping process for SQL Server. This is primarily because Oracle does not support updates on
joined tables; the mapping must instead be performed as cursor updates, which take significantly longer.

[Chart: Import Process Breakdown for One Million Records (seconds) - import duration by step (Delete Data, Import, File Archive, Create WT Index, Map, Post WorkData, Drop Work Table) for SQL 2000, SQL 2005, Oracle 9i, and Oracle 10g.]
Interpreting the Results
SQL Server databases run significantly faster than Oracle for high-volume files. Hyperion FDM has been tuned
to optimize data-loading performance; for Oracle, this results in data scanning during loads. Performance
testing established this as the optimal method for Oracle, because indexing the required tables decreased
performance, especially as the Oracle database grows in size.

Data Load Performance over Time (Test #3)
Test Methodology
1. This testing method focuses on overall processing time and database size for a typical high-volume
month-end scenario with a small number of users loading a high volume of data.
2. The test was run 12 times (simulating 12 months) to show the database sizing as well as any possible
performance degradation as database size increases.
3. The Hyperion FDM Import Workfow task comprises the following operations:
a. Log into application
b. Read and parse the specified import text file
c. Write the cleansed import file to the database
d. Add the new file to the Hyperion FDM data archive
e. Execute Bulk Load
f. Execute Hyperion FDM Mapping Rules
g. Update Workfow Status
h. Logoff application
4. The test was run once for SQL 2005 and once for Oracle 10g.
Test Results
Data Load Performance over Time: one million records were loaded each month for 12 months. This test was
run for both SQL 2005 and Oracle 10g. To enhance performance, separate hard drives on the database server
were used to store the mdf/log files for SQL Server and the tablespaces for Oracle.
Data Load Duration & Sizing Results
Month | Archive (GB) | Logs (GB) | SQL 2005 DB (GB) | Oracle 10g DB (GB) | SQL 2005 Load (hh:mm:ss) | Oracle 10g Load (hh:mm:ss)
Month1 | .11 | .02 | .59 | .30 | 00:05:56 | 00:08:21
Month2 | .23 | .05 | 1.16 | .60 | 00:05:49 | 00:08:48
Month3 | .34 | .07 | 1.74 | .90 | 00:05:57 | 00:09:40
Month4 | .46 | .10 | 2.11 | 1.19 | 00:05:53 | 00:08:46
Month5 | .57 | .12 | 2.49 | 1.49 | 00:05:51 | 00:09:11
Month6 | .69 | .14 | 2.46 | 1.79 | 00:05:48 | 00:09:07
Month7 | .80 | .17 | 2.87 | 2.09 | 00:05:53 | 00:10:52
Month8 | .92 | .19 | 3.63 | 2.39 | 00:05:57 | 00:11:58
Month9 | 1.03 | .21 | 3.54 | 2.68 | 00:05:47 | 00:12:08
Month10 | 1.14 | .24 | 3.91 | 2.98 | 00:05:52 | 00:11:39
Month11 | 1.26 | .26 | 4.28 | 3.28 | 00:06:29 | 00:13:02
Month12 | 1.37 | .29 | 4.66 | 3.58 | 00:06:37 | 00:12:30
(Archive and Logs columns are the file structure sizes.)
[Chart: Import Process Breakdown for One Million Records (SQL 2005) - time in seconds per month (Mth1-Mth12) for each import step: Import: Text File; Import: Create Work Table Indexes; Import: Process Map; Import: Delete; Import: Post Work Data; Check Trial Balance; Import: Drop Work Table.]

[Chart: Import Process Breakdown for One Million Records (Oracle 10g) - time in seconds per month (Mth1-Mth12) for the same import steps.]

Interpreting the Results
Time to complete the load process averaged 5 minutes 59 seconds for SQL Server and 10 minutes 30 seconds
for Oracle. The load time for SQL Server increased by a total of 41 seconds over the 12 months; the load time
for Oracle increased by a total of 4 minutes 9 seconds. This shows that SQL Server's performance remained
more stable as more data was stored in the Hyperion FDM application.
Database sizing estimates calculated from the test results: average total disk space used was .526 GB
per month for SQL Server and .436 GB per month for Oracle (file size loaded: 95 MB). This shows that, over
time, Oracle required less total disk space.
Dimensions used during the testing were Entity, Account, ICP, and CUSTOM1 through CUSTOM3. On average,
one additional dimension added .025 GB to the growth rate per month.
Note: Average disk size recommendations should be adjusted based on the estimated average file size to be
loaded, the number of custom dimensions in use, and the number of attribute dimensions in use.
Data Loading Concurrency Test (Test #4)
Test Methodology
1. This testing method focuses on application and data server update/insert processing for an increasing
number of concurrent users.
2. This test demonstrates the product's scalability from a data architecture perspective by executing an
increasing number of simultaneous Import workflow tasks.
increasing number of simultaneous Import workfow tasks.
3. The Hyperion FDM Import Workfow task comprises the following operations:
a. Login to application
b. Delete existing data records
c. Delete existing data archive records and fles
d. Read and parse the specified import text file
e. Write the cleansed import file to the database
f. Add the new file to the Hyperion FDM data archive
g. Execute Bulk Load
h. Execute Hyperion FDM Mapping Rules
i. Update Workfow Status
j. Logoff application
Test Results
Data Load Concurrency Test: concurrent data loading (100, 200, and 300 users, each loading 6,000 records).
Using three application servers, the load process completed in 1:40 minutes for 100 users, 3:16 minutes for
200 users, and 4:43 minutes for 300 users.
The files loaded during this test run were 600 KB. The load requests were sent with an average delay
of 2 seconds between users to more realistically represent a normal monthly process.
The graph below shows the average duration of the main import process tasks. Running the test multiple
times for an increasing number of users reveals a linear increase in duration for the import text task:
the time to import the text into the Hyperion FDM database grows as more users simultaneously
attempt to write records to the database.

[Chart: Average Import Process Duration (by Type) - time in seconds for 100, 200, and 300 concurrent loading users, broken down by task: Delete, Import Text, Create WT Index, Map, Post WD, Drop WT, and Total Import.]
Interpreting the Results
During this test, the database server was the bottleneck. As the chart above shows, as more users were
added, the database-intensive items (mapping, deleting, and indexing) increased linearly. This is due to the
database server being overwhelmed with requests to write to the database, and it shows the necessity of
ensuring that the database server is properly sized. Adding database processors in this scenario would not
necessarily improve performance as much as using faster hard drives for the database server.
This test was performed on SQL 2005 with the database log and the indexing/work tables each set up on a
separate hard disk drive. In this instance, the recommended database server configuration would be to
spread the database tasks (data storage, logging, and indexing) over multiple data server hard disk drives
to improve overall database server performance.
System Recommendations
Database Server
Recommended Hardware
Quad P4 processor
1 GB RAM per 75 concurrent users (2 GB Minimum)
Multiple HDDs to spread the processing (especially important with Oracle).
Selecting the appropriate data server hardware is critical for optimal performance of the Hyperion
FDM application. As the Concurrent Data Loading Test shows, the database server can become a
bottleneck if it is not properly sized. Even though you can always add more application servers and
improve performance for selected tasks, doing so can quickly overwhelm the database server.
Database Sizing
Average total disk space per location = .526 GB per month of storage (based on an average file size of
95 MB and the use of 3 custom dimensions). Each additional dimension increases this by .025 GB on
average. Note: Average disk size recommendations should be adjusted based on the estimated average
file size to be loaded, the number of custom dimensions in use, and the number of attribute dimensions
in use. A sizing sketch follows below.
The database file and log file should be on separate physical drives for SQL Server. For Oracle, each
tablespace (Work, Index, Map Seg, and Data Seg) should be on a separate drive.
For improved performance, always use the fastest HDDs available.
Document attachments will affect total disk space. All attached documents are archived in the
application file structure and occupy the same amount of disk space as the original file.
Exported archive files will also affect total disk space. All exported archives reside in the Archive
Restore directory and occupy the same amount of disk space as the original archive.
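A minimal sketch of the sizing arithmetic described above (the .526 GB/month baseline plus .025 GB per custom dimension beyond the three used in testing). The linear rescaling by average file size is an assumption for planning purposes; treat the output as a starting estimate, not a guarantee.

```python
def estimate_monthly_growth_gb(custom_dimensions: int = 3,
                               avg_file_size_mb: float = 95.0) -> float:
    """Estimate disk growth per Hyperion FDM location per month of storage.

    Based on the test results above: .526 GB/month with a 95 MB source file
    and 3 custom dimensions; each additional dimension adds ~.025 GB/month.
    Scaling linearly with file size is an assumption, not a tested result.
    """
    baseline_gb = 0.526
    extra_dims = max(0, custom_dimensions - 3)
    scaled = baseline_gb * (avg_file_size_mb / 95.0)
    return scaled + 0.025 * extra_dims

# Example: 12 months of history, 5 custom dimensions, 120 MB average files.
print(f"{12 * estimate_monthly_growth_gb(5, 120.0):.1f} GB")
```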
Application Server
Recommended Hardware
Dual P4 processor per 75 concurrent users
1 GB RAM per 75 concurrent users
Web Server
Recommended Hardware
Dual P4 processor per 100 concurrent users
2 GB RAM
.NET Process Configuration
MaxWorkerThreads should be increased to 100 in the Machine.Config file
MaxIOThreads should be increased to 100 in the Machine.Config file
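These settings live on the processModel element of Machine.Config. Below is a minimal sketch that applies them with Python's standard xml.etree module; the framework path is an assumption for a default .NET Framework 1.1 install (adjust for your version), back up the file first, and note that rewriting with ElementTree discards comments.

```python
import xml.etree.ElementTree as ET

# Assumed default location; adjust for your installed .NET Framework version.
CONFIG = r"C:\Windows\Microsoft.NET\Framework\v1.1.4322\CONFIG\machine.config"

tree = ET.parse(CONFIG)
process_model = tree.getroot().find("./system.web/processModel")
if process_model is not None:
    process_model.set("maxWorkerThreads", "100")  # worker thread pool ceiling
    process_model.set("maxIoThreads", "100")      # I/O thread pool ceiling
    tree.write(CONFIG)
```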
Memory Recycling
By default, UpStream WebLink version 8.0.6 and later (including Hyperion FDM 9.0.2 and
Hyperion FDM 9.2.0) use IIS memory recycling to force the Microsoft .NET process to recycle memory
when the process reaches 250 MB (Windows 2003) or 25% of web server memory (Windows 2000).
These values are based on a web server with 2 GB of RAM. For servers with 4 GB of RAM,
these settings should be adjusted to 150-200 MB (Windows 2003) and 15% of memory (Windows
2000). Adjusting the default memory limit is extremely important when your server has 4 GB or more
of RAM.
By default, UpStream WebLink version 8.0.6 and later (including Hyperion FDM 9.0.2 and
Hyperion FDM 9.2.0) use the aspnet_state service to store session values while the IIS memory
recycling process runs. This service must be running on the web server for Hyperion FDM to operate
correctly.
Please refer to the Hardware and Software Requirements document for detailed information on hardware
and software.
Application Settings
Data Segments
Prior to creating the Hyperion FDM locations, you can update the number of Hyperion FDM data segments
in the Hyperion FDM Configuration options. This value represents the number of data tables within the
Hyperion FDM application; these data tables are shared by the data load locations. To avoid data locking,
allow one segment for every 5-8 concurrent data loaders (e.g., 600 concurrent data loaders in one
Hyperion FDM application would call for 75-120 data segments, as the sketch below illustrates).
Note: This setting applies for each Hyperion FDM application.
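The segment count follows directly from the expected loader concurrency; a small sketch of that arithmetic:

```python
import math

def data_segments(concurrent_loaders: int) -> range:
    """Recommended data-segment range: one segment per 5-8 concurrent loaders."""
    low = math.ceil(concurrent_loaders / 8)   # 8 loaders/segment -> fewest segments
    high = math.ceil(concurrent_loaders / 5)  # 5 loaders/segment -> most segments
    return range(low, high + 1)

seg = data_segments(600)
print(f"{seg.start}-{seg.stop - 1} segments")  # 75-120, matching the example above
```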
File Sharing Ports
A working folder is defined for each application created in Hyperion FDM. All Hyperion FDM application
servers and the data server share this folder, so file sharing must be enabled between all Hyperion FDM
application servers and the data server. The required ports are listed below. The account under which the
data server service runs must be able to read from this share; in the case of SQL Server, the account
running the MSSQLSERVER service must be able to read from this share.
For file sharing, enable the following ports:
Application Protocol | Protocol | Port
NetBIOS Datagram Service | UDP | 138
NetBIOS Name Resolution | UDP | 137
NetBIOS Session Service | TCP | 139
If NetBIOS is turned off, use:
Application Protocol | Protocol | Port
SMB | TCP | 445
Note: The file sharing ports need to be opened on all the application servers and the data server. These
ports do not have to be opened on the web server.
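A quick way to confirm the share ports are reachable from an application server is a plain TCP probe; a minimal sketch follows (the host name is a placeholder, and note that UDP 137/138 cannot be verified with a TCP connect):

```python
import socket

DATA_SERVER = "delltest5"  # placeholder host name for the data server

for port in (139, 445):  # NetBIOS Session Service and SMB
    try:
        with socket.create_connection((DATA_SERVER, port), timeout=3):
            print(f"TCP {port}: open")
    except OSError:
        print(f"TCP {port}: blocked or closed")
```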

Firewall / DCOM Settings
Port 135 must be open on all application servers and web servers to allow two-way DCOM
communication.
Unlike most Internet applications, which have fixed TCP and/or UDP ports, DCOM dynamically
assigns, at run time, one TCP port and one UDP port to each executable process serving DCOM
objects on a computer. Because DCOM (by default) is free to use any port between 1024 and 65535
when it dynamically selects a port for an application, configuring your firewall to leave such a wide
range of ports open would present a potential security hole. The open DCOM port range may be restricted
by changing the following registry key:
HKEY_LOCAL_MACHINE\Software\Microsoft\Rpc\Internet
Details on restricting DCOM ports can be found in the Microsoft knowledge base article:
http://support.microsoft.com/kb/154596
Details on other issues that may arise when going through a firewall can be found in the Microsoft knowledge
base article:
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dndcom/html/msdn_dcomfirewall.asp
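Per the Microsoft article cited above, the restriction is applied by adding values under the Rpc\Internet key. A minimal sketch using Python's winreg module (run as Administrator; the 5000-5100 range is only an example, so confirm it suits your environment before applying):

```python
import winreg

# Values documented in Microsoft KB 154596; "5000-5100" is an example range.
key = winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, r"Software\Microsoft\Rpc\Internet")
winreg.SetValueEx(key, "Ports", 0, winreg.REG_MULTI_SZ, ["5000-5100"])
winreg.SetValueEx(key, "PortsInternetAvailable", 0, winreg.REG_SZ, "Y")
winreg.SetValueEx(key, "UseInternetPorts", 0, winreg.REG_SZ, "Y")
winreg.CloseKey(key)
```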
TCP Keep Alive Settings
The KeepAlive setting determines how often TCP sends keep-alive transmissions, which verify that
an idle connection is still active. Hyperion recommends that this setting be reduced to one-half hour.
Details on updating the KeepAlive settings can be found in the Microsoft knowledge base article:
http://www.microsoft.com/resources/documentation/Windows/2000/server/reskit/en-us/Default.asp?url=/resources/documentation/Windows/2000/server/reskit/en-us/regentry/58768.asp
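KeepAliveTime is a REG_DWORD in milliseconds under the Tcpip\Parameters key (the Windows default is 7,200,000 ms, or two hours). A minimal sketch that sets it to half an hour; run as Administrator, and note a reboot is required for the change to take effect:

```python
import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\Services\Tcpip\Parameters"
HALF_HOUR_MS = 30 * 60 * 1000  # 1,800,000 ms

key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0, winreg.KEY_SET_VALUE)
winreg.SetValueEx(key, "KeepAliveTime", 0, winreg.REG_DWORD, HALF_HOUR_MS)
winreg.CloseKey(key)
```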


Oracle Initialization Parameters
Oracle 9i
Below are the Oracle 9i initialization parameters used during the tests in this document.
NAME | VALUE | DESCRIPTION
O7_DICTIONARY_ACCESSIBILITY | FALSE | Version 7 Dictionary Accessibility Support
active_instance_count | | Number of active instances in the cluster database
aq_tm_processes | 1 | Number of AQ Time Managers to start
archive_lag_target | 0 | Maximum number of seconds of redo the standby could lose
audit_sys_operations | FALSE | Enable sys auditing
audit_trail | NONE | Enable system auditing
background_core_dump | partial | Core Size for Background Processes
background_dump_dest | E:\oracle\admin\StressDM\bdump | Detached process dump directory
backup_tape_io_slaves | FALSE | BACKUP Tape I/O slaves
bitmap_merge_area_size | 1048576 | Maximum memory allowed for BITMAP MERGE
blank_trimming | FALSE | Blank trimming semantics parameter
buffer_pool_keep | | Number of database blocks/latches in keep buffer pool
buffer_pool_recycle | | Number of database blocks/latches in recycle buffer pool
circuits | 555 | Max number of circuits
cluster_database | FALSE | If TRUE startup in cluster database mode
cluster_database_instances | 1 | Number of instances to use for sizing cluster db SGA structures
cluster_interconnects | | Interconnects for RAC use
commit_point_strength | 1 | Bias this node has toward not preparing in a two-phase commit
compatible | 9.2.0.0.0 | Database will be completely compatible with this software version
control_file_record_keep_time | 7 | Control file record keep time in days
control_files | E:\oracle\oradata\StressDM\CONTROL01.CTL, E:\oracle\oradata\StressDM\CONTROL02.CTL, E:\oracle\oradata\StressDM\CONTROL03.CTL | Control file names list
core_dump_dest | E:\oracle\admin\StressDM\cdump | Core dump directory
cpu_count | 4 | Initial number of CPUs for this instance
create_bitmap_area_size | 8388608 | Size of create bitmap buffer for bitmap index
cursor_sharing | SIMILAR | Cursor sharing mode
cursor_space_for_time | FALSE | Use more memory in order to get faster execution
db_16k_cache_size | 0 | Size of cache for 16K buffers
db_2k_cache_size | 0 | Size of cache for 2K buffers
db_32k_cache_size | 0 | Size of cache for 32K buffers
db_4k_cache_size | 0 | Size of cache for 4K buffers
db_8k_cache_size | 0 | Size of cache for 8K buffers
db_block_buffers | 0 | Number of database blocks cached in memory
db_block_checking | FALSE | Data and index block checking
db_block_checksum | TRUE | Store checksum in db blocks and check during reads
db_block_size | 8192 | Size of database block in bytes
db_cache_advice | ON | Buffer cache sizing advisory
db_cache_size | 838860800 | Size of DEFAULT buffer pool for standard block size buffers
db_create_file_dest | | Default database location
db_create_online_log_dest_1 | | Online log/controlfile destination #1
db_create_online_log_dest_2 | | Online log/controlfile destination #2
db_create_online_log_dest_3 | | Online log/controlfile destination #3
db_create_online_log_dest_4 | | Online log/controlfile destination #4
db_create_online_log_dest_5 | | Online log/controlfile destination #5
db_domain | | Directory part of global database name stored with CREATE DATABASE
db_file_multiblock_read_count | 8 | DB blocks to be read each IO
db_file_name_convert | | Datafile name convert patterns and strings for standby/clone db
db_files | 200 | Max allowable # db files
db_keep_cache_size | 0 | Size of KEEP buffer pool for standard block size buffers
db_name | StressDM | Database name specified in CREATE DATABASE
db_recycle_cache_size | 0 | Size of RECYCLE buffer pool for standard block size buffers
db_writer_processes | 2 | Number of background database writer processes to start
dblink_encrypt_login | FALSE | Enforce password for distributed login always be encrypted
dbwr_io_slaves | 0 | DBWR I/O slaves
dg_broker_config_file1 | %ORACLE_HOME%\DATABASE\DR1%ORACLE_SID%.DAT | Data Guard broker configuration file #1
dg_broker_config_file2 | %ORACLE_HOME%\DATABASE\DR2%ORACLE_SID%.DAT | Data Guard broker configuration file #2
dg_broker_start | FALSE | Start Data Guard broker framework (DMON process)
disk_asynch_io | TRUE | Use asynch I/O for random access devices
dispatchers | (PROTOCOL=TCP)(SERVICE=StressDMOracleXDB) | Specifications of dispatchers
distributed_lock_timeout | 60 | Number of seconds a distributed transaction waits for a lock
dml_locks | 2440 | DML locks - one for each table modified in a transaction
drs_start | FALSE | Start DG Broker monitor (DMON process)
enqueue_resources | 2660 | Resources for enqueues
event | | Debug event control - default null string
fal_client | | FAL client
fal_server | | FAL server list
fast_start_io_target | 0 | Upper bound on recovery reads
fast_start_mttr_target | 300 | MTTR target of forward crash recovery in seconds
fast_start_parallel_rollback | LOW | Max number of parallel recovery slaves that may be used
file_mapping | FALSE | Enable file mapping
filesystemio_options | | IO operations on filesystem files
fixed_date | | Fixed SYSDATE value
gc_files_to_locks | | Mapping between file numbers and global cache locks (DFS)
global_context_pool_size | | Global Application Context Pool Size in Bytes
global_names | FALSE | Enforce that database links have same name as remote database
hash_area_size | 1048576 | Size of in-memory hash work area
hash_join_enabled | TRUE | Enable/disable hash join
hi_shared_memory_address | 0 | SGA starting address (high order 32-bits on 64-bit platforms)
hs_autoregister | TRUE | Enable automatic server DD updates in HS agent self-registration
ifile | | Include file in init.ora
instance_groups | | List of instance group names
instance_name | StressDMOracle | Instance name supported by the instance
instance_number | 0 | Instance number
java_max_sessionspace_size | 0 | Max allowed size in bytes of a Java sessionspace
java_pool_size | 8388608 | Size in bytes of the Java pool
java_soft_sessionspace_limit | 0 | Warning limit on size in bytes of a Java sessionspace
job_queue_processes | 10 | Number of job queue slave processes
large_pool_size | 8388608 | Size in bytes of the large allocation pool
license_max_sessions | 0 | Maximum number of non-system user sessions allowed
license_max_users | 0 | Maximum number of named users that can be created in the database
license_sessions_warning | 0 | Warning level for number of non-system user sessions
local_listener | | Local listener
lock_name_space | | Lock name space used for generating lock names for standby/clone
lock_sga | FALSE | Lock entire SGA in physical memory
log_archive_dest | | Archival destination text string
log_archive_dest_1 | | Archival destination #1 text string
log_archive_dest_10 | | Archival destination #10 text string
log_archive_dest_2 | | Archival destination #2 text string
log_archive_dest_3 | | Archival destination #3 text string
log_archive_dest_4 | | Archival destination #4 text string
log_archive_dest_5 | | Archival destination #5 text string
log_archive_dest_6 | | Archival destination #6 text string
log_archive_dest_7 | | Archival destination #7 text string
log_archive_dest_8 | | Archival destination #8 text string
log_archive_dest_9 | | Archival destination #9 text string
log_archive_dest_state_1 | enable | Archival destination #1 state text string
log_archive_dest_state_10 | enable | Archival destination #10 state text string
log_archive_dest_state_2 | enable | Archival destination #2 state text string
log_archive_dest_state_3 | enable | Archival destination #3 state text string
log_archive_dest_state_4 | enable | Archival destination #4 state text string
log_archive_dest_state_5 | enable | Archival destination #5 state text string
log_archive_dest_state_6 | enable | Archival destination #6 state text string
log_archive_dest_state_7 | enable | Archival destination #7 state text string
log_archive_dest_state_8 | enable | Archival destination #8 state text string
log_archive_dest_state_9 | enable | Archival destination #9 state text string
log_archive_duplex_dest | | Duplex archival destination text string
log_archive_format | ARC%S.%T | Archival destination format
log_archive_max_processes | 2 | Maximum number of active ARCH processes
log_archive_min_succeed_dest | 1 | Minimum number of archive destinations that must succeed
log_archive_start | FALSE | Start archival process on SGA initialization
log_archive_trace | 0 | Establish archivelog operation tracing level
log_buffer | 7024640 | Redo circular buffer size
log_checkpoint_interval | 0 | # redo blocks checkpoint threshold
log_checkpoint_timeout | 1800 | Maximum time interval between checkpoints in seconds
log_checkpoints_to_alert | FALSE | Log checkpoint begin/end to alert file
log_file_name_convert | | Logfile name convert patterns and strings for standby/clone db
log_parallelism | 1 | Number of log buffer strands
logmnr_max_persistent_sessions | 1 | Maximum number of threads to mine
max_commit_propagation_delay | 700 | Max age of new snapshot in .01 seconds
max_dispatchers | 5 | Max number of dispatchers
max_dump_file_size | UNLIMITED | Maximum size (blocks) of dump file
max_enabled_roles | 30 | Max number of roles a user can have enabled
max_rollback_segments | 122 | Max. number of rollback segments in SGA cache
max_shared_servers | 20 | Max number of shared servers
mts_circuits | 555 | Max number of circuits
mts_dispatchers | (PROTOCOL=TCP)(SERVICE=StressDMOracleXDB) | Specifications of dispatchers
mts_listener_address | | Address(es) of network listener
mts_max_dispatchers | 5 | Max number of dispatchers
mts_max_servers | 20 | Max number of shared servers
mts_multiple_listeners | FALSE | Are multiple listeners enabled?
mts_servers | 1 | Number of shared servers to start up
mts_service | StressDM | Service supported by dispatchers
mts_sessions | 550 | Max number of shared server sessions
object_cache_max_size_percent | 10 | Percentage of maximum size over optimal of the user sessions object cache
object_cache_optimal_size | 102400 | Optimal size of the user sessions object cache in bytes
olap_page_pool_size | 33554432 | Size of the olap page pool in bytes
open_cursors | 500 | Max # cursors per session
open_links | 4 | Max # open links per session
open_links_per_instance | 4 | Max # open links per instance
optimizer_dynamic_sampling | 1 | Optimizer dynamic sampling
optimizer_features_enable | 9.2.0 | Optimizer plan compatibility parameter
optimizer_index_caching | 0 | Optimizer percent index caching
optimizer_index_cost_adj | 100 | Optimizer index cost adjustment
optimizer_max_permutations | 2000 | Optimizer maximum join permutations per query block
optimizer_mode | CHOOSE | Optimizer mode
oracle_trace_collection_name | | Oracle TRACE default collection name
oracle_trace_collection_path | %ORACLE_HOME%\OTRACE\ADMIN\CDF\ | Oracle TRACE collection path
oracle_trace_collection_size | 5242880 | Oracle TRACE collection file max. size
oracle_trace_enable | FALSE | Oracle Trace enabled/disabled
oracle_trace_facility_name | oracled | Oracle TRACE default facility name
oracle_trace_facility_path | %ORACLE_HOME%\OTRACE\ADMIN\FDF\ | Oracle TRACE facility path
os_authent_prefix | OPS$ | Prefix for auto-logon accounts
os_roles | FALSE | Retrieve roles from the operating system
parallel_adaptive_multi_user | TRUE | Enable adaptive setting of degree for multiple user streams
parallel_automatic_tuning | TRUE | Enable intelligent defaults for parallel execution parameters
parallel_execution_message_size | 4096 | Message buffer size for parallel execution
parallel_instance_group | | Instance group to use for all parallel operations
parallel_max_servers | 40 | Maximum parallel query servers per instance
parallel_min_percent | 0 | Minimum percent of threads required for parallel query
parallel_min_servers | 0 | Minimum parallel query servers per instance
parallel_server | FALSE | If TRUE startup in parallel server mode
parallel_server_instances | 1 | Number of instances to use for sizing OPS SGA structures
parallel_threads_per_cpu | 2 | Number of parallel execution threads per CPU
partition_view_enabled | FALSE | Enable/disable partitioned views
pga_aggregate_target | 1078984704 | Target size for the aggregate PGA memory consumed by the instance
plsql_compiler_flags | INTERPRETED | PL/SQL compiler flags
plsql_native_c_compiler | | PL/SQL native C compiler
plsql_native_library_dir | | PL/SQL native library dir
plsql_native_library_subdir_count | 0 | PL/SQL native library number of subdirectories
plsql_native_linker | | PL/SQL native linker
plsql_native_make_file_name | | PL/SQL native compilation make file
plsql_native_make_utility | | PL/SQL native compilation make utility
plsql_v2_compatibility | FALSE | PL/SQL version 2.x compatibility flag
pre_page_sga | FALSE | Pre-page SGA for process
processes | 500 | User processes
query_rewrite_enabled | FALSE | Allow rewrite of queries using materialized views if enabled
query_rewrite_integrity | ENFORCED | Perform rewrite using materialized views with desired integrity
rdbms_server_dn | | RDBMS's Distinguished Name
read_only_open_delayed | FALSE | If TRUE delay opening of read only files until first access
recovery_parallelism | 0 | Number of server processes to use for parallel recovery
remote_archive_enable | true | Remote archival enable setting
remote_dependencies_mode | TIMESTAMP | Remote-procedure-call dependencies mode parameter
remote_listener | | Remote listener
remote_login_passwordfile | EXCLUSIVE | Password file usage parameter
remote_os_authent | FALSE | Allow non-secure remote clients to use auto-logon accounts
remote_os_roles | FALSE | Allow non-secure remote clients to use os roles
replication_dependency_tracking | TRUE | Tracking dependency for Replication parallel propagation
resource_limit | FALSE | Master switch for resource limit
resource_manager_plan | | Resource mgr top plan
rollback_segments | | Undo segment list
row_locking | always | Row-locking
serial_reuse | DISABLE | Reuse the frame segments
serializable | FALSE | Serializable
service_names | StressDM | Service names supported by the instance
session_cached_cursors | 0 | Number of cursors to save in the session cursor cache
session_max_open_files | 10 | Maximum number of open files allowed per session
sessions | 555 | User and system sessions
sga_max_size | 1114854220 | Max total SGA size
shadow_core_dump | partial | Core Size for Shadow Processes
shared_memory_address | 0 | SGA starting address (low order 32-bits on 64-bit platforms)
shared_pool_reserved_size | 5452595 | Size in bytes of reserved area of shared pool
shared_pool_size | 109051904 | Size in bytes of shared pool
shared_server_sessions | 550 | Max number of shared server sessions
shared_servers | 1 | Number of shared servers to start up
sort_area_retained_size | 0 | Size of in-memory sort work area retained between fetch calls
sort_area_size | 524288 | Size of in-memory sort work area
spfile | %ORACLE_HOME%\DATABASE\SPFILE%ORACLE_SID%.ORA | Server parameter file
sql92_security | FALSE | Require select privilege for searched update/delete
sql_trace | FALSE | Enable SQL trace
sql_version | NATIVE | SQL language version parameter for compatibility issues
standby_archive_dest | %ORACLE_HOME%\RDBMS | Standby database archivelog destination text string
standby_file_management | MANUAL | If auto then files are created/dropped automatically on standby
star_transformation_enabled | FALSE | Enable the use of star transformation
statistics_level | TYPICAL | Statistics level
tape_asynch_io | TRUE | Use asynch I/O requests for tape devices
thread | 0 | Redo thread to mount
timed_os_statistics | 0 | Internal OS statistic gathering interval in seconds
timed_statistics | TRUE | Maintain internal timing statistics
trace_enabled | FALSE | Enable KST tracing
tracefile_identifier | | Trace file custom identifier
transaction_auditing | TRUE | Transaction auditing records generated in the redo log
transactions | 610 | Max. number of concurrent active transactions
transactions_per_rollback_segment | 5 | Number of active transactions per rollback segment
undo_management | AUTO | Instance runs in SMU mode if TRUE, else in RBU mode
undo_retention | 10800 | Undo retention in seconds
undo_suppress_errors | FALSE | Suppress RBU errors in SMU mode
undo_tablespace | UNDOTBS1 | Use/switch undo tablespace
use_indirect_data_buffers | FALSE | Enable indirect data buffers (very large SGA on 32-bit platforms)
user_dump_dest | E:\oracle\admin\StressDM\udump | User process dump directory
utl_file_dir | | utl_file accessible directories list
workarea_size_policy | AUTO | Policy used to size SQL working areas (MANUAL/AUTO)
Oracle 10g
Below are the Oracle 10g initialization parameters used during the tests in this document.
NAME | VALUE | DESCRIPTION
O7_DICTIONARY_ACCESSIBILITY | FALSE | Version 7 Dictionary Accessibility Support
active_instance_count | | Number of active instances in the cluster database
aq_tm_processes | 0 | Number of AQ Time Managers to start
archive_lag_target | 0 | Maximum number of seconds of redo the standby could lose
asm_diskgroups | | Disk groups to mount automatically
asm_diskstring | | Disk set locations for discovery
asm_power_limit | 1 | Number of processes for disk rebalancing
audit_file_dest | E:\ORACLE\PRODUCT\10.2.0\ADMIN\STRESS10\ADUMP | Directory in which auditing files are to reside
audit_sys_operations | FALSE | Enable sys auditing
audit_trail | NONE | Enable system auditing
background_core_dump | partial | Core Size for Background Processes
background_dump_dest | E:\ORACLE\PRODUCT\10.2.0\ADMIN\STRESS10\BDUMP | Detached process dump directory
backup_tape_io_slaves | FALSE | BACKUP Tape I/O slaves
bitmap_merge_area_size | 1048576 | Maximum memory allowed for BITMAP MERGE
blank_trimming | FALSE | Blank trimming semantics parameter
buffer_pool_keep | | Number of database blocks/latches in keep buffer pool
buffer_pool_recycle | | Number of database blocks/latches in recycle buffer pool
circuits | | Max number of circuits
cluster_database | FALSE | If TRUE startup in cluster database mode
cluster_database_instances | 1 | Number of instances to use for sizing cluster db SGA structures
cluster_interconnects | | Interconnects for RAC use
commit_point_strength | 1 | Bias this node has toward not preparing in a two-phase commit
commit_write | | Transaction commit log write behaviour
compatible | 10.2.0.1.0 | Database will be completely compatible with this software version
control_file_record_keep_time | 7 | Control file record keep time in days
control_files | E:\ORACLE\PRODUCT\10.2.0\ORADATA\STRESS10\CONTROL01.CTL, E:\ORACLE\PRODUCT\10.2.0\ORADATA\STRESS10\CONTROL02.CTL, E:\ORACLE\PRODUCT\10.2.0\ORADATA\STRESS10\CONTROL03.CTL | Control file names list
core_dump_dest | E:\ORACLE\PRODUCT\10.2.0\ADMIN\STRESS10\CDUMP | Core dump directory
cpu_count | 4 | Number of CPUs for this instance
create_bitmap_area_size | 8388608 | Size of create bitmap buffer for bitmap index
create_stored_outlines | | Create stored outlines for DML statements
cursor_sharing | SIMILAR | Cursor sharing mode
cursor_space_for_time | FALSE | Use more memory in order to get faster execution
db_16k_cache_size | 0 | Size of cache for 16K buffers
db_2k_cache_size | 0 | Size of cache for 2K buffers
db_32k_cache_size | 0 | Size of cache for 32K buffers
db_4k_cache_size | 0 | Size of cache for 4K buffers
db_8k_cache_size | 0 | Size of cache for 8K buffers
db_block_buffers | 0 | Number of database blocks cached in memory
db_block_checking | FALSE | Header checking and data and index block checking
db_block_checksum | TRUE | Store checksum in db blocks and check during reads
db_block_size | 8192 | Size of database block in bytes
db_cache_advice | ON | Buffer cache sizing advisory
db_cache_size | 0 | Size of DEFAULT buffer pool for standard block size buffers
db_create_file_dest | | Default database location
db_create_online_log_dest_1 | | Online log/controlfile destination #1
db_create_online_log_dest_2 | | Online log/controlfile destination #2
db_create_online_log_dest_3 | | Online log/controlfile destination #3
db_create_online_log_dest_4 | | Online log/controlfile destination #4
db_create_online_log_dest_5 | | Online log/controlfile destination #5
db_domain | | Directory part of global database name stored with CREATE DATABASE
db_file_multiblock_read_count | 8 | DB blocks to be read each IO
db_file_name_convert | | Datafile name convert patterns and strings for standby/clone db
db_files | 200 | Max allowable # db files
db_flashback_retention_target | 1440 | Maximum Flashback Database log retention time in minutes
db_keep_cache_size | 0 | Size of KEEP buffer pool for standard block size buffers
db_name | Stress10 | Database name specified in CREATE DATABASE
db_recovery_file_dest | E:\oracle\product\10.2.0/flash_recovery_area | Default database recovery file location
db_recovery_file_dest_size | 2147483648 | Database recovery files size limit
db_recycle_cache_size | 0 | Size of RECYCLE buffer pool for standard block size buffers
db_unique_name | Stress10 | Database Unique Name
db_writer_processes | 2 | Number of background database writer processes to start
dbwr_io_slaves | 0 | DBWR I/O slaves
ddl_wait_for_locks | FALSE | Disable NOWAIT DML lock acquisitions
dg_broker_config_file1 | E:\ORACLE\PRODUCT\10.2.0\DB_1\DATABASE\DR1STRESS10.DAT | Data Guard broker configuration file #1
dg_broker_config_file2 | E:\ORACLE\PRODUCT\10.2.0\DB_1\DATABASE\DR2STRESS10.DAT | Data Guard broker configuration file #2
dg_broker_start | FALSE | Start Data Guard broker framework (DMON process)
disk_asynch_io | TRUE | Use asynch I/O for random access devices
dispatchers | (PROTOCOL=TCP)(SERVICE=Stress10XDB) | Specifications of dispatchers
distributed_lock_timeout | 60 | Number of seconds a distributed transaction waits for a lock
dml_locks | 748 | DML locks - one for each table modified in a transaction
drs_start | FALSE | Start DG Broker monitor (DMON process)
event | | Debug event control - default null string
fal_client | | FAL client
fal_server | | FAL server list
fast_start_io_target | 0 | Upper bound on recovery reads
fast_start_mttr_target | 0 | MTTR target of forward crash recovery in seconds
fast_start_parallel_rollback | LOW | Max number of parallel recovery slaves that may be used
file_mapping | FALSE | Enable file mapping
fileio_network_adapters | | Network Adapters for File I/O
filesystemio_options | | IO operations on filesystem files
fixed_date | | Fixed SYSDATE value
gc_files_to_locks | | Mapping between file numbers and global cache locks
gcs_server_processes | 0 | Number of background gcs server processes to start
global_context_pool_size | | Global Application Context Pool Size in Bytes
global_names | FALSE | Enforce that database links have same name as remote database
hash_area_size | 131072 | Size of in-memory hash work area
hi_shared_memory_address | 0 | SGA starting address (high order 32-bits on 64-bit platforms)
hs_autoregister | TRUE | Enable automatic server DD updates in HS agent self-registration
ifile | | Include file in init.ora
instance_groups | | List of instance group names
instance_name | stress10 | Instance name supported by the instance
instance_number | 0 | Instance number
instance_type | RDBMS | Type of instance to be executed
java_max_sessionspace_size | 0 | Max allowed size in bytes of a Java sessionspace
java_pool_size | 0 | Size in bytes of java pool
java_soft_sessionspace_limit | 0 | Warning limit on size in bytes of a Java sessionspace
job_queue_processes | 10 | Number of job queue slave processes
large_pool_size | 0 | Size in bytes of large pool
ldap_directory_access | NONE | RDBMS's LDAP access option
license_max_sessions | 0 | Maximum number of non-system user sessions allowed
license_max_users | 0 | Maximum number of named users that can be created in the database
license_sessions_warning | 0 | Warning level for number of non-system user sessions
local_listener | | Local listener
lock_name_space | | Lock name space used for generating lock names for standby/clone
lock_sga | FALSE | Lock entire SGA in physical memory
log_archive_config | | Log archive config parameter
log_archive_dest | | Archival destination text string
log_archive_dest_1 | | Archival destination #1 text string
log_archive_dest_10 | | Archival destination #10 text string
log_archive_dest_2 | | Archival destination #2 text string
log_archive_dest_3 | | Archival destination #3 text string
log_archive_dest_4 | | Archival destination #4 text string
log_archive_dest_5 | | Archival destination #5 text string
log_archive_dest_6 | | Archival destination #6 text string
log_archive_dest_7 | | Archival destination #7 text string
log_archive_dest_8 | | Archival destination #8 text string
log_archive_dest_9 | | Archival destination #9 text string
log_archive_dest_state_1 | enable | Archival destination #1 state text string
log_archive_dest_state_10 | enable | Archival destination #10 state text string
log_archive_dest_state_2 | enable | Archival destination #2 state text string
log_archive_dest_state_3 | enable | Archival destination #3 state text string
log_archive_dest_state_4 | enable | Archival destination #4 state text string
log_archive_dest_state_5 | enable | Archival destination #5 state text string
log_archive_dest_state_6 | enable | Archival destination #6 state text string
log_archive_dest_state_7 | enable | Archival destination #7 state text string
log_archive_dest_state_8 | enable | Archival destination #8 state text string
log_archive_dest_state_9 | enable | Archival destination #9 state text string
log_archive_duplex_dest | | Duplex archival destination text string
log_archive_format | ARC%S_%R.%T | Archival destination format
log_archive_local_first | TRUE | Establish EXPEDITE attribute default value
log_archive_max_processes | 2 | Maximum number of active ARCH processes
log_archive_min_succeed_dest | 1 | Minimum number of archive destinations that must succeed
log_archive_start | FALSE | Start archival process on SGA initialization
log_archive_trace | 0 | Establish archivelog operation tracing level
log_buffer | 7024640 | Redo circular buffer size
log_checkpoint_interval | 0 | # redo blocks checkpoint threshold
log_checkpoint_timeout | 1800 | Maximum time interval between checkpoints in seconds
log_checkpoints_to_alert | FALSE | Log checkpoint begin/end to alert file
log_file_name_convert | | Logfile name convert patterns and strings for standby/clone db
logmnr_max_persistent_sessions | 1 | Maximum number of threads to mine
max_commit_propagation_delay | 0 | Max age of new snapshot in .01 seconds
max_dispatchers | | Max number of dispatchers
max_dump_file_size | UNLIMITED | Maximum size (blocks) of dump file
max_enabled_roles | 150 | Max number of roles a user can have enabled
max_shared_servers | | Max number of shared servers
object_cache_max_size_percent | 10 | Percentage of maximum size over optimal of the user sessions object cache
object_cache_optimal_size | 102400 | Optimal size of the user sessions object cache in bytes
olap_page_pool_size | 0 | Size of the olap page pool in bytes
open_cursors | 300 | Max # cursors per session
open_links | 4 | Max # open links per session
open_links_per_instance | 4 | Max # open links per instance
optimizer_dynamic_sampling | 2 | Optimizer dynamic sampling
optimizer_features_enable | 10.2.0.1 | Optimizer plan compatibility parameter
optimizer_index_caching | 0 | Optimizer percent index caching
optimizer_index_cost_adj | 100 | Optimizer index cost adjustment
optimizer_mode | ALL_ROWS | Optimizer mode
optimizer_secure_view_merging | TRUE | Optimizer secure view merging and predicate pushdown/movearound
os_authent_prefix | OPS$ | Prefix for auto-logon accounts
os_roles | FALSE | Retrieve roles from the operating system
parallel_adaptive_multi_user | TRUE | Enable adaptive setting of degree for multiple user streams
parallel_automatic_tuning | FALSE | Enable intelligent defaults for parallel execution parameters
parallel_execution_message_size | 2148 | Message buffer size for parallel execution
parallel_instance_group | | Instance group to use for all parallel operations
parallel_max_servers | 80 | Maximum parallel query servers per instance
parallel_min_percent | 0 | Minimum percent of threads required for parallel query
parallel_min_servers | 0 | Minimum parallel query servers per instance
parallel_server | FALSE | If TRUE startup in parallel server mode
parallel_server_instances | 1 | Number of instances to use for sizing OPS SGA structures
parallel_threads_per_cpu | 2 | Number of parallel execution threads per CPU
pga_aggregate_target | 419430400 | Target size for the aggregate PGA memory consumed by the instance
plsql_ccflags | | PL/SQL ccflags
plsql_code_type | INTERPRETED | PL/SQL code-type
plsql_compiler_flags | INTERPRETED, NON_DEBUG | PL/SQL compiler flags
plsql_debug | FALSE | PL/SQL debug
plsql_native_library_dir | | PL/SQL native library dir
plsql_native_library_subdir_count | 0 | PL/SQL native library number of subdirectories
plsql_optimize_level | 2 | PL/SQL optimize level
plsql_v2_compatibility | FALSE | PL/SQL version 2.x compatibility flag
plsql_warnings | DISABLE:ALL | PL/SQL compiler warnings settings
pre_page_sga | FALSE | Pre-page SGA for process
processes | 150 | User processes
query_rewrite_enabled | TRUE | Allow rewrite of queries using materialized views if enabled
query_rewrite_integrity | enforced | Perform rewrite using materialized views with desired integrity
rdbms_server_dn | | RDBMS's Distinguished Name
read_only_open_delayed | FALSE | If TRUE delay opening of read only files until first access
recovery_parallelism | 0 | Number of server processes to use for parallel recovery
recyclebin | off | Recyclebin processing
remote_archive_enable | true | Remote archival enable setting
remote_dependencies_mode | TIMESTAMP | Remote-procedure-call dependencies mode parameter
remote_listener | | Remote listener
remote_login_passwordfile | EXCLUSIVE | Password file usage parameter
remote_os_authent | FALSE | Allow non-secure remote clients to use auto-logon accounts
remote_os_roles | FALSE | Allow non-secure remote clients to use os roles
replication_dependency_tracking | TRUE | Tracking dependency for Replication parallel propagation
resource_limit | FALSE | Master switch for resource limit
resource_manager_plan | | Resource mgr top plan
resumable_timeout | 0 | Set resumable_timeout
rollback_segments | | Undo segment list
serial_reuse | disable | Reuse the frame segments
service_names | Stress10 | Service names supported by the instance
session_cached_cursors | 20 | Number of cursors to cache in a session
session_max_open_files | 10 | Maximum number of open files allowed per session
sessions | 170 | User and system sessions
sga_max_size | 1660944384 | Max total SGA size
sga_target | 1048576000 | Target size of SGA
shadow_core_dump | partial | Core Size for Shadow Processes
shared_memory_address | 0 | SGA starting address (low order 32-bits on 64-bit platforms)
shared_pool_reserved_size | 5452595 | Size (in bytes) of reserved area of shared pool
shared_pool_size | 0 | Size in bytes of shared pool
shared_server_sessions | | Max number of shared server sessions
shared_servers | 1 | Number of shared servers to start up
skip_unusable_indexes | TRUE | Skip unusable indexes if set to TRUE
smtp_out_server | | utl_smtp server and port configuration parameter
sort_area_retained_size | 0 | Size of in-memory sort work area retained between fetch calls
sort_area_size | 65536 | Size of in-memory sort work area
spfile | E:\ORACLE\PRODUCT\10.2.0\DB_1\DBS\SPFILESTRESS10.ORA | Server parameter file
sql92_security | FALSE | Require select privilege for searched update/delete
sql_trace | FALSE | Enable SQL trace
sql_version | NATIVE | SQL language version parameter for compatibility issues
sqltune_category | DEFAULT | Category qualifier for applying hintsets
standby_archive_dest | %ORACLE_HOME%\RDBMS | Standby database archivelog destination text string
standby_file_management | MANUAL | If auto then files are created/dropped automatically on standby
star_transformation_enabled | FALSE | Enable the use of star transformation
statistics_level | TYPICAL | Statistics level
streams_pool_size | 0 | Size in bytes of the streams pool
tape_asynch_io | TRUE | Use asynch I/O requests for tape devices
thread | 0 | Redo thread to mount
timed_os_statistics | 0 | Internal OS statistic gathering interval in seconds
timed_statistics | TRUE | Maintain internal timing statistics
trace_enabled | FALSE | Enable KST tracing
tracefile_identifier | | Trace file custom identifier
transactions | 187 | Max. number of concurrent active transactions
transactions_per_rollback_segment | 5 | Number of active transactions per rollback segment
undo_management | AUTO | Instance runs in SMU mode if TRUE, else in RBU mode
undo_retention | 900 | Undo retention in seconds
undo_tablespace | UNDOTBS1 | Use/switch undo tablespace
use_indirect_data_buffers | FALSE | Enable indirect data buffers (very large SGA on 32-bit platforms)
user_dump_dest | E:\ORACLE\PRODUCT\10.2.0\ADMIN\STRESS10\UDUMP | User process dump directory
utl_file_dir | | utl_file accessible directories list
workarea_size_policy | AUTO | Policy used to size SQL working areas (MANUAL/AUTO)
