
Performance White Paper

OpenText™ Archive Center 16.2.1
Archive Center 16.2.1 Cluster Document Ingestion

Cluster with Oracle 12.1.0.2
OpenText Performance Engineering Team

Product: OpenText™ Archive Center
Version: 16.2.1
Task/Topic: Performance
Audience: Administrators, Decision makers
Platform: RHEL 7.4, Oracle 12.1.0.2
Document ID: 900002
Updated: June 11, 2019

Contents
Audience
Disclaimer
Executive summary
Assessment overview
  Objectives
Testing methodology
  Test setup
  Test strategy
  Test types
Test results
  Test with four nodes and 10/20/30/40 clients
    Client-side statistics
    Server metrics
  Test with differently sized documents
    Client-side statistics
Conclusions
Appendices
  Appendix A - Test environment
    Hardware and software resources
    Data set
  Appendix B - Application and system tuning guide
  Appendix C - References

The Information Company™ 2



Audience
This document is intended for a technical audience that is planning an implementation
of OpenText™ products. OpenText recommends consulting with OpenText
Professional Services, who can assist with the specific details of individual
implementation architectures.

Disclaimer
The tests and results described in this document apply only to the OpenText
configuration described herein. For testing or certification of other configurations,
contact OpenText for more information.
All tests described in this document were run on equipment located in the OpenText
Performance Laboratory and were performed by the OpenText Performance
Engineering Group. Note that using a configuration similar to that described in this
document, or any other certified configuration, does not guarantee the results
documented herein. There may be parameters or variables that were not contemplated
during these performance tests that could affect results in other test environments.
For any OpenText production deployment, OpenText recommends a rigorous
performance evaluation of the specific environment and applications to avoid any
configuration or custom development bottlenecks that hinder overall performance.
All results in this paper are based on server-side measurements and do not capture
browser rendering of results. Actual timings including client-side timings (for example,
from browsers) may vary significantly depending on the client machine specifications,
the client network, browser variations, and other conditions of the user’s environment.


Executive summary
This paper describes the testing efforts undertaken by OpenText to benchmark the
performance of document creation with a four-node Archive Center (AC) 16.2.1 cluster.
Tests were run with different numbers of concurrent test clients to show how
performance scaled with increasing load, and with a range of different document sizes.
This paper is specific to AC 16.2.1 deployed on RHEL 7.4 in a four-node cluster, with
Oracle 12.1.0.2.
Some key findings are summarized below:
• Using a document size of 100 KB, the chart below shows the throughput and
average response time with different numbers of concurrent clients:
Figure 1 Throughput and Avg. Response Time for tests with 10, 20, 30, 40 Clients

• With a higher load of 120 clients, the chart below shows the throughput and
average response time with different document sizes:
Figure 2 Throughput and Avg. Response Time for tests with different document sizes


Assessment overview

Objectives
This assessment strives to test the responsiveness, resource consumption, and
scalability of a four-node Archive Center 16.2.1 cluster, deployed in a virtual
environment.
The specific objective of this assessment was to determine the performance
characteristics of document creation using a test client, with two approaches:

a. With an increasing number of clients creating documents concurrently, document
how throughput scales as load increases.
b. With different document sizes, running at the same high load level, document the
impact of document size on throughput.


Testing methodology
This section describes the tests that were executed as part of this assessment.

Test setup
All the tests were executed with a four-node AC 16.2.1 cluster running on RHEL 7.4,
with an Oracle 12.1.0.2 database. The test environment is described in Appendix A.

Test strategy
All tests were executed using a Perl-based test client with the following characteristics:
• Documents were created in AC through the libdsh API.
• The test client connected to AC through an F5 Load Balancer URL.
• The test client created a specified number of documents sequentially with no think
time between requests.
• Each test ran for a duration of roughly 8 to 10 minutes.
• A batch file was used to initiate multiple instances of the test client to run tests with
different numbers of parallel clients.
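The inner loop of such a client can be sketched as follows. This is an illustrative Python skeleton rather than the actual Perl client, and `create_document` is only a placeholder for the proprietary libdsh call (all names and parameters here are assumptions):

```python
import time

def create_document(lb_url, archive, payload):
    """Placeholder for the libdsh document-creation call; the real test
    client used the libdsh API through the F5 load balancer URL."""
    pass  # assumption: the actual call is proprietary and not shown here

def run_client(lb_url, archive, payload, duration_sec=600):
    """Create documents sequentially with no think time until the test
    duration elapses, recording per-request latency in milliseconds."""
    latencies = []
    deadline = time.monotonic() + duration_sec
    while time.monotonic() < deadline:
        start = time.monotonic()
        create_document(lb_url, archive, payload)
        latencies.append((time.monotonic() - start) * 1000.0)
    docs = len(latencies)
    throughput = docs / duration_sec if duration_sec else 0.0
    avg_ms = sum(latencies) / docs if docs else 0.0
    return docs, throughput, avg_ms
```

The key property mirrored here is the absence of think time: each client issues its next request immediately after the previous one completes, so client count directly controls offered load.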

Test types
Tests were run as load tests with all test client instances created simultaneously at the
start of the test, running for an overall duration of roughly 10 minutes. The following
specific tests were executed to meet the assessment objectives:
• 10 Clients with 100 KB document size (all running on test client 1)
• 20 Clients with 100 KB document size (all running on test client 1)
• 30 Clients with 100 KB document size (all running on test client 1)
• 40 Clients with 100 KB document size (all running on test client 1)
• 120 Clients with document sizes 10 KB, 20 KB, 50 KB, 100 KB (equally split
between test client 1 and 2)
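Starting all instances simultaneously can be sketched as below. The actual tests used separate Perl processes launched from a batch file, so this thread-based Python version (with assumed names) is a conceptual illustration only:

```python
from concurrent.futures import ThreadPoolExecutor

def client_run(client_id):
    """Stand-in for one test-client instance; in the assessment each
    instance was a separate Perl process started from a batch file."""
    # ... each instance would create documents for the test duration ...
    return client_id

def run_load_test(num_clients):
    """Start all client instances at the same time, as in the load tests."""
    with ThreadPoolExecutor(max_workers=num_clients) as pool:
        return list(pool.map(client_run, range(num_clients)))
```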


Test results
This section of the report contains detailed results for each of the test types outlined in
the Testing Methodology section.

Test with four nodes and 10/20/30/40 clients


This section shows the results of tests that were run with 10, 20, 30, and 40 concurrent
clients, each uploading documents of size 100 KB.

Client-side statistics
Table 1 Client-Side Statistics for tests with 10, 20, 30, 40 Clients

Number of   Duration   Total       Throughput   Throughput         Avg Response
Clients     (sec)      Documents   (docs/sec)   Percent Increase   Time (ms)
10          575        100000      173.5        -                  57.7
20          435        100000      228.9        31.9%              87.4
30          446        120000      267.9        17.0%              112.0
40          507        160000      313.5        17.0%              127.7

Figure 3 Throughput and Avg. Response Time for tests with 10, 20, 30, 40 Clients

Observations
• Throughput increased as the number of concurrent clients increased, although not
at a linear rate: it rose by 31.9% from 10 to 20 clients, and then by 17.0% with
each further step to 30 and to 40 clients.
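The percent-increase column in Table 1 is simply the relative gain in throughput over the previous load level, computed from the measured values:

```python
def pct_increase(prev, curr):
    """Relative throughput gain over the previous load level, in percent."""
    return (curr / prev - 1.0) * 100.0

# Measured throughput (docs/sec) by client count, from Table 1
throughput = {10: 173.5, 20: 228.9, 30: 267.9, 40: 313.5}
gains = [round(pct_increase(throughput[a], throughput[b]), 1)
         for a, b in [(10, 20), (20, 30), (30, 40)]]
# gains reproduces the percent-increase column: 31.9, 17.0, 17.0
```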


Server metrics
This table shows server resource usage during the four tests.

Table 2 Resource Usage for Tests with 10, 20, 30, and 40 Clients

                                       AC Node 1  AC Node 2  AC Node 3  AC Node 4  Oracle
CPU Usage
Avg. CPU Usage (%)       10 Client     17.8       18.2       17.3       20.42      5.5
                         20 Client     24.8       25.6       24.8       29.11      7.76
                         30 Client     29.30      30.42      29.0       34.24      8.89
                         40 Client     34.6       35.8       34.35      39.70      10.60
Memory Usage
Avg. MemFree (GB)        10 Client     4.8        5.1        5.6        5.6        0.30
                         20 Client     4.20       4.26       4.26       4.24       0.29
                         30 Client     3.0        3.13       3.09       3.08       0.30
                         40 Client     1.57       1.64       1.62       1.61       0.29
Disk Usage
Avg. Disk Queue Length   10 Client     <1         <1         <1         <1
                         20 Client     <1         <1         <1         <1
                         30 Client     <1         <1         <1         <1
                         40 Client     <1         <1         <1         <1
Avg. Reads/sec           10 Client     0.07       0.08       0.06       0.05       0
                         20 Client     0.24       0.16       0.13       0.22       0
                         30 Client     0.24       0.22       0.17       0.25       0
                         40 Client     0.26       0.50       0.32       0.46       0
Avg. Writes/sec          10 Client     960        965        948        934        168
                         20 Client     1340       1342       1347       1375       244
                         30 Client     1588       1591       1601       1534       294
                         40 Client     1846       1844       1843       1823       358
Avg. Read rate (MB/sec)  10 Client     0          0          0          0          0
                         20 Client     0.001      0.001      0.001      0.001      0
                         30 Client     0.001      0.001      0.001      0.001      0
                         40 Client     0.001      0.001      0.001      0.001      0
Avg. Write rate (MB/sec) 10 Client     11.20      11.22      11.44      10.88      2.50
                         20 Client     15.90      16.05      16.30      16.45      3.49
                         30 Client     19.13      18.90      19.60      18.42      4.13
                         40 Client     22.76      21.64      22.13      21.57      5.06
Avg. Read Latency (ms)   10 Client     7.89       7.89       3.43       4.16
                         20 Client     6.25       7.27       5.93       7.5
                         30 Client     5.44       9.05       6.06       6.94
                         40 Client     8.37       7.80       6.08       7.32
Avg. Write Latency (ms)  10 Client     0.26       0.26       0.27       0.40
                         20 Client     0.26       0.30       0.25       0.24
                         30 Client     0.28       0.31       0.26       0.33
                         40 Client     0.27       0.31       0.26       0.32
Network Usage
Avg. Bytes Received      10 Client     54.19      53.65      53.98      40.47      11.24
(Mbps)                   20 Client     76.82      76.30      76.94      58.12      16.11
                         30 Client     91.12      90.68      91.84      68.72      18.96
                         40 Client     107.12     106.21     106.79     80.30      22.41
Avg. Bytes Sent          10 Client     9.01       9.13       8.47       59.43      7.53
(Mbps)                   20 Client     12.66      12.88      11.97      85.64      10.78
                         30 Client     14.94      15.25      14.22      101.48     12.69
                         40 Client     17.44      17.75      16.42      118.90     15.02

Observations
• Resource usage across the Oracle server and all four AC nodes was well below
saturation.
• The higher CPU usage and Avg. Bytes Sent on AC node 4 reflect that it was acting
as the master in the AC cluster.
• Tests with more clients had lower MemFree values. This was mostly due to
increased memory usage by the Linux file cache.
• The low available memory on the Oracle Linux VM was also due to high cached
memory usage. Cached memory was around 26 GB during the tests.
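The distinction between truly free and cache-occupied memory can be checked directly from /proc/meminfo on any of the Linux nodes; a minimal sketch (the helper name and GB conversion are illustrative, not part of the test tooling):

```python
def meminfo_gb(path="/proc/meminfo"):
    """Report free, cached, and available memory (in GB) from a
    /proc/meminfo-style file; cached pages are reclaimable, so a low
    MemFree value alone does not mean the node is short of memory."""
    values = {}
    with open(path) as f:
        for line in f:
            key, rest = line.split(":", 1)
            values[key.strip()] = int(rest.split()[0])  # sizes are in kB
    kb_per_gb = 1024 * 1024
    return {k: values[k] / kb_per_gb
            for k in ("MemFree", "Cached", "MemAvailable") if k in values}
```

On kernels that report it, MemAvailable is the better indicator of memory headroom, since it accounts for reclaimable cache.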

Test with differently sized documents


Several tests were run with a larger number of concurrent clients, and documents of
different sizes, to determine the impact of document size on throughput. Resource
usage data was not collected for these tests, so only the client-side throughput is
presented below.


Client-side statistics
Table 3 Client-Side Statistics for tests with different sized documents

Number of   Document   Throughput   Avg Response
Clients     Size       (docs/sec)   Time (ms)
120         10 KB      817.3        147.0
120         20 KB      768.3        158.5
120         50 KB      775.9        155.0
120         100 KB     506.0        229.5

Figure 4 Throughput and Avg. Response Time for tests with different sizes


Conclusions
The tests that were completed lead to the following conclusions:
• Using a document size of 100 KB, the tests achieved a document creation
throughput of 173.5, 228.9, 267.9, and 313.5 documents/second when running
with 10, 20, 30, and 40 concurrent clients, respectively.
• With 120 concurrent test clients, the tests achieved a document creation
throughput of 817.3 documents/second with 10 KB documents, 768.3 with 20 KB,
775.9 with 50 KB, and 506 with 100 KB.
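Although document throughput (docs/sec) falls for the largest documents, the implied payload data rate rises with document size. Computed from the Table 3 values (a back-of-the-envelope sketch; rates cover payload only and exclude protocol overhead):

```python
# Measured throughput (docs/sec) by document size (KB), from Table 3
results = {10: 817.3, 20: 768.3, 50: 775.9, 100: 506.0}

def data_rate_mb_per_sec(size_kb, docs_per_sec):
    """Payload data rate implied by the measured document throughput."""
    return size_kb * docs_per_sec / 1024.0

rates = {size: round(data_rate_mb_per_sec(size, tput), 1)
         for size, tput in results.items()}
# e.g. 10 KB documents -> ~8.0 MB/s of payload; 100 KB -> ~49.4 MB/s
```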


Appendices
Appendix A - Test environment
The diagram below shows the architecture used for these tests.

Figure 5 System Architecture: the client tool sends requests through an F5 load
balancer to AC nodes 1-4; each node has its own SAN partition (SAN1-SAN4), and all
four nodes connect to a common NAS share and the Oracle 12 database.

• Each AC node had its own 50 GB SAN partition (Hitachi AMS 2100 Storage Array),
used for a disk volume associated with the disk buffer.
• Each AC node connected to a common 100 GB NAS share, which held disk
volumes associated to each of the pools.


Hardware and software resources


The table below shows the hardware and software specifications for the test
environment.
Table 4 System Hardware and Software Specifications

No   Role                  CPU                     RAM     OS            S/W
1-4  Archive Center        4 cores (E5-2697 v2     16 GB   RHEL 7.4      Oracle Client 12.2.0.1,
     nodes 1-4             2.7GHz)                                       Java 1.8.0_181,
                                                                         Tomcat 8.5.24,
                                                                         Archive Center 16.2.1 +
                                                                         Performance patch (ECCN
                                                                         17119, will be part of
                                                                         AC 16.2.2)
5    OTDS                  4 cores (E5-2680 v4     32 GB   Win 2012 R2   OTDS 16.2.2
                           2.4GHz)
6    Oracle 12             4 cores (Xeon E5-2695   32 GB   RHEL 7.1      Oracle 12.1.0.2
                           v2 2.4GHz)
7    F5 LB                 4 cores                         Big-IP 11.6   Big-IP 11.6
8    Test Client 1         8 cores                 12 GB   Win 2012 R2   Perl 5.12.2
9    Test client 2         4 cores                 8 GB    Win 2012 R2   Perl 5.12.2

Data set
The initial AC setup was an empty system.
Most tests used text files of size 100 KB as the source for adding content into the
system. A final set of tests used some additional source documents of size 10 KB,
20 KB, and 50 KB.
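Source files like these can be generated in a few lines; a sketch that writes a plain-text file of an exact byte size (the helper name is an assumption, not the tool actually used):

```python
import os

def make_text_file(path, size_bytes):
    """Write a plain-text file of exactly size_bytes, filled with a
    repeating 80-character line of printable content."""
    line = "x" * 79 + "\n"
    full, rem = divmod(size_bytes, len(line))
    # newline="" disables newline translation so the size stays exact on Windows
    with open(path, "w", newline="") as f:
        f.write(line * full)
        f.write(line[:rem])
    return os.path.getsize(path)
```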


Appendix B - Application and system tuning guide

Archive Center

Cluster: The AC cluster was set up with 4 nodes, as per the Cluster Installation
Guide (https://knowledge.opentext.com/knowledge/piroot/ar/v160200-00/ar-iclu/en/html/_manual.htm).

Database Connections: In the MMC Admin Client, under Archive Server > Configuration >
Archive Server > Database Server, the maximum number of connections to the database
made by JDS was changed from 20 to 50.

Ephemeral Ports:
• Archive Center nodes had the default ephemeral port range of 32768 to 61000.
• The two test client VMs perf-7508-As and perf-15363-LG14 had customized
  ephemeral port settings to provide more than 55000 ports, in order to handle a
  larger volume of connections from the load test tool.

Storage:
• 100 logical archives (VW1-VW100) were created; only VW1 was utilized for the
  performance tests.
  o On the Security tab, in the "Authentication (SecKey) Required to" section,
    all items were checked, and SSL was set to May use.
  o On the Settings tab, Compression was checked.
• Each logical archive had a pool configured as type Single file (FS), storage
  tier none, with a disk volume located on a shared NAS drive common to all 4 AC
  nodes. Each pool was also configured to use a common disk buffer.
• The disk buffer had 4 disk volumes, 1 for each AC node, located on its SAN
  partition.

Tomcat: The following Java options were added to the Tomcat startup:
• -Xms2048m
• -Xmx2048m
• -XX:+UseG1GC
• -XX:MaxGCPauseMillis=60
• -XX:G1HeapRegionSize=2M
• -verbose:gc
• -XX:+PrintGCDateStamps
• -XX:+UnlockCommercialFeatures

File Limits: The Linux file limits for the user that AC runs under were increased
from the default 1024 to 15000.

Oracle

Oracle Settings:
• Sga_max_size: 19GB
• Sessions: 2000
• Processes: 1000
• Optimizer_index_caching: 0
• Optimizer_index_cost_adj: 100
• Cursor_sharing: Exact

F5 LB

Virtual Server: A virtual server was created with a new pool containing the 4 AC
nodes, set up for round-robin load balancing for HTTP traffic on port 8080.
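Round-robin balancing hands successive requests to successive pool members in strict rotation, which is why load in Table 2 is spread so evenly across the four nodes. Conceptually (node names below are assumptions for illustration):

```python
from itertools import cycle

# Hypothetical pool members standing in for the 4 AC nodes behind the F5
ac_pool = ["ac-node1:8080", "ac-node2:8080", "ac-node3:8080", "ac-node4:8080"]

def round_robin(pool):
    """Yield pool members in strict rotation, as a round-robin virtual
    server does for incoming HTTP requests."""
    return cycle(pool)

rr = round_robin(ac_pool)
first_eight = [next(rr) for _ in range(8)]
# with 4 members, each node receives every 4th request
```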


Appendix C - References

• Archive Center Cluster Guide:
  https://knowledge.opentext.com/knowledge/piroot/ar/v160200-00/ar-iclu/en/html/_manual.htm

• Archive Server API:
  https://knowledge.opentext.com/knowledge/cs.dll/Open/33713187


About OpenText
OpenText enables the digital world, creating a better way for organizations to work with information, on premises or in the
cloud. For more information about OpenText (NASDAQ: OTEX, TSX: OTC) visit opentext.com.
Connect with us:

OpenText CEO Mark Barrenechea’s blog


Twitter | LinkedIn

www.opentext.com/contact
Copyright © 2019 Open Text SA or Open Text ULC (in Canada).
All rights reserved. Trademarks owned by Open Text SA or Open Text ULC (in Canada).
