
COMPUTER SYSTEM PERFORMANCE EVALUATION

CONTENTS
1 INTRODUCTION AND BASIC CONCEPTS
1.1 Background
1.2 Performance Evaluation Viewpoints and Concepts
1.3 Goals of Performance Evaluation
1.4 Applications of Performance Evaluation
1.5 Techniques
1.6 Metrics of Performance
1.7 Workload Characterization and Benchmarking
1.8 Summary
References
Exercises

2 PROBABILITY THEORY REVIEW


2.1 Basic Concepts on Probability Theory
2.2 Elementary Sampling
2.3 Random Variables
2.4 Sums of Variables
2.5 Regression Models
2.6 Important Density and Distribution Functions
2.7 Markov Processes
2.8 Limits
2.9 Comparing Systems using Sample Data
2.10 Summary

3 MEASUREMENT/TESTING TECHNIQUE
3.1 Measurement Strategies
3.2 Event Tracing
3.3 Monitors
3.4 Program Optimizers
3.5 Accounting Logs
3.6 Summary

4 BENCHMARKING AND CAPACITY PLANNING


4.1 Introduction
4.2 Types of Benchmark Programs
4.3 Benchmark Examples
4.4 Frequent Mistakes and Games in Benchmarking
4.5 Procedures of Capacity Planning and Related Main Problems
4.6 Capacity Planning for Web Services

5 DATA REPRESENTATION AND ADVANCED TOPICS ON VALIDATION MODELING


5.1 Data Representation
5.2 Measurements
5.3 Program Profiling and Outlining
5.4 State Machine Models
5.5 Petri Net-Based Modeling
5.6 Protocol Validation

6 BASICS OF QUEUEING THEORY


6.1 Queue Models
6.2 Queue Parameters
6.3 Little’s Law
6.4 Priority Management
6.5 Analysis of M/M/1 Systems
6.6 The M/M/m Queue
6.7 Other Queues
6.8 Queueing Models with Insensitive Length Distribution

7 QUEUEING NETWORKS
7.1 Fundamentals of Queueing Networks
7.2 Model Inputs and Outputs in Queueing Networks
7.3 Open Networks
7.4 Closed Queueing Networks
7.5 Product Form Networks
7.6 Mean Value Analysis
7.7 Analysis Using Flow Equivalent Servers

8 OPERATIONAL AND MEAN VALUE ANALYSIS


8.1 Operational Laws
8.2 Little’s Formula
8.3 Bottleneck Analysis
8.4 Standard MVA
8.5 Approximation of MVA
8.6 Bounding Analysis
8.7 Case Study: A Circuit Switching System

9 INTRODUCTION TO SIMULATION TECHNIQUE


9.1 Introduction
9.2 Types of Simulation
9.3 Some Terminology
9.4 Random-Number-Generation Techniques
9.5 Survey of Commonly Used Random Number Generators
9.6 Seed Selection
9.7 Random Variate Generation
9.8 Testing of Random Number Sequences

10 COMMONLY USED DISTRIBUTIONS IN SIMULATION AND THEIR APPLICATIONS


10.1 Exponential Distribution
10.2 Poisson Distribution
10.3 Uniform Distribution
10.4 Normal Distribution
10.5 Weibull Distribution
10.6 Pareto Distribution
10.7 Geometric Distribution
10.8 Gamma Distribution
10.9 Erlang Distribution
10.10 Beta Distribution
10.11 Binomial Distribution
10.12 Chi-Square Distribution
10.13 Student's t Distribution
10.14 Examples of Applications
10.15 Summary

11 ANALYSIS OF SIMULATION RESULTS


11.1 Introduction
11.2 Fundamental Approaches
11.3 Verification Techniques
11.4 Validation Techniques
11.5 Verification and Validation in Distributed Environments
11.6 Transient Elimination
11.7 Stopping Principles for Simulations
11.8 Accreditation
12 SIMULATION SOFTWARE AND CASE STUDIES
12.1 Introduction
12.2 Selection of Simulation Software
12.3 General-Purpose Programming Languages
12.4 Simulation Languages
12.5 Simulation Software Packages
12.6 Comparing Simulation Tools and Languages
12.7 Case Studies on Simulation of Computer and Telecommunication Systems
Brief Introduction
Computer performance evaluation is the process by which a computer system's resources and outputs are assessed to determine whether the system is performing at an optimal level. It is analogous to the voltmeter a technician uses to check the voltage across a circuit. Performance evaluation is concerned with studying the performance and behaviour of computer systems in order to make design, selection, and procurement choices for these systems and their components that balance performance against cost. Processors, operating systems, computer architectures and organizations, network configurations, languages, and databases are therefore all subjects of investigation and optimization. To apply the ideas of performance analysis, we must be familiar with the systems under study. Studying such systems requires collecting data on the performance of real systems, analyzing and interpreting the results, designing experiments, and testing hypotheses; it also requires modelling the systems so that we can experiment with and analyze the performance of choices or design decisions that do not yet exist. Performance analysis therefore splits into two basic areas: measurement and analysis of systems on the one hand, and modelling on the other, and the methodologies used divide the same way. Measurement and analysis call for statistics, data analysis methods, mathematical expressions and measures, and experimental design tools, while modelling calls for probability, queueing theory, simulation techniques, and other mathematical methods.
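As a small illustration of the modelling side described above, the following sketch computes the steady-state metrics of a single server treated as an M/M/1 queue (the topic of Chapter 6), using Little's Law. The arrival and service rates in the example are hypothetical, chosen only for illustration.

```python
# Minimal sketch (illustrative only): the "modelling" side of performance
# analysis applied to a single server modelled as an M/M/1 queue.
# Assumptions: Poisson arrivals at rate lam (jobs/s), exponential service
# at rate mu (jobs/s), and lam < mu for stability.

def mm1_metrics(lam: float, mu: float) -> dict:
    """Return basic steady-state metrics of an M/M/1 queue."""
    if lam >= mu:
        raise ValueError("Unstable system: arrival rate must be below service rate")
    rho = lam / mu                 # server utilization
    n = rho / (1.0 - rho)          # mean number of jobs in the system
    r = n / lam                    # mean response time via Little's Law: N = lambda * R
    return {"utilization": rho, "mean_jobs": n, "mean_response_time": r}

if __name__ == "__main__":
    # Hypothetical example: 8 requests/s arriving at a server that can handle 10 requests/s.
    print(mm1_metrics(lam=8.0, mu=10.0))
```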

Tools for Performance Evaluation of Computer Systems:


Historical Evolution and Perspectives
Since the early years of computing, software tools have been used to evaluate and improve system performance. Performance evaluation was soon recognized as fundamental in several phases of a computer system's life-cycle, namely design, sizing, procurement, deployment, and tuning. However, due to the inherent complexity of the systems being evaluated and the novelty of the computing field, effective performance evaluation tools took several years to appear on the market. Simulation was the first technique used extensively, applied initially to the hardware logic of single components and later to entire systems. The introduction of simulation languages in the 60s, such as Simscript and GPSS, was a milestone, since several tools oriented to the simulation of computer systems and networks appeared on the market shortly afterwards. In the early 70s, two simulation packages oriented to computer performance analysis, namely Scert and Case, were among the first to achieve commercial success. It must be pointed out that, owing to IBM's dominant position in the computer market from the 60s to the 80s, almost all tools were developed for modeling IBM systems and network technologies. Features of all generations of IBM systems, such as the 360 family and MVS, were analyzed in depth through simulation models and with the other new analysis techniques that were becoming available. Other types of tools, such as hardware monitors, i.e., electronic devices connected with probes to the system being measured and capable of detecting significant events from which performance indexes can be deduced, were also used in the 70s. These never achieved wide diffusion because of their high cost, their difficulty of use, and the large effort required to adapt them to different systems and configurations.
In those years, models started to emerge as a new way to evaluate single components and system architectures. Among the problems approached were the evaluation of time-sharing supervisors, I/O configurations, swapping, paging, memory sizing, and networks of computers. Commercial interest in simulation modeling tools declined once efficient computational algorithms for analytical modeling appeared, thanks to the pioneering work of Buzen. Analytical techniques rapidly became popular because of their relatively low cost, general applicability, and ease and flexibility of use compared with simulation. Such techniques are still popular today and have been the subject of several books and surveys. BEST/1 was the first tool implementing analytical techniques to be marketed commercially with great success. Tens of tools for analytical modeling soon appeared on the market. Over the years, as each new analytical technique has been discovered, a new tool implementing it has been developed. Thus, we now have performance evaluation tools based on Queueing Networks, Petri Nets, Markov Chains, Fault Trees, Process Algebras, and many other approaches. Hybrid and hierarchical modeling techniques were introduced in the 70s and 80s to analyze very large and complex systems. Starting from the 90s, because of the growth of the state spaces needed to represent models of modern systems, simulation has again become a fundamental tool for model evaluation. This is also a consequence of the dramatic increase in computational power over the last two decades, which has made simulation a more effective computational tool than in the past. Several tools were designed specifically to solve particular classes of problems. For example, SPE.ED is a tool focused on the problems typical of Software Performance Engineering. More recently, in the security domain, the ADVISE method has been introduced to quantitatively evaluate the strength of a system's security. In spite of this long historical evolution, there is a lack of surveys covering the history and current perspectives of the performance tool area. The aim of this paper is to fill this gap and provide an up-to-date review and critique of current software tools for performance modeling. We also point the reader to a special issue on popular open source tools developed in academia in recent years.
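As a concrete illustration of the analytical techniques mentioned above, the sketch below implements one widely used algorithm of that family, exact Mean Value Analysis for a single-class closed queueing network with load-independent stations (the topic of Chapters 7 and 8); it is not Buzen's own convolution algorithm, and the two-station demands and population in the example are hypothetical, chosen only for illustration.

```python
from typing import List, Tuple

def exact_mva(demands: List[float], n_customers: int) -> Tuple[float, float, List[float]]:
    """Exact MVA for a single-class closed network of load-independent queueing
    stations. demands[k] is the total service demand D_k (visit count times mean
    service time) at station k; n_customers must be at least 1."""
    assert n_customers >= 1, "need at least one customer"
    m = len(demands)
    q = [0.0] * m                              # mean queue lengths for population 0
    for n in range(1, n_customers + 1):
        # Arrival theorem: a job arriving with population n sees the
        # steady-state queue lengths of the network with population n - 1.
        r = [demands[k] * (1.0 + q[k]) for k in range(m)]
        r_total = sum(r)                       # system response time
        x = n / r_total                        # system throughput (no think time)
        q = [x * r[k] for k in range(m)]       # Little's Law applied per station
    return x, r_total, q

if __name__ == "__main__":
    # Hypothetical two-station system: CPU demand 0.05 s, disk demand 0.08 s,
    # 10 concurrent jobs and no think time.
    throughput, response, queues = exact_mva([0.05, 0.08], n_customers=10)
    print(f"X = {throughput:.2f} jobs/s, R = {response:.3f} s, queues = {queues}")
```

Tools of this kind solve such recurrences in milliseconds, which is why analytical modeling displaced much of the early simulation market before large state spaces pushed practitioners back toward simulation.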
