CSC 415


Computer Performance:

Is the efficiency of a given computer system, or how well the computer performs, taking all relevant aspects into account.

Computer performance evaluation:

Is defined as the process by which a computer system's resources and outputs are assessed to determine whether the system is performing at an optimal level.

Objectives of performance study:

 Evaluating design alternatives (system design)
 Comparing two or more systems (system selection)
 Determining the optimal value of a parameter (system tuning)
 Finding the performance bottleneck (bottleneck identification)
 Characterizing the load on the system (workload characterization)
 Determining the number and sizes of components (capacity planning)
 Predicting the performance at future loads (forecasting).

BASIC TERMS:

System: Any collection of hardware, software, and network components.

Metrics: The criteria used to analyze the performance of the system or its components.

Workloads: The requests made by the users of the system.

Importance of Performance Evaluation in Computer Systems:

1. Optimization: Performance evaluation helps identify bottlenecks, inefficiencies, and areas for improvement within a computer system.

2. Capacity Planning: Performance evaluation provides insights into the capacity and scalability of computer systems.

3. Resource Allocation: Performance evaluation helps allocate resources effectively to ensure equitable access and optimal utilization of computing resources such as CPU, memory, disk I/O, and network bandwidth.

4. Quality of Service (QoS) Assurance: Performance evaluation ensures that computer systems meet specified quality of service objectives, such as response time, throughput, and availability.

5. Cost Optimization: Performance evaluation helps identify opportunities for cost optimization by optimizing resource usage, reducing energy consumption, and improving hardware efficiency.

Goals and Objectives of Performance Evaluation:

1. Measure Performance: Quantify and measure various performance metrics, such as response time, throughput, utilization, and scalability, to assess the effectiveness and efficiency of the system.

2. Identify Bottlenecks: Identify performance bottlenecks, resource constraints, and limiting factors that impede system performance and scalability.

3. Optimize Performance: Optimize system performance by identifying opportunities for improvement, such as tuning configuration parameters, optimizing algorithms, and enhancing resource utilization.

4. Predict Performance: Predict system performance under different workloads, scenarios, and operating conditions to support capacity planning, resource allocation, and decision-making.

5. Validate Design Decisions: Validate design decisions, architectural choices, and performance trade-offs to ensure that they align with performance objectives and user requirements.
Metrics for Performance Evaluation:

1. Throughput:
Throughput measures the rate at which a system processes or completes tasks within a given time period (the first three metrics are illustrated in the sketch after this list).

2. Response Time:
Response time, also known as latency, measures the time taken for a system to respond to a user request or complete a task.

3. Utilization:
Utilization measures the degree to which system resources (such as CPU, memory, disk, or network) are being used or occupied over time.

4. Scalability:
Scalability measures a system's ability to handle an increasing workload or user demand by adding resources or scaling horizontally.

5. Efficiency:
Efficiency measures the effectiveness of resource utilization in achieving desired system outcomes or goals.
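As a toy illustration of the first three metrics, the following Python sketch computes them from a hypothetical log of request start and end times; the log values, the observation period, and the single-resource assumption are all invented for illustration.

# Illustrative only: throughput, response time, and utilization from a
# hypothetical request log of (start, end) timestamps in seconds.
requests = [(0.0, 0.4), (0.5, 0.9), (1.0, 1.2), (1.3, 2.0), (2.2, 2.5)]
observation_period = 3.0  # seconds the system was observed

# Throughput: completed requests per unit of time.
throughput = len(requests) / observation_period

# Response time: time from request arrival to completion.
response_times = [end - start for start, end in requests]
mean_response = sum(response_times) / len(response_times)

# Utilization: fraction of the observation period the resource was busy
# (assuming one resource serving one request at a time).
busy_time = sum(response_times)
utilization = busy_time / observation_period

print(f"Throughput:         {throughput:.2f} requests/s")
print(f"Mean response time: {mean_response:.3f} s")
print(f"Utilization:        {utilization:.1%}")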
Measurement Techniques in Performance Evaluation:

1. Instrumentation:
Involves adding code to the software or hardware components of a system to collect performance data during execution.

2. Profiling:
Collects information about the execution time and resource usage of the various components of a system, helping to identify performance bottlenecks (techniques 1 and 2 are sketched after this list).

3. Tracing:
Captures and records the sequence of events, function calls, and system interactions during the execution of a program or system.

4. Simulation:
Uses mathematical models and simulations to predict system behavior and performance under different workload scenarios.

5. Benchmarking:
Involves running standardized tests or benchmarks on a system to evaluate its performance relative to other systems or industry standards.
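A minimal Python sketch of instrumentation and profiling: the code times a function by hand with time.perf_counter, then hands the same call to the standard-library cProfile module; the workload function itself is invented purely for illustration.

import cProfile
import time

def workload(n):
    # Stand-in computation, invented for illustration.
    return sum(i * i for i in range(n))

# Instrumentation: surround the call with timing code to collect data.
start = time.perf_counter()
result = workload(1_000_000)
elapsed = time.perf_counter() - start
print(f"workload took {elapsed:.4f} s (result={result})")

# Profiling: cProfile reports where the execution time is spent,
# helping to identify bottleneck functions.
cProfile.run("workload(1_000_000)")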
4. Hypothesis Tes ng: Models facilitate hypothesis
Performance Metrics in Different Layers of Computer Systems:

1. Hardware Layer:
Performance metrics: CPU utilization, memory bandwidth, disk I/O throughput, network latency, power consumption, etc.

2. Operating System Layer:
Performance metrics: Process scheduling latency, context switch time, memory usage, file system throughput, kernel overhead, etc. (a sampling sketch follows this list).

3. Network Layer:
Performance metrics: Bandwidth, latency, packet loss rate, throughput, jitter, network congestion, etc.

4. Database Layer:
Performance metrics: Query response time, transaction throughput, concurrency, disk I/O latency, lock contention, index efficiency, etc.

5. Application Layer:
Performance metrics: Response time, throughput, user satisfaction, error rate, scalability, resource consumption, etc.
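As one possible way to sample a few hardware- and operating-system-layer metrics, the sketch below uses the third-party psutil package (an assumption, not something these notes prescribe; install it with pip install psutil).

import psutil

# CPU utilization averaged over a 1-second sampling interval.
cpu = psutil.cpu_percent(interval=1.0)

# Memory usage as a percentage of total RAM.
mem = psutil.virtual_memory().percent

# Cumulative disk and network I/O counters since boot.
disk = psutil.disk_io_counters()
net = psutil.net_io_counters()

print(f"CPU utilization: {cpu:.1f}%")
print(f"Memory usage:    {mem:.1f}%")
print(f"Disk read/write: {disk.read_bytes} / {disk.write_bytes} bytes")
print(f"Net sent/recv:   {net.bytes_sent} / {net.bytes_recv} bytes")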
PERFORMANCE MODELLING

What is a model?

In the context of performance modeling and beyond, a model is a simplified representation of a real-world system, process, or phenomenon.

Reasons for using models:

1. Understanding Complexity: Real-world systems and phenomena can be complex and difficult to comprehend.

2. Prediction and Forecasting: Models enable prediction and forecasting of future outcomes or behavior based on current data and assumptions.

3. Problem-solving and Decision-making: Models serve as tools for problem-solving and decision-making by providing a structured framework for analyzing problems, evaluating alternative solutions, and assessing their potential impacts.

4. Hypothesis Testing: Models facilitate hypothesis testing by allowing researchers to formulate hypotheses about the underlying mechanisms or relationships in a system and test them against empirical data or observations.

5. Design and Optimization: Models aid in system design and optimization by exploring different design alternatives, assessing their performance, and identifying optimal configurations or parameters to achieve desired objectives.

6. Communication and Visualization: Models provide a means of communication and visualization, allowing stakeholders to share and discuss complex ideas, concepts, and insights in a clear and intuitive manner.

7. Education and Training: Models are valuable educational tools for teaching and learning about complex systems, processes, and concepts.

8. Resource Efficiency: Models can be used to conduct virtual experiments or simulations that are less costly, time-consuming, or risky than real-world experiments.
Performance Modeling:

Performance modeling involves creating mathematical or computational models to analyze and predict the performance of computer systems, software applications, or networks under various conditions.

Components of performance modeling:

1. System Description: Defining the components, structure, and behavior of the system being modeled, including hardware, software, network topology, and workload characteristics.

2. Performance Metrics: Identifying the performance metrics to measure and analyze, such as response time, throughput, resource utilization, scalability, reliability, and availability.

3. Workload Characterization: Analyzing the workload patterns, arrival rates, service demands, and concurrency levels to represent the behavior of users or applications (a small sketch follows this list).

4. Resource Modeling: Modeling the resources (e.g., CPUs, memory, disks, network bandwidth) and their interactions within the system, including queuing delays, contention, and resource sharing.

5. Concurrency and Synchronization: Modeling the concurrency and synchronization mechanisms used in the system, such as locks, semaphores, and transactions, to analyze the impact on system performance.
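A toy Python sketch of workload characterization feeding a resource model: it estimates an arrival rate and a mean service demand from a hypothetical log, then applies the Utilization Law (U = X * S) from operational analysis; all numbers are invented for illustration.

# Hypothetical workload at a single resource: arrival timestamps and
# per-request service demands in seconds (values invented).
arrivals = [0.0, 0.8, 1.5, 2.1, 3.0, 3.9, 4.4]
service_demands = [0.20, 0.15, 0.25, 0.18, 0.22, 0.17, 0.21]
observation_period = 5.0  # seconds

# Workload characterization: arrival rate and mean service demand.
arrival_rate = len(arrivals) / observation_period          # req/s
mean_demand = sum(service_demands) / len(service_demands)  # s/req

# Utilization Law: U = X * S, where in a stable, lossless system the
# throughput X equals the arrival rate.
utilization = arrival_rate * mean_demand

print(f"Arrival rate: {arrival_rate:.2f} req/s")
print(f"Mean demand:  {mean_demand:.3f} s")
print(f"Utilization:  {utilization:.1%}")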
Four key techniques in performance modeling:

1. Analytical Modeling: Using mathematical techniques, such as queuing theory, stochastic processes, and probability theory, to develop analytical models that describe system behavior and performance characteristics.

2. Simulation: Building computer-based simulations to replicate the behavior of the system over time, enabling the study of complex interactions and dynamic behavior under different scenarios (techniques 1 and 2 are compared in the sketch after this list).

3. Statistical Modeling: Applying statistical techniques, such as regression analysis, time series analysis, and hypothesis testing, to analyze performance data, identify patterns, and make predictions about future behavior.

4. Machine Learning: Utilizing machine learning algorithms to analyze performance data, discover patterns, and develop predictive models that can adapt and improve over time based on new observations.
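To make techniques 1 and 2 concrete, the sketch below evaluates a single-server M/M/1 queue analytically, using the standard queuing-theory result that the mean response time is R = 1/(mu - lambda) for a stable queue, and cross-checks it with a small discrete-event simulation; the arrival and service rates are invented for illustration.

import random

lam, mu = 4.0, 5.0  # arrival and service rates (req/s), invented values

# Analytical model: for a stable M/M/1 queue (lam < mu),
# the mean response time is R = 1 / (mu - lam).
analytic_r = 1.0 / (mu - lam)

# Simulation: generate Poisson arrivals and exponential service times,
# then replay the single FIFO server to measure response times directly.
random.seed(42)
n = 200_000
t = 0.0
arrival_times = []
for _ in range(n):
    t += random.expovariate(lam)
    arrival_times.append(t)

free_at = 0.0        # time at which the server next becomes free
total_response = 0.0
for arr in arrival_times:
    start = max(arr, free_at)            # wait if the server is busy
    free_at = start + random.expovariate(mu)
    total_response += free_at - arr      # waiting time + service time

sim_r = total_response / n
print(f"Analytical mean response time: {analytic_r:.3f} s")
print(f"Simulated mean response time:  {sim_r:.3f} s")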
Basic timing terms:

i. Clock Signal: The clock signal in a digital circuit serves as a timing reference, synchronizing the operation of the various components.

ii. CPU Execution Time: The execution time or CPU time of a process, which we call Ci, is the total amount of time that the process executes; that time is generally independent of the initiation time but often depends on the input data.

iii. CPU Clock Cycle: A single increment of the central processing unit (CPU) clock, during which the smallest unit of processor activity is carried out.

iv. Clock Period: The clock period or cycle time, Tc, is the time between successive rising edges of a repetitive clock signal (a worked example follows this list).
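A small worked example tying these terms together: the clock period is the reciprocal of the clock frequency (Tc = 1/f), and the CPU time of a process is its cycle count multiplied by Tc. The frequency and cycle count below are invented for illustration.

# Worked example of clock period and CPU execution time.
f = 2.0e9                      # clock frequency: 2 GHz (invented)
Tc = 1.0 / f                   # clock period: 0.5 ns per cycle

clock_cycles = 3.0e9           # cycles the process needs (invented)
cpu_time = clock_cycles * Tc   # Ci = cycles * Tc = 1.5 s

print(f"Clock period Tc: {Tc * 1e9:.2f} ns")
print(f"CPU time Ci:     {cpu_time:.2f} s")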
Steps involved in model formulation for computer systems performance evaluation:

1. Define the problem
2. Collect data
3. Identify variables
4. Formulate hypotheses
5. Choose a model type
6. Develop the model
7. Validate the model
8. Analyze the model
9. Refine the model
10. Interpret results
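As a toy walk-through of steps 2 through 8, the sketch below collects hypothetical response-time measurements at several load levels, develops a simple linear model with numpy.polyfit, validates it against a held-out measurement, and analyzes (predicts) performance at a heavier load; all data are invented for illustration.

import numpy as np

# Step 2 (collect data): hypothetical mean response times (ms)
# measured at several load levels (req/s); values invented.
load = np.array([10, 20, 30, 40, 50])
resp = np.array([12.0, 15.1, 18.2, 20.9, 24.2])

# Steps 5-6 (choose and develop a model): fit resp = a*load + b.
a, b = np.polyfit(load, resp, 1)

# Step 7 (validate): compare a held-out measurement with the prediction.
held_out_load, held_out_resp = 35, 19.6
predicted = a * held_out_load + b
print(f"Held-out point: measured {held_out_resp} ms, predicted {predicted:.1f} ms")

# Step 8 (analyze): predict the response time at a heavier, unmeasured load.
print(f"Predicted at 80 req/s: {a * 80 + b:.1f} ms")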