Software Reliability & SMERFS^3: A Methodology and Tool for Software Reliability Assessment
To provide an overview of
software reliability and to
illustrate the SMERFS^3 tool for
software reliability assessment.
3/14/2002
Outline of Presentation
• Overview of Software Reliability Modeling
• History of SMERFS^3
• Sample data analysis
– Data Input
– Data Plotting and transformations
– Modeling analysis
– Using SMERFS^3 for tradeoff analysis
• Lessons Learned
• Summary
Overview of
Software Reliability Modeling
OBJECTIVES OF SOFTWARE
RELIABILITY MODELING
• QUANTITATIVE ASSESSMENT OF
RELIABILITY
• ALLOCATION OF RESOURCES
DEFINITIONS
SOFTWARE RELIABILITY IS THE PROBABILITY
THAT A GIVEN SOFTWARE PROGRAM WILL
OPERATE WITHOUT FAILURE FOR A SPECIFIED
TIME IN A SPECIFIED ENVIRONMENT.
SOFTWARE RELIABILITY ENGINEERING (SRE)
IS THE APPLICATION OF STATISTICAL
TECHNIQUES TO DATA COLLECTED DURING
SYSTEM DEVELOPMENT AND OPERATION TO
SPECIFY, PREDICT, ESTIMATE, AND ASSESS THE
SOFTWARE RELIABILITY OF SOFTWARE-BASED
SYSTEMS.
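To make the definition concrete: under the common simplifying assumption of a constant failure rate (exponentially distributed times between failures; an illustrative assumption, not something the slide states), reliability follows directly from the MTBF:

```python
import math

def reliability(mtbf_hours: float, mission_hours: float) -> float:
    """Probability of failure-free operation for `mission_hours`,
    assuming a constant failure rate lambda = 1 / MTBF
    (exponential model; an illustrative assumption)."""
    lam = 1.0 / mtbf_hours
    return math.exp(-lam * mission_hours)

# With an MTBF of 1000 hours, the probability of surviving a
# 100-hour mission is exp(-0.1), about 0.905.
print(round(reliability(1000.0, 100.0), 3))
```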
Errors, Faults, Failures
• ERROR SEEDING/TAGGING MODELS
• DATA DOMAIN
• TIME DOMAIN
APPROACH TO ESTIMATING S/W
RELIABILITY IN THE TIME DOMAIN
[Figure: a program maps inputs to outputs; software failures occur at times t1, t2, ..., tn along the test timeline, which is divided into test periods 1-5. Two bar charts summarize the same failure data: error counts per test period, and the times between successive errors.]
EXAMPLE SOFTWARE
RELIABILITY MEASURES
ERROR COUNT MODELS
• FAILURE RATE
• TESTING TIME REQUIRED TO ELIMINATE THE NEXT K ERRORS OR TO ACHIEVE A SPECIFIED FAILURE RATE

TIME BETWEEN MODELS
• FAILURE RATE
• TESTING TIME REQUIRED TO ACHIEVE A SPECIFIED RATE

μ̂(t) = E{M(t)}, the cumulative mean value function (the expected cumulative number of failures by time t)
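Each of these measures can be computed from a fitted mean value function. A sketch using the Goel-Okumoto NHPP form m(t) = a(1 - e^(-bt)), one common NHPP model; the parameter values below are made up for illustration and are not SMERFS output:

```python
import math

# Illustrative fitted parameters (made up, not from SMERFS):
a = 120.0   # total expected faults
b = 0.02    # per-hour fault detection rate

def mean_value(t):      # m(t): expected cumulative faults by time t
    return a * (1.0 - math.exp(-b * t))

def failure_rate(t):    # lambda(t) = m'(t) = a*b*exp(-b*t)
    return a * b * math.exp(-b * t)

def remaining(t):       # expected faults still latent at time t
    return a - mean_value(t)

def time_to_rate(target):   # test time to drive lambda(t) down to target
    return math.log(a * b / target) / b

print(round(failure_rate(100.0), 4))   # current rate after 100 h of test
print(round(remaining(100.0), 1))      # faults expected to remain
print(round(time_to_rate(0.01), 1))    # hours to reach 0.01 failures/h
```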
SAMPLE
TRADE-OFF ANALYSIS
[Figure: two trade-off curves of required test time in hours: one plotted against target MTBF (600 to 1400 hours), one against target failure rate (0.0008 to 0.0018 failures per hour).]
TIME UNITS
ORGANIZATION OBJECTIVES
SMERFS: Statistical Modeling and Estimation of Reliability Functions for Software

SMERFS^3: Statistical Modeling and Estimation of Reliability Functions for Systems^3 (Hardware, Software, & Systems)
SMERFS and SMERFS^3

SMERFS

Time Models (time between failures)
• Littlewood & Verrall Bayesian Model
• Musa Basic Execution Model
• Geometric Model
• Non-homogeneous Poisson Process Model
• Musa Logarithmic Poisson Model
• Jelinski-Moranda Model

Error Count Models
• Generalized Poisson Process Model
• Non-homogeneous Poisson Process Model for Counts
• Brooks and Motley's Model
• Schneidewind Model
• S-shaped Reliability Growth Model

SMERFS^3

Software Models
• All of the above

Hardware Models
• Weibull
• Exponential
• Rayleigh*
• 4 Army Models*

System Models
• Markov*
• Hyper-exponential*

* Not Implemented

Model outputs: mean time to failure, time between failures, reliability, number of errors remaining, number of failures per unit time, number of failures expected in a specified time period, and required time to meet specified reliability objectives.
SMERFS^3
• FOR A PC WINDOWS ENVIRONMENT (WINDOWS, NT)
• INCORPORATES HARDWARE, SOFTWARE, AND SYSTEM RELIABILITY (SOFTWARE + HARDWARE)
• CAPTURES ALL OF THE FUNCTIONALITY OF SMERFS
• GUI INTERFACE (VISUAL C++)
• LIBRARY IN FORTRAN 90
• HARDWARE MODELS: EXPONENTIAL, RAYLEIGH, WEIBULL, CONTINUOUS AND DISCRETE RELIABILITY GROWTH PROJECTION MODELS, AND THE CONTINUOUS AND DISCRETE RELIABILITY TRACKING MODELS
• SYSTEM MODELS: HYPEREXPONENTIAL AND MARKOV MODELS
• INCORPORATED INTO ARMY'S INSIGHT TOOL
SMERFS^3 OPTIONS
DATA INPUT
ASCII FILE of TBF or FAULT COUNT
PLOTS (Raw, Fitted, Residual, etc.)
DATA EDITING
UNIT CONVERSION (Time: seconds, minutes, hours, etc.)
DATA TRANSFORMATIONS
DATA STATISTICS
MODEL FITTING
Model Applicability analysis
Model Fit analysis
Trade-off analysis
Current & Future Efforts
• ARMY
– Last year we had an initiative to incorporate SMERFS^3 into
the Army’s Insight tool.
• Insight is a tool developed by the Army’s Software Metrics Office that
helps the user tailor and implement the issue-driven software and
systems measurement process as defined in the DOD’s Practical
Software and Systems Measurement (PSM) Guide.
• Jet Propulsion Laboratory (JPL), Naval Postgraduate School
(NPS), and the Air Force Operational Test and Evaluation
Center (AFOTEC)
– Develop a new SMERFS^3 incorporating software, hardware, and systems, as well as integrating it with JPL's CASRE tool
– Additional features will include: early reliability prediction and new models
– Currently seeking funding and developing a series of papers/presentations showing the benefits of reliability assessment
HOW TO OBTAIN COPIES
• Contact me:
– By Phone: (540) 653-8388
– By Email: farrwh@nswc.navy.mil
• Indicate SMERFS or SMERFS^3
• The selected copy will be emailed as a zipped file
SMERFS^3 SAMPLE
DATA ANALYSIS
FILE INPUT FORMAT (TBF)
Software (time between failures):
Elapsed time; "0" to denote software; a severity level of the failure (levels 1-4)*; example:
5.3 0 1
6.7 0 2
10.5 0 1
System:
Elapsed Time; “0” to denote software, “1” to denote hardware; a
severity level of the failure (1-4 levels)*; example:
5.3 0 1
4.7 1 2
6.7 0 2
8.9 1 1
10.5 0 1

* Currently not activated
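A minimal reader for this layout (a sketch inferred from the examples above; SMERFS^3's actual file handling may differ):

```python
import os
import tempfile

def read_tbf(path):
    """Parse a SMERFS-style TBF file: each line holds an elapsed time,
    a 0 (software) / 1 (hardware) flag, and a severity level 1-4
    (severity is currently unused by SMERFS^3)."""
    records = []
    with open(path) as f:
        for line in f:
            fields = line.split()
            if not fields:
                continue
            records.append((float(fields[0]), int(fields[1]), int(fields[2])))
    return records

# Demo using the sample system records from the slide:
demo = os.path.join(tempfile.mkdtemp(), "system_tbf.txt")
with open(demo, "w") as f:
    f.write("5.3 0 1\n4.7 1 2\n6.7 0 2\n")

records = read_tbf(demo)
software_times = [t for t, kind, _ in records if kind == 0]
print(records)
print(sum(software_times))  # cumulative software time between failures
```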
FILE INPUT FORMAT
(INTERVAL)
Software Interval (Count) Data
Software:
Fault count; interval length (usually normalized to 1: 1 day, 1 month, etc.); example:
20 1
32 1
14 1
10 .5
5 .5
2 .25
0 .25
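A reader for the interval layout, plus the kind of quick statistics (total faults, fault rate per interval) a tool would report; again a sketch inferred from the example above, not SMERFS^3's actual parser:

```python
import os
import tempfile

def read_intervals(path):
    """Parse SMERFS-style interval data: fault count and interval
    length (in normalized units) on each line."""
    data = []
    with open(path) as f:
        for line in f:
            fields = line.split()
            if fields:
                data.append((int(fields[0]), float(fields[1])))
    return data

# Demo using the sample counts from the slide:
demo = os.path.join(tempfile.mkdtemp(), "soft_int.txt")
with open(demo, "w") as f:
    f.write("20 1\n32 1\n14 1\n10 .5\n5 .5\n2 .25\n0 .25\n")

data = read_intervals(demo)
total = sum(c for c, _ in data)             # total faults observed
rates = [c / length for c, length in data]  # fault rate per interval
print(total, rates)
```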
Data File (soft_int)
SMERFS^3 EXECUTION
Output From SMERFS^3
Data File (soft_tbf)
SMERFS^3 EXECUTION
Data Keyboard INPUT
Data File (system_tbf)
SMERFS^3 EXECUTION
FUTURE PLANS
Lessons Learned
1. Keep in mind the environment you are trying to assess. The
data collected for your assessment must reflect that
environment.
2. Keep in mind your objectives. That will help
determine what data to collect and at what level.
3. Provide feedback to those involved in the collection
process on the results of the assessment.
4. Software reliability models provide one perspective
on reliability but there are others!
Lessons Learned (cont.)
5. In our environment the models based upon counts did
just as well as the ones based upon “time”.
6. Everyone must agree on what constitutes a software
fault and where it may arise. Also, if you're considering
the "severity" of a fault, a set of accepted definitions
must be established.
7. If count data are used, you must capture the usage
and/or effort involved in using the software during the
time intervals of interest. If wall-clock TBF data are
used, you must also be aware of the effort.
Do’s and Don'ts of Software
Reliability Lessons Learned
Do’s
1. Consider it as a quantitative measure to be used in determining
software quality.
2. Track reliability over time and/or software version to determine whether
reengineering is needed.
3. Consider it as an important aid in determining resource allocation and in
the testing/maintenance efforts.
4. Develop specific objectives for a software reliability effort.
5. Develop an automated data collection process and database.
Don'ts
1. Assume that one software reliability model is applicable to all situations.
2. Extrapolate results beyond the environment in which the data is
generated.
3. Apply it before integrated testing in the life cycle.
4. Consider it as the sole data point upon which to make judgments.
SUMMARY
• THERE IS A NEED TO QUANTIFY BOTH COMPONENT
(HARDWARE & SOFTWARE) AND SYSTEM RELIABILITY
REFERENCES
KEY REFERENCES
• Software Reliability Engineering, by John Musa, McGraw-Hill,
1999.
• Software Reliability: Measurement, Prediction, Application, by
J. Musa, A. Iannino, and K. Okumoto, McGraw-Hill, 1987.
• Software Reliability Modeling, by M. Xie, World Scientific,
1991.
• Handbook of Software Reliability Engineering, edited by
Michael Lyu, McGraw-Hill, 1996.
• Recommended Practice for Software Reliability, ANSI/AIAA
R-013-1992, ISBN 1-56347-024-1.
• Center for Software Reliability - http://www.csr.city.ac.uk/
• Links to Others - http://members.aol.com/JohnDMusa/ and
http://www.dacs.dtic.mil/