Presentation On: Software Engineering Quality Assurance and Testing
What is Software?
• Computer software, or simply software, is a collection of computer
programs and related data that tells a computer what to do and
how to do it, in order to perform a specific job.
• Types of Software:
-System Software
-Application Software
What is Software Quality??
• Software Quality (as per ISO/ IEC 9126):
The totality of functionality and features of a
software product that contribute to its ability to
satisfy stated or implied needs.
Causes of Defects:
-Human Error:
coding, DB design, system configuration…
Causes: lack of knowledge, time pressure, complexity…
-Environmental Conditions:
change of environment…
Causes: radiation, electromagnetic fields, pollution,
sun spots, power failure…
Cost & Defects
[Figure: relative cost of error correction rising across the phases Specification, Design, Coding, Test, Acceptance]
-The cost of fixing a defect increases with the time it
remains in the system.
-Detecting errors at an early stage allows for
error correction at reduced cost.
Software Quality:
According to ISO/IEC 9126,
software quality consists of:
-Functionality
-Reliability
-Usability
-Efficiency
-Maintainability
-Portability
Types of QA:
Constructive QA
activities to prevent defects,
e.g. through appropriate methods of software
engineering
Analytical QA
activities for finding defects,
e.g. through testing
Constructive QA
Technical:
-Methods
-Tools
-Languages
-Templates
-IDEs
Organizational:
-Guidelines
-Standards
-Checklists
-Process rules and regulations
-Legal requirements
Analytical QA
[Figure of Analytical QA]
Static: examination without executing the program
-Reviews/walkthroughs
-Control flow analysis
Dynamic: includes executing the program
-Black box:
 Equivalence partitioning
 Boundary value analysis
 State transition testing
 Decision tables
 Use-case based testing
-White box:
 Statement coverage
 Branch coverage
 Condition coverage
 Path coverage
-Experience-based techniques
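The black-box techniques above can be sketched in a few lines. Assuming a hypothetical `grade()` function that maps scores 0–100 to pass/fail (not from the source; introduced only for illustration), equivalence partitioning picks one representative value per partition, while boundary value analysis tests at the edges of each partition:

```python
def grade(score: int) -> str:
    """Hypothetical test object: classify an exam score.

    Valid partition 1: 0-49   -> "fail"
    Valid partition 2: 50-100 -> "pass"
    Invalid partitions: score < 0 or score > 100 -> error
    """
    if score < 0 or score > 100:
        raise ValueError("score out of range")
    return "pass" if score >= 50 else "fail"

# Equivalence partitioning: one representative value per partition.
assert grade(25) == "fail"    # valid partition 1
assert grade(75) == "pass"    # valid partition 2

# Boundary value analysis: test at the edges of each partition.
assert grade(0) == "fail"     # lower boundary of partition 1
assert grade(49) == "fail"    # upper boundary of partition 1
assert grade(50) == "pass"    # lower boundary of partition 2
assert grade(100) == "pass"   # upper boundary of partition 2

# Invalid partitions: values just outside the valid range must be rejected.
for invalid in (-1, 101):
    try:
        grade(invalid)
        raise AssertionError("expected ValueError")
    except ValueError:
        pass
```

Both techniques are dynamic black-box techniques: they derive test cases from the specification alone, without looking at the code.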
Functional Q-attribute:
-Functionality
Non-Functional Q-attributes:
-Reliability
-Usability
-Efficiency
-Maintainability
-Portability
Functional Q-attributes:
Functionality means correctness & completeness
-Reliability:
maturity, fault tolerance, recovery after failure
-Usability:
learnability, understandability, attractiveness
-Efficiency:
minimal use of resources
-Maintainability:
verifiability, changeability
-Portability:
transferability, ease of installation…
Non-Functional Q-attributes
Reliability
- Maturity, fault tolerance, recovery after failure
- Characteristic: under given conditions, a software system will keep its
capabilities/functionality over a period of time.
- reliability = quality/time
Usability
- Learnability, understandability, attractiveness
- Characteristics: easy to learn, compliance with guidelines, intuitive handling
Efficiency
- System behavior: functionality and time behavior
- Characteristics: the system requires a minimal use of resources
(e.g. CPU time) for executing the given task
Maintainability
- Verifiability, stability, analyzability, changeability
- Characteristics: amount of effort needed to introduce changes in system
components
Portability
- Replaceability, compliance, installability
- Ability to transfer the software to a new environment
(software, hardware, organization)
- Characteristics: easy to install and uninstall, parameterization
How much Testing is Enough?
-Exit criteria
-Priority (optional)
Testing and Debugging?
Test → Defects → Debugging → Correction → Re-test
[Figure: the fundamental test process — Test Planning, Test Controlling, Test Analysis and Design, Test Implementation and Test Execution, Evaluating Exit Criteria and Reporting, Test Closure Activities]
- Testing is more than test execution!
Test Planning
- Determining the scope and risks
- Identifying the objectives of testing and exit criteria
- Determining the approach: test techniques, test coverage, test teams
Test Analysis and Design
- Evaluate the availability of test data and/or the
feasibility of generating test data
Test Implementation and Test Execution
• Creating test sequences
– creating test automation scripts, if necessary
– configuring the test environment
– executing tests (manually or automatically)
• Following the test sequence stated in the test
plan (test suites, order of test cases)
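Such a test automation script can be sketched with Python's standard `unittest` module. The `discount()` test object is hypothetical (introduced only for illustration); the explicit suite fixes the order of test cases, as a test plan would, and the runner produces the actual results:

```python
import unittest


def discount(price: float, percent: float) -> float:
    """Hypothetical test object: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be in [0, 100]")
    return round(price * (1 - percent / 100), 2)


class DiscountTest(unittest.TestCase):
    def test_no_discount(self):
        self.assertEqual(discount(80.0, 0), 80.0)

    def test_half_price(self):
        self.assertEqual(discount(80.0, 50), 40.0)

    def test_invalid_percent(self):
        with self.assertRaises(ValueError):
            discount(80.0, 150)


if __name__ == "__main__":
    # Build an explicit suite so the order of test cases follows
    # the test plan rather than alphabetical discovery.
    suite = unittest.TestSuite([
        DiscountTest("test_no_discount"),
        DiscountTest("test_half_price"),
        DiscountTest("test_invalid_percent"),
    ])
    unittest.TextTestRunner(verbosity=2).run(suite)
```

The runner's log (pass/fail per case) is exactly the kind of test log evaluated later against the exit criteria.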
Evaluating Exit Criteria and Reporting
– Comparing test results against the defined objectives (e.g. exit criteria)
– Evaluating test logs (summary of test activities)
Test Controlling
- Test control is an ongoing activity
influencing test planning. The test plan
may be modified according to the information
acquired from test controlling.
- The status of the test process is determined.
Test Closure Activities
- Consolidate experience, facts and numbers.
• Test execution
– The process of running a test, producing actual results.
• Regression tests:
– Testing of a previously tested program following modification, to ensure that defects have not been introduced
or uncovered in unchanged areas of the software as a result of the changes made. It is performed when the
software or its environment is changed.
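A regression test can be sketched as re-running an existing suite after a modification. The `normalize()` function below is hypothetical (introduced only for illustration): it was changed to also collapse tabs, and the old cases are re-executed to confirm that behavior in unchanged areas still holds:

```python
def normalize(text: str) -> str:
    """Test object after a change: now collapses any whitespace, not only spaces."""
    return " ".join(text.split())


# Regression suite: the pre-change expected results, kept under version
# control and re-executed whenever the software or its environment changes.
REGRESSION_CASES = [
    ("hello  world", "hello world"),   # behavior that must not change
    ("  padded  ", "padded"),
    ("a b c", "a b c"),
]


def run_regression() -> bool:
    """Re-test: return True only if every old case still passes."""
    return all(normalize(given) == expected for given, expected in REGRESSION_CASES)


assert run_regression()                                 # no regression in unchanged areas
assert normalize("tab\tseparated") == "tab separated"   # the new behavior
```

If any old case fails, the change introduced a regression and goes back to debugging before re-test.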
Wrong!
Testing is a constructive activity as well.
Personal attributes of a good tester /1
Experience
– Personal factors influence error occurrence
– Experience helps in identifying where errors might accumulate
Differences: to design – to develop – to test
– In principle, one person can be given all three roles.
• Differences in goals and role models must be taken into account
• This is difficult but possible
• Other solutions (independent testers) are often easier and produce better results
Independent testing
Developer test
-The developer will never examine his "creation" unbiased
(emotional attachment).
• He, however, knows the test object better than anybody else.
• Extra costs result from orienting another person to the test
object.
-Human beings tend to overlook their own faults.
• The developer runs the risk of not recognizing even self-evident defects.
• Teams of developers
– Developers speak the same language
– Costs for orientation in the test object are kept moderate especially
when the teams exchange test objects.
– Danger of generation of conflicts among developing teams
• One developer who looks for and finds a defect will not be the other
developer’s best friend
• Test teams
– Creating test teams covering different project
areas enhances the quality of testing.
– It is important that test teams of different areas in
the project work independently.
Types of test organization /4
• Outsourcing tests
– The separation of testing activities and development activities offers the best independence
between test object and tester.
– Outsourced test activities are performed by persons having relatively little knowledge
about the test object and the project background.
• The learning curve brings high costs; therefore unbiased third-party experts should be involved at the early
stages of the project.
Difficulties /2