TIE-21204 2015
Ohjelmistotekniikka
Contents 2/10
2.9 Schools of testing 44
2.10 Testing 50
2.11 Testing is demanding work 53
2.12 Testing is team work 54
2.13 What can be required from testing? 55
2.14 Important terms 56
3. Testing based on test cases 62
3.1 Test case in a nutshell 63
3.2 Test case structure and content 71
3.3 Other notes about test cases 74
4. Testing as a part of a software engineering process 75
4.1 Levels of testing – V-model of testing 79
4.2 Unit testing 84
4.3 Test-Driven Development 104
Contents 3/10
4.4 Techniques for designing test cases 108
4.5 Equivalence partitioning method 109
4.6 Boundary value analysis 114
4.7 Testing combinations 119
4.8 Fuzz testing 126
4.9 Low level integration testing 129
4.10 Continuous Delivery 144
4.11 System testing 147
4.12 System integration testing 156
4.14 Testing in agile development 170
4.15 Acceptance testing 179
4.16 Agile acceptance testing: ATDD 187
4.17 Alpha and Beta tests 190
4.18 Are all test levels and phases needed? 191
Contents 4/10
4.19 When to move to the next level? 195
5. Exploratory testing 203
5.1 Exploratory testing in nutshell 204
5.2 Exploratory testing: Examples 209
5.3 Exploratory testing is based on strategies and knowledge 211
5.4 Starting point for a session 215
5.5 Documenting the testing? 219
5.6 Exploratory testing in practice 221
5.7 Fast test planning for new features 223
5.8 Preparation 224
5.9 Agile testing in non-agile project 225
6. Risk analysis and prioritization of tests 237
6.1 Basis in understanding usage 239
6.2 Flow of risk analysis 240
Contents 5/10
7. On documentation 253
7.1 High level test plan documentation 254
7.2 Test planning process 255
7.3 IEEE 829 guides planning 260
7.4 Low level: documentation of test cases 263
7.5 Lightly in small projects 264
7.6 Test reports 265
7.7 Good test report 266
7.8 IEEE 829-2008 – In agile development, less documentation is required 271
8. Monitoring testing 272
8.1 Test management software in a nutshell 273
8.2 Monitoring error situation 274
8.3 Test report 275
9. More on methods and techniques of testing 276
Contents 6/10
9.1 Is the source code used in testing or not? 280
9.2 Who does the testing? 284
9.5 What kinds of things are tested? 287
9.8 What kinds of problems are looked for? 290
9.11 How does the program need to be used? 293
9.12 Installation testing 294
9.13 Performance and load testing 295
9.14 Robustness testing 302
9.15 Regression testing 303
9.16 Smoke testing 306
9.17 How do we know whether a test run was successful or not? 308
9.18 Heuristic consistency 309
10. Error reporting 311
10.1 An error report shared information 312
Contents 7/10
10.2 Can the error be repeated? 321
10.3 Recipe for a good error report 322
10.4 Error databases 327
11. Measuring software 329
11.1 Metrics 330
11.2 Non-functional testing 344
11.3 You get what you measure! – How metrics are fooled 345
11.4 System level coverage metrics 346
11.5 Code coverage metrics 348
11.6 Complexity metrics 359
11.8 Error seeding 361
12. Automation and tools 364
12.1 Test automation as a whole 366
12.2 What is test automation 367
Contents 8/10
Different ways to control the SUT (simplified) 368
12.3 Promises of test automation 369
12.4 Common problems of test automation 370
12.5 Limits of automation 371
12.6 Testability of software 374
12.7 Test automation – different kind of software engineering 379
12.8 Approaches to automation 383
12.9 Planning of automation 393
12.10 Well-known tools 397
12.11 Automation project 398
12.12 Choosing a tool 400
12.13 Model-based testing 405
13. Testing of information security 426
13.1 What is included in information security? 428
Contents 9/10
13.2 Testing of information security is important 430
13.3 Targets of information security testing 432
13.4 Nature of testing information security 433
13.5 Based on a risk analysis 434
13.6 Lots of guidelines 435
13.7 OWASP – Security of web pages 436
13.8 OWASP – Mobile security 439
13.9 Threats to PC applications 440
14. Techniques of static testing 458
14.1 Inspection 459
14.2 Review 468
14.3 Walkthrough 471
14.4 Static analysis of code 473
15. Improving testing 478
Contents 10/10
15.1 Continuous improvement 480
15.2 Improvement project 481
15.3 Key areas of testing process – an example with TPI 487
15.4 Improvement to meet the requirements of standards 494
15.5 Tester certification – ISTQB 496
16. Closing words of the course 498
Literature 502
Foreword
These slides have been created during the years 2003–2015, over which
the software testing course has been arranged at TUT in its current
form. Sources used include the books listed at the end of the slide set,
training materials by Maaret Pyhäjärvi and Erkki Pöyhönen, and related
course material from the University of Helsinki. Other sources are cited
in the slides as needed.
The slide set has been updated every year to keep it up to date, taking
into account the competencies that MScs will need in the near future.
The OTE mark in some of the slides indicates that the subject has
been covered in the prerequisite course Ohjelmoinnin tekniikat
(Programming Techniques) and will be dealt with more briefly.
[Figure: things a designed test may be unprepared for between test design and program execution:
1. Failures – network connection is lost, hard drive becomes full, file isn't found
2. Misuse of software – wrong purpose
3. Habits of use – different users
4. Use environment
5. Sabotage – internal attacks, external attacks, hacking, denial of service
6. Erroneous inputs – typos, misunderstandings, missing data, errors in files
Figure: Timo Malm, VTT. Data origin: Capers Jones. Software quality in 2008: A survey of the state of the art.]
Software testing, 2015 34(504)
2.5 The concept of test type
• The concept of test type is closely related to system features.
• It describes testing performed to measure the quality of a specific
feature.
– Functional testing tests functional features: does everything work as it
should? It can be done in low-level unit testing or in high-level system
testing, for example through the user interface.
– Usability testing tests usability.
– Performance testing tests performance, and so on.
• The idea is to focus testing on one aspect at a time and then make use
of methods and tools suitable for it, as well as of professionals who are
experts in it.
• Functional testing is the most common type and is therefore
examined most in this course.
• Standardization school.
– Testing should be based on standards, such as ISO/IEC 29119 or the
descriptions of ISTQB. The compatibility of testing methods with these
is considered a show of professionalism.
– Testing is the same in all contexts and so should be the practices.
• Quality management and assurance school.
– Testing is part of quality management and assurance.
– Testing is a method of verification and validation (V&V) and used in
well-defined, repeatable ways.
– Testing is well defined in software production processes.
– Testing management and measurement have a large role.
– Close to standardization school.
• Automation school.
– All testing should be automated; no testing should be performed
manually.
– The nature of bugs is such that automation can detect them.
– Test coverage should be near perfect.
– Testing is by its nature a logistic process.
• Developer-centric school.
– Unit and integration testing performed by developers is usually enough.
– Testing must be integrated into software production processes.
– Test automation must be practical, fast, simple and easy.
• Routine school.
– Testing is simple work that doesn’t require special skills.
– Pretty much everyone in the organization can test.
– Discipline, precision and following the plan are the most
important things in testing work.
• Holistic school.
– There is no one way to do testing.
– Good testing applies a context-dependent mixture of several
paradigms.
– All approaches complement each other.
– Even contradictory paradigms are good for testing.
[Figure: V-model fragment linking the functional specification to
system testing, with implementation, test design and result
verification shown alongside.]
Traditional, still essential
• Testing is attached to a traditional software engineering process
that follows the waterfall model, according to the V-model
• Describes the ”levels” of testing that are still relevant in all
development models
• Has a strong role in ”regulated” development and in standards
regarding safety-critical systems
• Part of an all-round understanding of testing
• In practice often too rigid to be applied literally; often more of
a pedagogical abstraction
Developer’s workstation
• In modern software development, developers have the
entire program available through version control.
– In the state in which others have submitted their work into the
whole.
• Their own code is developed within this whole.
• The testing performed by the developer focuses on their
own work, but at the same time they check that the entire
program compiles and can at least be launched.
• Thus developers integrate their own code into the whole
themselves and use unit tests to check that it works.
• Unit testing starts from the lowest-level units, and drivers
are implemented for these units
• In the next phase, the drivers are replaced with the
corresponding real units as they become available
• These units are tested next
– Again, the necessary drivers are implemented for them
– The already tested lowest-level units remove the need for stubs
• Emphasizes low-level functionality
• No prototype of the system is available until the end of the
integration testing
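As a minimal sketch of the bottom-up idea above: a low-level unit is exercised directly by a test "driver", and once it is tested, the next level up can use the real unit instead of a stub. The unit names and parsing logic here are illustrative assumptions, not from the course material.

```python
# Bottom-up integration sketch (hypothetical units, not from the course).

# Low-level unit: once this is tested, higher levels need no stub for it.
def parse_price(text):
    """Parse a price string like '12.50' into cents."""
    euros, _, cents = text.partition(".")
    return int(euros) * 100 + int(cents or 0)

# "Driver" for the low-level unit: a test that calls it directly.
def test_parse_price():
    assert parse_price("12.50") == 1250
    assert parse_price("3") == 300

# The next level up uses the real, already-tested unit instead of a stub.
def total_price(items):
    return sum(parse_price(t) for t in items)

def test_total_price():
    assert total_price(["1.00", "2.50"]) == 350

test_parse_price()
test_total_price()
print("all driver tests passed")
```

The drivers disappear naturally as the higher-level callers become available and take over their role.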
[Figure: traceability matrix mapping test cases (TC2, TC4, …) against
requirements, with OK marks where a test covers a requirement –
whenever possible, it is worthwhile to cover many requirements with
the same test cases.]
• Purpose
– Ensure that the combined system works well and all component
systems interoperate.
• Based on a test plan.
• Test environment matches the end use environment of the system.
• Test environment combines systems produced by different parties.
• One of the most critical parts of testing in information system
projects.
– Many large projects are in trouble because the systems made by
different parties don’t work together.
• Requires cooperation between all parties.
• Done by a separate team.
Approach
• Errors in documentation
• Erroneous assumptions made in absence of documentation
• Timeouts
– One system doesn’t wait until another finishes a task
[Figure: unit and integration testing is repeated for each batch of new
features, features 2…N.]
• Level of interaction:
– User story or use case as a starting point
– Works directly as a starting point for exploratory testing
• Level of logic:
– Traditional testing techniques – equivalence partitioning,
boundary value analysis, decision trees, state machine based
testing
• Physical level:
– Monitoring events programmatically as part of exploratory testing
• Test planning
– The W-model of testing, where the basic lines of testing are
planned at the beginning of the project but details only when the
product is getting ready for testing
• Dynamic guidance
– Although the product has a build plan, testing is fitted to the
completion of components in an agile manner
– The emphasis of testing on different sections is changed
dynamically according to how many errors are found and the risk
levels of the sections
– The test set for each round is, in the end, a situational choice
– Test suites are updated according to observations obtained from
clients and stakeholders
• Exploratory testing
– Testing always includes a more free-form part
– Familiarizing oneself with a new version is done by exploring; this
gives an idea of targets for systematic testing and is therefore
part of test planning
– When systematic testing no longer finds errors effectively,
emphasis is moved more to exploratory testing
– Systematic tests are also updated based on observations
[Figure: exploratory testing produces knowledge that directs
systematic testing.]
• Application situation
– New build has been tested systematically
• Basic idea:
– Change in testing methods helps find errors
– Use exploratory testing to find new errors
– Apply techniques that have yet to be used in the project
1. Choose a role in which you act (simulate some user group’s choice of
features to use)
2. Choose the test environment based on that
3. Choose use cases and prioritize them
4. Recognize priorities
– New features, changes
– Which areas have had errors before
– Feature priorities (for the product and chosen user demographic)
5. Perform an experimental round of testing
6. Perform a round of testing in which you deliberately make mistakes
7. Perform a round of testing that stresses the system
8. Fit your actions to your observations and improvise!
6. Risk analysis and prioritization of
tests
• Risk analysis is performed
– Unknowingly: if I only study for the exam on the previous night,
what are the odds that I’ll fail, and what are the consequences?
– Knowingly: when the key persons of the organization have to
travel on the same day to the same place, is it a good idea to put
them on the same flight?
• In developing safety-critical systems, risk analysis is
mandatory; elsewhere it is recommended
– There are no one-size-fits-all practices for risk analysis
– Every organisation should have a person who can plan efficient
and simple practices for it
• The methods of risk analysis vary by situation. This section only
covers a few methods suitable for directing testing. The risk
analysis for e.g. a project would be different.
[Figure: flow of risk analysis – sort features by risk, calculate a risk-
based priority, use the ordering to prioritize tests, then evaluate the
results and change the ordering if needed.]
[Figure: testing effort over time divided into ”must test”, ”should test”
and ”could test”, comparing estimated and actual time.]
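The flow above can be sketched as a small script. The feature names, the probability and impact scores, and the bucket cut-offs are illustrative assumptions, not from the course material; risk is computed here as probability × impact, a common convention in risk analyses.

```python
# Risk-based prioritization sketch (feature names and scores are made up).
features = [
    {"name": "login",   "probability": 2, "impact": 5},
    {"name": "search",  "probability": 4, "impact": 3},
    {"name": "reports", "probability": 1, "impact": 2},
]

# Risk = probability of failure x impact of failure.
for f in features:
    f["risk"] = f["probability"] * f["impact"]

# Sort features by risk, then bucket them into must/should/could test.
ordered = sorted(features, key=lambda f: f["risk"], reverse=True)
buckets = {
    "must test": ordered[:1],
    "should test": ordered[1:2],
    "could test": ordered[2:],
}

for bucket, fs in buckets.items():
    for f in fs:
        print(f"{bucket}: {f['name']} (risk {f['risk']})")
```

After a test round, the scores would be re-evaluated and the ordering changed if the observed errors suggest so.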
[Figure: IEEE 829 test documentation hierarchy. The master test plan
(MTP) is divided into system, integration and unit test plans; note
that the levels of testing may be other than those shown here. On
each test level, the test plan leads to test cases and test procedures
(shown for system testing), which drive test execution; execution
produces test level logs and anomaly reports. These are summarized
in the component test, component integration test, system test and
acceptance test reports, which feed the master test report, and the
test reports feed back into the master test plan.]
• Developer:
– Unit testing, integration testing
• Test engineer:
– Integration testing, system testing
– On the customer's side: acceptance testing
• User tests:
– Software is tested by its user, sometimes with a member of the
supplier’s testing team; acceptance testing
• Alpha testing:
– A user test at supplier's premises, test releases not public
• Beta testing:
– A user test at customer's premises, public test releases
9.3 Who does the testing? 2/3
• Crowd testing:
– Large crowds perform goal-oriented testing
• Subject-matter expert testing:
– The software is given for testing to a person (not necessarily an
end user) who knows (some part of) the application area very
well
– This results in errors, criticism and sometimes even praise
• Pair testing:
– Two testers test together with one computer and switch roles
from time to time
• There are many alternative metrics, and the first one offered
should not be accepted immediately
• The spirit among the testers is an important metric and sometimes
the best one: how does the software feel, has it matured, and is it
getting ready for deployment?
Direct and indirect metrics
• Basic metrics: tests run, errors found, coverage with regard to
requirements / specifications, code coverage, size,
complexity, amount of work done, etc.
• Derived metrics: error density, rate of successful and
unsuccessful error fixes, rate of errors found to errors that
could have been found, etc.
• Process-based metrics: execution time of tests, degree of
automation, life time of errors in days, amount of work spent
in test development, quality debt in an agile project, etc.
• Generally the problem is not in coming up with metrics, as
there are enough already; the problem is in picking the right
ones
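As a sketch of how the basic and derived metrics above relate, with made-up numbers (the figures below are illustrative, not from the course material):

```python
# Derived-metric sketch with made-up numbers.
errors_found = 42
size_kloc = 12.0            # size in thousands of lines of code
fixes_attempted = 40
fixes_successful = 36

# Derived metrics are ratios of the basic ones.
error_density = errors_found / size_kloc          # errors per KLOC
fix_success_rate = fixes_successful / fixes_attempted

print(f"error density: {error_density:.1f} errors/KLOC")
print(f"successful fixes: {fix_success_rate:.0%}")
```

The calculation is trivial; the hard part, as the slide says, is choosing which of the many possible ratios actually guide the project.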
[Figure: flowchart of a grading program from [Haikala&Märijärvi 06] –
initialisations (alustukset), calculate results, define grade (määrää
arvosana), update student’s data (päivitä opiskelijan tiedot), end.
The example reaches 63 % statement coverage.]
• Decision coverage requires that every branch of each decision is
taken, e.g. both whether a student attended the exam or not: 2
branches per decision
• With several consecutive decisions the number of paths grows
exponentially; with loops in the middle, the number of paths would
be even greater: 2^n + 2^(n-1) + 2^(n-2) + … + 2^1 + 2^0
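A minimal sketch of the difference between statement and decision coverage, using a hypothetical grading function in the spirit of the example above (the function, its pass limit of 50 points, and the test cases are assumptions, not the course's example):

```python
# Statement vs. decision coverage sketch (hypothetical grading function).
def define_grade(points, attended):
    # Decision 1: did the student attend the exam?
    if not attended:
        return None          # no grade without attendance
    # Decision 2: pass limit (assumed to be 50 points here)
    if points >= 50:
        return "pass"
    return "fail"

# A single passing case executes most statements, but full decision
# coverage needs both outcomes of BOTH decisions, hence three cases:
cases = [(60, True), (40, True), (60, False)]
for points, attended in cases:
    print(points, attended, "->", define_grade(points, attended))
```

Note that even full decision coverage exercises only a few of the possible paths through a larger program, which is why path coverage explodes as described above.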
[Figure: tool selection proceeds from comparison of alternatives
through eliminating alternatives to a single choice, and finally to a
pilot project.]
[Figure: state machine of a video recorder with states Ready, Play,
Fast forward, Rewind and Record, and transitions triggered by the
Play, FF, Rwd, Rec and Stop buttons; Stop returns to Ready. A second
copy of the figure numbers the transitions (1–5) selected for a test
sequence.]
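A state-machine model like the recorder above can be written down as a transition table and used to generate or check test sequences. The table below is an assumption read off the figure, not a specification of the actual device:

```python
# Sketch of a state-machine test model for the recorder figure above.
# The transition table is an assumption read off the figure.
transitions = {
    ("Ready", "Play"): "Play",
    ("Ready", "FF"): "Fast forward",
    ("Ready", "Rwd"): "Rewind",
    ("Ready", "Rec"): "Record",
    ("Play", "Stop"): "Ready",
    ("Fast forward", "Stop"): "Ready",
    ("Rewind", "Stop"): "Ready",
    ("Record", "Stop"): "Ready",
}

def run(events, state="Ready"):
    """Drive the model with a list of button presses; illegal presses fail."""
    for event in events:
        if (state, event) not in transitions:
            raise ValueError(f"{event!r} not allowed in state {state!r}")
        state = transitions[(state, event)]
    return state

# A test sequence covering every transition once (a simple transition tour).
tour = ["Play", "Stop", "FF", "Stop", "Rwd", "Stop", "Rec", "Stop"]
print(run(tour))   # ends back in "Ready"
```

Running the same sequence against the real device and comparing states is then the essence of model-based testing with a state machine.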
[Figure: model-based testing. Offline: generate a test suite from the
behaviour model, execute the test suite, evaluate the test results,
report the results. Online: repeatedly select the next test step,
execute the step on the model and the SUT, and evaluate the result
until the test objectives are achieved.]
Adapted from: Alan Hartman, Mika Katara, and Sergey Olvovsky. Choosing a Test Modeling Language:
a Survey. In Proceedings of the Haifa Verification Conference 2006, IBM Haifa Labs, Haifa, Israel,
October 2006. Number 4383 in Lecture Notes in Computer Science, pages 204–218. Springer 2007.
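The online loop above can be sketched in a few lines. The "model" and "SUT" here are toy counters invented for illustration; real model-based testing tools are far richer, but the step/compare loop is the same shape:

```python
# Sketch of the online model-based testing loop (toy model and SUT).
import random

class Model:
    """Expected behaviour: a counter."""
    def __init__(self):
        self.n = 0
    def step(self, inc):
        self.n += inc
        return self.n

class SUT:
    """System under test (here: a correct copy of the model)."""
    def __init__(self):
        self.n = 0
    def step(self, inc):
        self.n += inc
        return self.n

model, sut = Model(), SUT()
random.seed(0)
for _ in range(20):                      # loop until objectives achieved
    inc = random.choice([1, 2, 3])       # select next test step
    expected = model.step(inc)           # execute step on the model...
    actual = sut.step(inc)               # ...and on the SUT
    assert actual == expected, f"mismatch: {actual} != {expected}"
print("20 online steps passed")
```

An offline tool would instead emit the whole step sequence first as a test suite and execute it separately, as in the left branch of the figure.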
[Figure: in each phase (requirements, specification, design, …) a draft
of the phase product is made and inspected until it is OK; problems
are thus found well before system testing on the project time line.]
[Figure: call graph fragment with functions foo(), bar1() and bar2().]