CS-E4960 - Software Testing and Quality Assurance


Adj. Prof. Jussi Kasurinen
(Based on last year’s slides by Dr. Juha Itkonen)
Fall 2017
Exploratory vs. Confirmatory Testing
and the Role of Knowledge
28.9.2016
Software testing…
• is creative and exploratory work
• requires skills and knowledge
– application domain
– users’ processes and objectives
– some level of technical details and history of the application under test
• requires a certain kind of attitude
• requires the ability to work without exact objectives
• Organizations may not have a well-defined goal for the quality they want

Quality characteristics
Average importance ratings from the assessments:
• Functional suitability 4.2
• Reliability 4.1
• Performance 3.8
• Operability 3.6
• Security 4.0
• Compatibility 3.9
• Maintainability 3.5
• Transferability 3.3

• All at least somewhat important: only in 9 out of 248 (3.6%) assessments was a characteristic rated “not important”
• Perceived quality, not absolute quality!

Jussi Kasurinen
Important things in testing that affect perceived quality (2009 survey)
• Trust between the customer and developer: specifications are distributed freely, things are discussed, and more information is available if needed.
• Conformance with the ISO/IEC 29119 model concepts: the workflow has a feedback system that is actually used; plans and decisions are assessed and changes can be made.
• Elaboration of quality: everyone in the organization knows what kind of quality is preferred and tries to achieve it.

Things that do not affect perceived quality much, even if they seem to have a big influence
• End-product criticality: obviously a rocket control system is tested more thoroughly than a mobile game, but the preferred quality comes from the domain, not from criticality.
• Development method: an agile product can be of as high quality as a traditionally developed one; open source is not necessarily better (or worse) than closed software.
• Outsourcing: in large organizations, outsourcing does not seem to affect perceived quality; in smaller organizations it may, but not always.

Typical, confirmatory paradigm of software testing techniques
• Comprehensive pre-planning of the tests
• High emphasis on documentation
• Test execution is seen as a mechanical, repetitive activity
– including recognition of the defects
• The goal is automation
– eliminating the human factor
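In code, the confirmatory paradigm corresponds to pre-scripted checks whose expected results are fixed before execution. A minimal sketch, where `parse_price` is a hypothetical function under test invented for this example:

```python
# A minimal sketch of the confirmatory style: each test is pre-planned,
# has a fixed expected result, and can be replayed mechanically.
# `parse_price` is a hypothetical function under test, not a real library.

def parse_price(text: str) -> float:
    """Toy implementation so the example is runnable."""
    return float(text.replace("€", "").replace(" ", "").replace(",", "."))

# Pre-designed test cases: (input, expected result) pairs fixed in advance.
SCRIPTED_TESTS = [
    ("10,50€", 10.5),
    ("1 000€", 1000.0),
    ("0€", 0.0),
]

def run_scripted_tests():
    """Execute every scripted check and collect mismatches."""
    failures = []
    for given, expected in SCRIPTED_TESTS:
        actual = parse_price(given)
        if actual != expected:
            failures.append((given, expected, actual))
    return failures

print(run_scripted_tests())  # → [] when all checks pass
```

Note that such a script only recognizes the failures someone anticipated when writing the expected results; this is the limitation the exploratory approach addresses.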

Exploratory Testing is creative testing without predefined test cases
Based on the knowledge and skills of the tester

1. Tests are not defined in advance
– Exploring with a general mission, without specific step-by-step instructions on how to accomplish it
2. Testing is guided by the results of previously performed tests and the knowledge gained from them
– Testers can apply deductive reasoning to the test outcomes
3. The focus is on finding defects by exploration
– instead of demonstrating systematic coverage
4. Parallel learning of the system under test, test design, and test execution
5. The experience and skills of the individual tester strongly affect effectiveness and results

Itkonen J, Rautiainen K (2005) “Exploratory testing: a multiple case study.” Proceedings of International Symposium on Empirical Software Engineering, pp 84–93.

Knowledge is invaluable in software testing
• Domain knowledge
– Knowledge and skills gained in the application domain area
– How the system is used in practice, and by whom
– What are the goals of the users
– How is the system related to the whole (business) processes
• Technical system knowledge
– How the system was built
– What are the typical problems and defects
– What are the most important features
– How the system is used and how all the details are designed to work
– How things work together and interact
• Testing knowledge
– Knowledge of testing methods and techniques
– Testing strategies and experience grown in practice

Confirmatory approach
• Knowledge flows through documents: Design → Documenting → Automated or Human Execution → Reporting
• Tacit knowledge is transferred into explicit knowledge (documentation) when designing and discovering new tests
• Tests are executed from the documentation; the results are transferred into explicit test reports

Exploratory approach
• Parallel design, execution, learning, and reporting
• Documentation and the tester’s tacit knowledge feed exploring, designing, and discovering new tests
• Learning flows back into the tester’s tacit knowledge
• The explicit outputs are test logs and test reports


Itkonen, J., M. V. Mäntylä, and C. Lassenius. 2016. “Test Better by Exploring:
Harnessing Human Skills and Knowledge.” IEEE Software 33 (4): 90–96.
Exploratory and Confirmatory testing are different approaches
• Different goals
• Different strengths
• Different shortcomings

• Automation helps with confirmatory goals
• Exploratory testing can be augmented with good tools
Individual differences are significant in manual testing
• It is common to bypass the effects of the people
– thinking that anybody is capable of executing comprehensively described tests
• In reality, the individual differences are large
– The effect of the tester is higher than the effect of the applied testing technique

It’s clear, in practice, that some testers are better than others at testing and more effective at revealing defects...
Exploratory Testing is an approach

• Most testing techniques can be used in an exploratory way
• Exploratory testing and (automated) scripted testing are the ends of a continuum:
– Freestyle exploratory “bug hunting”
– Chartered exploratory testing
– High level test cases
– Manual scripts
– Pure scripted (automated) testing
Lateral thinking

• Allowed to be distracted
• Find side paths and explore interesting areas
• Periodically check your status against your mission
Exploratory and experience-based testing focuses on
• Revealing problems
– instead of demonstrating features
• Investigating risks to the expected benefits
– instead of reassuring that the benefits will be realized
• Understanding the real needs of the users
– instead of relying (purely) on the specifications
• Providing relevant and useful information on the quality
– instead of reporting numbers of executed test cases and achieved coverage
Agenda
• Exploratory and Confirmatory testing
– Exploratory vs. confirmatory
– Knowledge in testing
• Ways of Exploring
– Session Based Test Management
– Touring testing
• Benefits of Experience Based Testing
– Strengths
– Shortcomings

Some ways of exploring in practice
Itkonen J, Rautiainen K (2005) “Exploratory testing: a
multiple case study.” Proceedings of ISESE, pp 84–93.
• Session-based exploratory testing
• Touring testing
• Varied scenario testing
• Freestyle exploratory testing
– Unmanaged ET
• Functional testing of individual features
• Exploratory smoke testing
• Exploratory regression testing
– verifying fixes or changes
• Exploring with high level test cases
• Outsourced exploratory testing
– Advanced users, strong domain knowledge

• Many more ways of managing ET can be found
– e.g., list by James Lyndsay:
http://workroomprds.blogspot.fi/2011/12/there-are-plenty-of-ways-to-manage.html

Session Based Test Management (SBTM)
Bach, J. "Session-Based Test Management", STQE, vol. 2, no. 6, 2000.
http://www.satisfice.com/articles/sbtm.pdf
Lyndsay, J., and N. van Eeden. 2003. “Adventures in Session-Based Testing.”
http://www.workroom-productions.com/papers/AiSBTv1.2.pdf

• Charter
• Time Box
• Reviewable Result
• Debriefing
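The four SBTM elements above can be sketched as a data structure. This is an illustration only; the field names and example values are my own, not from Bach's paper:

```python
from dataclasses import dataclass, field

# Sketch of the four SBTM elements (charter, time box, reviewable
# result, debriefing) as a session record. Names are illustrative.

@dataclass
class TestSession:
    charter: str                # mission: what to explore, with what resources
    time_box_minutes: int       # uninterrupted, time-boxed block of work
    notes: list = field(default_factory=list)   # reviewable result: session log
    bugs: list = field(default_factory=list)    # reviewable result: findings
    debriefed: bool = False     # set after the tester-manager debriefing

session = TestSession(
    charter="Explore the invoice search with malformed date ranges",
    time_box_minutes=90,
)
session.notes.append("Search accepts an end date earlier than the start date")
session.bugs.append("Empty result page shows no message to the user")
session.debriefed = True
print(session.charter, len(session.bugs))
```

The point of keeping the record this small is that the charter and the log, not pre-designed steps, are what gets reviewed in the debriefing.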

Session-Based Testing
– a way to manage ET
• Enables planning and tracking of exploratory testing
– without detailed test (case) designs
– dividing testing work into small chunks
– tracking testing work in time-boxed sessions
• Efficient – no unnecessary documentation
• Agile – it is easy to focus testing on the most important areas based on the test results and other information
– changes in requirements, increasing understanding, revealed problems, identified risks, …
• Explicit, scheduled sessions can help get testing done
– when resources are scarce
– when testers are not full-time testers...

Exploring like a tourist
– a way to guide ET sessions
• Touring tests use a tourist metaphor to guide testers’ actions
• Focus on intent rather than separate features
– This intent is communicated as tours in different districts of the software

James A. Whittaker. Exploratory Software Testing: Tips, Tricks, Tours, and Techniques to Guide Test Design. Addison-Wesley, 2009.

Districts and Tours
• Business district: Guidebook tour, Money tour, Landmark tour, Intellectual tour, FedEx tour, After-hours tour, Garbage collector’s tour
• Historical district: Bad-Neighborhood tour, Museum tour, Prior version tour
• Entertainment district: Supporting actor tour, Back alley tour, All-nighter tour
• Tourist district: Collector’s tour, Lonely businessman tour, Supermodel tour, TOGOF tour, Scottish pub tour
• Hotel district: Rained-out tour, Couch potato tour
• Seedy district: Saboteur tour, Antisocial tour, Obsessive-compulsive tour

James A. Whittaker. Exploratory Software Testing: Tips, Tricks, Tours, and Techniques to Guide Test Design. Addison-Wesley, 2009.

Examples of exploratory testing tours

The Guidebook Tour
• Use the user manual or other documentation as a guide
• Test rigorously by the guide
• Tests the details of important features
• Tests also the guide itself
• Variations
– Blogger’s tour: use third-party advice as a guide
– Pundit’s tour: use product reviews as a guide
– Competitor’s tour

The Garbage Collector’s Tour
• Choose a goal and then visit each item by the shortest path
• Screen-by-screen, dialog-by-dialog, feature-by-feature, …
• Test every corner of the software, but not very deeply in the details

The All-Nighter Tour
• Never close the app, use the features continuously
– keep the software running
– keep files open
– connect and don’t disconnect
– don’t save
– move data around, add and remove
– sleep and hibernation modes ...
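The Garbage Collector's Tour visits each chosen target by the shortest route. As an illustration (my own sketch, not from Whittaker's book), that route planning is breadth-first search over a map of the application's screens; the screen names and graph here are made up:

```python
from collections import deque

# Hypothetical adjacency list describing which screens link to which.
SCREENS = {
    "home": ["search", "settings"],
    "search": ["home", "results"],
    "results": ["search", "detail"],
    "detail": ["results"],
    "settings": ["home"],
}

def shortest_path(start, goal):
    """Breadth-first search: returns the shortest screen-by-screen route."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in SCREENS[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # goal unreachable from start

# Garbage Collector's Tour: touch every screen from "home" by its
# shortest route, briefly, rather than exploring any screen in depth.
for target in SCREENS:
    print(target, shortest_path("home", target))
```

In practice the tester keeps this map in their head; the code only makes the "shortest path to every corner" idea concrete.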

Agenda
• Exploratory and Confirmatory testing
– Exploratory vs. confirmatory
– Knowledge in testing
• Ways of Exploring
– Session Based Test Management
– Touring testing
– Varied Scenario Testing
• Benefits of Exploratory Testing
– Strengths
– Shortcomings

Strengths of exploratory testing
Testers’ skills and knowledge
• Utilizing the skills and experience of the tester
– Testers know how the software is used and for what purpose
– Testers know what functionality and features are critical
– Testers know what problems are relevant
– Testers know how the software was built
• Risks, tacit knowledge
• Enables creative exploring
• Enables fast learning and improving of testing
– investigating, searching, finding, combining, reasoning, deducing, ...
• Testing intangible properties
– “Look and feel” and other user perceptions

Strengths of exploratory testing
Process
• Agility and flexibility
– Easy and fast to focus on critical areas
– Fast reaction to changes
– Ability to work with missing or weak documentation

• Effectiveness
– Reveals a large number of relevant defects
– Knowledge in different forms can be readily applied

• Efficiency
– Low documentation overhead
– Fast feedback

ET is an efficient testing approach
The few studies comparing the exploratory and scripted testing approaches report:

• Exploratory testing reveals at least as many defects as the scripted approach
• Exploratory testing is much more cost-effective
– avoiding the expensive pre-design and documentation of the details of every test
Experimental Comparison of ET and Test Case Based Testing (TCBT)
Itkonen, J., M. V. Mäntylä and C. Lassenius. "Defect Detection Efficiency: Test Case Based vs. Exploratory Testing", in proceedings of the International Symposium on Empirical Software Engineering and Measurement, pp. 61-70, 2007.

• We compared the effectiveness of ET and TCBT in a student experiment
– effectiveness in terms of revealed defects
– test execution time was fixed
– replicated later in two other studies
• No difference in effectiveness
– ET revealed more defects, but the difference was not statistically significant
• ET was much more efficient
– TCBT required over five times more effort
• TCBT produced twice as many false defect reports as ET
Challenges of experience-based testing
• Planning and tracking
– How much testing is needed, how long does it take?
– What is the status of testing?
– How to share testing work between testers?
• Managing test coverage
– What has been tested?
– When are we done?
• Logging and reporting
– Visibility outside testing team
• or outside individual testing sessions
• Strong dependency on individual skills
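Several of these challenges (status, coverage, sharing work) are commonly tackled by summarizing session logs. A minimal sketch, assuming a simple (area, minutes, bugs found) log format of my own invention:

```python
from collections import defaultdict

# Hypothetical session log: (charter area, session minutes, bugs found).
SESSION_LOG = [
    ("invoicing", 90, 3),
    ("search", 60, 0),
    ("invoicing", 120, 1),
]

def status_report(log):
    """Aggregate effort and findings per charter area for tracking."""
    minutes = defaultdict(int)
    bugs = defaultdict(int)
    for area, mins, found in log:
        minutes[area] += mins
        bugs[area] += found
    return {area: (minutes[area], bugs[area]) for area in minutes}

print(status_report(SESSION_LOG))
# e.g. {'invoicing': (210, 4), 'search': (60, 0)}
```

A report like this gives visibility outside the testing team ("where has effort gone, what was found") without requiring detailed test case designs.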

Traditional emphasis on test documentation
• Test case design and documentation are over-emphasized
– both in textbooks and in research
• Test cases make test designs tangible, reviewable, and easy to plan and track – i.e. to manage and control
– A false sense of formality, comprehensiveness, control, and quality?
• In many contexts test cases and/or other test design documentation are useful – sometimes not
• The level and type of test documentation should vary based on context

RE: Test standard
• Known weaknesses:
– Difficult to apply in practice
– Not enough practical information
– ”Top-heavy”: far too much emphasis on the management
– Overly optimistic: feedback is not always used

RE: How do organizations develop their test process?
• Organizations do not tend to try out new ideas.
• Sporadic development is done when the inconveniences exceed the acceptable losses.
• Even if test process feedback is collected, it is often neglected if the process is “good enough”.

Reasons for documenting tests
• Optimizing
– Selecting optimal test set
– Avoiding redundancy
• Organization
– Organized so that tests can be reviewed and used effectively
– Selecting and prioritizing
• Repeatability
– Know what test cases were run and how; so that you can repeat the same tests
• Tracking
– What requirements, features, or components are tested
– What is the coverage of testing
– How is testing proceeding? Are we going to make the deadline?
• Proof of testing
– Evaluating the level of confidence
– How do we know what has been tested?

Detail level of test documentation

• Experienced testers need less detailed test documentation
– more experienced as testers
– more familiar with the software and application domain
• Input conditions
– depends on the testing technique and the goals of testing
– e.g. if the goal is to cover all pairs of certain input conditions, the test cases have to be more detailed than in tour-based exploring
• Expected results
– more detail is required if the result is not obvious, requires complicated comparison, etc.
– an inexperienced tester needs more guidance on what to pay attention to
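The "all pairs of input conditions" goal mentioned above can be made concrete: given parameters and their values, one can compute which value pairs a test set does and does not cover. A sketch with made-up parameter names:

```python
from itertools import combinations, product

# Hypothetical parameters of the system under test and their values.
PARAMS = {
    "browser": ["firefox", "chrome"],
    "locale": ["fi", "en"],
    "role": ["admin", "user"],
}

def required_pairs(params):
    """Every (param, value) pair combination that all-pairs coverage demands."""
    pairs = set()
    for (p1, v1s), (p2, v2s) in combinations(params.items(), 2):
        for v1, v2 in product(v1s, v2s):
            pairs.add(((p1, v1), (p2, v2)))
    return pairs

def covered_pairs(tests):
    """Pairs actually exercised by a concrete test set."""
    pairs = set()
    for test in tests:
        for a, b in combinations(sorted(test.items()), 2):
            pairs.add((a, b))
    return pairs

tests = [
    {"browser": "firefox", "locale": "fi", "role": "admin"},
    {"browser": "chrome", "locale": "en", "role": "user"},
    {"browser": "firefox", "locale": "en", "role": "user"},
    {"browser": "chrome", "locale": "fi", "role": "admin"},
]
missing = required_pairs(PARAMS) - covered_pairs(tests)
print(len(missing), "uncovered pairs")  # the 4 tests still miss 2 pairs
```

Here the four seemingly varied tests still miss two locale/role pairs, which illustrates why this goal forces more precisely specified test cases than a tour charter does.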

The Oracle Problem
The oracle problem: how to recognize a failure when it occurs
• The oracle problem is one of the fundamental challenges in software testing
– It is a relevant challenge for all testing
– A serious limitation and challenge in test automation
• Scripted testing aims at “solving” it by pre-documenting the expected result in test cases
– In practice, this is a very challenging problem that cannot be solved simply by writing “the expected result” down
Personal knowledge as an oracle
• Exploratory testers use their experience-based knowledge to interpret test results and recognize failures
• The behaviour of systems is too complicated to predict
– to describe comprehensively and precisely all that can go wrong
• Bugs are surprising, and testers are able to recognize one when they see it
– A human tester can identify problems without designing a check for that particular type of problem beforehand
• Partial oracles¹
– A tester with experience can identify incorrect results that are not plausible, without knowing the exactly correct result
– e.g. a controller can recognize incorrect values for financial figures: 300 €, 1 000 €, 10 000 € and 250 000 € are clearly incorrect if the correct figure is known to be around 1 000 000 €, even without knowing it exactly (e.g. 1 103 456,42 €)
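A partial oracle in the spirit of the controller example can be sketched as a plausibility check: reject values far from the roughly-known magnitude without knowing the exact figure. The factor-of-two threshold is an illustrative assumption, not from Weyuker's paper:

```python
# Partial oracle sketch: accept only values within a factor of the
# roughly-known magnitude. The threshold (factor=2.0) is illustrative.

def plausible(observed: float, expected_magnitude: float,
              factor: float = 2.0) -> bool:
    """True if the observed value is within `factor` of the rough magnitude."""
    return expected_magnitude / factor <= observed <= expected_magnitude * factor

# The correct figure is known only to be "around 1 000 000 €".
for value in (300, 1_000, 10_000, 250_000, 1_103_456.42):
    print(value, plausible(value, 1_000_000))
```

Only 1 103 456.42 passes; the four clearly wrong figures from the slide are rejected even though the exact correct answer was never specified, which is exactly what makes a partial oracle useful.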

1Weyuker, E.J., 1982. On Testing Non-Testable Programs. The Computer Journal, 25(4))
The role of the tester’s knowledge in exploratory software testing
Itkonen J, Mäntylä M.V., Lassenius C (2012) “The Role of the Tester’s Knowledge in Exploratory Software Testing.” IEEE Transactions on Software Engineering, pre-print, doi: 10.1109/TSE.2012.55

• Detailed analysis of 91 defect detection incidents from video-recorded exploratory testing sessions
• Analysed
– what type of knowledge is required for detecting failures
– failure detection difficulty
Windfall bugs

• A relatively high number (20%) of bugs were found opportunistically
– meaning that testers detected failures in features other than the primary target of the testing session in question
– as a result of exploring
• This finding supports the strength of ET in enabling more versatile testing
– Testers are not working with blinders on
– Testers explore and investigate the system, and reveal bugs when they see the opportunity
Conclusions: Not all bugs are buried deep or masquerade cleverly
• Almost a third of the failures could be identified based on generic SE knowledge
• Over 50% were obvious or straightforward to reveal in terms of interacting variables

This implies that it is possible to provide a fast contribution without rigorous or sophisticated test design or deep knowledge…
… but the challenge is to know what remains under the surface.
Conclusions: Contribution of domain experts
• Failures that required specific domain knowledge or the users’ perspective to be revealed were often straightforward to provoke
• People with the right type of knowledge are useful for revealing defects and issues even if they are not especially experienced in testing
• It seems that experience-based oracles are often enough
– If documentation is needed, it often does not provide the answer, so testers have to ask others
– Many times they choose to ask people without bothering to dig into the documentation at all
Challenges – distinguishing the obvious and straightforward from the hidden and complicated
• We need to focus on the skills and quality of exploratory testing
• It is easy to see what is on the surface
• What lies below will probably determine the result in the end
• Managing different types of testing contributions is a challenge
– understanding the testing done by different testers and how much their efforts can be relied on
– interpreting the results and findings of different testers
References (primary)
Bach, J., 2000. Session-Based Test Management. Software Testing and Quality Engineering, 2(6). Available at:
http://www.satisfice.com/articles/sbtm.pdf.
Bach, J., 2004. Exploratory Testing. In E. van Veenendaal, ed. The Testing Practitioner. Den Bosch: UTN Publishers, pp.
253-265. http://www.satisfice.com/articles/et-article.pdf.
Itkonen, J. & Rautiainen, K., 2005. Exploratory testing: a multiple case study. In Proceedings of International Symposium on
Empirical Software Engineering. International Symposium on Empirical Software Engineering. pp. 84-93.
Itkonen, J., Mäntylä, M.V. & Lassenius, C., 2007. Defect Detection Efficiency: Test Case Based vs. Exploratory Testing. In
Proceedings of International Symposium on Empirical Software Engineering and Measurement. International Symposium on
Empirical Software Engineering and Measurement. pp. 61-70.
Itkonen, J., Mantyla, M. & Lassenius, C., 2009. How do testers do it? An exploratory study on manual testing practices. In
Empirical Software Engineering and Measurement, 2009. ESEM 2009. 3rd International Symposium on. Empirical Software
Engineering and Measurement, 2009. ESEM 2009. 3rd International Symposium on. pp. 494-497.
Itkonen J., Mäntylä M. V., Lassenius C., 2012. The Role of the Tester’s Knowledge in Exploratory Software Testing. IEEE
Transactions on Software Engineering, Preprint, Sep 2012.
Itkonen, J., 2011. Empirical Studies on Exploratory Software Testing. Doctoral dissertation, Aalto University School of Science.
http://lib.tkk.fi/Diss/2011/isbn9789526043395/
Lyndsay, J. & Eeden, N.V., 2003. Adventures in Session-Based Testing. http://www.workroom-productions.com/papers/
AiSBTv1.2.pdf. Available at: http://www.workroom-productions.com/papers/AiSBTv1.2.pdf .
Martin, D. et al., 2007. 'Good' Organisational Reasons for 'Bad' Software Testing: An Ethnographic Study of Testing in a Small
Software Company. In Proceedings of International Conference on Software Engineering. International Conference on Software
Engineering. pp. 602-611.
Mäntylä, M.V., J. Itkonen, and J. Iivonen, 2012. Who tested my software? Testing as an organizationally cross-cutting activity,
Software Quality Journal, vol. 20(1), 2012, pp. 145–172.
Whittaker, J.A., 2009. Exploratory Software Testing: Tips, Tricks, Tours, and Techniques to Guide Test Design, Addison-Wesley
Professional.

References (secondary)
Agruss, C. & Johnson, B., 2005. Ad Hoc Software Testing.
Ammad Naseer & Marium Zulfiqar, 2010. Investigating Exploratory Testing in
Industrial Practice. Master's Thesis. Rönneby, Sweden: Blekinge Institute of Technology. Available at:
http://www.bth.se/fou/cuppsats.nsf/all/8147b5e26911adb2c125778f003d6320/$file/MSE-2010-15.pdf.
Armour, P.G., 2005. The unconscious art of software testing. Communications of the ACM, 48(1), 15-18.
Beer, A. & Ramler, R., 2008. The Role of Experience in Software Testing Practice. In Proceedings of Euromicro
Conference on Software Engineering and Advanced Applications. Euromicro Conference on Software Engineering and
Advanced Applications. pp. 258-265.
Houdek, F., Schwinn, T. & Ernst, D., 2002a. Defect Detection for Executable Specifications — An Experiment.
International Journal of Software Engineering & Knowledge Engineering, 12(6), 637.
Kaner, C., Bach, J. & Pettichord, B., 2002. Lessons Learned in Software Testing, New York: John Wiley & Sons, Inc.
Martin, D. et al., 2007. 'Good' Organisational Reasons for 'Bad' Software Testing: An Ethnographic Study of Testing in
a Small Software Company. In Proceedings of International Conference on Software Engineering. International
Conference on Software Engineering. pp. 602-611.
Tinkham, A. & Kaner, C., 2003a. Learning Styles and Exploratory Testing. In Pacific Northwest Software Quality
Conference (PNSQC). Pacific Northwest Software Quality Conference (PNSQC).
Wood, B. & James, D., 2003. Applying Session-Based Testing to Medical Software. Medical Device & Diagnostic
Industry, 90.
Våga, J. & Amland, S., 2002. Managing High-Speed Web Testing. In D. Meyerhoff et al., eds. Software Quality and
Software Testing in Internet Times. Berlin: Springer-Verlag, pp. 23-30.

Questions and more discussion

