
Werabe University IT Department

Chapter 7
Object-Oriented Testing
Overview
Software development is a complex endeavor. You create a variety of artifacts
throughout a project, some of which you keep and some you do not. Regardless
of whether you keep an artifact, you create it because it adds some sort of
value.
7.1 The Cost of Change
A critical concept that motivates testing is the cost of change. Figure 7.1
depicts the traditional cost of change curve for the single release of a project
following a serial (waterfall) process. It shows the relative cost of addressing a
changed requirement, because it was either missed or misunderstood,
throughout the lifecycle. As you can see, the cost of fixing errors increases
exponentially the later they are detected in the development lifecycle because
the artifacts within a serial process build on each other. For example, if you
make a requirements error and find it during the requirements phase it is
relatively inexpensive to fix. You merely change a portion of your requirements
model.

Figure 7.1: Traditional cost of change curve.


It is clear from Fig. 7.1 that you want to test often and test early. By
shortening the feedback loop, the time between creating something and
validating it, you reduce the cost of change.
7.2 Testing Philosophies
1. The goal is to find defects. The primary purpose of testing is to validate
the correctness of whatever it is that you are testing. In other words,
successful tests find bugs.
2. You can validate all artifacts. As you will see in this chapter, you can test
all your artifacts, not just your source code. At a minimum you can review
models and documents and therefore find and fix defects long before they get
into your code.
3. Test often and early. The potential for the cost of change to rise
exponentially motivates you to test as early as possible.
4. Testing builds confidence. Many people fear making a change to their
code because they are afraid that they will break it, but with a full test
suite in place, if you do break something, you know you will detect it and
then fix it.
5. Test to the amount of risk of the artifact. The riskier something is, the
more it needs to be reviewed and tested.
6. One test is worth a thousand opinions. You can tell me that your
application works, but until you show me the test results, I will not
believe you.
7. Testing is not about fixing things. Testing is about discovering defects.
7.3 Regression Testing
Regression testing is the act of ensuring that changes to an application have
not adversely affected existing functionality. Have you ever made a small
change to a program, and then put the program into production only to see it
fail because the small change affected another part of the program you had
completely forgotten about? Regression testing is all about avoiding problems
like this.
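The idea can be sketched with Python's unittest framework (a member of the xUnit family of testing tools); the shipping function and its pricing rules here are purely hypothetical:

```python
import unittest

# Hypothetical business function: a later "small change" to its rules
# should not silently break the behavior the tests below pin down.
def shipping_cost(weight_kg, express=False):
    """Return the shipping cost for an order (hypothetical rules)."""
    base = 5.0 + 1.5 * weight_kg
    return base * 2 if express else base

class ShippingRegressionTests(unittest.TestCase):
    """Rerun this whole suite after every change to the function."""

    def test_standard_shipping(self):
        self.assertAlmostEqual(shipping_cost(2.0), 8.0)

    def test_express_doubles_the_cost(self):
        self.assertAlmostEqual(shipping_cost(2.0, express=True), 16.0)

if __name__ == "__main__":
    unittest.main(exit=False)
```

Rerunning the whole suite after every change, however small, is what turns these unit tests into a regression test.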
7.4 Quality Assurance
Quality assurance (QA) is the act of reviewing and auditing the project
deliverables and activities to verify that they comply with the applicable
standards, guidelines, and processes adopted by your organization.
Fundamentally, quality assurance attempts to answer the following questions:
"Are you building the right thing?" and "Are you building it the right way?"
A key concept in quality assurance is that quality is often in the eye of the
beholder, meaning that there are many aspects to software quality, including
the following:
 Does it meet the needs of its users?
 Does it provide value to its stakeholders?
 Does it follow relevant standards?
 Is it easy to use by its intended users?
 Is it reasonably free of defects?
 Is it easy to maintain and to enhance?
 How easily will it integrate into the current technical environment?
Quality assurance is critical to the success of a project and should be an
integral part of all project stages, but only when it is done in an effective and
efficient manner.


7.5 Testing Your Models


You saw that the earlier you detect an error, the less expensive it is to fix.
Therefore, it is imperative that you attempt to test your requirements, analysis,
and design artifacts as early as you can. Luckily, a collection of techniques
exists that you can apply to do exactly that. These techniques are
 Proving it with code;
 Usage scenario testing;
 Prototype walkthroughs;
 User interface testing; and
 Model reviews.
7.5.1 Proving It with Code
Everything works on a whiteboard, or on the screen of a sophisticated modeling
tool, or in presentation slides. But how do you know whether it really works?
You don't. The problem is that a model is an abstraction, one that should
accurately reflect an aspect of whatever you are building. Until you build it,
you really do not know whether it works. So build it and find out. If you have
developed a screen sketch, you should code it and show it to your users to get
some feedback. If you have developed a UML sequence diagram representing the
logic of a complex business rule, write the testing and business code to see
whether you have gotten it right.
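For example, suppose a sequence diagram captures a hypothetical shipping rule: orders of $100 or more ship free, while smaller orders pay a flat $7 fee. A few lines of Python prove (or disprove) the modeled logic:

```python
# Hypothetical business rule sketched on a whiteboard: orders of $100
# or more ship free; smaller orders pay a flat $7 fee. Coding it is the
# only way to prove whether the modeled logic actually works.
def total_with_shipping(order_total):
    return order_total if order_total >= 100 else order_total + 7

# Prove it with code: exercise the rule at and around its boundary.
assert total_with_shipping(100) == 100   # exactly at the threshold
assert total_with_shipping(99) == 106    # just below it
```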
7.5.2 Use-Case Scenario Testing
Use-case scenario testing is an integral part of the object-oriented development
lifecycle. It is a technique that can be used to test your domain model, which is
a representation of the business/domain concepts and their interrelationships,
applicable to your system. A domain model helps to establish the vocabulary
for your project. Domain models are often developed using class responsibility
collaborator (CRC) models, logical data models or class models.
The steps of a use case scenario testing process are straightforward. They are
1. Perform domain modeling. Create a conceptual domain model,
representing the critical domain concepts (entities) and their
interrelationships. In fact, use-case scenario testing is typically
performed as a part of domain modeling.
2. Create the usage scenarios. A usage scenario describes a particular
situation that your system may or may not be expected to handle. If you are
taking a use-case-driven approach to development, in which use cases describe
a collection of steps that provide value to one or more actors, a usage scenario
will comprise a single path through part or all of a use case.
3. Assign entities/classes to your subject matter experts (SMEs). Each SME
should be assigned one or more entities that they are to represent.
4. Describe how to act out a scenario. The majority of work with usage
scenario testing is the acting out of scenarios. If the group you are
working with is new to the technique you may want to go through a few
practice rounds.
5. Act out the scenarios. As a group, the facilitator leads the SMEs through
the process of acting out the scenarios. The basic idea is the SMEs take
on the roles of the cards they were given, describing the business logic of
the responsibilities that support each use-case scenario. To indicate
which card is currently "processing," a soft, spongy ball is held by the
person with that card. Whenever a card must collaborate with another
one, the person holding the first card throws the ball to the holder of the second
card. The ball helps the group to keep track of who is currently
describing the business logic and also helps to make the entire process a
little more interesting. You want to act the scenarios out so you gain a
better understanding of the business rules/logic of the system (the
scribes write this information down as the SMEs describe it) and find
missing or misunderstood responsibilities and classes.
6. Update the domain model. As the SMEs are working through the
scenarios, they will discover they are missing some responsibilities and,
sometimes, even some classes. Great! This is why they are acting out the
scenarios in the first place. When the group discovers the domain model
is missing some information, it should be updated immediately. Once all
the scenarios have been acted out, the group ends up with a robust
domain model. Now there is little chance of missing information
(assuming you generated a complete set of use-case scenarios) and there
is little chance of misunderstood information (the group has acted out
the scenarios, describing the exact business logic in detail).
7. Save the scenarios. Do not throw the scenarios away once you finish
acting them out. The scenarios are a good start at your user-acceptance
test plan and you will want them when you are documenting the
requirements for the next release of your system.
7.5.3 Prototype Reviews/Walkthroughs
The user interface (UI) of an application is the portion the user directly
interacts with: screens, reports, documentation, and your software support
staff. A user interface prototype is a user interface that has been "mocked up"
using a computer language or prototyping tool, but it does not yet implement
the full system functionality.
A prototype walkthrough is a testing process in which your users work through
a series of usage scenarios to verify that a user interface prototype meets their
needs.
7.5.4 User-Interface Testing
UI testing is the verification that the UI follows the accepted standards chosen
by your organization and the UI meets the requirements defined for it. User-
interface testing is often referred to as graphical user interface (GUI) testing. UI
testing can be something as simple as verifying that your application "does the
right thing" when subjected to a defined set of user-interface events, such as
keyboard input, or something as complex as a usability study where human-
factors engineers verify that the software is intuitive and easy to use.
7.5.5 Model Reviews
A model review, also called a model walkthrough or a model inspection, is a
validation technique in which your modeling efforts are examined critically by a
group of your peers. The basic idea is that a group of qualified people, often
both technical staff and SMEs, get together in a room to evaluate a model or
document. The purpose of this evaluation is to determine whether the models
not only fulfill the demands of the user community but also are of sufficient
quality to be easy to develop, maintain, and enhance.
7.6 Testing Your Code
You have a wide variety of tools and techniques to test your source code. In
this section I discuss
 Testing terminology;
 Testing tools;
 Traditional code testing techniques;
 Object-oriented code testing techniques; and
 Code inspections.
7.6.1 Testing Terminology
Let us start off with some terminology applicable to code testing, system
testing, and user testing. To perform these types of testing you need to define, and
then run, a series of tests against your source code. A test case is a single test
that needs to be performed. If you discover that you need to document a test
case, you should describe
 Its purpose;
 The setup work you need to perform before running the test to put the
item you are testing into a known state;
 The steps of the actual test; and
 The expected results of the test.
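As a sketch, the four parts of a documented test case map naturally onto an xUnit-style test written with Python's unittest; the Counter class here is a hypothetical item under test:

```python
import unittest

class Counter:
    """Hypothetical item under test."""
    def __init__(self):
        self.value = 0
    def increment(self):
        self.value += 1

class CounterTestCase(unittest.TestCase):
    # Purpose: verify that increment() advances the counter by one.

    def setUp(self):
        # Setup: put the item being tested into a known state.
        self.counter = Counter()

    def test_increment_advances_by_one(self):
        # Steps of the actual test.
        self.counter.increment()
        # Expected results of the test.
        self.assertEqual(self.counter.value, 1)

if __name__ == "__main__":
    unittest.main(exit=False)
```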
7.6.2 Testing Tools
As you learned, regression testing is critical to your success as an agile
developer. Many software developers use the xUnit family of open-source tools,
such as JUnit and VBUnit, to test their code. The advantage of these tools is
that they implement a testing framework with which you can regression test all
of your source code. Commercial testing tools, such as Mercury Interactive,
jTest, and Rational Suite Test Studio, are also viable options. One or more
testing tools must be in your development toolkit.


7.6.3 Traditional Code Testing Concepts


These techniques are
 Black-box testing. Black-box testing, also called interface testing, is a
technique in which you create test cases based only on the expected
functionality of a method, class, or application without any knowledge of
its internal workings. One way to define black-box testing is that given
defined input A you should obtain the expected results B. The goal of
black-box testing is to ensure that the system can do what it should do,
without regard to how it does it.
 White-box testing. White-box testing, also called clear-box testing, is
based on the idea that your program code can drive the development of
test cases. The basic concept is you look at your code, and then create
test cases that exercise it.
 Boundary-value testing. This is based on the knowledge that you need
to test your code to ensure it can handle unusual and extreme
situations.
 Unit testing. This is the testing of an item, such as an operation, in
isolation.
 Integration testing. This is the testing of a collection of items to validate
that they work together.
 Coverage testing. Coverage testing is a technique in which you create a
series of test cases designed to test all the code paths in your code. In
many ways, coverage testing is simply a collection of white-box test cases
that together exercise every line of code in your application at least once.
 Path testing. Path testing is a superset of coverage testing that ensures
not only have all lines of code been tested, but all paths of logic have also
been tested.
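For example, black-box and boundary-value testing might look like the following Python sketch; the letter_grade function and its cut-off points are hypothetical:

```python
# Hypothetical percentage-to-letter-grade function. The tests below are
# written from its specification alone (black-box), with extra cases at
# the boundaries where defects tend to hide (boundary-value).
def letter_grade(score):
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if score >= 90:
        return "A"
    if score >= 80:
        return "B"
    if score >= 70:
        return "C"
    return "F"

# Black-box cases: given defined input A, expect result B.
assert letter_grade(95) == "A"
assert letter_grade(75) == "C"

# Boundary-value cases: the extremes and each cut-off point.
assert letter_grade(0) == "F"
assert letter_grade(100) == "A"
assert letter_grade(89) == "B"   # just below the A boundary
assert letter_grade(90) == "A"   # exactly on the A boundary
```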
Table 7.1: Comparing Traditional Testing Techniques


7.6.4 Object-Oriented Testing Techniques


When testing systems built using object technology it is important to
understand that your source code is composed of several constructs, including
methods (operations), classes, and inheritance relationships. Therefore you
need testing techniques that reflect the fact that you have these constructs.
These techniques, compared in Table 7.2, are
1. Method testing. Method testing is the act of ensuring that your
methods, called operations or member functions in C++ and Java,
perform as defined. The closest comparison to method testing in the
structured world is the unit testing of functions and procedures.
Table 7.2: Comparing Object-Oriented Testing Techniques

2. Class testing. This is both unit testing and traditional integration
testing. It is unit testing because you are testing the class and its
instances as single units in isolation, but it is also integration testing
because you need to verify the methods and attributes of the class work
together. The one assumption you need to make during class testing is
that all other classes in the system work. Although this may sound like
an unreasonable assumption, it is basically what separates class testing
from class-integration testing. The main purpose of class testing is to test
classes in isolation, something that is difficult to do if you do not assume
everything else works. An important class test is to validate that the
attributes of an object are initialized properly.
3. Class-integration testing. Also known as component testing, this
technique addresses the issue of whether the classes in your system, or a
component of your system, work together properly. The only way classes
or, to be more accurate, the instances of classes, can work together is by
sending each other messages.
4. Inheritance-regression testing. This is the running of the class and
method test cases for all the super classes of the class being tested. The
motivation behind inheritance-regression testing is simple: it is
incredibly naive to expect that errors have not been introduced by a new
subclass.
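Class testing and inheritance-regression testing can be sketched together in Python with unittest; the Account classes here are hypothetical:

```python
import unittest

class Account:
    """Hypothetical superclass under test."""
    def __init__(self):
        self.balance = 0          # class testing validates this initialization
    def deposit(self, amount):
        self.balance += amount

class SavingsAccount(Account):
    """Hypothetical subclass; it may have introduced new errors."""
    def add_interest(self, rate):
        self.balance += self.balance * rate

class AccountTests(unittest.TestCase):
    # Class testing: the class and its instances as single units in isolation.
    def make_account(self):
        return Account()

    def test_attributes_initialized(self):
        self.assertEqual(self.make_account().balance, 0)

    def test_deposit(self):
        account = self.make_account()
        account.deposit(50)
        self.assertEqual(account.balance, 50)

class SavingsAccountTests(AccountTests):
    # Inheritance-regression testing: by overriding the factory method,
    # every superclass test case is automatically rerun against the subclass.
    def make_account(self):
        return SavingsAccount()

if __name__ == "__main__":
    unittest.main(exit=False)
```

Because SavingsAccountTests extends AccountTests, the superclass's test cases run again for the subclass, which is exactly what inheritance-regression testing demands.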


7.6.5 Code Inspections


Code inspections, also known as code reviews, often reveal problems that
normal testing techniques do not, in particular, poor coding practices that
make your application difficult to extend and maintain. Code inspections verify
you built the code right and you have built code that will be easy to
understand, to maintain, and to enhance.
7.7 Testing Your System in Its Entirety
System testing is a testing process in which you aim to ensure that your overall
system works as defined by your requirements. System testing is typically
performed at the end of an iteration, enabling you to fix known problems before
your application is user tested. System testing comprises the following
techniques:
1. Function testing. When function testing, development staff verify that
their application meets the defined needs of their users. The idea is that
developers, typically test engineers, work through the main functionality
that the system should exhibit to assure themselves that their
application is ready for user-acceptance testing (UAT). During user
testing, users confirm for themselves that the system meets their
needs. In many ways, the only difference between function testing and
user-acceptance testing is who does it: testers and users, respectively.
2. Installation testing. The goal is to determine whether your application
can be installed successfully. The installation utility/process for your
application is part of your overall application package and, therefore,
must be tested.
3. Operations testing. The goal of operations testing is to verify that the
requirements of operations personnel are met. The main goal of
operations testing is to ensure that your operations staff will be able to
run your application successfully once it is installed.
4. Stress testing. Sometimes called volume testing, this is the process of
ensuring that your application works with high numbers of users, high
numbers of transactions (testing of high numbers of transactions is also
called volume testing), high numbers of data transmissions, high
numbers of printed reports, and so on. The goal is to find the stress
points of your system under which it no longer operates, so you can gain
insights into how it will perform in unusual and/or stressful situations.
5. Support testing. This is similar to operations testing except with a
support personnel focus.


7.8 Testing by Users


User testing, which follows system testing, is composed of testing processes in
which members of your user community perform the tests. The goal of user
testing is to have the users verify that an application meets their needs.
User testing comprises the following techniques:
1. Alpha testing. Alpha testing is a process in which you send out software
that is not quite ready for prime time to a small group of your customers
to enable them to work with it and report back to you the problems they
encounter. Although the software is typically buggy and may not meet all
their needs, they get a heads-up on what you are doing much earlier
than if they waited for you to release the software formally.
2. Beta testing. Beta testing is basically the same process as alpha testing,
except the software has many of the bugs identified during alpha testing
(beta testing follows alpha testing) fixed and the software is distributed to
a larger group. The main goal of both alpha and beta testing is to test
run the product to identify and then fix any bugs before you release your
application.
3. Pilot testing. Pilot testing is the "in-house" version of alpha/beta testing,
the only difference being that the customers are typically internal to your
organization. Companies that sell software typically alpha/beta test,
whereas IT organizations that produce software for internal use will pilot
test. Basically we have three different terms for effectively the same
technique.
4. User-acceptance testing (UAT). After your system testing proves
successful, your users must perform user-acceptance testing, a process
in which they determine whether your application truly meets their
needs. This means you have to let your users work with the software you
produced.

Object Oriented System Analysis & Design