Testing Experience, Issue 27, September 2014


EUROPE'S GREATEST AGILE EVENT!
November 10-13, 2014, in Potsdam, Germany

Reasons to join: 100+ sessions, 500+ testers, the finals, and the Car Build Party.

Speakers include Roman Pichler, Lisa Crispin, Bob Marshall, Markus Gärtner, Antony Marcano, Matt Heusser, Alan Richardson, and Janet Gregory.

Dear readers,

After a great summer we are moving towards late summer, another beautiful time of the year. Most of the people here are back from their holidays and we have already entered the busy and hot phase of the year before the winter comes.

Life and work are determined by continuous change and improvement. I prefer the good and positive changes in life, which is why I am really happy to present you with the new issue of Testing Experience, including lots of new improvements and changes. The magazine has been in existence for six years and we do not stand still; we always try to satisfy our readers, and we found out that your requirements have changed. We have reacted and included a wider range of content. From this issue, we won't be having a main topic per issue anymore. Instead, we have decided to give you a more diverse range of articles on topics such as mobile, security, performance, and Agile. Another new thing is that we will be regularly publishing columns written by renowned authors. In the current issue we have started with Alex Podelko writing about performance and Manu Cohen-Yashar with a column about security. More columns on different topics are planned for the upcoming issues. Our latest renewal is the Book Corner, with reviews and hints on new book releases and publications relevant to software testing.

Speaking of changes: if your company has started to work with mobile app projects, then I can really recommend the Mobile App Europe conference (www.mobileappeurope.com) and the new CMAP training (cmap.diazhilterscheid.com) dedicated to mobile app professionals. If your company is in transition to Agile, then Europe's greatest event for the Agile testing community is the place to be! The sixth, and by far the best, edition of the Agile Testing Days (www.agiletestingdays.com) will take place in November. This year's highlight is the first European Car Build Party. Together with the conference attendees we will build a real car using the Scrum methodology.

What can I say? The prospect of autumn 2014 is absolutely awesome for any testing professional who is ready for a change!

To quote Andy Warhol: "They always say time changes things, but you actually have to change them yourself."

Enjoy reading!

All the best,

José Díaz


contents 27/2014

From the Editor .......................................... 1
How to Stress Test Your Mobile App ....................... 4
by Daniel Knott
Test Design with Different Perceptions ................... 6
by Patrick Prill
A Unified Framework For All Automation Needs, Part II .... 9
by Vladimir Belorusets, PhD
SECURITY COLUMN: Identity Management ..................... 14
by Manu Cohen-Yashar
Internationalization and Localization Testing ............ 16
by Nicolaas Kotze
Localization Testing: One-Year Status Report
for a Localization Project ............................... 20
by Christian Kopsch
Localization Testing is More Than Testing the Translation 23
by Nadia Soledad Cavalleri
COLUMN: Test Process Improvement and Agile:
Friends or Foes? ......................................... 24
by Erik van Veenendaal
Field Report: Test Automation and Quality Assurance in the
Context of Multi-Platform Mobile Development ............. 27
by Felix Krüger
Strategic Design of Test Cases ........................... 30
by Jeisson Alfonso Cordoba Zambrano
ADVERTORIAL: Increasing Profitability and Decreasing Defects
with TenKod EZ TestApp ................................... 34
by Emil Simeonov
Software Testing in the Era of Quick Response (QR)
Code Technology .......................................... 37
by Ravi Kumar BN
Book Corner .............................................. 41
Test-Driven Developments are Inefficient; Behavior-Driven
Developments are a Beacon of Hope? The StratEx Experience
(A Public-Private SaaS and On-Premises Application), Part I 42
by Rudolf de Schipper & Abdelkrim Boujraf
PERFORMANCE COLUMN: The Skills Performance Testers Need
and How to Get Them ...................................... 44
by Alex Podelko
Mobile Test Automation: Preparing the Right Mixture
of Virtuality and Reality ................................ 46
by Venkatesh Sriramulu, Venkatesh Ramasamy, Vinothraj
Jagadeesan & Balakumar Padmanaban
Demystifying DevOps through a Tester's Perspective ....... 50
by Prasad Ramanujam, Alisha Bakhthawar &
Mathangi Pollur Nott
Always Know What's Going On: Successful Quality
Management with Visual Studio IntelliTrace ............... 53
by Torsten Zimmermann & Frank Maar
Guidelines for Choosing the Right Mobile
Test Automation Tool ..................................... 60
by Mithun Sridharan
Multi-Platform Mobile Test Automation for the
Financial Sector ......................................... 63
by Dr. Jens Calam, Parag Kulkarni & Sven Euteneuer
Masthead ................................................. 66
Picture Credits .......................................... 66
Index of Advertisers ..................................... 66

How to Stress Test Your Mobile App
By Daniel Knott
Stress and interrupt testing is an important part of the mobile testing process. With the help of tools, mobile testers are able to determine possible performance or stability issues of the app. To test your app against interrupts, you can manually trigger lots of notifications to the device while using the app. Notifications can be incoming messages, calls, app updates, or push notifications (software interrupts). Pressing the volume up or down button, or any other kind of hardware button, is also an interrupt (hardware interrupt) that can have an impact on your app.

Doing all those tasks manually means lots of work and is time-consuming. In most cases, those test scenarios cannot be covered manually, because it is very hard to simulate fast and multiple user inputs with one or two hands. But it can be done with the help of tools, and it is really easy to integrate them into the development and testing process.

Android Monkey

For Android apps, the Monkey tool [MON01] can be used, which is part of the Android SDK. Monkey is a tool that runs either on the physical device or on the emulator. While running, it generates pseudo-random user events such as touches, clicks, rotations, swipes, muting the phone, shutting down the internet connection, and many more to stress test the app and to see how the app handles all those inputs and interrupts.

You need the package name of the Android apk file to execute the Monkey tool; otherwise the tool will execute its random commands on the entire phone instead of on the app under test.

With access to the app code, the package name can be found in the AndroidManifest.xml. If only the compiled apk file is available, mobile testers can use the Android Asset Packaging Tool (aapt) [AAP02] to get the package name from the app. aapt is located in the build-tools folder of the installed Android SDK version.

The path to aapt can look like this:

/../daniel/android/sdk/build-tools/android-4.4/

With the following command, the package name can be read out from the apk file:

./aapt d badging /daniel/myApp/myApp.apk | grep 'pack'

The output will look like this:

...
package: name='com.myApp' versionCode='' versionName=''
...

When the package name (in this case com.myApp) is available, execute Monkey with the help of adb (Android Debug Bridge) [ADB03]. The following command will start Monkey:

./adb shell monkey -p com.myApp -v 2000

The 2000 indicates the number of random commands that Monkey will perform on the app. With the additional parameter -s for the seed, Monkey will generate the same sequence of events again. This is really important for reproducing a bug that happens during a Monkey run.
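For example, a seeded run could look like this (following the command above; the seed value 12345 is just a placeholder):

./adb shell monkey -p com.myApp -s 12345 -v 2000

Running this command twice replays the identical pseudo-random event sequence, so a crash found with a given seed can be reproduced reliably.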

UI AutoMonkey
For iOS apps there is a similar tool available, called UI AutoMonkey
[UIA04]. UI AutoMonkey is also able to generate multiple commands
in order to stress test an iOS app. To use UI AutoMonkey, a UIAutomation Instruments template must be configured within Xcode. After the
template has been configured, a JavaScript file needs to be written
to tell the tool how many and which commands should be executed
during the stress-testing session.

Sample UI AutoMonkey Script

...
config: {
    numberOfEvents: 2000,
    delayBetweenEvents: 0.05, // In seconds

    // Events that will be triggered on the phone
    eventWeights: {
        tap: 300,
        drag: 12,
        flick: 15,
        orientation: 20,
        clickVolumeUp: 10,
        clickVolumeDown: 10,
        lock: 1,
        pinchClose: 10,
        pinchOpen: 10,
        shake: 10
    },
    touchProbability: {
        multipleTaps: 0.05,
        multipleTouches: 0.05,
        longPress: 0.05
    }
}
...

Once the script is written, it can be executed within Xcode to stress test the iOS app.

At the end of the test run, both tools generate an overview of possible errors or problems that occurred within the app.

Note: The detailed installation instructions and the complete sample script can be found on the tool manufacturers' websites.

Both tools can be integrated into a continuous integration system to run automatically after every commit. Stress and interrupt testing a mobile application is pretty simple and should be one part of the mobile testing strategy. Besides that, it generates a huge benefit for mobile testers, helping the team to build a reliable and robust mobile app.

References
[MON01]: Android Monkey
developer.android.com/tools/help/monkey.html
[AAP02]: Android Tool aapt
developer.android.com/tools/building/index.html
[ADB03]: Android Debug Bridge
developer.android.com/tools/help/adb.html
[UIA04]: UI AutoMonkey
github.com/jonathanpenn/ui-auto-monkey


> about the author


Daniel Knott has been working in the field of software testing since 2008. In his career he has worked for companies such as IBM, Accenture, XING, and AOE. In different agile projects Daniel gained in-depth knowledge of software testing, e.g., in mobile, search, recommendation, and web technologies. During his time at XING, Daniel worked as a Team Lead QA in the mobile team and developed a fully automated testing framework for Android and iOS.

Currently, he is working as Software Test Manager at AOE GmbH, where he is responsible for test management and test automation in mobile and web projects. Daniel is also a frequent speaker at agile conferences and the author of a blog and of articles in testing magazines.

Twitter: @dnlkntt
LinkedIn: www.linkedin.com/pub/daniel-knott/1a/925/993
Blog: www.adventuresinqa.com


Test Design with Different Perceptions
By Patrick Prill

There is more than one point of view
As a tester, the main thing you bring is a single perception: your own. That point of view is based on a wide range of experience and a great deal of accumulated knowledge from your latest projects and experiences. It is also based on your current and often changing mood, and your personal attitude towards the software, the developers, the team, the customer, and so much more.

Your current perception also dictates your ability and creativity in terms of test design. But a tester needs to be able to do more than rate and evaluate software from their personal point of view. I believe it is not enough for a good tester to objectively check the software against a specification and/or a pre-defined set of test cases. You could be missing so many important items of information about the software. Rating the quality of a product is so much more than counting found and fixed bugs, or looking at the numbers of test cases executed and passed.

There are several methods of developing your test design skills and of collecting additional information from different points of view, which go far beyond merely "passed" or "failed". You do not have to reinvent the wheel to use them; you can just add these approaches to your daily work when executing your existing test cases or charters.

There are a couple of methods and approaches to help you with that. Today I want to introduce you to two approaches that a good tester should have or add to his or her repertoire, to collect additional information and insights in a simple and time-saving way and, of course, to find issues, bugs, and points to discuss with architects, analysts, and stakeholders.

Six thinking hats

Edward de Bono's six thinking hats were initially intended to be a creativity technique for structured discussions in groups. The goal is to introduce at least six different points of view into a discussion. The mind mapping software XMind 2013 introduced the six thinking hats into its templates (see Figure 1).

This approach is also extremely suitable for testing software. You can split the six hats between several people in your team or you can alternately wear them yourself. To stay focused, it is very helpful to use color-coded elements, for example colored baseball caps, or colored cards that also list the most important properties of each hat (color). That helps you to get into and stay in the right mood for the respective aspect of the hat.


Now to the different colors. The blue hat is about managing and moderating the whole discussion of the other hats. The blue hat is objective and should help to focus the discussion. If you are using the approach on your own, you always have to wear the blue hat so you do not lose track. When splitting the hats between different team members, this one could be given to the test manager or team lead.

The white hat stands for objective information and analytical thinking. This hat focuses on the requirements and how to achieve them. In test design the white hat helps to create a model of the application. Wearing the white hat involves executing a test case as intended and concentrating on the facts. This person's task is to collect facts in order to inform the ongoing discussion, value-free.

The red hat symbolizes emotional thinking, both positive and negative. This hat should help you to observe your own emotions. During testing you develop emotions about the software under test. In my view, this also, to a great extent, involves the hard-to-measure characteristic of "charisma". Do I like using the software? Is it annoying to use, or much too complicated? Such information is often hard to put into a bug report, but it should be reported at least to the stakeholders so they have a chance to react. Software that upsets you when you use it might be functional and technically correct, but the user will not view it as high quality.
The yellow hat stands for an optimistic response. It is all about the best
case. This hat only sees the good things and benefits in the software,
so it is a good hat for happy path testing. The yellow hat is for a sunny
day experience, but you should be very careful if the yellow hat does
not add much information, as this is a bad sign!
The black hat is all about critical and pessimistic thinking, about discernment. This hat is the little devil on your shoulder and is excellent
at identifying defects and risks. The black hat is skeptical and critical.
Listen carefully to the black hat, as it can find many new error scenarios
or unknown risks.
The green hat, last but not least, symbolizes creative thinking. This
hat develops new ideas and thinks differently. In testing, the green
hat can find new ways of testing or using a function. The green hat is
creative in helping to optimize the software and you can also use it
to find workarounds. My tip is to try thinking like a child. Children use
things in a lot of different ways that grown-ups are no longer capable
of imagining because of their fixed ideas. Try using the green hat to
get rid of your ingrained thinking habits. This is difficult, especially
in the beginning, but you will come across a lot of interesting ideas.

Figure 1. De Bono's thinking hats, template from XMind 2013 (www.xmind.net)

Some you will try to put aside in the beginning, but it is best to write
them down and come back to them later.
When using the six thinking hats, you create a great deal of potential
for collecting information. Your project environment should be ready
to accept information not just on bugs, otherwise that would really
be a waste of creativity and feedback.
Some of the hats can be used at the same time during test execution.
For example, the yellow, green, and red hats can be combined if the
red hat is giving positive input. If the output of the red hat is more
negative, then it should be combined with the black hat to find more
risks and problems. It is important to always combine them with the
blue hat to keep the sources of information apart and to have some
structure in your process.
You can collect your information in a mind map (see example from
XMind), which helps to provide structure and present all the information together.

Personas

"Quality is value to some person who matters."
(Jerry Weinberg, extended by James Bach)

Personas are an approach to defining several groups of users of the software by creating fictional representatives of those groups. This method or approach is far more than role testing or using user stories. You focus not on the job or tasks, but on the person as a human being, and you create a profile of sample users that captures as many facets of the user as possible. This is similar to describing and creating a movie character for the actress or actor.

This approach is especially good for testers who test software that will be used by lots of different users. With business software, the user is given training or at least a short introduction to the system. This is not possible for many other types of software, so the software needs to be intuitive and to provide simple help texts or self-explanatory forms and flows.

You, as a tester, have been working with that product for weeks and know every inch of the specification. You have found a lot of shortcuts, hints, and dodges. For you, it is easy to work with that software. But how do you get rid of everything you know? Alcohol and drugs are no solution here, because you should not lose what you know completely, just put it aside for a test scenario or two. That is the point at which personas try to help. You play a role, you try to put aside as much knowledge as you need to, you try to act out of character, and you will see and learn new aspects of the software. One of the first issues you might find is the basic knowledge you expect your users to have.
It is important to stay in, or return to, the role at every point of your test session. For example, get into the role of Frank, 67 years old, a retired millwright who is a bit short-sighted. Frank used computers in his last few years at work, but that was several years ago now, and he does not have one at home. Think of a screen where it is not obvious, or not described, what you should do next. Do not push that button down there just because you know it is the button that gets you to the next page. What would Frank do? Is there something missing that would show where the button is?
It is not easy to group your users, and it is impossible to take care of all your users' problems. You have to find the right mix of characters and try out the necessary depth of definition of your personas. The type of software and its distribution determine how important it is to put personas in your test plan. Business software has a different set of requirements here than, for example, the software used in a ticket machine for commuter trains.

The last example is an especially good opportunity to see why you should use personas. Go to a train station and observe the users of a ticket machine in the wild. Who are those people, and what is their background? How easy is it for them to see where to go next? Is the person reading a lot of the text displayed on the screen?

Using the right persona, you can find out that timeouts might be too short, because you do not have the time to slowly read all the help text on the page. That timeout scenario might be described in the specification and also in some sort of test case. But usually such a machine works against the clock, and the test is not phrased in terms of whether it is possible to read every piece of information on the screen slowly and thoroughly.

When DHL introduced into Germany those big yellow boxes where you can send your parcels to and collect them from any time you want, I personally thought that the user menu was one of the best I had ever seen. But when you wait in line and observe the problems others have with the system, it makes you think about what you need to improve in order to create a good user experience for them, so they like using that box.

Conclusion

It is very important not to test only from your own point of view. Whether methods and approaches like the two I have just described help your test design and help to gather new and important information depends strongly on the project context. But knowing those approaches and methods, and using them in the right context, should be part of every good tester's toolbox.

How the project uses the information you found, besides the bugs of course, is another kettle of fish. But collecting and presenting information is part of a tester's task.

> about the author

Patrick Prill has over 10 years' experience in software testing. After four-and-a-half years as a tester, he became a test manager, coordinating the work of ~50 people for another five years on a big test project. His new job as test lead for a software and consulting company for the automotive industry brought him back to a smaller test team and the hands-on experience of testing software again. This experience, and following the discussions and articles of the context-driven testing community, relit his fire for testing and bug hunting.

Patrick lives outside of Munich, Germany, and is a proud husband and father of a wonderful daughter. In his small amount of spare time he is a wood turner.

Blog: testpappy.wordpress.com
Twitter: @testpappy


Missed Part I? Read it in issue No. 26!

A Unified Framework For All Automation Needs, Part II
By Vladimir Belorusets, PhD

Introduction

In the first part of this article [1], I described the main principles applied in the development of a unified test automation (UTA) framework that serves as the foundation for testing multiple application interfaces. The UTA was built on JUnit and JUnitParams. We covered test data management, data-driven testing, and automated updating of the test results in the test case management system. In the second part, I will describe the details of implementing the automated testing of browser GUI and REST API.

We can group the testing of web applications and REST API into one category. From the tester's perspective, the difference is in the format of the pages that the server returns. For the web interface, it is the HTML format; for REST API, the format is either XML or JSON.

UTA uses the following popular open source tools:

- Selenium WebDriver [2] for web applications running on multiple browsers
- Spring Framework [3] for REST API

Synchronization

WebDriver may not wait for the page to load. In some circumstances, WebDriver may return control before the page has finished, or even started, loading. In this case, the next action on the page fails. To ensure robustness, you need to wait for the element(s) to exist in the page before continuing with operations.

All pages under test are modeled as page classes. These classes describe the page elements and the services/operations they provide by following the Page Object design pattern. We will discuss this pattern in detail in the next section. To avoid synchronization failures, each page class is a subclass of the generic BasePageObject class. Instead of using direct operations on web elements, this class contains wrapper methods, like clickElement, which guarantee that links, buttons, radio buttons, checkboxes, and other elements are visible before you operate on them (Listing 1).

public void clickElement(WebElement element) {
    new WebDriverWait(driver, 15)
        .until(ExpectedConditions.visibilityOf(element));
    element.click();
}

Listing 1. Operation with synchronization

Some other wrappers in the BasePageObject class utilizing the WebDriverWait object for synchronization purposes are isOnCorrectPage(String title) (Listing 2), elementHasText(WebElement we, String text), and isTextOnPage(String pattern).

public void isOnCorrectPage(String title) {
    try {
        new WebDriverWait(driver, 15)
            .until(ExpectedConditions.titleIs(title));
    }
    catch (TimeoutException e) {
        throw new AssertionError(String.format(
            "Expected page title %s but was %s",
            title, driver.getTitle()));
    }
}

Listing 2. Synchronizing page load
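The article does not print the other two wrappers. For illustration only, a sketch of what isTextOnPage could look like, following the same WebDriverWait pattern; the XPath locator and the boolean return style are assumptions, not the framework's actual code:

public boolean isTextOnPage(String pattern) {
    try {
        // wait until any element containing the text appears in the DOM
        new WebDriverWait(driver, 15).until(
            ExpectedConditions.presenceOfElementLocated(
                By.xpath("//*[contains(text(), '" + pattern + "')]")));
        return true;
    }
    catch (TimeoutException e) {
        return false;
    }
}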

Page Objects

The Page Object design pattern is an extension of the idea implemented in the best test automation tools, such as WinRunner, in the 1990s. One of the favorite questions I used to ask when interviewing test engineers at that time was: "You have a thousand debugged and running scripts. Each script has a statement: button_press("OK"). The developer has decided to change the button label from OK to Done. Do you need to edit one thousand scripts to make them run without errors?"

Figure 1. WinRunner GUI map editor

The answer lay in a concept called GUI Map. WinRunner used a logical name in the script to identify each object: for example, "OK" for an OK button. The logical name is actually a nickname for the object's physical description. The GUI Map was a file that contained a mapping of the logical names to their corresponding physical descriptions. The physical description of a GUI control consisted of a list of its physical properties, such as the label "OK" and other properties that uniquely identified that GUI object. Getting back to my previous question, the answer was: "You do not need to edit your thousand scripts at all", since OK was just a logical name. It can be anything, like "John", for example. All you had to do was to edit the label name in one GUI Map file (Figure 1) from OK to Done, leaving the same logical name OK in the scripts without changes.

WinRunner employed a special procedural scripting language whose statements used the logical names of GUI elements. Since Java is an object-oriented programming language, the Page Object design pattern goes further. In page classes, it encapsulates the mapping of the element physical descriptions (tags and attributes) to the class variables (logical names) and exposes page services or operations to the test classes. All services use the element logical names inside and do not need to change when the element physical description changes. Tests in the test classes use page services only.

This provides the following advantages:

- Having a few dedicated places where you collect all information about the page layouts makes it easy to find and modify the element physical descriptions.
- Test scripts and page services are robust to page element changes since they do not use any physical descriptions.
- Logical names (class variables) improve the understanding and maintenance of code.

Another important point is that the page class should contain only those elements that participate in the tests. You do not need to mimic the full page to replicate the developer's work.

Page Objects for web applications

For web applications, the physical descriptions of the elements are provided through the element locators: id, name, XPath, CSS, etc. Selenium WebDriver supports GUI mapping with the @FindBy annotation. A simplified example of a page object class is presented in Listing 3.

With Page Objects, the tests look easily readable and succinct (Listing 4). In that test, we use a valid username and password read from the global configuration file to log in to the application home page mapped to the HomePage object. We then verify that the product name and version on the home page are correct.

One of the tedious routines is to create page object classes. You need to find a locator for the element under test and enter the mapping record in the class file. Usually the Firebug add-on for Firefox is used to inspect the HTML code. However, in the old days, that process was automated in WinRunner's GUI Map Editor. All you had to do was to point to the GUI element and click the Learn button. It immediately created a mapping entry for the element in the GUI Map file.

Fortunately, there is an open source utility that provides similar functionality for the page elements. It is called SWD Page Recorder [4]. The SWD Page Recorder allows learning web elements not just in Firefox but also in Internet Explorer, Chrome, and Safari. It can generate Page Object mapping statements in Java, C#, Ruby, and Python. The default locator is XPath (Figure 2). However, you can manually test all other locators as well.

public class LoginPage extends BasePageObject {
    private final String pageTitle = "Management Console";

    // page elements used in tests
    @FindBy(name = "Username")
    private WebElement username;

    @FindBy(name = "Password")
    private WebElement password;

    @FindBy(name = "UserLoginButton")
    private WebElement buttonLogin;

    @FindBy(xpath = "//td[3]/font/div")
    private WebElement errorMessage;

    .....

    // constructor
    public LoginPage(WebDriver driver) {
        super(driver);
        isOnCorrectPage(pageTitle);
    }

    // page services
    public static LoginPage open(WebDriver driver, String loginPage) {
        driver.get(loginPage);
        return PageFactory.initElements(driver, LoginPage.class);
    }

    public void submitLogin(String username, String password) {
        // enter login form
        enterUsername(username);
        enterPassword(password);
        clickLogInButton();
    }

    public HomePage validLogin(String username, String password) {
        submitLogin(username, password);
        return PageFactory.initElements(driver, HomePage.class);
    }
    .....
}

Listing 3. LoginPage object class
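Listing 3 calls helper services such as enterUsername, enterPassword, and clickLogInButton that are elided from the article. As an illustration only, one of them might look like this, reusing the same synchronization approach as the BasePageObject wrappers:

public void enterUsername(String username) {
    // wait until the field is visible before typing into it
    new WebDriverWait(driver, 15)
        .until(ExpectedConditions.visibilityOf(this.username));
    this.username.clear();
    this.username.sendKeys(username);
}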

@Before
public void setUp() {
    driver = BasePageObject.getWebDriver(browser);
    loginPage = LoginPage.open(driver, startPage);
}

@Test
public void loginWithValidCredentials() throws IOException {
    HomePage homePage = loginPage.validLogin(userName, password);

    String product = homePage.getProductName();
    String version = homePage.getVersion();

    assertEquals("Product Name", product, prop.getProperty("product"));
    assertEquals("Product Version", version, prop.getProperty("softwareVersion"));
}

Listing 4. Using page objects

Using Spring Framework

Spring Framework provides comprehensive programming support for web applications and RESTful web services. Test engineers play the role of consumers of the RESTful services when testing REST API. The framework offers a class called RestTemplate that allows you to easily submit all HTTP requests and review the responses in JSON or XML formats through the Page Object classes (they are called domain classes in the Spring Framework). As I mentioned in the Page Objects section, testers only need to map the elements that they want to test.

RestTemplate provides the getForObject() method to perform a GET, postForObject() for a POST, and exchange() as a general method for all HTTP requests. The simplest code for executing a GET request is presented in Listing 5, where the FacebookResponsePage class is a response page object.

RestTemplate restTemplate = new RestTemplate();
FacebookResponsePage page = restTemplate.getForObject(
    "http://graph.facebook.com/safenet-inc",
    FacebookResponsePage.class);

Listing 5. A simple REST GET request

An example of a POST with request headers and entity body does not look much more complex (Listing 6). Here the entity body contains parameters in the JSON format. However, you specify them as key/value pairs in the Map, and the RestTemplate converts them to JSON.

String url = "http://www.htmlgoon.com/api/POST_JSON_Service.php";
RestTemplate restTemplate = new RestTemplate();

// specify request headers
HttpHeaders requestHeaders = new HttpHeaders();
requestHeaders.setContentType(MediaType.APPLICATION_JSON);
requestHeaders.setAccept(Arrays.asList(MediaType.APPLICATION_JSON));
requestHeaders.set("MyRequestHeader", "MyValue");

// specify entity body
Map<String, String> map = new HashMap<String, String>();
map.put("firstname", "Vladimir");
map.put("lastname", "Belorusets");
map.put("city", "Foster City");

// requestEntity includes both headers and body
HttpEntity<Map<String, String>> requestEntity =
    new HttpEntity<Map<String, String>>(map, requestHeaders);

String result = restTemplate.postForObject(url, requestEntity, String.class);

Listing 6. A REST POST request with the headers and entity body
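The article mentions exchange() but shows no call to it. As a sketch (reusing the url and requestEntity variables from Listing 6), the general exchange() method could submit the same POST while also exposing the response status code and headers:

ResponseEntity<String> response = restTemplate.exchange(
    url, HttpMethod.POST, requestEntity, String.class);
HttpStatus status = response.getStatusCode(); // e.g. 200 OK
String body = response.getBody();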

Page Objects for REST API

Web applications are intended for human consumption, and REST API is used for communication between machines. But the same Page Object design pattern that is applicable to HTML pages can also be applied to XML and JSON pages.

I found the Firefox plug-in RESTClient [5] very useful for testing REST API and for developing page objects. It supports all HTTP methods and displays the response pages in XML or JSON format.

The beauty of Spring Framework is that one page object class recognizes both formats. You can apply JSON and XML annotations to the same elements simultaneously, and the content of the page object will be based on the format of the response page.

<GeocodeResponse>
    <status>OK</status>
    <result>
        <type>street_address</type>
        <formatted_address>...</formatted_address>
        <address_component>...</address_component>
        <address_component>...</address_component>
        <address_component>...</address_component>
        <address_component>...</address_component>
        <address_component>...</address_component>
        <address_component>...</address_component>
        <address_component>...</address_component>
        <geometry>...</geometry>
    </result>
</GeocodeResponse>

Listing 7. An example of the REST XML response page

An example of an XML response page is presented in Listing 7. Its corresponding page object class (Listing 8) must be decorated with the @XmlRootElement annotation. I found it is easier to map a set of complex elements containing other elements to an array of objects instead of a Java List object, i.e. Results[] vs. List<Results>. The default mapping is provided by creating class variables with the same names as the names of the page elements. I also noticed that the response page for some REST API can have different element names for

the same elements depending on the page format. For example, the response page in JSON corresponding to Listing 7 contains the results element instead of result in the XML format. In this case, you need to apply the explicit @JsonProperty annotation to the element variable. In addition, any page object class must define getters and setters for the elements under test.

Figure 2. Learning XPath locator with the SWD Page Recorder

@XmlRootElement(name = "GeocodeResponse")
public class GeocodeResponsePage {
    @JsonProperty("results")
    private Results[] result;
    private String status;

    public String getStatus() {
        return status;
    }
    public void setStatus(String status) {
        this.status = status;
    }
    public Results[] getResults() {
        return result;
    }
    public void setResults(Results[] results) {
        this.result = results;
    }
}

Listing 8. An example of a page object for REST API

If a complex element contains other complex elements, then those elements must be mapped to their own page object classes. In Listing 7, the result element contains formatted_address, address_component, and geometry elements that are represented by separate page object classes.
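The article does not print those nested classes. As an illustration, the Results mapping might look as follows; the field names are inferred from Listing 7, and AddressComponent and Geometry are assumed helper classes with their own mappings:

public class Results {
    private String type;
    private String formatted_address;
    private AddressComponent[] address_component;
    private Geometry geometry;

    public String getFormatted_address() {
        return formatted_address;
    }
    public void setFormatted_address(String formatted_address) {
        this.formatted_address = formatted_address;
    }
    // getters and setters for the other elements follow the same pattern
}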

Summary

In the second part of this article, I described how to test the browser GUI and REST API in the UTA framework using the open source Selenium WebDriver and Spring Framework. I covered synchronization issues for web applications and explored the commonality between HTML, XML, and JSON pages returned by the server. Those pages are mapped to Java classes following the Page Object design pattern. In the third part, I will describe in detail the test automation of command line interfaces within the UTA.

Read Part III of this article in the December issue (No. 28)!

References
[1] Vladimir Belorusets. A Unified Framework for All Automation Needs, Part I. Testing Experience, pp. 66-70, Issue No. 26, 2014.
[2] Selenium:
seleniumhq.org
[3] Spring Framework:
projects.spring.io/spring-framework
[4] SWD Page Recorder:
swd-tools.com
[5] RESTClient:
restclient.net

> about the author


Dr. Vladimir Belorusets is a Quality Architect/Director at SafeNet, Inc., responsible for the development, evaluation, and research of engineering and test automation tools. Dr. Belorusets is a Certified Tester Foundation Level and Certified ScrumMaster. He is the author of various articles on test automation and testing methodology published in Testing Experience, Agile Record, Software Test & Quality Assurance, Software Test & Performance, and StickyMinds.com. Dr. Belorusets was a member of the Strategic Advisory Board and Conference Program Board at Software Test Professionals. He was a speaker at HP Software Universe, Software Test Professionals, and STARWEST. Vladimir has held development and QA management positions at Xerox, EMC, Siebel, CSAA, and various startups.

Dr. Belorusets obtained his PhD in Control Systems from the Moscow Institute for Systems Analysis, Russian Academy of Sciences, and his Master's Degree in Theoretical Physics from Vilnius State University, Lithuania. Vladimir has taught numerous courses on functional and performance testing in various computer schools in the San Francisco Bay Area.
LinkedIn:
www.linkedin.com/pub/vladimir-belorusets-ph-d-csm-ctfl/0/2/416


Security Column
by Manu Cohen-Yashar

Identity Management
Identity, authentication, and authorization are key requirements in almost every web application and API. Applications need to know the user's identity and then use that identity to decide whether to allow access to a given resource based on an authorization policy. While simple in concept, identity management is a complicated task. Identity is a collection of attributes that describe an entity, such as personal information, group membership, contact information, business information, and so on. Such information is sensitive by nature and must be handled with care. In some cases, this information is managed by an external party rather than being directly managed by the application's owner. Managing identities involves challenges such as securely storing identity information, securely handling credentials, federating identities between organizations, revoking identities when required, implementing an authentication protocol, and the list goes on.

Identity management is also a burden for clients. For each service that manages identity, clients often have to create a separate identity with a unique set of credentials. In many cases this is too much of a burden for users, so it is not uncommon for a user to share credentials across applications (i.e. they use the same username and password for multiple systems). This is not best practice, and most clients are unaware that their identity is only as secure as the weakest application managing it.

In large and well-managed enterprises, these challenges are met through the use of so-called single sign-on (SSO) solutions. These allow a user to log in once, typically via their workstation network login, and authenticate to all services. In order for this to be seamless and secure, a complicated set of technologies is involved. This works well within an organization, as it is within a managed IT infrastructure and on the secured corporate network. However, the same solution does not translate well to web services available on the internet.

To address these problems and create a solution that works well both within a corporate network and on the internet, identity management standards and infrastructures have been developed and standardized. One of the key aspects of these solutions is the extraction of identity management from applications and services. Instead, applications and services outsource the task of identity management to a third party. Identity is managed by this central identity provider, which is trusted by the relevant services, applications, and users. Users authenticate with their identity provider which, in response, provides a token that users present to applications. This token is evidence of their identity and may optionally include additional claims.

Applications validate these tokens using cryptographic keys that are used to establish trust with the identity provider. This allows an application to determine who the user is and what they are allowed to access. Identity providers will only generate tokens targeted to the applications they know and trust. This "Triangle of Trust" (user, application, identity provider) principle is implemented in standards such as SAML, SAML2, OpenID, OAuth 2.0, and OpenIDConnect. These standards were developed using different assumptions and focusing on different use cases.

SAML and WS-Federation

SAML was defined in the early 2000s together with other WS-* standards. In those days, mobile platforms and applications were not a major form factor. SAML is based on complicated XML stacks that can implement XML cryptography. SAML tokens are complex and large and can carry a large set of claims that are digitally signed and encrypted. SAML supports active and passive federation as well as delegation use cases. The SAML standards describe the structure of the token and the protocols for obtaining and exchanging it. SAML protocols are supported by most commercial identity management infrastructures such as AD (ADFS 2.0), IBM Tivoli, and Ping Federate. SAML-based infrastructures enable enterprises to federate identities and provide a single sign-on (SSO) experience to their employees. SAML has optimal integration with WS-* web services technologies, such as WCF, and classical web site platforms such as ASP.NET. There are two major versions of SAML:

1. SAML 1.1 (WS-Federation), implemented mainly by Microsoft.
2. SAML 2.0

Unfortunately, the two versions are NOT compatible.

Since the days of SAML standardization, a new form factor has come into play: mobile. Mobile platforms and applications are compact and constrained. Mobile stacks do not typically support heavy XML cryptography. The network a mobile application relies on is often slow and expensive. Mobile applications require an efficient transport that uses compact artifacts. Mobile applications simply cannot handle large tokens such as SAML tokens.

Mobile runs on multiple platforms, so applications in the mobile era have to be compatible with multiple platforms. The promise of ultimate compatibility that SOA architecture was meant to provide was not fulfilled because the WS-* standards were too complex.

Mobile pushed the market towards a simpler model of collaboration: REST. RESTful web services are based on the HTTP protocol and a set of standards that are compact enough for mobile to handle and execute. Some of these standards concern identity.

OpenID

OpenID was designed to handle authentication on the web. With OpenID, a web site can obtain a signed token from a trusted identity provider with information about the client that issued the request. OpenID allows users to use an existing account to sign in to multiple websites without needing to create new passwords. Users may choose to associate information with their OpenID that can be shared with the websites they visit, such as a name or email address. With OpenID, users can control how much of that information is shared with the websites they visit. In OpenID, credentials are only given to the identity provider, and that provider then confirms the user's identity to the websites he or she visits. Other than the provider, no website ever sees the password, so users do not need to worry about an unscrupulous or insecure website compromising their identity.

OpenID is decentralized and is not owned by anyone. Anyone can choose to use an OpenID or become an OpenID provider for free without having to register or be approved by any organization.

OAuth 2.0

OAuth 2.0 was designed for delegated access on the web and not for authentication per se. With OAuth 2.0, websites and web services can obtain access to a client's resources stored somewhere on the web (e.g., Facebook friends). To get access to a resource, an application has to obtain an access token from the OAuth identity provider.

Different types of applications will use different OAuth flavors to acquire an access token:

- Web applications will use the OAuth 2.0 Authorization Code grant flow.
- JavaScript clients running inside a browser, such as HTML5 Single Page Applications, native clients, and mobile clients, will use the OAuth 2.0 Implicit Flow.
- Trusted clients with no user interface will use the OAuth 2.0 Resource Owner Password Credential Flow.
- Trusted applications (where there is no user) will use the OAuth 2.0 Client Credential Flow.

Once the application has the access token, it will put it in the authorization header of all HTTP requests for protected resources.
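As a minimal illustration (not from the column; the URL and the accessToken variable are placeholders), attaching the token as a bearer credential per RFC 6750 could look like this in Java, using java.net.HttpURLConnection:

HttpURLConnection conn = (HttpURLConnection)
    new URL("https://api.example.com/resource").openConnection();
// the access token travels in the Authorization header of every request
conn.setRequestProperty("Authorization", "Bearer " + accessToken);
int status = conn.getResponseCode();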

OpenIDConnect

OpenIDConnect was designed to handle authentication on top of the OAuth 2.0 framework. With OpenIDConnect, websites and web services can get a signed token from a trusted identity provider with information about the client that issued the request, using the OAuth flow.

OpenIDConnect is a simple identity layer on top of the OAuth 2.0 protocol, so clients should follow one of the OAuth flows to obtain an ID and access tokens. Client applications will request an access token from an OpenIDConnect provider according to their relevant OAuth 2.0 flow, which will redirect clients for authentication by their appropriate tenant identity provider or issue an authentication session using the credentials provided.

OpenIDConnect provides a simple, safe, and interoperable solution for authentication that is supported by almost all vendors. Infrastructures such as Auth0 and Azure Active Directory make it really simple to integrate.

Summary

Identity management is a serious matter. In the last decade, various standards and technologies have been developed to address this issue. Today we see SAML-based solutions in enterprise applications, and OAuth 2.0/OpenIDConnect solutions in web and mobile applications that require simplicity and interoperability. Choosing the right standard is important when implementing simple and safe identity management.

> about the author

Manu Cohen-Yashar, Technion graduate in Electrical Engineering and Computer Science, MCT, is an international expert on distributed systems, cloud computing, and application security. He works as Lead and Senior Consultant at SELA Group. He currently consults for various enterprises in Israel and worldwide, architecting, developing, and testing distributed applications using a wide range of technologies.

Manu Cohen-Yashar is known as one of the top identity management, Azure, WCF, and WF experts in Israel. He has written a few of the official Microsoft Courses (MOC) and conducts lectures and workshops for developers and enterprises that want to specialize in distributed and secure application development.

Because Manu Cohen-Yashar is one of the leading experts in application security and identity management in Israel, he was chosen to be Microsoft's representative in the team that designed the identity infrastructure for the government of Israel.

Manu Cohen-Yashar is a member of one of the leading big data teams in the Israeli Army and leads the architecture of large-scale systems using databases such as Cassandra, MongoDB, RavenDB, and Couchbase.

Manu Cohen-Yashar writes a popular blog about application security, cloud computing, and big data at blogs.microsoft.co.il/applisec.

Internationalization and Localization Testing
By Nicolaas Kotze
Introduction

How many businesses really know what their profits can be when targeting other countries? The digital market is very different from what it was 15 years ago. The mobile phone and the internet in particular have added to the complexities of marketing and software development since the 1980s. Ironically, even after 30 years there continue to be organizations and well-known brands that fail when targeting these new markets because they do not properly do the research to understand why other organizations struggled in the past. These failures result in high financial losses and probably damage brand confidence just as much. These days, retail internationalization has also become a subject of much academic study.
Well-known brands such as Nike attempted to target the Asian youth market by featuring LeBron James and a kung fu master, but it failed because it was found offensive by the Chinese market [1]. Puffs tissues tried to enter the German market, only to learn that "Puff" in colloquial German means "brothel".

Gerber tried to sell baby food in Africa using the same packaging as in the US, with a Caucasian baby on the label. The problem was that, in Africa, companies prefer to put pictures of what the packaging contains on the labels, since many people cannot read.
Often software projects start with no intention of targeting international markets, and thus funds are not allocated accordingly. So it is no surprise that many project plans consider these testing activities late in the development life cycle. Introducing localization late in a development cycle is sure to provide managers with a nerve-racking surprise when it comes to budget, resources, processes, and skills. Generally, stakeholders are not aware that localization is not just about testing the supported languages. Internationalization should be implemented before considering localization, unless the team customizes the system in such a way that it supports this from the beginning. Many teams do not know or understand this, but there is a clear difference between internationalization and localization, and the types of defects that will be identified are different.

i18n and l10n explained

First, let us begin by clarifying the differences between internationalization (aka globalization) and localization, and also understand the type of defects each will present.

Internationalization/globalization (i18n)

The abbreviation i18n is widely used and derives from the fact that there are 18 letters between the "i" and the "n".

Internationalization is about designing and developing a product to ensure that it can adapt to users' cultures, languages, or even regions. The system is designed to support the universal character set, Unicode, and other technologies to support the more challenging languages such as Japanese, Korean, Chinese, and Mongolian, which can be written vertically. The majority of these defects can be found by developers and testers without needing translators to understand a foreign language and culture.

Typical internationalization issues appear in every localized version of the product, not just in one language: different character sets and special characters not displaying correctly; strings truncated due to UI limitations; string concatenations; and incorrect formats for dates, times, currencies, numbers, and so on. This is illustrated in the images below: one shows the date examples, and in the other the strings are not displayed correctly.

Figure 1. Date format on SAQA website
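To see why hard-coded formats are an internationalization defect, here is a minimal Java sketch (not from the article) showing how the same date and amount render differently per locale when the platform's locale-sensitive formatters are used:

import java.text.DateFormat;
import java.text.NumberFormat;
import java.util.Date;
import java.util.Locale;

public class LocaleFormats {
    public static void main(String[] args) {
        Date now = new Date();
        double price = 1234.56;
        for (Locale locale : new Locale[] { Locale.US, Locale.GERMANY, Locale.JAPAN }) {
            // locale-sensitive formatting: the same values render differently
            String date = DateFormat.getDateInstance(DateFormat.MEDIUM, locale).format(now);
            String money = NumberFormat.getCurrencyInstance(locale).format(price);
            System.out.printf("%s: %s | %s%n", locale, date, money);
        }
    }
}

Any code that instead concatenates day, month, and year by hand will display one fixed order and symbol set for every user.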

Localization (l10n)

The abbreviation l10n is widely used and derives from the fact that there are 10 letters between the "l" and the "n". Localization is the translation of the actual content to another language.

Typical issues that are specific to one language are grammatical errors, non-translated strings, missing localization (such as incorrect URLs showing), mistranslations, or terminology errors.

References, standards, and services

Not everyone follows standards, wants to follow a standard, or even knows about standards. Sometimes there is no choice, but nevertheless they are good resources to learn from and gain a better understanding. Making use of services is a good option, as employing translators for every language can become quite an expense. Expecting a single tester to cover all configurations and languages is not realistic either, and using an online translation application such as Google Translate is definitely not a solution.

Many of the developer networks provide good coverage of internationalization. Some common sources are the IBM Globalization [2] web site, Microsoft's MSDN Go Global Developer Center [3], and Oracle Globalization Support [4].

There was the Localization Industry Standards Association (LISA) [5], but it closed down around 2011 and in its place today is the Language Terminology/Translation and Authoring Consortium (LTAC) [6]. For interest, read up on TermBase eXchange (TBX), which is an XML-based standard for exchanging structured terminological data that has been approved by LISA and published by ISO as ISO 30042, "Systems to manage terminology, knowledge and content: TermBase eXchange (TBX)" [7].

There is also the W3C Internationalization Activity [8], which works with the W3C working groups and supports other organizations to make web technologies work with different languages, scripts, and cultures.

For services, have a look at the Globalization and Localization Association (GALA) [9], which is a non-profit organization and trade association for the language industry that provides resources, education, advocacy, and research for global companies. There is also TerminOrgs [10], which is a consortium of terminologists who promote terminology management as an essential part of corporate identity, content development, content management, and global communications in large organizations.

Good practices and tips

Good practices can be like the time when everyone finds out you will be a parent soon. Suddenly, everyone is an expert on preparing for a baby, how to raise it, and which colleges to target when they become a young adult. As most of you know, it is always good to take the advice with a pinch of salt. The following supported me when I was involved with internationalization and localization.

1. Consider the global strategy from the start and prioritize your language sets and languages.

2. Do not assume things, because when something goes wrong, it makes an ass of u and me.

3. Involve the regional stakeholders.

4. Use technology and services from third party specialist agencies.

5. Start testing early and often. A clever test strategy is required because supporting multiple browsers, operating systems, and devices will quickly grow the testing effort required beyond what is budgeted for initially.

6. Make use of freely available checklists and bug taxonomies. Just search for "localization checklist". Here is a simple example of a checklist:

Internationalization
- Formats for numbers, dates, times, addresses, and phone numbers
- International paper sizes
- International units of measurement and currencies
- Display text and fonts correctly
- Do not use language to assume a user's location, and do not use location to assume a user's language
- Colloquialisms and metaphors
- Technical jargon, abbreviations, or acronyms
- Images that might be offensive
- Avoid political offense in maps or when referring to regions
- Sorting

Localization
- Localizable resource files
- Resources of the app that require localization
- Text size
- Image and audio files for localization
- Right-to-left (RTL)
- Strings in an entire sentence
- Strings in different contexts

7. Find hardcoded string resources and get them out into a resource file.

8. Find out if there are any strings that are concatenated (joined), such as adjectives that describe surroundings or items in more detail. Many times this will work in English, but it does not for some other languages.

9. Find any hardcoded date, time, or currency formats. Also find out on what day the week starts. Some countries start their weeks on Mondays, Fridays, Saturdays, or Sundays.

10. If the product needs to support Asian languages, check that UTF-16 is used. If not, UTF-8 should be used unless there is a good reason. Read up on Unicode because, as of writing, version 7.0 is already being drafted [11].

11. Identify any areas where there will be little or no room for strings to fit nicely. If the product needs to support German or Chinese there will be quite a few areas like this, and UI designers

Figure 2. Translation failure on Tomb Raider: Underworld page

11. Identify any areas where there will be little or no room for strings to fit nicely. If the product needs to support German or Chinese, there will be quite a few areas like this, and UI designers will need to come up with clever ways to work around them. Try pseudo-localization testing to prevent common internationalization defects (see the sketch after this list). This is a process that creates a localized product in an artificial language that is identical to English, except that each character is written with a different character that visually resembles the English character. This should be entirely machine-generated to save time, and the pseudo-localized builds should be created in exactly the same way as the localized builds. Even monolingual English software developers and testers can read pseudo-localized text, and this has proven to be an excellent way of discovering internationalization problems early in the development cycle.
Early on in development, make it a priority to find internationalization defects by concentrating first on five languages, including English. Experience has shown that we are most likely to find specific defects in the following languages:
- German: It contains long words that can reveal dialog size and alignment defects better than other languages.
- Japanese: With tens of thousands of characters, multiple non-Latin scripts, alternative input method engines, and an especially complex orthography, Japanese is a great way to find defects that affect many East Asian languages.
- Arabic: It is written right-to-left and has contextual shaping, where a character's shape depends on the adjacent characters.
- Hindi: This will help find legacy, non-Unicode defects that affect all such languages.
12. If you know the libraries used to develop the products, do some research on their forums and bug tracking systems to find limitations or issues. These are good sources of ideas for designing tests.
13. Identify areas where there are sorting capabilities. Using internet browsers to sort tends to be problematic.
14. Test automation is your friend. Use data-driven test automation as much as possible, but take note that there might be limitations as to what the test automation framework can support.
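As an illustration of the pseudo-localization approach described in tip 11, here is a minimal Python sketch. The accent mapping, bracket markers, and expansion factor are arbitrary choices for illustration, not a specific vendor's scheme:

# Replace ASCII letters with visually similar accented characters and pad
# the string, so untranslated or truncated text stands out at a glance.
ACCENTED = str.maketrans("aeiouAEIOUcn", "àéîöûÀÉÎÖÛçñ")

def pseudo_localize(text, expansion=0.3):
    transformed = text.translate(ACCENTED)
    padding = "~" * int(len(text) * expansion)  # mimic longer translations
    return f"[{transformed}{padding}]"          # missing brackets reveal truncation

print(pseudo_localize("Save changes"))  # [Sàvé çhàñgés~~~]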

Managing defects
If logging defects by category using the quality characteristics according to ISO/IEC 9126-1[12] and ISO/IEC 9126-2[13] (now replaced by ISO/IEC 25010[14]), internationalization could fall under Portability and localization could fall under Usability. Categorizing defects provides valuable statistics to the Quality Assurance (QA) team to help them evaluate the success of the current processes in place throughout the SDLC and build on lessons learnt for future projects.
Also, make sure the defect tracking system the team uses supports all the supported languages. Switching over to spreadsheets halfway through development just adds unnecessary manual administration and complications. It is also recommended to use a single management system for all supported languages, as this will streamline and improve reporting efficiency.
Once test reports for the different languages stream in, it can become quite a challenge to manage them efficiently. Do not try to handle multilingual bug reports using manual processes, examining individual bugs and then translating them one by one for appropriate follow-up by the feature team that owns the affected component. This is a time-consuming and error-prone exercise that scales poorly to large and diverse programs. Create a language detection API for the defect tracking tool that is able to automatically detect the language of customer defects as they are reported, or just have an option available for the reporter to select the language. If the defect tracking tool is able to forward raised defects according to a specific value in a specific field to a specific group or individual, then this can help as well.
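A minimal sketch of such automatic detection in Python, assuming the third-party langdetect package; the report structure and field names are purely illustrative:

# Tag each incoming defect report with its detected language so it can be
# routed automatically. langdetect is a third-party package
# (pip install langdetect); the report dictionary here is illustrative.
from langdetect import detect

def tag_report_language(report):
    report["language"] = detect(report["description"])  # e.g. "de", "zh-cn"
    return report

print(tag_report_language({"description": "Der Speichern-Knopf ist abgeschnitten."}))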

Defect impact (severity) must be in place and clearly defined to help quickly prioritize defects. Generally, projects use four severity levels. Clearly defined severity levels could be as simple as illustrated below:

1 Critical: Crash or hang; complete loss of essential functionality with no workaround; copyright issues; offensive text; serious financial loss; loss of confidence and reputation in the organization; losing >$100k/day.
2 Major/High: Essential functionality not exactly as specified but a workaround exists; critical text not visible; loss of reputation; losing between $50k/day and $100k/day.
3 Minor/Medium: Less essential functionality, mainly cosmetic; losing between $1k/day and $50k/day.
4 Trivial/Low: Punctuation; causes irritation; losing between $0/day and $1k/day.

Closure
As you can see, making a success of a product targeting an international audience is not always as easy and cheap as some might believe. Getting it wrong can lead to serious brand damage that might make you a laughing stock for years to come.
Our role as testers is to get involved very early and convince stakeholders to plan carefully and allocate sufficient funds to see this through successfully.

References
[1] Lebron James Nike Ad Banned in China: www.youtube.com/watch?v=bPJPe6Kti7g
[2] IBM Globalize your business: www-01.ibm.com/software/globalization
[3] Microsoft Go Global Developer Center: msdn.microsoft.com/en-us/goglobal/default
[4] Oracle Globalization Support: www.oracle.com/technetwork/database/database-technologies/globalization/overview/
[5] LISA: web.archive.org/web/20110101195336/http://www.lisa.org/Homepage.8.0.html
[6] LTAC: www.ltacglobal.org
[7] Systems to manage terminology, knowledge and content TermBase eXchange (TBX): www.iso.org/iso/catalogue_detail.htm?csnumber=45797
[8] W3C Internationalization Activity: www.w3.org/International/about
[9] GALA: www.gala-global.org
[10] TerminOrgs: www.terminorgs.net
[11] Unicode version 7.0: www.unicode.org/versions/Unicode7.0.0
[12] ISO/IEC 9126-1: www.iso.org/iso/catalogue_detail.htm?csnumber=22749
[13] ISO/IEC 9126-2: www.iso.org/iso/catalogue_detail.htm?csnumber=22750
[14] ISO/IEC 25010: www.iso.org/iso/home/store/catalogue_ics/catalogue_detail_ics.htm?csnumber=35733

> about the author
Nicolaas Kotze is a confident realist pessimist with a quirky and warped sense of humor that ironically coalesces well with Murphy's Law and understanding human behavior, which collectively explains pretty much how quality is perceived. He was introduced to testing in the games industry while in London, UK, working on numerous AAA titles. A career in testing formally started on returning to South Africa, testing GIS software systems that utilise Google Maps in the public service delivery domain for clients in the Netherlands; later he moved on to the busy retail credit and financial services sector. He chose testing as a career path because it enables people to blend creative thinking around formal processes or regulations and still have the exhilarating pleasure of breaking things. The fact that testing intertwines with so many other disciplines and professions is a primary driver keeping things interesting and riddled with challenges. Lately his responsibilities at Dynamic Visual Technologies (DVT) Cape Town as SQA Competency Lead are to direct his enthusiasm and energy towards mentoring, motivating, and making people aware of the benefits of testing and improving processes for more effective testing. Having gained experience in printing, digital video production/sales/support/training, and special effects, as well as being an office automation field technician before being introduced to testing, grants him the skills to understand the problems that frustrate people and also what is required to support people effectively.
LinkedIn: za.linkedin.com/in/nicolaasjkotze
Blog: njkotze.wordpress.com

By Christian Kopsch

Localization Testing
One-Year Status Report for a Localization Project

In times of globalization, offshoring, and Web 3.0, more and more companies are rushing onto the market, eager to provide their software solutions to an international customer base. After successful internationalization of the software, localization is performed in the second step. Generally, localization is the process of adapting software to a specific geographic or cultural region. The following article is a report on the experience of working for one year as a tester and test manager on a localization project.
The beginning
After the successful release of our online knowledge database to the German market, our team received an additional order to deliver our web application to the Chinese market. Our core team consisted of seven developers, two testers, one product owner in the role of a requirements engineer, one Scrum Master, and one lead project manager. When required in the event of a higher workload, the core team could be supported by external developers and QA resources at a moment's notice.
The initial method used in our agile project was Scrum; later the project switched to Kanban. This project involved many exciting components with a lot of new challenges and opportunities for us. At the beginning of the new project we had little or no experience in the field of software localization.
In order to overcome this challenge, we were required to obtain this knowledge within just a few months. Initially there were a lot of different questions, for example:

- How far would Chinese censorship limit and influence our work?
- How would Chinese censorship affect and restrict our customers?
- Would we be able to communicate smoothly with our partners and business owners in China?
- Would it be possible to transfer data without any problems?

Our initial doubts quickly dissolved at this point and we worked unhindered. Fortunately, our contractor in Beijing did not have any issues in this case either. From the outset, we learned to ask the right questions, and this formed a solid base for our later interactions.

Preparation
In our first brainstorming meeting we discussed various scenarios in order to examine possible implications in the context of software testing. In these discussions we found that development and testing are difficult to separate from one another. Basic questions arise in terms of functional and nonfunctional requirements, the test configurations of both hardware and software, and the differences between linguistic and cultural realities. At the beginning of the project it quickly became clear that none of the stakeholders had enough experience to immediately find a clear path towards fulfilling the customer's expectations.
Unfortunately, I ascertained that there were only a few practical case studies and reports on the topic of localization and software testing when our project started in the middle of 2013. With the limited material available to me, I worked intensively over the next few weeks, carrying out extensive research and acquiring new knowledge. Additionally, we conducted a workshop with an external expert who introduced us to the topic of internationalization and localization of software. Following the workshop, we began to develop a general feeling for what to expect in the next few months, including which questions we should address beforehand and which obstacles still lay ahead.

Main issues
From the project point of view, the following main issues needed to be reviewed and specified more concretely:

1. Hardware and software standards and requirements


At first we discussed the question of which hardware and software
would be used by our potential clients in China. Do they use different
external input devices, different monitors, or different components?
What is the relationship between mobile devices and desktops? And
which operating systems, browsers, or third-party components would
be used? In a detailed analysis we found out that there are no significant
differences in these points when compared to the German market.

2. Character set and special characters

We recognized that Chinese character sets and specific special characters would affect our application. This could be relevant, for example, in the registration or login functions, when entering search queries in our application, or in sorting lists or tables in our software. Also, when sending emails via our application, we would need to know whether errors could occur if there were special characters in names, email addresses, or domains.
Because our software had used Windows-encoded data (Windows-1252) thus far, we did have some errors in displaying Chinese characters. The project switched to Unicode data in UTF-8 and switched the internal search function from Optisearch to TRS, a widely used application in China.
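The display errors we saw are easy to reproduce. A small Python illustration (the sample string is invented):

# Chinese text written as UTF-8 bytes but read back as Windows-1252
# produces mojibake; reading it back as UTF-8 round-trips correctly.
text = "质量保证"  # "quality assurance"
raw = text.encode("utf-8")
print(raw.decode("cp1252", errors="replace"))  # garbled mojibake
print(raw.decode("utf-8"))                     # correct round trip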

3. Number formats
What additional number formats might be required? Of significant concern are different formats for dates, times, measures, weights, and numbers (e.g., decimal places), as well as zip codes and telephone numbers. Consequently, would we need additional input fields?
Primarily, our software is concerned with date formats. At different points in our application we use the date for labeling or sorting different document types, so we switched the format from the German to the Chinese standard.
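In its simplest form, the switch looks like the following Python toy example (an illustration only, not the project's actual code):

from datetime import date

d = date(2014, 9, 27)
print(d.strftime("%d.%m.%Y"))  # German convention: 27.09.2014
print(d.strftime("%Y-%m-%d"))  # Chinese convention puts the year first: 2014-09-27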

4. Graphical user interface (GUI)

In the course of localizing software, the translation of buttons, labels, navigation, and menus is essential. This poses the question as to whether the translated texts would still fit into the existing layouts and buttons. What styling bugs should we expect? And would these bugs then require some new styling? In several manual tests across the surface of our software we found, as expected, a lot of styling bugs caused by the translation of German text into Chinese characters. Sometimes buttons and labels required different quantities of words, space, and character distances when translated.
These bugs were fixed and the new labels were adjusted to comply with the style guide.

5. Different output formats

An essential feature in our application is the ability to export different doc types into different formats. What document standards are there in China? Would we need new output formats? Are existing doc-type formats irrelevant in the Chinese market? After analysis by the business owners, we found out that there are no significant differences to our existing supported formats, so it was not necessary to modify anything in this case.
In addition to these five main topics, we had to consider, specify, and adjust a lot of other issues. We took care of external systems and technical interfaces, licensing, and tracking. We also focused on language accuracy, and the need for native speakers or interpreters became obvious.
Iteratively and in an agile fashion, new features and changes were implemented into the GUI. These were often semi-finished, because some of the requirements were incompletely specified. This caused further questions and increased the need for better communication between the project team, the customer, and our colleagues in China.
Despite all these obstacles at the main project layer, the team overcame them. I was working out of my comfort zone on test-specific questions and issues in my role as tester and test manager.

Localization testing
At the beginning of the project our team worked in Scrum. Regular releases were developed and delivered, but the essential basic functionality of the original application was still included. In addition, some new features specially required for the Chinese market were developed. After several sprints we recognized that, in some cases, Scrum was too static for us. New features and user stories were often too complex to define and implement efficiently in a single sprint.
Later the project switched to Kanban. Fixed sprints and commitments became more fluid and our team became more agile in the whole process. Besides the actual localization of our software, new requirements from our Chinese customer were sent to us, then specified, developed, and tested by us immediately.
The whole localization and development process, as well as testing at all test levels, was conducted in Germany. Production also remained in Germany; however, we needed to establish a process for transferring software and data to the Far East.
After project completion and delivery to China, a user acceptance test would be performed in China by our customers. Upon successful completion of the acceptance test, it would go live. Afterwards, our software would be rolled out to the Chinese market and our application would be hosted locally by a Chinese service partner in China.
At the beginning of our testing we were confronted with questions about test systems and test environments. Primarily, we re-used our pre-existing test systems with Chinese language packages.
We decided to execute our tests on the browsers most popular in the German market (Firefox, Chrome, and IE8-11) because we would like to be able to support them later in the Chinese market. Additionally, we were testing in the Baidu browser, which is widely used in China and is based on a Chrome engine.
For a testing environment we first used different local test instances. At the same time we were building up a staging system that was hosted in Germany. Later we improved our testing capabilities and began to run performance tests in this environment.
Initially we had some small issues in developing and executing the HP Loadrunner scripts, due to the parameter files being incompatible with the Chinese character set. We quickly solved the limitations and found some workarounds so we could continue working with the scripts.
We rarely used automated tests in this phase, since our user interface was initially too unstable.

Communication between Germany and China
Due to the six-hour time difference between Germany and China, there was only a small time window for daily verbal communication, via telephone for example. Alternatively, we had to wait one day to receive e-mail feedback. However, we successfully overcame these challenges.
The testers were rarely involved in direct communication with the Chinese customer. The responsibility for direct communication rested with the lead project manager and some selected developers who sometimes worked closely with our Chinese colleagues. The most important aspect was the regular communication and dialogue with all the participants and stakeholders in the project team.

Functional testing
Our functional tests were primarily executed manually. We combined them with exploratory testing. As our GUI acquired more and more Chinese labels and texts, the question arose as to how to differentiate labels, buttons, and their associated functions in our application from one another, because nobody in our core team spoke Chinese. We examined various possibilities.
One possibility was to consult an interpreter. Fortunately, this problem was solved sooner than expected: one colleague was a native speaker and supported our core team with all the necessary translations and text changes. Additionally, we built up an English-language reference product which was identical to our Chinese product.
This simplified the test activities, especially for the external colleagues and staff members supporting the local QA team, and particularly with the identification of the Chinese buttons, labels, and texts.
In the expanded GUI testing of all our supported browsers we found, as expected, a lot of styling bugs. These bugs were fixed or the style of the GUI was adjusted accordingly.

Testing tools and processes
The primary testing tool for creating and executing test cases was HP ALM. Our initial doubts that major issues would arise with regard to the Chinese special characters were luckily unsubstantiated and the work was unimpeded.
In fact, it was not possible to display Chinese characters easily in our existing projects, but after switching to Unicode in HP ALM everything progressed without difficulty and Chinese characters were displayed without any issues. We decided to attach the relevant characters in a text file, as we only used them sporadically.
For bug tracking we still used JIRA. There was no need for us to change this tool, because all the JIRA processes were performed in Germany. Furthermore, the project language remained both German and English, and there were no issues in this case.

Lessons learned
All these challenges occurred last year. We rapidly adapted to the difficulties and we overcame these obstacles to ensure optimal implementation. After a twelve-month project phase we will hand over the finished localized software to our Chinese customer. After an acceptance test in China, our software will be rolled out to our new Asian customers in the Chinese market.
Looking back over the last year, my conclusion is that both software testing and software development are confronted by similar challenges. It is almost impossible to look at software testing and software development separately, and therefore objectively; in many cases they are synonymous.
Especially at the beginning of our project, we regarded software testing as an isolated activity. Continuous communication was the most important aspect of the project; the more closely the team cooperated in our agile project, the more immediately we achieved an improvement in the quality of the project.
It is not yet clear whether the project will continue in maintenance mode or whether new features will be required after our handover, because we have to see whether our potential Chinese customers will accept our software or, indeed, whether it will function in their current market.
This is our first attempt at entering this market and we will have to see whether our investment over the past year results in a satisfied customer.

> about the author
Christian Kopsch (33) has been working for the Freiburg-based company Haufe-Lexware since 2011 as a Test Manager in the field of web applications.
Kopsch is a big fan of agile processes and prefers working on Scrum and Kanban projects. In addition to his project management tasks in his role as Test Manager, he has always maintained a passion for testing and tries to test things (even exploratively) himself whenever possible. Christian was encouraged to utilize his full range of skills in this most recent localization project. Adapting a web application for the Chinese market was a new and exciting challenge for him, and for the whole team, and was a landmark achievement.

By Nadia Soledad Cavalleri

Localization Testing is More Than Testing the Translation

Localization is the process of adapting an internationalized product to different geographic regions. The goal of localization testing is to verify that it has been correctly adapted.
When you are testing localization, you do not just have to verify whether the translation is correct in linguistic terms; you also need to consider cultural issues. Even if we restrict localization testing to the language field, we should not only verify that the translation is correct from the source language to the target language, we also need to consider the particular characteristics of each destination. For instance, I have participated in a project created in Spain that involved its internationalization for different Latin American countries such as Paraguay, Uruguay, Argentina, Chile, and Peru. While the language of those countries is Spanish, there are some subtleties that you must take into account, such as words and pronouns.
As a result of the translation, there are also some esthetic checks that need to be performed. For example, you need to check that the labels still fit on the screen, that they were not cut off, that they do not overlap with other objects, and that the texts still fit in the title bar, among others. If shortcuts are used and translated, you must also verify that the newly selected combinations are representative and not repeated.
The defects you usually find in localization testing are related to: text expansion, truncated strings, overlap of GUI elements, misalignment, date and time formats, corrupt characters, untranslated images, shortcut assignment (i.e., duplicated or missing), hard-coded strings (i.e., untranslated), and unsupported code pages.
The application may also behave differently in different countries, so you must not dismiss the functional testing of these business rules. Other adaptations you must also bear in mind are the different regulations and laws that restrict the system in each particular country, for instance the correct use of the local currency and date formats.
Sometimes we use colors to convey information. For example, we usually alert the users with a warning in red. When doing localization testing we must bear in mind that colors do not have the same meaning in every culture. For example, Buddhists associate red with death. Other examples are the colors white and green. While in most countries the color white is associated with purity and is even the color chosen for marriage, in China it is often linked to death. Moreover, on Wall Street, Americans use the color green to represent increases in share prices, while in Asia green represents a drop in prices.
Finally, it is important to include user manuals, help files, and installers as part of localization testing. All of these must be translated and checked in the same way as the application.
So far we can see that localization testing is not limited to verifying the translation, because it also includes many other things to test. However, you may only want to evaluate the translation of the application into different languages. If the number of languages is significant, these tests are candidates for automation. The great advantage of automation in these cases is that you can use a single script for each scenario. This script is written once and run as many times as needed, according to the languages you want to test. The script references a data pool that contains the corresponding labels for the language you want to evaluate, so the translation is kept in a separate data file.
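A minimal sketch of this data-driven pattern, using Python with pytest; the file name, column names, and UI stub are illustrative, not from an actual project:

# One parameterized test runs once per language, reading the expected
# labels from a separate data file (the data pool).
import csv
import pytest

def load_rows(path="labels.csv"):
    with open(path, encoding="utf-8") as f:
        return [(r["locale"], r["login_label"]) for r in csv.DictReader(f)]

def read_login_label(locale):
    # Stub standing in for whatever drives the real UI in a given project.
    catalog = {"es_AR": "Ingresar", "es_ES": "Entrar"}  # illustrative only
    return catalog.get(locale, "")

@pytest.mark.parametrize("locale,expected", load_rows())
def test_login_label(locale, expected):
    assert read_login_label(locale) == expected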
In general, automation tools were not created to support localization
testing, but they are useful in this process. Either way, the best strategy
for addressing localization testing is to create a mix of manual and
automated testing.

> about the author

Nadia Soledad Cavalleri lives in Buenos Aires, Argentina. She is an information systems engineer and psychologist. In 2009, she obtained certification as an IBM Certified Solution Designer Rational Functional Tester for Java Scripting with a grade of 100/100.
Nadia has been working for Baufest for the last eight years as SQA Lead. She also works as an Assistant Professor for UTN-FRBA and as a psychologist for a mental health institute. She has worked on several projects, mainly in the financial, government, and telecommunications industries. She has also delivered courses at different types of establishment, such as schools, universities, and companies.


Column
by Erik van Veenendaal

Test Process Improvement and Agile: Friends or Foes?
Currently I am leading a project to describe the implementation of TMMi in an Agile environment. There is much debate as to whether software and test process improvement still have added value when using Agile methodologies. Many Agile purists state that there is absolutely no added value and that we should completely ignore all process improvement methods. Coming more from a practical background and approaching this with an open mind, I strongly beg to differ. Many organizations struggle when they are in transition from a sequential life cycle to an Agile iterative life cycle. It is interesting when you discuss testing with representatives from these organizations to find out that they (still) have many problems and are looking for concrete answers. Using Agile makes a strong contribution to being more flexible (e.g., in terms of requirements to be implemented) and providing business value. However, it is not a silver bullet that will solve all our quality problems and make testing obsolete. There is little to no proof that introducing Agile will automatically also improve product quality (see Figure 1). In this column I will briefly discuss some of the aspects that need to be taken into account when performing test process improvement in an Agile context.

Of course, using the Agile life cycle model has a decisive influence on the way in which test process improvement is approached. The improvement culture here is closely aligned to the iterations and can be characterized as follows:
- Improvement is considered at frequent intervals (e.g., at the end of a sprint when using SCRUM).
- The scope of the improvement is often limited to the cycle (e.g., a sprint) that has just taken place, the aim being to improve little and often.
- Improvements are closely coupled to the problem, and waiting times for improvements to be implemented are minimized.

The principal aspects to be considered when applying an Agile life cycle model in the improvement context are:
- Improvement cycle frequency
- Organizational aspects
- Scope of improvement
- Source of improvements
- Improvement methods
- Support from test process improvement models

Within projects that use Agile life cycle models, improvements generally take place in frequent feedback loops that enable test process improvements to be considered frequently, e.g., when applying SCRUM, at the end of a sprint, or even as part of a daily stand-up meeting. Retrospectives are a standard and important tool that will drive (test) improvements. A team-based improvement focus is already embedded in Agile. As a test improver, the challenge is to make use of this improvement cycle, take the improvements to another level (e.g., facilitate cross-project learning), and institutionalize them where necessary.
Because the scope is often limited to the previous sprint, small but frequent improvements are made that focus mainly on solving specific project problems. The focus of these improvements is often not on cross-project learning and institutionalization of improvements.
Looking at the organization of test improvement, we find that there is likely to be less focus on test process improvement at an organizational level and more emphasis on the self-management of teams within the project. These teams generally have the mandate to change the testing process within the project according to their needs, resulting in highly tailored processes. However, some organizations also use weekly test stand-up meetings to take things to a higher, cross-project level. Since there is a more project-specific focus on (test) process improvement, less emphasis is likely to be placed on broader issues affecting testing across the organization. This could mean, for example, that fundamental testing problems may not be fully addressed because they are beyond this project-centric context. A typical example here is the approach taken to testing certain quality attributes, such as performance and reliability. These issues may be deferred from sprint to sprint because they often require more skills and resources than the project team has available. In these areas, it is hard to make a substantial next step without major investment. Solving problems only on a project level could also easily lead to suboptimization and losing touch with the bigger picture.
In Agile contexts, the range and number of alternative improvement ideas to be considered may be considerably greater than with sequential life cycle models. Since most members have a part-time testing role within the project, these ideas can come from any project member. This places a stronger emphasis on evaluating and prioritizing improvement suggestions, which may be more of a team effort than a task assigned to a test process improver. Since this may require the specific testing knowledge of a test process improver, they can also act as a consultant to the team if required to do so.

Figure 1. Agile defects vs. industry average (defect counts plotted against project size in KLOC). Agile defects are randomly scattered around the industry average; no pattern or conclusion of higher product quality (fewer defects) in Agile projects can be derived.

Improvement methods and models
The methods used to propose test process improvements when using an Agile life cycle will tend to be analytical methods for evaluating the root causes of problems, such as cause-effect diagrams. These are particularly useful methods for the problem-solving mindset that prevails at the end of a sprint. Note, however, that the life cycle used does not dictate the improvement method used.
Analytical approaches often go hand-in-hand with model-based approaches to test process improvement, and this is also true for projects that use an Agile life cycle. However, more tailoring of the models is required. When using a process improvement model such as TPI NEXT or TMMi, more help is available to make the necessary adjustments for Agile and iterative life cycles.
The official TPI NEXT book includes chapters that show how to use the model in Agile and iterative projects. This includes, for example, a list of the principal key areas to be considered and how their checkpoints should best be tailored and interpreted. In addition, the TMap NEXT content-based methodology (which forms the methodological foundation for the TPI NEXT model) is tailored for SCRUM projects in TMap NEXT in Scrum, so that TMap can also be applied in Agile and SCRUM contexts.

The TMMi website provides case studies and other material on using TMMi in Agile projects. I have personally provided consulting services to a small financial institution while achieving TMMi level 2 and to a medium-sized embedded software company while achieving TMMi level 3, both employing the Agile (SCRUM) life cycle using the standard TMMi model. Note that within TMMi, only the goals are mandatory, not the practices.
As stated, a special project has been launched to develop a derivative that focuses on TMMi in Agile environments. The main underlying principle is that TMMi is a generic model applicable to various life cycle models and various environments. Most (specific) goals and (specific) practices as defined by TMMi have been shown to be applicable in Agile environments as well. Remember, testing still needs to be done in a professional way. However, many of the sub-practices and examples, and their interpretation, are (very) different. As a result, the TMMi Foundation is not developing a new maturity model but will document the way TMMi can be applied in an Agile environment. It will determine whether each standard TMMi goal and practice is also applicable for testing in an Agile life cycle. Some goals (or practices) may just not be. For each goal and practice that is applicable, typical lightweight Agile sub-practices and examples will be defined. Watch the TMMi website (and my tweets) for the latest updates and results of this project.

Level of (test) documentation
In projects using the Agile methodology and practicing Agile testing techniques, such as extreme programming (XP), do not expect to find the level of test documentation you would expect from projects using a sequential life cycle. There may be a single combined test document covering the essential elements of a test policy, a test strategy, and even a high-level test plan. Test process improvers should avoid making improvement suggestions that call for more rigorous and thorough test documentation. Like it or not, this is not part of the life cycle approach. One of the main Agile principles is that documentation is created only when there is a clear and unambiguous need for it.

Focus on business value
As always, anything you would like to improve through testing needs to have added value. Never improve for the sake of following a model. This sounds obvious on paper, but in practice I have seen so many organizations making this mistake. Whatever you do, make sure you know why you are doing it and what it means in the Agile context. If you cannot identify the added business value, do not do it! Process improvement must be constantly reviewed against the business drivers. Following this essential principle will help you to be successful, including in Agile environments.

> about the author

Erik van Veenendaal is a leading international consultant and trainer, and a widely recognized expert in the area of software testing and quality management. He is the founder of Improve Quality Services BV (www.improveqs.nl). He holds the EuroSTAR record, winning the best tutorial award three times! In 2007 he received the European Testing Excellence Award for his contribution to the testing profession over the years. He has been working as a test manager and consultant in various domains for more than 20 years. He has written numerous papers and a number of books, including Practical Risk-Based Testing: The PRISMA Approach and ISTQB Foundations of Software Testing. He is one of the core developers of the TMap testing methodology and a participant in working parties of the International Requirements Engineering Board (IREB). Erik is also a former part-time senior lecturer at the Eindhoven University of Technology, vice-president of the International Software Testing Qualifications Board (2005-2009), and currently a board member of the TMMi Foundation.
Twitter: @ErikvVeenendaal
Website: www.erikvanveenendaal.nl

By Felix Krüger

Field Report: Test Automation and Quality Assurance in the Context of Multi-Platform Mobile Development
The word app still suggests that we are dealing with little applications. While that may be true in some cases, this field report is about a pretty big app that is used to remotely control and monitor the statuses of different parts of a machine, such as light, air flow, and position. The machine uses a mobile communications network to be accessible by a backend server, which our app accesses via the internet. In all, the complexity is comparable to a desktop application.
One major aspect of this app is variant management. Different customer groups receive different feature sets, and different machine types require specific data presentation. This results in a very dynamic app, both in terms of its composition during the build and also at runtime, depending on which machine type we want to use.
So this is anything but a small project. It is not a mobile app that accompanies an existing business application; it is the only solution for this business process. There will be a longer maintenance phase with enhancements, new features, and far more variants to support. The app is an intrinsic part of the machine and has to fulfill the same high standards concerning quality, usability, and user experience.
The report provides an overview of the project, as well as our decisions and experiences regarding quality assurance and test automation, continuous integration, and project management.

Project setup: one concept, two apps, two teams

The project uses the SCRUM agile software development framework. A sprint takes two weeks, including a total of about one day for sprint review, retrospective, and planning. Sprint reviews are held with the whole development team, one or two customer representatives, and sometimes additional stakeholders from the Quality Assurance or Infrastructure departments. The review often results in specification refinement by the customer. During the first part of sprint planning (the user story selection) a customer representative also participates. This allows the developers to ask questions about specification details and sometimes to propose specification changes to allow more native behavior of the app.
The main development phase is planned to take about nine months. During this time, the team size will vary between 8 and 13 persons, including the Product Owner and the Scrum Master. The fluctuation is partly due to students who are with the project for a set amount of time and partly due to experts for special tasks joining the team temporarily.
Our target devices are iPhones and Android phones, specifically the 3GS and above (iOS 6+) for the iPhone, and version 4 and above for Android. The machine to be controlled and the server backend already exist, so our sole task is to develop the app, including the user interface, backend communication, variant management, and integration of platform-specific services (e.g., for push notifications, maps, or social networks).
To assure the best possible user experience, we are not using a cross-platform toolkit; instead, two independent native apps are being developed. The developers are split into sub-teams with experts for their respective platform. To promote communication, both teams work together on a single site.

Because of the two teams, the product backlog contains most user stories twice: one version for each supported platform. For most user stories, both versions are planned in the same sprint, depending on the development progress, which does differ for iOS and Android. When a story is implemented, the result is compared with the other platform's app. During the sprint review, we prefer to present a feature in parallel for Android and iOS. By doing this, we can ensure that we achieve feature-identical apps and a very similar user experience for both target platforms.
Beyond the user experience, the Android and iOS apps have very similar software architecture, despite being implemented independently. The data model, layered design, screen flow management, variant management, and domain-specific algorithms are specified in a common software architecture document. So, when implementing a function for the second platform, there is a template which is easy to understand because it is implemented on the same basis. This does not work for view implementation, due to different widgets and user interaction concepts; here the development is completely platform-specific.

Figure 1. Communication setup from mobile device to machine


Automated testing
The challenge for our quality assurance in terms of testing is to have tests for each app at multiple levels: unit, integration, and acceptance (UI) tests. Since we want to automate as much as possible, we have a QA consultant who is part of the team and who drives our test automation. He is responsible for test specification and review of test implementations. The actual implementation of automated tests is done by the developers, triggered by a test task that is generated by default for each user story.
Depending on the implemented feature, there are acceptance tests (automated UI tests), unit tests, and integration tests. Tests are always implemented platform-specifically. In the case of the UI tests, they are synchronized by using the same acceptance criteria. For each acceptance criterion defined in a (UI-related) user story there is at least one UI test.
To implement all these different automated tests, we use platform-specific frameworks. Our lower-level tests are implemented using SenTest or JUnit respectively. On iOS, additional libraries like nocilla and JRSwizzle are used for mocking. For UI tests we use KIF for iOS and Robotium for Android. In order to get more stable Android tests and eliminate false negative results, the Robotium Recorder (a commercial product) has proven useful. Even though we place great importance on the apps being feature-identical, the differences in navigation and user experience between iOS and Android mean that the steps required to access and use each feature are different. Unlike in the desktop world, it is only theoretically possible to use one UI test to cover cross-platform apps that have gone beyond simple concepts. This has the disadvantage of increasing the technology stack and effort, but has the advantage of being able to use specific tools for specific problems.
In terms of ratio, it is often said that UI tests should be the smallest chunk of the testing pie. This is partly due to their execution time, but also because they are still often seen as the hardest to write and maintain. Our experience is that it can be worth re-evaluating the ratios of test levels specific to each project. With an increased focus on customer acceptance (both in agile projects and in the mobile domain) and the improved suitability of UI testing tools, testing through the UI and of the GUI logic itself should not be neglected; indeed, some projects may find that the pie is more heavily weighted towards UI tests than unit tests.

Figure 2. Test level ratio (ideal, typical mobile development project, this project)

For this project, we have around 10% unit tests, 40% integration tests (mocked and against the real backend), and 50% UI tests. The amount of integration tests is only that high because of quality issues (poor interface specification) in the product we received from the independent backend supplier.

Continuous integration
We use Git with a basic branching model as our version control system. It defines a master branch, release branches, and branches for each feature and bug fix. The developer merges a feature as a whole into the master branch when the user story is done. To ensure that no incomplete features are merged into the master branch, default tasks are generated for each user story. There are default tasks for acceptance (reviews by product owner and QA consultant) and code quality (code review, static code analysis, and automated tests).
The basis for continuous integration is the master branch, because it should always contain a project ready to release. Each commit (merged feature) triggers a full build cycle consisting of the following jobs:
- Updating/building dependencies
- Building the app (currently three different build variants)
- Static analysis
- Unit tests
- UI tests (one job for each build-time variant)
- App distribution via intranet


We use Jenkins for iOS and Android, as it is the company default and
well supported by the IT department. Especially in the iOS development,
we had teething problems with Jenkins that could have been avoided
had we been able to choose the Xcode server. However, additional
plugins in Jenkins did eventually make it possible to integrate our
iOS systems into the CI, for example the Clang Analyzer, and plugins
to manage environment variables or share workspaces between jobs.

Final thoughts: where automation cannot help

As described, our process contains various factors to help us fulfill our customer's high quality expectations. The whole team is involved in the quality process, and quality accompanies the project in each sprint. This is considerably helped by our automated tests.
However, we have identified areas where quality assurance must be done manually. Since these are not areas that teams used to desktop development would instantly think of, they are listed below:
- Usability and user experience: This is an area where it is widely accepted that manual testing is required, but customers of mobile apps weight this quality attribute much more heavily. With the added complexity of gestures and orientation changes, we find that we have to place more focus on this area of quality, and the tests are performed manually by the team as well as by customer representatives. In this setup, it is the customer who ensures that different devices are tested as part of their manual acceptance test. Our automated tests are restricted to one version of each platform.
- Internationalization: The normal process in multi-lingual desktop applications is to have strings checked by native speakers, outside of the context of the application. Due to the decreased size of the screen, our planned internationalization into over 15 languages is more time-consuming than it would be for a desktop application. Our translation is performed by external employees, and each translation must be manually checked to ensure that it uses the correct amount of space on the screen. We use our UI tests to support this by creating automated screenshots that can be reviewed by the translation team (a sketch of this pairing follows below).
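A minimal sketch of how such screenshot sets could be paired for review, assuming screenshots are stored per locale in a simple directory layout (the paths are illustrative, not the project's actual tooling):

# Pair the reference-language screenshots with each translation and flag
# any locale that is missing a screenshot, so reviewers see gaps at once.
from pathlib import Path

base = Path("screenshots")  # e.g. screenshots/de/login.png, screenshots/zh/login.png
reference = "en"
for shot in sorted((base / reference).glob("*.png")):
    for locale_dir in sorted(p for p in base.iterdir() if p.is_dir() and p.name != reference):
        candidate = locale_dir / shot.name
        status = "ok" if candidate.exists() else "MISSING"
        print(f"{shot.name}: {reference} vs {locale_dir.name} -> {status}")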

Quality counts even (especially) for apps

This field report shows that app development requires a test strategy and quality assurance activities to at least the same degree as a desktop business application. In some respects, the requirement for good quality practices is even higher, due to the challenges of developing one app for two platforms.
In many ways, we can see that mobile development does not change the quality activities required. What can change, though, is the relevance or importance of specific activities. For us, this was clear in the distribution of our unit, integration, and acceptance tests, as well as in the areas of manual testing that would have been less important or time-consuming in a non-mobile project. Our conclusion? Quality counts and it is important to know what you want to test, why, and how. Even for mobile applications.

> about the author

Felix Krüger works as a software engineer at BREDEX GmbH (www.bredex.de) in Brunswick, Germany. He studied computer science at the Brandenburg University of Technology in Cottbus. He is involved in the development of various mobile and desktop clients for enterprise software systems.
Over the past eight years he has worked in different fields of automotive software development. He is experienced in handling (Java and Eclipse-based) client-server systems with desktop clients or mobile clients, as well as test automation systems and embedded software.

By Jeisson Alfonso Cordoba Zambrano

Strategic Design of Test Cases

Introduction
Sometimes when we enter into a testing project, we initially focus on evaluating the available literature, the time we have for the design, the methodology, and the available human and technological resources, and we leave aside, or do not give enough importance to, having a strong, clear strategy for both the design and the test case life cycle throughout the whole project, regardless of its size, complexity, and duration.

The basics: what is a test case?

IEEE Standard 610 of 1990 defines a test case as follows: (1) A set of test inputs, execution conditions, and expected results developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.
This definition indicates what a test case is and what it consists of. Starting from this base, we have a general outline with which to start the analysis and the strategy selection process.
The strategy for the approach, design, and execution of test cases is established progressively. This means it is built carefully and in detail, making sure that you have everything that you need and that it will evolve constantly over time. Additionally, the strategy should come with tools that allow the testing process itself to be optimized and made more effective.

Figure 1. Test case life cycle (stages: initial assessment, requirements identification, defining the techniques, classification and focus, case creation, adaptability, tools and setup, association, execution, evolution and optimization)

To start with, you need to identify and define the life cycle stages the test case will go through, so that all efforts are focused on fully developing its components. Before we start writing the test case, it is imperative to make sure that we have identified all of the test object's requirements, which will give us an initial idea of how to create the cases. Once we have this new knowledge, the best possible design technique can be chosen. Later it will be necessary to set a focus and a classification for the cases that need to be set up.
After that, the formal setup process can begin. All the set-up cases need to adapt easily to whatever stage the project is in, securing all necessary coverage. Within the project it is necessary to identify tools that support both design and execution. Cases that test the same object need to have a clear association that allows them to be optimally managed. After that, the test execution begins, and this is where we assess their quality and how effective they are. Finally, the cases go into a repetitive evolution and optimization stage that runs alongside the test object's evolution.

Requirement identification
In most projects nowadays, a clear document design and requirements methodology is followed. However, depending on the size of the project, the available time, the approved budget, and the human and technological resources allotted to the project, a strong and organized methodology that allows you to easily establish the requirements and derive the test cases might not be followed. This would make a more in-depth analysis necessary, and you might need to choose a different way to design test cases, because it would not be possible to use a known technique for the task at hand.
Test requirements can be generated in a formal and detailed document, which should have gone through a cycle of static testing. They can be the result of a technical and functional team, or they can be a specific request for change or maintenance on the test object. Regardless of the source, it needs to be clear that we are looking for all the materials to identify the what that is being tested. In the same way, the team looks for a how to validate the what, by following a step-by-step procedure. It is also necessary to establish the expected result, which the test case should arrive at by following each of the steps. An important thing to bear in mind when identifying the materials is the need to establish what should not be done, so that the team gets the raw materials to design both positive and negative scenarios which apply the requirements in full context.

Case creation
When the requirements have been identified and the materials clarified, both start coming together to create a test case. During this first step of the creation process, it is important to bear in mind that both the setup and the design of cases aim to guarantee full test coverage of the object whose requirements are being validated. It is also necessary to bear in mind that test cases improve productivity during the execution stage by decreasing the time lost when functions are not understood. This makes it necessary to be very clear when writing them, using simple, easy-to-understand language and a voice that guides the executing analyst. It is good to start each step with a verb that indicates the task, and the expected result needs to include future-tense verbs that indicate what should happen after the case execution is over.
Test cases must have a series of components that let us be sure that each of them is using the identified materials and fully applies the requirement it is designed for. For this you need a base template like the one below, which shows the important elements that every test case needs:
Case ID: Test case unique identifier
Requirement name or number: Requirement identifier or test function within the project
Objective: The "what"
Case name: A clear name that indicates what the case is testing (if there are use cases, the name can include one of their flows)
Description: A clear and full case explanation
Assumptions and preconditions: The necessary conditions for case execution
Steps: Step-by-step guide for the person running the case; the main interaction between the test object and that person
Conditions for execution: The scenarios under which the case should be run, including the data pool in which testing data is stored
Status: The possible final state of the test case
Expected result: A result with full details of what should be obtained after running the case; it can be positive or negative depending on the test case orientation
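For teams that keep their cases in code or behind a tool's API, the template maps naturally onto a simple data structure. The Java sketch below is only an illustration of that mapping; the field names and types are assumptions of this example, not part of the template:

import java.util.List;

// Illustrative sketch only: one possible rendering of the base template.
public class TestCase {
    String caseId;                    // test case unique identifier
    String requirement;               // requirement name or number
    String objective;                 // the "what"
    String caseName;                  // what the case is testing
    String description;               // clear and full case explanation
    List<String> preconditions;       // assumptions and preconditions
    List<String> steps;               // step-by-step guide for the executor
    List<String> executionConditions; // scenarios and data pool references
    String status;                    // final state of the test case
    String expectedResult;            // detailed expected (positive or negative) outcome
}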

Defining the techniques


The technique used for case design depends on the type and size of the project and on whether or not it uses a methodology to specify requirements, as previously indicated. Once the materials have been identified, the most convenient technique for dynamic test execution on the project can be selected.
Among the various available techniques, you can choose black-box testing to validate inputs and outputs of the test object, validating representative combinations (bearing in mind that testing all of them would not be possible). To accomplish this, the specification should clearly define each possible input and output in order to optimize the design. Depending on the available documentation, techniques such as state transition testing, equivalence class partitioning, boundary value analysis, decision tables, use case testing, and cause-effect graphing can be used. These can be applied at any level of the testing process: unit testing, integration testing, system testing, and acceptance testing.
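As a brief illustration of two of these black-box techniques, the JUnit sketch below applies equivalence class partitioning and boundary value analysis to a hypothetical rule; the transfer limits are invented for the example:

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class TransferAmountTest {

    // Hypothetical rule under test: a transfer is valid from 1 to 10,000 inclusive.
    private boolean isValidAmount(int amount) {
        return amount >= 1 && amount <= 10_000;
    }

    @Test
    public void representativesOfEachEquivalenceClass() {
        assertFalse(isValidAmount(-50));    // invalid partition: below range
        assertTrue(isValidAmount(500));     // valid partition
        assertFalse(isValidAmount(20_000)); // invalid partition: above range
    }

    @Test
    public void boundaryValues() {
        assertFalse(isValidAmount(0));      // just below lower boundary
        assertTrue(isValidAmount(1));       // lower boundary
        assertTrue(isValidAmount(10_000));  // upper boundary
        assertFalse(isValidAmount(10_001)); // just above upper boundary
    }
}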
If technical documentation is available and the project has access to the test object's detailed code, white-box testing can be used for detailed validation of the software flows that a particular input combination exercises, thus providing low-level control. Techniques available nowadays include branch coverage, path coverage, and statement coverage, among others. Using these techniques requires strong technical and programming skills due to the complexity of their design and execution. They can be applied at the following levels of the testing process: unit testing, integration testing, and system testing.
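For contrast, a minimal white-box sketch: the hypothetical method below has one decision point, and the two tests together execute both of its branches, giving full branch coverage of this method:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class FeeCalculatorTest {

    // Hypothetical method with one decision point (two branches).
    private double fee(double amount) {
        if (amount > 1_000) {
            return amount * 0.01; // branch A: percentage fee for large transfers
        }
        return 5.0;               // branch B: flat fee
    }

    @Test
    public void largeAmountTakesPercentageBranch() {
        assertEquals(20.0, fee(2_000), 0.0001); // exercises branch A
    }

    @Test
    public void smallAmountTakesFlatFeeBranch() {
        assertEquals(5.0, fee(200), 0.0001);    // exercises branch B
    }
}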
Depending on the project's type and requirements, it might be necessary to run security tests. These require an advanced technical level to define their reach and the way in which they will be designed and run. Another type of testing that may be necessary is load testing, which needs tools that allow you to create and run the scenarios for the various tests of this kind.

Classification and focus


The previously identified requirements produce an initial test case group. This group organizes cases according to the functionality that each of them tests. Nevertheless, it is necessary to create a more complete classification that gives the testing team more information about the cases, their focus, and their duration, and that enables the testing process to be made more dynamic and flexible throughout the rest of the testing project.
For example, if the project aims to develop a website for a bank's online banking feature, the documentation indicates that there are functional requirements such as website access and the ability to perform queries and online transactions. On the other hand, there are non-functional requirements, such as withstanding the load of 2,000 simultaneous users and ensuring that each user can only have one active session. With all of this we have a series of components that must be classified depending on the approach the team wants for the test cases in order to validate all of the requirements.
Initially, a Level I can be defined with two main categories: Functional Test Suite and Non-Functional Test Suite. This classification provides general guidance on the objective that the test cases within each category aim to validate.
Next, after the testing technique has been selected, more details can be added about the technique on which the test cases will be built. For this example, a Level II can be defined with categories like BlackBox Test Suite, WhiteBox Test Suite, Load Test Suite, and Security Test Suite. These categories inform the testers about the case types and the elements that should be borne in mind for each of them. They also let the lead tester see the abilities each tester needs in order to run each case type and the estimated design and execution times for the cases, and they help in understanding the tool types needed to support the design and execution of the cases. The categories for the specified functionalities or components are defined in Level III. At this level, functionalities can be repeated under each of the categories of the previous level (Level II). This way the group can make sure that all tests are being performed for each given component in order to provide the coverage necessary to validate all the specified requirements. Next comes Level IV, which makes explicit the focus of the cases and the functionality they aim to validate. As shown in the example, categories can be defined such as GUI, which validates all graphical components and navigation flows, Logic, which checks the functionality of the component itself, or Messaging, Log, Volume, and Ethical Hacking. Depending on which technique was used and on the component type, different categories can be defined at this level. Finally there is Level V, at which the team defines positive scenarios, where a happy or successful path through the functionality is evaluated, and negative scenarios, where abnormal conditions can appear that the test object should be able to control.
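One lightweight way to make such a classification executable is to tag the test cases. Below is a minimal sketch using JUnit's experimental categories; the marker interfaces mirror the levels described above and are assumptions of this example, not part of the article's method:

import org.junit.Test;
import org.junit.experimental.categories.Category;

// Marker interfaces standing in for the classification levels.
interface FunctionalTestSuite {}   // Level I
interface BlackBoxTestSuite {}     // Level II
interface LoginComponent {}        // Level III
interface GuiFocus {}              // Level IV

public class LoginGuiTests {

    // Level V is expressed by the scenario itself: a positive, happy-path case.
    @Test
    @Category({ FunctionalTestSuite.class, BlackBoxTestSuite.class,
                LoginComponent.class, GuiFocus.class })
    public void validateUsernameAndPasswordAccess() {
        // GUI assertions against the login screen would go here.
    }
}

A suite run with JUnit's Categories runner can then select, for example, only the GUI-focused black-box cases, which matches the idea of running the cases of a given classification at a specific moment.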


Figure 2. Example of test case classification structure. Test object: Website. Test types (Level I): Functional Test Suite, Non-Functional Test Suite. Testing techniques (Level II): BlackBox Test Suite, WhiteBox Test Suite, Load Test Suite, Security Test Suite. Functionality or component (Level III): Login, Inquiries, Transactions. Focus (Level IV): GUI, Logic, Messages, Log, Stress, Volume, Access Controls, Ethical Hacking. Test cases (Level V), for example: validate username and password access; look-and-feel verification; validate the message communication structure between the front end and the back end; validate access of 2,000 users; validate unique session access.

When this kind of classification is reached, there is a broader and clearer perspective of all the cases, so that they can be used at the right time and with the right human and technological resources. It also allows cases to be reused on other, similar projects, because a person can take a whole set of tests for one functionality and know the full context: which test type is covered, which technique has been applied, and which components are being validated.

Tools and setup


At this stage a lot of important elements have already been identified, and they provide guidance on the formal set-up of test cases. The structure of the tests is clear, as are the techniques that will be used to design them, the focus, the reach, and the way to organize them. The next step is to choose a team of test designers with the necessary skills to develop each identified case and to select the tools that will support the design process. Depending on the type and size of the project, the team has to evaluate whether a client-server tool should be chosen or whether it is better to use a cloud solution. They also have to bear in mind that whatever tool is selected should allow them to set up all the previously identified elements in order to completely map the defined strategy.

Adaptability
During this stage it is important to evaluate the adaptability of the test cases to the current stage of the project. Cases have to go through a static validation before being run in order to guarantee that they are complete and that they cover all the functionalities of the specified components. Another validation takes place while the tests are being run: it checks that the cases contain the full interaction flow between the test object and the testing analyst, providing him with a step-by-step orientation to achieve the expected result.


In the classification made in the example in Figure 2, the Focus group brings the abstraction level of the cases down and adapts them to each of the elements that comprise a functionality. In this group, special care needs to be taken to ensure the cases fit the focus they are being used for, which means the reach has to be clearly defined. The cases from the other categories in the same group are a direct complement, but this does not mean that you have to go through the same test object flow numerous times to run the cases. This division is designed so that the cases of a given classification are run at a specific moment. Back to the example of the bank's website: if the project is still working with prototypes, it would not be possible to fully test the component's logic, nor could validation of the interaction with back-end components be assessed. The only option would be to start by executing the cases that evaluate graphical and navigation components from the GUI category. Later in the development, the GUI cases can be dropped and the team can focus on the Logic cases, where the test analyst will have to navigate and check whether there is any kind of impact on the graphical components or on the navigation flows. Even if this is not the specific objective of these cases, it progressively guarantees that the functionality is complete and fully set up.

Association
In order for the structure of the cases defined in the Classification and Focus section to be complete, manageable, and scalable, it is necessary to identify all the relationships between the test cases, whether direct or indirect. A direct relationship is the logical order of test execution. For example, you would not be able to test the result of a balance query without having logged into the website. An indirect relationship is one that tests a different element of the test object and guarantees that all of the object's components will work correctly when put together, complying with all of the specified requirements. For example, the GUI and Logic cases that validate login can be run, but it will be necessary to run the load and security tests to finally sign off the full functionality.
Taking all this into account, it is important to include two more elements in the base structure (see Case creation) of the cases: Dependencies and Completion. In the former we indicate the case ID or IDs of the associated cases that must be run before others; in the latter we connect all the cases from other testing sets that complete the validation of all the elements that comprise the test object.
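Continuing the earlier template sketch, the two new elements could be captured like this (again purely illustrative):

import java.util.List;

// Illustrative extension of the earlier TestCase sketch.
public class AssociatedTestCase extends TestCase {
    // Direct relationship: IDs of cases that must run first
    // (e.g. the balance query case depends on the login case).
    List<String> dependencies;

    // Indirect relationship: cases from other testing sets that complete
    // validation of the same element (e.g. load and security cases for login).
    List<String> completion;
}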

Execution
At this stage it is important to identify the tools that will support the process itself, to set up automatic test execution, and to support the analysis of the results depending on the case focus. To run the cases, you might need a particular tool that allows you to access a database or to analyze the communication between the front end and the back end.
It is important to be clear about how versioning will work for the documentation of the tests run on each cycle. This definition must go hand-in-hand with the structure defined for the classification and focus given to the test cases. It is also important to have traceability of the component status or functionality across all the testing sets, to ensure that all the requirements of each of them have been tested. To do this, you can create a keyword that identifies a component through all the testing sets that contain cases exercising its own functionality or one associated with it. This keyword becomes part of the base structure of a test case, as explained in Case creation. Traceability of the case status should be supported by a control board that shows the status flow the case has been through during all the cycles that have been run. This is key to flagging up early any possible damage to components that were previously working correctly. As can be seen in Figure 3, test cases 1 and 2 achieved a successful outcome on versions 1.0 and 1.1 respectively, but were affected on version 2.1, where the outcome was unsuccessful.
ID Test Case | Release | Date Execution | Result
1            | 1.0     | 15/05/2014     | SUCCESS
2            | 1.0     | 16/05/2014     | FAILED
3            | 1.0     | 15/05/2014     | FAILED
1            | 1.1     | 10/07/2014     | SUCCESS
2            | 1.1     | 15/07/2014     | SUCCESS
3            | 1.1     | 15/07/2014     | FAILED
1            | 2.1     | 15/09/2014     | FAILED
2            | 2.1     | 15/09/2014     | FAILED
3            | 2.1     | 15/09/2014     | SUCCESS

Figure 3. Example of basic control board
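Such a control board is straightforward to keep in code as well. The sketch below (an illustration, not tooling from the article) records results per case and release and flags cases that fail after having passed in an earlier release:

import java.util.LinkedHashMap;
import java.util.Map;

public class ControlBoard {
    // caseId -> (release -> result), releases kept in execution order
    private final Map<String, Map<String, String>> board = new LinkedHashMap<>();

    public void record(String caseId, String release, String result) {
        board.computeIfAbsent(caseId, id -> new LinkedHashMap<>())
             .put(release, result);
    }

    // Flags cases that were SUCCESS in some earlier release but FAILED later,
    // i.e. possible damage to a previously working component.
    public void flagRegressions() {
        for (Map.Entry<String, Map<String, String>> c : board.entrySet()) {
            boolean wasPassing = false;
            for (Map.Entry<String, String> run : c.getValue().entrySet()) {
                if ("SUCCESS".equals(run.getValue())) {
                    wasPassing = true;
                } else if (wasPassing) {
                    System.out.println("Regression: case " + c.getKey()
                            + " failed in release " + run.getKey());
                }
            }
        }
    }
}

Recording the rows of Figure 3 and calling flagRegressions() would report test cases 1 and 2 as regressions in release 2.1.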

Evolution and optimization
Throughout the project's life cycle, test cases should evolve in synchrony with the project. They have to grow and improve as the documentation matures or as new knowledge is acquired by the testing team. Direct and indirect relationships between cases can be improved, and it is even possible to create associations from the testing sets to indicate what has to be tested first. For example, if a software component's navigation and logic tests have not run through successfully, it is not recommended to run load or security tests, because they will get stuck midway on an error in the component and will not reach the specific objective of these types of tests.
Test cases can also evolve and move to other, similar projects, where they can be used as a base guide for design or, depending on the quality and functionality similarities, they can even be adapted to run with minimal adjustments. As the project goes on, the test cases can go from being highly detailed, with a long list of steps, to more general cases which, supported by the knowledge of the executing analysts, can end up optimizing execution times.
A test case can also evolve from being manual to being automated, acquiring additional characteristics so that a robot will be able to run it. To achieve this, it is imperative to have a well-structured and complete case so that the move to automation is not traumatic.

Conclusions
Test case design needs to be strategic, guaranteeing that, as the project advances, the team will acquire all the necessary elements to achieve full test cases. This means they can be understood by everyone involved, are easily adaptable, can be grown and completed, and ensure total coverage of all requirements.
You need to think of having different complementary test sets that can be used in parallel at different times, depending on the type of project. But they also need to evaluate the elements from different perspectives, observing all functionalities holistically, so that when a test set is run you can achieve full coverage.
Nowadays a test case cannot stay with its first design. Cases evolve and change constantly as the project moves forward. Designing cases based on a strategy means you can consider using them again, and you can progressively and constantly optimize them to bring a great deal of value to test management itself without creating a burden for the team. This enables the team to find new paths and tools to achieve all the objectives that have been set and to benefit all the people involved in the project.

> about the author

Jeisson Cordoba is a systems engineer with seven years' experience of programming in different programming languages, and four years' experience in software engineering, including the quality and certification processes, and knowledge of high-level testing techniques. He is a Certified ISTQB Tester at foundation level and has three years' experience in developing automation and code generation tools based on model-driven architecture. He is currently employed as the testing leader in a banking organization, where he has the opportunity to participate in a wide variety of testing projects on different technologies and applications such as Web, ATMs, POS, stand-alone applications, legacy application migration to leading technologies, CORE business applications, solutions for market segments such as personal and corporate banking, and internal management solutions for the organization in the areas of HR and accounting.

ADVERTORIAL

By Emil Simeonov

Increasing Profitability and Decreasing Defects with TenKod EZ TestApp
Mobile test automation
Smartphone applications keep on changing our world on a daily basis
and are being positioned as outstanding growth engines for the global
economy in general and the online economy in particular. However,
the hurdles and barriers that software developers are facing are rather
challenging. The online economy demands top-notch applications from
the community of developers and a rapid response time in order to
shorten the route to market, while application complexity is increasing.

New and exciting software testing trends are emerging, which can help
testing teams tackle the challenges of mobile testing to some extent.
Crowd testing, device sharing, and the like are effective and fun. Of
course, they are already an improvement, but the cost of resolving
even trivial issues during the post-production testing phase of any
project is still too high. Hence, only issues with a really critical impact
usually get resolved.
And what if most issues that end users will hit could be detected and
resolved earlier? What if test harnesses with a growing number of
automated tests could guarantee regression-free mobile applications?
Of course, the need for exploratory testing would still be there, but
then such activities could be focused on figuring out corner cases not
covered by the automated testing suite. They would also provide input
for new automated tests. Both development and post-production testing cycles could be shorter and more effective. This is not an imaginary
situation anymore, and therefore the field of mobile test automation
has been explored with great interest.

Challenges and pains


Thorough research quickly reveals a number of open source and commercial solutions in this space. Yet the domain is, to say the least, complex, and it does not provide a comprehensive and focused solution to the challenges and pains experienced on an ongoing basis by app development and testing teams.
All the challenges faced by mobile testing are applicable to mobile
test automation as well. In addition, the idea of automating testing
activities has certain implications, as if the diversity of mobile platforms and the increasing complexity of mobile applications were not
challenging enough.

34

Testing Experience 27/2014

1. Essentially, test automation is programming, so a basic knowledge of common programming languages such as Java, C#, JavaScript, etc. is required. This ultimately raises the entry barrier
for newcomers.
2. The test automation tools should be really close to (if not the
same as) the ones used by the development teams producing the
tested mobile applications. However, this is not the case with
most of the mobile test automation tools out there. They are
either designed to run in web browsers or as standalone desktop
applications. What about integrated development environments (IDE)? There are a couple of issues. First, the diversity of
mobile platforms already implies the same variety in terms of
development and testing infrastructure. Second, there are some
commercial offerings betting on IDE, but their user experience
is flawed with proprietary ideas and concepts inapplicable in
any other context. This makes it even harder for test automation
engineers to ramp up and use these tools effectively.
3. Proven agile development and testing practices hold that automated tests are most beneficial to organizations when they are updated and run regularly, as close as possible to the development teams involved. In this way any detected issues can get timely resolution at the lowest possible cost. Unfortunately, the testing solutions out there still encourage mostly post-production testing cycles. Continuous integration (CI) is a chimera. Open source test automation tools do not take this into consideration, and commercial offerings fail to provide an easy and intuitive approach to CI, mostly due to the proprietary concepts implemented as part of this kind of software.
4. Multi-level testing is another important component needed
for effective mobile test automation. There is basically logical
(and often technological) separation between tests ensuring
that a number of non-functional requirements are going to be
consistently met by a tested mobile application. In this sense,
depending on the specific requirements for any given application, it may be necessary to pay special attention to security,
performance, accessibility, API, etc. and create the necessary
tests. The multi-level testing concept increases the reliability
and maturity of tests. It also gives them sharper focus and makes
it easier for both the development and testing teams to better
understand and react to test failures, thus increasing the quality

of the tested product. Although extremely useful, most of the existing mobile test automation tools and frameworks do not use this technique at all.
Consequently, there is not a single offering out there that fully satisfies the mobile test automation use cases end-to-end. Installing and configuring the open source ones is a tedious task. Commercial tooling is better here, but often relies on proprietary techniques like image recognition, which results in an increased number of instabilities in the automated tests produced. All in all, you need to be extremely careful when picking tools for mobile test automation, as there are often hidden pitfalls. For example, some automation solutions use jail-broken/rooted devices and application instrumentation, even if they do not say so publicly.

TenKod makes developers' lives easier

TenKod, the company that standardizes the mobile testing experience, was founded because of all these gaps. We believe that what the market currently needs is respect, openness, and trust, incorporated as part of a superior offering for automating mobile tests. So, meet TenKod EZ TestApp, the unique and disruptive mobile test automation suite.
TenKod EZ TestApp fits seamlessly into existing development and testing landscapes and makes tedious automation tasks a breeze, while practices such as vendor lock-in, device jail-breaking, or application instrumentation are strictly forbidden. How is this done? Technically the product is based on cutting-edge, industry-standard open source components, which guarantee the right degree of pluggability, extensibility, and standards compliance. Of course, this is just the technical aspect. What really makes a huge difference, when compared to others, is that TenKod designs and plans its software increments not only according to the needs of professional test engineers, but also together with them. The diverse and highly skilled TenKod team regularly gets together with seasoned mobile test experts, so that different aspects of the solution can be discussed and improved.

TenKod EZ TestApp in a nutshell

The ultimate goal behind TenKod EZ TestApp is a flawless and simple end-to-end mobile testing experience, independent of the mobile platform used.
The solution is easily installable on Mac OS and Microsoft Windows. It is just a matter of using a one-shot, wizard-based installation procedure. Then it is ready to use.
The entry point to test implementation is TenKod Studio, an IDE based on the award-winning Eclipse IDE. Creating a test project there is just a matter of a few clicks. Every test project is seamlessly integrated with Apache Maven for the sake of robust dependency management and continuous integration (CI). Running tests using the market-leading CI platform Jenkins, for example, requires little effort to create the
corresponding CI job in a fully standardized manner.

Figure 1. TenKod Studio running Snooper

Furthermore,
it is quite easy to re-use any given test in various contexts, since the
TenKod EZ TestApp provides test parameterization support out of
the box. If you need parallel test execution, Jenkins offers a number
of approaches for achieving parallelization under any circumstances.
Is the usage of a source control system mandatory? Well, welcome to CI: now everything is possible. This is also how TenKod tackles the expensive and ineffective post-production test cycles: by building up a safety net for regression avoidance throughout the whole development and testing phases.
Developing test implementations using TenKod Snooper has never been easier. Snooper is a unique visual test-recording tool which helps mobile testers understand the structure of the tested applications better. Snooper deploys the application under test on the real device, simulator, or emulator, and provides a visual representation of this application in TenKod Studio. From there on, you are able to interact with the application as is normally done on the real device, e.g. using gestures like tap and swipe, sending text to search fields, selecting values from lists, etc., all of which are recorded. When a given application scenario is executed using Snooper, thorough Selenium WebDriver-compatible test code is automatically generated. In other words, TenKod EZ TestApp generates 100% compatible WebDriver test code with zero development effort from the test developer. If needed, the generated test code can later be changed or refactored as per the test developer's requirements.
The recorded scenarios could always be replayed by just running the respective WebDriver test implementations as JUnit tests. They follow well-established industry prescriptions for increased maintainability, reusability, and extensibility of automated tests. Generic Page
Objects are used in order to hide the complexity of interacting with
mobile applications. The TenKod test runtime based on Selenium for
Mobile takes screenshots automatically whenever a test case fails to
execute correctly, so the investigation process can start immediately.
As mentioned, it is also capable of executing parameterized tests and
generating thorough test execution reports. These can be reviewed
both in the TenKod Studio and Jenkins.
The generated code is self-explanatory: simple to read, understand, and modify. The sample test implementation in Figure 2 illustrates a TenKod EZ TestApp test case. Note the clean code and the lack of any abnormal, non-test-related constructs in the automatically generated test code.

Figure 2. TenKod EZ TestApp test code automatically generated by TenKod Snooper
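The original listing is a screenshot and is not reproduced here. Purely as a generic illustration of the style of WebDriver-compatible JUnit code, here is a sketch with hypothetical element IDs and endpoint; it is not actual TenKod EZ TestApp output:

import static org.junit.Assert.assertTrue;

import java.net.URL;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

public class SearchScenarioTest {

    @Test
    public void searchingForAProductShowsResults() throws Exception {
        // Hypothetical Selenium 2-era remote endpoint for the device/emulator session.
        WebDriver driver = new RemoteWebDriver(
                new URL("http://localhost:4444/wd/hub"),
                DesiredCapabilities.android());
        try {
            driver.findElement(By.id("search_field")).sendKeys("headphones");
            driver.findElement(By.id("search_button")).click();
            assertTrue(driver.findElement(By.id("results_list")).isDisplayed());
        } finally {
            driver.quit();
        }
    }
}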

Now and tomorrow

TenKod EZ TestApp fully supports iOS and Android test development and execution. Considering how the world is steadily moving in the direction of wearable devices and the Internet of Things, wider platform coverage is coming soon. The good news here is that common user experience, concepts, and interaction patterns are applied everywhere, so that the differences between the underlying mobile platforms are transparent from the test automation perspective.
If you need an outstanding and reliable mobile test automation suite, do not hesitate to try out TenKod EZ TestApp. You will be intrigued.

> about the author

Emil Simeonov, Product Owner at TenKod, has in-depth experience in software development, specializing in structuring highly productive agile development teams. Emil is a mobile application testing solutions expert with a vast knowledge of project and quality management, including definitions, test automation, governance, processes, strategy, procedures, and rollout.
Prior to TenKod, Emil worked on strategic SAP products as a Senior Product Owner and Chief Development Architect. His roles included building fruitful collaborations with development organizations all over the SAP labs worldwide; defining and designing business-critical software solutions in the area of in-memory computing and real-time processing of big data; and leading their implementation, delivery, and development support.
Twitter: @TenKodLTD

LinkedIn: www.linkedin.com/company/4986424
Blog: www.tenkod.com/category/tenkod
Website: www.tenkod.com

By Ravi Kumar BN

Software Testing in the Era of Quick Response (QR) Code Technology
1. Introduction
Quick response (QR) codes have been on the market for quite a long
time. However, people have often complained that their QR codes do
not seem to work and there are several reasons why that could occur.
Most people do not create QR codes in the right way and this greatly
affects the scanning ability of the codes. It is important to take your
time when creating these codes to ensure that you do so accurately.
Quick response codes can be a great marketing strategy, but only if
executed correctly. Many businesses have shown great returns from
the use of these codes. One of the secrets of ensuring your QR codes
actually work is testing them before they are made available to your
audience. This is the only way for you to know if the codes actually work.
If you fail to test your codes, you run the risk of offering your audience
a bad user experience, and once they have this bad experience they
will be reluctant to scan your codes in the future. A bad experience
with your business code may mean loss of clients and this could affect your profit margins. Bad QR codes can also negatively affect your
marketing strategy.

If you are still using QR codes without testing them, it is time for you
to start testing your codes. Testing your codes simply means trying to
use them after you create them and just before you take them public.
Until you are able to read a QR code just by looking at it, you should
always test the proofs with a variety of smartphones and scanning
applications before you release a campaign. This is the simplest way
to spot scanning problems.

2. How to use QR codes

A consumer can use QR codes in a simple four-step process:

Pick up a smartphone that has QR code reader/scanning software
Scan the QR code
The reader decodes the information and connects the consumer instantly to a website or sends them a text message
View the website with the information

3. What do you need to test QR codes?

Three things are required in order to successfully decode a QR code: a smartphone, a QR code scanning application, and a connection to the internet (either through the phone's data plan or over a site-generated wireless network).
The phone: In order to use QR codes, you need to have a cell phone capable of running decoding software. These phones can download and install applications, can access the internet, and have cameras. These types of phones are loosely referred to as smartphones; the most common examples are iPhones, BlackBerries, and Android phones.
The application: There are a number of applications which can be used to decode a QR code, all of which work in similar ways. Here are a few:

Red Laser (iPhone)
ScanLife (Blackberry)
Quick Mark (Android)

Use ScanLife, a free app which has versions for a wide range of phones. Going to the website www.getscanlife.com on a phone's internet browser will automatically detect the type of phone and guide you to the appropriate version of the application.
The connection: Because the QR code is a link to online content, you need to be able to connect to the internet in the location where the codes are placed. Smartphones can connect to the internet in two ways: through a 3G data connection or through Wi-Fi.

Figure 1. QR code test setup (the tester/end user uses a smartphone with a code reader, plus test scenarios, to test the QR codes under test)

4. How to test your QR code

After you have created a QR code, you need to test it before you use it, to make sure it is working. At a high level, there are three major dimensions for testing your QR code to make sure it works in the real world.

Figure 2. User experience dimensions (functionality: link directly to a URL, send a text message, call a phone number, download vCard contact info, decode a secret message; internet connection; usage instructions; placement on media; visibility; prominence; aesthetics: background color, foreground color, contrast accuracy; scanning time)

4.1 User Experience Focus

You can use a QR code for various business purposes, such as:

Linking directly to a URL
Sending a text message
Calling a phone number
Decoding a secret message
Downloading vCard contact info

4.1.1 Functionality of the decoded information
Here are some possible reasons that you might want to use a QR code on a web page:

To provide a way for mobile users to easily bookmark your site. Some web sites will display a code linking to the page you are viewing, while others will provide a link to the mobile version of the page you are viewing.
To allow users to view a printed copy of the page so they can return to it without having to retype the URL.
To encode contact information (name, phone, email, address, etc.).
To direct customers to a micro site. A regular webpage can be pretty hard to read and navigate on a smartphone. Landing sites for QR codes need to be dimensionally smaller; in other words, designed for a small smartphone screen.

You should test the functionality of the decoded information depending on the purpose of usage.

4.1.2 Internet access in the area
Giving people a QR code with no internet connection is like giving them a car with no wheels. Using a QR code at a trade show is a great idea, but not when it is in that part of the exhibition hall that has a notoriously bad internet connection. You will not know until you try to scan the QR code at the very spot others will be scanning it.
Make sure people have internet access at the spot where the code will be scanned. Everyone knows how temperamental mobile and Wi-Fi services can be: one spot might be a dead zone, but ten feet away you might have coverage. It is frustrating, but a reality you need to prepare for.
You also need to test internet accessibility in the area where you plan on placing your code, because poor internet reception may cause issues when scanning the code. Find places with good internet access to ensure that people have an easier time scanning your code.

4.1.3 Usage instructions to customers
Until QR codes are more widely recognized, businesses must help educate consumers. Give instructions to your consumers so they know what to do with the QR code. For example, if you print the code on the cover of a book, print a statement like this below or beside it: "Scan this with a smartphone app that reads QR codes." Check whether such an instruction is presented to end users.

4.1.4 Placement/prominence/visibility
Placement checks the position (top, bottom, left, right) of the code in the media and its relevance in the context of the business purpose. Visibility focuses on the existence of the code in the communications media: for instance, can the consumer make out whether the medium has a QR code or not? Prominence checking establishes whether the code is evident, transparent, obvious, or hidden deep in the media among other information.

4.1.5 Aesthetics
An ideal QR code is dark in color on a white/light background (contrast is imperative). Request a final proof, if possible, from the printer to ensure color and contrast accuracy.

4.1.6 Scanning time
Depending on the smartphone, camera, and scanning application that you use, scanning time may vary. Check that the scan time is acceptable to the end user.

4.2 Environment focus

4.2.1 Lighting conditions
Check the scanning of the QR code in poor lighting conditions. This will give you an idea of what your clients are likely to experience when they use the code in different lighting conditions. If you find that your code is hard to scan in certain lighting conditions, you can change the code's contrast, for example by using a brighter background.
If that does not work, you can increase the contrast of the code further (a darker color on a lighter background, if you started with something lighter than a black code and darker than a white background), or you can take steps to make sure it is displayed in an area with the right amount of light.

4.2.2 Cross-platform compatibility

Make sure the code can be used on multiple devices. Scan the code with as many different types of devices (old and new) and QR code readers as possible to make sure it works. Importantly, make sure the QR code can be scanned with older phones.

4.2.3 Scanning distances

Sure, the code works correctly when you scan it on the table in your office. But what about when the QR code is on a billboard, hundreds of yards from where people will scan it? Will it work then?
You need to consider where you plan on placing the code and ensure that the distance does not affect the scanning ability of the code. Scan recognition should not require significant distance adjustment: the QR code should successfully scan at the distance people will normally be away from it (i.e., the billboard effect).

4.2.4 Camera resolutions

There can be differences in camera quality, resolution, and tolerance. If you are using smartphones to scan the QR codes, you should test with different phone cameras that vary in quality and resolution.

4.2.5 Scanner variants

Try it out on multiple scanners. Different phones use different QR code scanners. It is important to ensure that your code can be scanned using the different scanners available, so try as many readers as possible. Scan the QR code with multiple scanner applications. Check scanning on an old phone, with the worst QR code app, in poor lighting conditions.
Some software is more sophisticated and tolerant, and able to deal with a wider range of discrepancies. Here are a few apps that you may want to consider:

Android applications to test against: i-nigma, Barcode Scanner, QR Droid, Scanlife, AT&T Code, Google Goggles, BeeTagg, and others.
iPhone applications to test with: Redlaser, Scan by QR Code City, and others.

4.2.6 Monitor resolutions

If you are using a desktop or laptop to scan a QR code on a web page, using the Chrome browser or an online decoder, differences in monitor dot pitch, brightness, and contrast can negatively impact the accuracy of the decoded information. Hence you should be testing on different monitor resolutions, too.

4.2.7 Communications media options

QR codes are normally used in print advertising, signage, magazine adverts, or on billboards, t-shirts, and other physical products. However, they are increasingly appearing on web pages as well. Hence it is wise to test QR codes on these different communication media.

Figure 3. Environment dimensions (environment: light conditions, scanning distances, camera resolutions, scanner variants, smartphones, monitor resolutions; communication media: billboard, building wall, someone's t-shirt, print advertising, magazine advert, signage, physical products)

4.3 Technical focus

4.3.1 Code that embeds short vs long URLs
If a QR code embeds a URL, it is recommended to use short URLs. The longer the URL, the more complex the QR code. The more complex the code, the more difficult it is to scan (generally, you have to make complex codes proportionately larger).
A small placement (less than an inch) will often be too dense to scan if you have encoded a longer URL. For example, if you use a long URL you will be unable to successfully scan a 0.5-inch QR code. But when you blow the code up to 1 inch, you will be able to scan it.
If you do not have a short URL, you can shorten it using bit.ly or goo.gl. If you have shortened the URL, you will be able to scan it at 0.5 inches. This illustrates how important it is to have a short QR code, and how important it is to test QR codes for short vs long URLs.
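The effect is easy to check programmatically. Below is a small sketch using the open source ZXing encoder; the URLs are made up, and the encoder picks the smallest symbol version that fits the content:

import com.google.zxing.qrcode.decoder.ErrorCorrectionLevel;
import com.google.zxing.qrcode.encoder.Encoder;

public class UrlLengthCheck {
    public static void main(String[] args) throws Exception {
        // Hypothetical long and shortened URLs.
        String longUrl = "https://www.example.com/marketing/campaigns/2014/autumn/qr-landing?src=billboard&v=b";
        String shortUrl = "http://bit.ly/1aBcDeF";

        // A higher symbol version means a denser matrix, which is harder
        // to scan when printed small.
        System.out.println("Long URL  -> version " + Encoder.encode(longUrl,
                ErrorCorrectionLevel.M).getVersion().getVersionNumber());
        System.out.println("Short URL -> version " + Encoder.encode(shortUrl,
                ErrorCorrectionLevel.M).getVersion().getVersionNumber());
    }
}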

Figure 4. Technical dimensions (scan source: smartphone QR code reader, Chrome QReader, ZXing Online Decoder; URL type: long URL, short URL; code version: version 1 = 21×21 modules, version 2 = 25×25, version 3 = 29×29, up to version 40 = 177×177; error correction capability: Level L about 7%, Level M about 15%, Level Q about 25%, Level H about 30%; security scams; wounded codes)


4.3.2 Sources of scanning media

Generally, QR codes are meant for mobile devices, as mobile devices can decode a QR code and show the information contained in it. But there may be a scenario where you have a cellphone that does not have a QR code reader and you want to decode a QR code. The QReader Chrome extension allows you to read any QR code directly from your Chrome browser. Once you have installed QReader, you can read any QR code just with a right click of your mouse. If you do not use Chrome, then you can use the ZXing Online Decoder to read any QR code.
Now it is evident that you should be testing QR codes using various scanning media:

With a smartphone: QR code reader/scanner
Without a phone: QReader in the Chrome browser, or online tools such as the ZXing Online Decoder
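Decoding can also be scripted, which is handy for batch-checking a set of generated codes. Here is a minimal sketch with the open source ZXing library; the file name is a placeholder:

import com.google.zxing.BinaryBitmap;
import com.google.zxing.MultiFormatReader;
import com.google.zxing.Result;
import com.google.zxing.client.j2se.BufferedImageLuminanceSource;
import com.google.zxing.common.HybridBinarizer;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class QrDecodeCheck {
    public static void main(String[] args) throws Exception {
        BufferedImage image = ImageIO.read(new File("qr-under-test.png"));
        // Convert the image into the binary bitmap ZXing expects.
        BinaryBitmap bitmap = new BinaryBitmap(
                new HybridBinarizer(new BufferedImageLuminanceSource(image)));
        Result result = new MultiFormatReader().decode(bitmap);
        System.out.println("Decoded text: " + result.getText());
    }
}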

4.3.3 QR code landscape

The symbol version of a QR code refers not to the version of the spec, but to the size of the matrix. Valid values are 1 to 40. Some implementations can set the symbol version automatically based on text size, text type, and ECC level. The module size is the number of pixels that make up one block (a bit) of the matrix barcode. Each symbol version holds a set number of modules: version 1 holds 21×21 modules, version 2 holds 25×25, version 3 holds 29×29, and so on, each version adding four modules per side, up to version 40 with 177×177 modules.
The QR code that you create can be of different sizes and types based on the requirement. The largest QR code that has worked so far was a maze field measuring 312,000 square feet.

4.3.4 ECC (error correction capability) level

QR codes use Reed-Solomon error correction technology to help recover from errors in reading (e.g., caused by a smudge, a badly printed code, or some other deformity).
ECC compensates for dirt, damage, or fuzziness of the barcode. Valid values are L (low ECC), M, Q, and H (highest ECC). A high ECC level adds more redundancy at the cost of using more space. A damaged barcode can be restored depending on the error correction capability level: Level L about 7%, Level M about 15%, Level Q about 25%, and Level H about 30%.
Since QR codes feature up to a 30% error correction rate, there is flexibility for creative branding and tweaks. But if the designer accidentally overdid it, test-scanning is an easy way to catch those issues.
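When generating your own codes for ECC testing, most encoders let you pick the level explicitly. Here is a minimal sketch with the open source ZXing library, producing one image per ECC level (file names are placeholders):

import com.google.zxing.BarcodeFormat;
import com.google.zxing.EncodeHintType;
import com.google.zxing.client.j2se.MatrixToImageWriter;
import com.google.zxing.common.BitMatrix;
import com.google.zxing.qrcode.QRCodeWriter;
import com.google.zxing.qrcode.decoder.ErrorCorrectionLevel;
import java.nio.file.Paths;
import java.util.EnumMap;
import java.util.Map;

public class QrEccSamples {
    public static void main(String[] args) throws Exception {
        String url = "https://example.org/landing"; // hypothetical target URL
        // Generate one 300x300 PNG per ECC level: L, M, Q, H.
        for (ErrorCorrectionLevel level : ErrorCorrectionLevel.values()) {
            Map<EncodeHintType, Object> hints = new EnumMap<>(EncodeHintType.class);
            hints.put(EncodeHintType.ERROR_CORRECTION, level);
            BitMatrix matrix = new QRCodeWriter()
                    .encode(url, BarcodeFormat.QR_CODE, 300, 300, hints);
            MatrixToImageWriter.writeToPath(matrix, "PNG",
                    Paths.get("qr-ecc-" + level + ".png"));
        }
    }
}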

4.3.5 QR code safety (scams/malicious use)

There is a rise in QR codes that point to fraudulent sites. One of the warning signs seems to be a sticker with the code, rather than a code embedded in an advertising poster.
Some advertisers and marketers will randomly place QR codes on billboards, the sides of buildings, on floor tiles, or anywhere else they can think of, to make someone curious enough to scan the QR code to find out if it is a web link, a coupon, or a code for free products or some other goodie. Many people will readily scan any code they find in the hope that it is associated with a prize of some sort.
Most scanning applications will recognize the fact that the decoded message is a link and will automatically launch your smartphone's web browser and open up the link. This saves you the hassle of having to type the web address into your phone's tiny keyboard. This is also the point where a hacker can take advantage.
Hackers have discovered that they can also use QR codes to infect your smartphone with malware, trick you into visiting a phishing site, or steal information directly from your mobile device.
All a hacker has to do is encode their malicious payload or web address into QR code format using free encoding tools found on the internet, print out the QR code on some adhesive paper, and affix their malicious QR code over the top of a legitimate one (or e-mail it to you). Since the QR encoding is not human readable, victims who scan the malicious QR code will not know that they are scanning a malicious link until it is too late.

4.3.6 Wounded codes

QR codes may be naturally distorted by water, dust, worms, etc., or a hacker can intentionally apply different distortions in an attempt to make the code unreadable. Such codes are called wounded codes. In either case, you should check whether all kinds of such codes can still be scanned and decoded correctly.

Conclusion
Test QR codes over and over again with multiple apps and phones before releasing to the public. The bottom line is test, test, test as closely
as possible to where, when, and how regular people with ordinary
technology will be scanning the QR code.

> about the author


Ravi Kumar Bhadravathi Narasimha is a Senior Technology Specialist with the Information Technology Solutions & Services Testing Center of Excellence Services at Honeywell, Bangalore, India. He is a master's graduate from IIT Kanpur, UP, India, and has a BE (CSE) from SDM College of Engg & Technology, Dharwad, Karnataka, India. He has almost 12 years of experience in software quality processes and testing at Honeywell. He is an ISTQB Foundation Level Certified, ITIL Foundation Level Certified, and Six Sigma Green Belt Certified Professional. He is a Six Sigma, lean, and agile testing expert and a Six Sigma techniques and tools trainer. He is actively involved in building and deploying testing strategies for various platforms such as ERP (PeopleSoft), CRM (Siebel, SFDC), and BI (Cognos, OBIEE), and emerging technologies such as Mobility, Cloud, Analytics, Voice, Responsive Web Design, and Wearables. He has attended design thinking workshops and has expertise in deploying design tools in problem solving and usability testing. He has authored and published several testing articles at QAI conferences and in online magazines such as Testing Experience and Testing Circus.

Book Corner

NEW

Book Review:

Personal Kanban:
Mapping Work | Navigating Life
Authored by Jim Benson, Tonianne DeMaria Barry
Published by CreateSpace Independent Publishing Platform. 2011. 216 pages. Soft Cover. US$24.95
This is one of those small, readable books that has great mileage. The two authors do a great job in translating the Kanban concept into an actionable approach for personal use (Personal Kanban, PK). The same Lean-inspired concept, which worked successfully in the automotive industry and also made its way into the IT industry, can be applied to our personal workload. While Kanban could be explained as simply as "visualize your work and limit your multi-tasking (work in progress)", the book adds a lot of value through the authors' experience of situations in which they applied Personal Kanban. These real-life stories make it easier for readers to understand where the different aspects of Kanban come from and why they are valuable.

It seems that in our current modern culture it is hard to separate personal from business life, and this should also not be the goal. On the contrary, we should work towards the goal of enjoying our work hours as much as our personal hours. One big factor is to manage and improve the way we handle our workload. My usage of Personal Kanban is natural: in some parts of the year, when my workload increases too much, I use Personal Kanban, and in the other parts of the year I don't use it.
If you want to give Personal Kanban a try, this is the book for you.
Maik Nogens

New Releases:
Testing in Scrum:
A Guide for Software Quality Assurance in the Agile World
Authored by Tilo Linz

Published by Rocky Nook. 2014. 240 pages. Soft Cover. US$39.95

The Software Test Engineer's Handbook:
A Study Guide for the ISTQB Test Analyst and Technical Test Analyst Advanced Level Certificates 2012
Authored by Graham Bath, Judy McKay

Published by Rocky Nook. 2nd Edition. 2014. 560 pages. Soft Cover. US$49.95


By Rudolf de Schipper & Abdelkrim Boujraf

Test-Driven Developments are Inefficient;
Behavior-Driven Developments are a Beacon of Hope?
The StratEx Experience (A Public-Private SaaS and On-Premises Application) Part I
At StratEx (www.stratexapp.com), apart from developing the StratEx application, we obviously worry about quality and testing the app. After all, we are committed to providing quality to our users.
StratEx uses a concept of code generation to shorten the development cycle, thereby being able to quickly implement new features and functionality. It also provides for a nice uniform user interface, because we have kept the code generation as simple as we dared to. So it is not very easy to create a (generated) screen that looks very different from the others.
The code generation concept gives us reliable code (the part that is generated, at least), leading also to reduced testing time. Still, next to coding efficiently, we wanted to find ways to test efficiently as well. The obvious (from our perspective at least) solution was to automate and to generate our tests. While this sounds easy, in reality it is not so obvious. We spent quite some time figuring out what the best testing strategy could be and how to implement it (using test automation).
At first we settled for user interface (UI) testing, using Selenium. We hand-recorded some testing scenarios and included them in our continuous integration build process (I will talk about this in another post). It helped in deploying builds that were smoke-tested, but it did not help at all when we made changes to the screens. And this was exactly what we were doing all the time, because it was easy to do with our generated code and we needed this flexibility for our users! And, of course, there is no way that we could compromise on this, not even to ensure our code was tested automatically. Yes, you are reading this right: we dare to deploy code that is not fully end-to-end tested! We prefer to deploy often and fast, with the risk that our users find a bug from time to time.
Still, we were not happy about this, so we kept on looking for better solutions. A long investigation ensued, looking at better (faster) ways to test, ways to generate tests, and ways to improve our hand-written code, reading many books and articles on testing (see the books and articles lists below). Even today the search is not over, and we still have not managed to generate our tests (fully). These days, however, we do run automated tests before deploying a new build.
Meanwhile, we would like to share some observations on the various test/development approaches we have encountered:

TDD (Test-Driven Design/Development)

The basic premise of the TDD methodology is that your development is essentially driven by a number of (unit) tests. You first write the unit tests, and then run them against your (not yet existent) code. Obviously the tests fail, and your next efforts are directed at writing code that makes the tests pass. TDD and its strong focus on unit testing is, in our opinion, an overrated concept. It is easily explained and therefore
42

Testing Experience 27/2014

quickly attracts enthusiastic followers. The big gap in its approach is that (unit) tests are also code. And this code must be written and maintained, and it will contain bugs. So if we end up on a project where developers spend x% of their time writing new (test) code and not working on writing production code, we have to ask what the point of this is. In our view, you are just producing more lines of code and, therefore, more bugs.
Another problem or weakness of TDD is that there are a lot of bad examples of TDD around. Many sites that advocate TDD give sample code that essentially goes like this: you have a unit test that needs to test a new class. The unit test usually sets a property on the class to be tested and tries to read back this property. With any class of reasonable size, this will quite quickly give a nice set of unit tests. But let's back up a bit: what exactly are we testing here? Well, all things being equal, in such scenarios we are testing whether the code is capable of accepting a value and of retrieving this value afterwards. This comes down to testing the compiler or the interpreter, because, as a developer, the amount of code you wrote (and its complexity) to achieve this behavior is close to zero. In other words, such unit tests only provide a false sense of security, as in the end you are not testing anything.
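To make the criticism concrete, this is the kind of test we mean, boiled down to its essence (a hypothetical class, not from our code base):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Hypothetical class under test: nothing but a getter/setter pair.
class Customer {
    private String name;
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

public class CustomerTest {
    @Test
    public void setNameThenGetNameReturnsSameValue() {
        Customer customer = new Customer();
        customer.setName("Alice");
        // This only exercises the compiler/runtime, not any real logic.
        assertEquals("Alice", customer.getName());
    }
}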
One argument against this is that over time such getter/setter code may evolve into something more complicated, and then the unit test code becomes useful to avoid regression. Our experience shows, however, that a) this rarely happens at a scale large enough to make the initial effort worthwhile, and b) if you are making such a drastic change to your code, moving from elementary getter/setter pairs to more involved computations, it is very likely that you want your unit test to break.
What is the moral of this story for unit tests? Should we abandon them altogether? No, but we maintain that it requires some thought about what exactly you want to unit-test. For us, 100% unit test coverage means we are wasting effort.
Let us make another point. TDD as a concept is not bad, in the sense that it forces you to think about what you actually need to build and how you can get it accepted (tested).
We believe, however, that its fundamental approach puts too much power in the hands of the developer. It strongly invites the developer (under time pressure) to build something that passes the test, and that is all. So if the test is wrong, the developer is not to blame! We believe that this undervalues the strengths of a good developer and gives anyone with a code editor the chance to position himself as a developer. This cannot be right.
Looking into software testability, we found another issue that we could not accept: the burden that the approach of producing testable software puts on your architecture. To make a system testable is one thing, but enforcing rock-hard principles on the architecture (Inversion of Control, Dependency Injection, or a very strict separation of layers) in our opinion only dramatically increases the complexity of the code and, very importantly, serves no purpose for the end objective of the system, namely to solve a business problem. For highly complex systems, such approaches are probably defensible and even very good. However, in the fast-paced, ever-changing world we live in, the least of our problems is whether a software architecture can be sustained for ten years if we already know that in two years' time the entire business system will be obsolete. We may even argue that in two years' time the insights on what a good architecture should be will have dramatically changed. So why bother with this? We should concentrate on software that works, and preferably ships in record time.

So we concluded that any testing methodology that requires extensive re-engineering of what is basically a workable and dependable architecture should be looked upon with a certain suspicion. As a consequence, we do not use unit tests that require our code to be able to work without a database. We do not use mocking, with all its complexities, and we do not spend time on making our classes and objects independent of each other. It does not make for the most correct code, we know. However, we do not mind. If tomorrow we find a better way to do it, we will change our code templates and simply re-generate our application code (well, most of it). So we are not overly worried about having the right architecture to begin with; in fact we have already changed it twice, but that is another blog.


So does this mean the end of test-driven development? Not at all. There
are things you can test very well with unit tests. Plus, there is a new descendant that we have also investigated, and which shows promise for
other areas we would want to test: behavior-driven development (BDD).
BDD has the same initial outset as TDD, in the sense that it starts the
development process with the definition of the tests the future application will need to pass to be accepted. But BDD is more appropriate for
this task because it seems to focus more on the functionality of a system
than on how it should be built. So it is less prone to the criticism we
have about TDD. For one thing, it provides for a way to bridge the gap
between users and developers by using a specific language in which
to specify tests (or acceptance criteria, if you wish). This language,
Gherkin (github.com/cucumber/cucumber/wiki/Gherkin), is so simple
that the learning curve is as flat as it comes, meaning that everyone
can be taught to understand it in record time. Writing proper Gherkin
requires a bit more time.
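As an illustration, consider the following scenario (a hypothetical example, not taken from a real specification). Reading it requires no technical background at all, while writing it well takes practice:

Feature: Project budget approval

  Scenario: Budget within the approved limit is accepted
    Given a project with an approved budget limit of 100000 EUR
    When the project manager submits a budget of 90000 EUR
    Then the budget is accepted
    And no further approval is required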
For us, its main advantage is that Gherkin provides for a way to communicate the functionality of a system at a level that is understandable
to a developer. Its main downside is that you will end up with A LOT of
Gherkin to fully describe a system of a reasonable size.
In the end, this is the main criticism we have about most of these
methodologies (UML included). If you have a system that goes beyond
a simple calculator (the usual example), no modeling language (as they
all are, in a way) is powerful enough to describe a full and complete
system in such a way that you can understand and describe it more
quickly than by looking at the screens and the code that implements
these screens.
So the search goes on…

Referenced books

The Cucumber Book (Wynne and Hellesøy)
Application Testing with Capybara (Robbins)
Beautiful Testing (Robbins and Riley)
Experiences of Test Automation (Graham and Fewster)
How Google Tests Software (Whittaker, Arbon et al.)
Selenium Testing Tools Cookbook (Gundecha)
Model-Driven Software Engineering (Brambilla et al.)
Continuous Delivery (Humble and Farley)
Domain-Specific Languages (Fowler)
Domain-Specific Modeling (Kelly et al.)
Language Implementation Patterns (Parr)

> about the authors


Rudolf de Schipper has extensive experience in
project management, consulting, QA, and software development. He has experience in managing large multinational projects and likes working in a team. Rudolf has a strong analytical
attitude, with interest in domains such as the
public sector, finance and e-business.
He has used object-oriented techniques for design and development in an international context. Apart from the management
aspects of IT-related projects, his interests span program management, quality management, and business consulting, as well as
architecture and development. Keeping abreast with technical
work, Rudolf has worked with the StratEx team, developing the
StratEx application (www.stratexapp.com), its architecture, and the
code generation tool that is used. In the process, he has learned
and experienced many of the difficulties related to serious software design and development, including the challenges of testing.
LinkedIn: be.linkedin.com/pub/rudolf-de-schipper/3/6a9/6a9
Abdelkrim Boujraf owns companies developing
software like StratEx (program and project management) and SIMOGGA (lean operations management). He has more than 15 years of experience in different IT positions, from software
engineering to program management in management consulting and software manufacturing firms. His fields of expertise are operational excellence and collaborative consumption. Abdelkrim holds an MBA as well as a
master's in IT & Human Sciences. He is a scientific advisor for the Université Libre de Bruxelles, where he is expected to provide occasional assistance in teaching or scientific research for spin-offs.
LinkedIn: be.linkedin.com/in/abdelkrimboujraf

Read Part II of this article in the December issue (No. 28)!


Performance
Column by Alex Podelko

The Skills Performance Testers Need and How to Get Them
Periodically I see pretty vigorous discussions about the skills needed
by performance testers. It looks like most experts agree that performance testing requires more skills and knowledge than just creating
and running scripts using a particular load testing tool. While it is
still possible to imagine a performance tester in a large corporation
who only creates scripts and mechanically runs them while other
performance experts monitor the system and analyze results, I do not
think there are many prospects for this person, nor for the approach.
Systems have now become so complicated that the sum of the views
of specialized experts does not give the whole performance picture.
Thinking about the skills needed for performance testing, the following
areas come to mind as a minimum in addition to load testing proper:
What is going on with the system?
Monitoring and performance analysis.
We see an issue. What should we do?
Diagnostics, tuning, and system performance engineering.
Tuning doesn't help; is there something wrong with the application?
Software performance engineering.
What if?
Modeling and capacity planning.
And, of course, how can we get it all done?
Communication, presentation, and project management.


You probably need to know something about all these areas to be a good
performance tester (often more qualified professionals in this area are referred to as "performance engineers" or "performance architects", even if performance testing remains their main focus – although the use of these terms varies). You do not need to be an expert in, for example,
database tuning most companies have DBAs for that but you do
need to be able to speak to a DBA in his or her language to coordinate
efforts effectively, or to raise concerns about the performance consequences of the current application design. Unfortunately, this is not easy – you need to know enough to understand what is going on and
communicate effectively.
The question is how to get such skills. Through constant self-learning
and gaining experience gradually? Yes, of course, but that takes a lot of
time. Moreover, many areas are pretty hard to jump into from scratch.
You need to gain some basic understanding before you will be comfortable enough to learn further on your own. Go to a class? Definitely go
to a class for performance testing and for your main tool. But what
about the many other different products you are working with? This
might mean several week-long performance-related classes for each
product. But these are developed for specialists making a living tuning
these particular products and you do not have time to go to all these
classes and do not normally need to go into so much depth. Talk to an
expert? Sure, if you find one around. Performance experts are scarce
and busy, so you had better have some well-prepared questions, which
is hard to do if you only know a little about the subject.

When you have gone far enough along the road, you will fall into
another trap. You already know enough that basic training will not be
beneficial, but there are almost no advanced classes at all for performance testers. When you go beyond the basics, things such as details
of environments, tools, systems, applications, etc. become so different
that it makes no sense to create a class for specific combinations. You
know areas where you need more information, you need to verify
your approaches and practices against other experts, you need more
advanced tips and tricks, and you need to find somebody you can
discuss your problems with.
I believe that a good conference is a solution in both cases. Somebody
digests information and presents it back to you. Not that it is absolutely ideal, as the quality of the presentations and presenters varies, but it is probably still the most effective way when you need to jump into
many different topics.
However, there is no perfect event for a performance tester. I believe
that the closest is the Performance and Capacity conference run by
CMG (www.cmg.org) – a practical conference devoted to performance
engineering and capacity planning with a strong performance testing
track, although the emphasis of the conference is more on performance
than on testing.
Workshop on Performance and Reliability (WOPR, www.performanceworkshop.org) is probably the only event devoted exclusively to performance testing (and some adjacent areas), but due to its format it is
limited to 20-25 people, by invitation only.
There are many great testing conferences such as STAR conferences
(www.sqe.com/Conferences), Agile Testing Days (www.agiletestingdays.
com), or CAST (www.associationforsoftwaretesting.org) where you may
find some presentations related to performance testing but they are
rather few and far between.
A little more performance-related material can be found at the Software Test Professionals (STP) conference (www.stpcon.com) (I still remember the time when STP stood for "Software Test and Performance") but
the emphasis is much more on testing than on performance, and there
is usually not much on performance engineering.

The Velocity conference (velocityconf.com) is the primary event for web performance. At Velocity you see quite a few performance testers and many vendors showcasing their performance tools, but very few sessions actually touch on classical performance testing. Surge (surge.omniti.com) is another good web performance and scalability conference – with a stress on scalability – but you probably won't hear much about testing there.
There are several more specialized and academic conferences related
to different aspects of performance, which you could consider if you
are interested in a specific aspect of performance, but testing aspects
are not usually covered.
And, of course, there are many vendor events covering their particular
products, which may interest you if you are using these products.
Have I missed any good events related to performance testing?
Let me know if I have by sending an email to me at
apodelko@yahoo.com.

> about the author


For the last 17 years, Alex Podelko has worked as
a performance engineer and architect for several companies. Currently he is a Consulting
Member of Technical Staff at Oracle, responsible
for performance testing and optimization of
Enterprise Performance Management and Business Intelligence (a.k.a. Hyperion) products. Alex
periodically talks and writes about performance-related topics, advocating tearing down silo walls between different groups of performance professionals. His collection of performance-related links
and documents (including his recent papers and presentations) can
be found at www.alexanderpodelko.com. He blogs at www.alexanderpodelko.com/blog and can be found on Twitter as @apodelko.


Alex currently serves as a director of the Computer Measurement
Group (CMG) www.cmg.org, an organization of performance and
capacity planning professionals.


By Venkatesh Sriramulu, Venkatesh Ramasamy, Vinothraj Jagadeesan & Balakumar Padmanaban

Mobile Test Automation

Preparing the Right Mixture of Virtuality and Reality


Introduction
Mobiles are no longer "talk and text" devices, but intelligent companions that provide fully-fledged entertainment capabilities, financial services, and enterprise mobility. IDC predicts that smartphone shipments will reach 978 million this year. According to Forrester Research, by 2016 smartphones and tablets will put power in the hands of one billion global consumers. Unlike with web applications, user experience has become a key driver of the success of mobile applications. As handsets grow to support business-to-consumer, business-to-business, and business-to-employee applications, users expect performance to match what they have experienced with their laptops and personal computers. Virtual devices can be functionally automated, but the user experience or performance cannot be measured. To solve this to some extent, remote device testing using the cloud was introduced. However, the touch-and-feel experience requires a sixth sense, which is not on the market at this moment. Given the options, and the fact that no option is a complete solution, let's dive deeper to prepare the right mixture of virtuality and reality.
Mobile test automation can be classified, based on different levels of virtualization, into four methods (see Figure 1): browser add-on-based automation, simulator/emulator-based automation, remote device-based automation using the cloud, and real device-based automation using bots.

Figure 1. Mobile Test Automation Methods

Mobile test automation based on browser add-ons


This is applicable only for web-based mobile applications. Browsers like
Safari, Mozilla Firefox, and Google Chrome provide browser add-ons
that can render web-related contents. This approach leverages user
agents that come inbuilt within them. The user agents help render
the specific web content that would be displayed on the device onto
a regular desktop browser. This can be exploited for automation by using popular tools such as QTP, Selenium, or RFT – each of which supports all desktop browsers.
Advantages: There are a host of open source automation tools/frameworks readily available on the market and this is the cheapest and
easiest method of automation.
Disadvantages: Only functional automation is possible. Device compatibility, screen resolution and performance parameters cannot be
measured using this. Native or hybrid apps cannot be tested. Only a very
limited QA confidence level can be established with this automation.
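As a sketch of how this approach can be automated, the snippet below uses the Selenium WebDriver C# bindings to override the browser's user agent so that a desktop Chrome renders the mobile variant of a (hypothetical) site; the user-agent string and URL are placeholders for whatever device profile a project actually targets:

using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

class MobileUserAgentDemo
{
    static void Main()
    {
        var options = new ChromeOptions();
        // Impersonate a mobile browser; a real project would copy the exact
        // user-agent string of the device/OS version it targets.
        options.AddArgument("--user-agent=Mozilla/5.0 (iPhone; CPU iPhone OS 7_0 like Mac OS X) AppleWebKit/537.51.1 (KHTML, like Gecko) Version/7.0 Mobile/11A465 Safari/9537.53");

        using (IWebDriver driver = new ChromeDriver(options))
        {
            // The desktop browser now receives the mobile content, which can
            // be driven with the ordinary WebDriver API.
            driver.Navigate().GoToUrl("http://m.example.com/");
            driver.FindElement(By.Name("q")).SendKeys("functional test input");
        }
    }
}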

Simulator/emulator-based mobile test automation


A handset simulator is a software application that mimics the typical hardware and software features of a mobile device in
its real environment. Simulators and emulators are available on the
market for all OSs and come with compatibility over a wide range
of devices. Automating the simulators has less latency than real devices connected to the local network or in the cloud. Depending on
the application, this latency should be carefully considered when the
application runs on real devices, to avoid negative effects in the application. Most of the current simulators are free, and mobile phone
manufacturers have made enormous efforts to ensure their platforms
are easy-to-test and that there is a wide range of solutions available.
The tools to automate them are also free; the quality of these tools is
very high and they are very reliable.
Advantages: A big advantage of a simulation is the level of detail it
provides that is not experimentally measurable with the current level
of technology. The simulators can be easily automated and there are
industry-ready frameworks to automate these. Different interrupts and device-specific characteristics can be tested, as well as functional automation. Simulation testing is cheaper and faster than performing multiple tests on the design each time.
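For illustration, one common way to automate an emulator is through a WebDriver-compatible endpoint such as the one Appium exposes. The sketch below assumes a running Android emulator and a local Appium server; the capability values and app path are examples only:

using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Remote;

class EmulatorSmokeTest
{
    static void Main()
    {
        var capabilities = new DesiredCapabilities();
        capabilities.SetCapability("platformName", "Android");
        capabilities.SetCapability("deviceName", "emulator-5554");
        capabilities.SetCapability("app", @"C:\builds\MyApp.apk"); // hypothetical build path

        // Appium speaks the WebDriver wire protocol, so the standard
        // RemoteWebDriver client can drive the app on the emulator.
        IWebDriver driver = new RemoteWebDriver(
            new Uri("http://127.0.0.1:4723/wd/hub"), capabilities);
        try
        {
            driver.FindElement(By.Id("login")).Click();
        }
        finally
        {
            driver.Quit();
        }
    }
}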
Disadvantages: It is critical to bear in mind that simulators do not
replace on-device testing and they can present issues that do not exist
in the real device and vice versa:
Simulation errors: The first of these disadvantages is simulation
errors. In simulators, we are usually programming using theories
of the way things work, not laws, and theories are often not 100% correct. An incorrect keystroke can alter the results of the simulation. We first need to run a baseline to prove that it works.
Hardware-software differences: Another aspect of testing on a
simulator is the difference between software and hardware. Simulators do not reflect the specific hardware and software features of
each supported device.
Performance: Given the gap between the processing power of the PC running the emulator and that of the handset or smartphone being used for testing, with its limited CPU and memory, performance on the emulator cannot be guaranteed.
Security: People are sensitive to data, such as bank account numbers that remain on handsets, or if passwords are displayed on
the screen. The security designs vary across each handset model.
Testing for these types of security concerns in simulated environments is not a good use of time because it is the behavior of the
actual handset that needs to be tested.

Remote device-based mobile test automation


using the cloud
If you have an application that targets multiple handsets, all with different form factors, technical specifications, and service providers, how
can you go about testing your applications? It is simply not feasible
to acquire all the phones you need to test on. And even if your company could acquire all the phones you need, it would take a lot of effort to
perform the testing activities on all the phones. Not all handsets have
the same security designs, so each device must be individually tested.
Cloud-based remote device testing solves this problem by utilizing a
wide selection of actual, working mobile device hardware and software
accessible via the web. The ability to automate from anywhere is a
great advantage, because all that is needed is connectivity to the cloud
lab. The automation-from-anywhere feature is a distinct advantage
when using tools such as Perfecto Mobile and Device Anywhere. They
enable developers and testers, located anywhere in the world, to access a comprehensive range of the latest mobile handsets and tablets
online. And because the entire infrastructure resides within a network,
testing teams have dedicated connectivity performance equivalent to
their current local environment. Some organizations have a similar lab
structure created locally and not in the cloud.
Advantages: It is easier to expose performance defects, as well as
defects that are the result of the handset itself or its environment.
Crashes and memory leak issues which cannot be found on an emulator can be found using this automation. Metrics like processor utilization,
can be found using this automation. Metrics like processor utilization,
memory utilization, application launch time, battery usage, network
traffic, and network latency can be measured. Interoperability testing
is possible if a carrier test lab has been established. Commercial tool add-on mechanisms, such as HP QTP and IBM RFT, are also available for test engineers who are familiar with industry-wide products.

Disadvantages: The major disadvantage is the licensing cost. Many companies provide pay-as-you-use services, but the investment is still very high compared with any of the above automation approaches. Moreover, continuous investment is required for testing updates to the mobile applications. Also, certain types of testing, such as Bluetooth and some interruptions, cannot be tested. This may be significantly closer to real device testing, but it is still not exactly real device testing.

Real device-based mobile test automation using bots
Real device automation refers to automating the tests that are done
by a manual tester on a real mobile device. This cannot be deemed
impossible, but sounds like an alien technology. However the grassroots
of this technology are already out of the ground.
In 2012, T-Mobile introduced Tappy, the first automated phone-testing
robot. Tappy presses buttons, pushes rollerballs, and navigates touchscreens in the same way a regular phone user would. It has been
programmed with countless usage scenarios that customers might

experience in their everyday lives, including ways to test the keyboard, user interface speeds, battery life, music, voice calls, gaming, text messaging, email, web browsing, and app downloads.
Jason Huggins, one of the co-founders of Sauce Labs, has designed a
robot that can play Angry Birds. This robot, named Tapster, is made
out of 3D-printed plastic, powered by Arduino, and completely open
source. And Tapsterbot and Appium are to work together more closely
in the future.
The OCULUS robot has been built at Intels headquarters in Silicon
Valley and uses two fingers with rubbery pads on the ends to crisply
tap and swipe with micrometer precision. Intel built OCULUS to try
to empirically test the responsiveness and feel of a touch screen to
determine if humans would like it.

Figure 2. Mobile test automation methods – comparison of key parameters: virtualization, closeness to real devices, easiness to automate, cost of testing, speed of automation, quality assurance

Tailoring the automation methods to different testing phases

As we saw above, no method is a complete one, and each has its own advantages and disadvantages. It is very important to plan for the right methods at the right phases of testing:
Unit testing automation: Unit testing automation is functional


and is repeated several times due to multiple deployments.
Browser add-ons automation or simulator automation meet the
need as the cost of rerunning the automation packs is relatively
insignificant.
Black box testing automation: Simulator automation balances
the purpose and the budget, however for high-profile apps, you
can even opt for remote device test automation. The intelligent
approach would be to have both, but to use simulator automation
packs initially and then later switch to remote device automation
once the app under test is stabilized.
Regression automation: Simulator automation is recommended,
but a few rounds of remote device automation can be optimized
based on the criticality once in a while.
Interruption testing automation: Automated interruption testing
is possible with emulators, but the recommendation is to go for remote devices, because several performance features and patterns
can be recorded, which is not possible with emulator automation.
Integration testing automation: Simulator automation is possible,
but if integration is critical and there is a need for performance
parameters like CPU usage, speed, and power consumption patterns, there is no choice but to go for remote device automation.
Performance and security: As this testing has to be done only on
a real device, it is pretty obvious that we have to go for remote
device automation.
Usability testing: Automated measuring of usability parameters
is not yet out of the labs, but will be on the market pretty soon.

However, the cost of testing will not be at all comparable with any of the above methods.

Table 1. Testing spectrum vs. mobile test automation methods. For each test phase – unit, functional, regression, interruption, integration, and performance and security testing – the table rates browser add-on, simulator/emulator, and remote device automation as "not possible", "possible but not recommended", or "recommended".

Conclusion

Technology strives to grow equally at its opposite extremes. While the functional complexity of apps is increasing day by day, the ease-of-use parameter has played a key role in the success of various mobile apps. Correspondingly, as advancements in virtualization keep increasing, efforts to maintain reality have kept an equal pace. While automating on a virtual device saves significant time and money, there is nothing that gives as much quality assurance as real device automation. Every mobile automation strategy must be tuned to multiply the positive advantages of using virtual and real automation methods, while collectively negating the disadvantages of both. A well-orchestrated automation strategy significantly reduces effort and accelerates the time-to-market.

Acknowledgement

We wish to extend our gratitude to Mr. Prasad Ramanujam, Senior Project Manager, Cognizant Technology Solutions, for his constant guidance and continued support, which have helped to shape this article and bring it to fruition.

References

1. bitbeam.org
2. sauceio.com/index.php/2013/04/build-your-own-angry-bird-playing-robot-at-our-first-nyc-robot-hackathon
3. us.pycon.org/2012/schedule/presentation/470
4. www.technologyreview.com/news/522501/intel-robot-puts-touchscreens-through-their-paces
5. www.pcmag.com/article2/0,2817,2409695,00.asp

> about the authors

Venkatesh Sriramulu currently works as a Project Manager at Cognizant Technology Solutions, Chennai, India. He has 9+ years of experience in the IT industry and has focused on software testing and project management throughout his career. He has excellent experience in managing end-to-end enterprise IT project life cycles and setting out roadmaps for streamlining and introducing testing tools in complex business engagements, which is a driving force in improving software test processes. He also specializes in Agile and SOA testing and has come up with many innovative ideas for the improvement of the testing process.

Venkatesh Ramasamy is an ISTQB-certified testing professional and has been working as a Project Lead at Cognizant Technology Solutions, Chennai, India. He has excellent experience in managing the enterprise IT project life cycle through all the testing phases. He has developed many software products for performing end-to-end test management activities which optimize testing costs and improve the quality of the application. He has presented around 16 research papers in areas such as information technology, quality engineering and assurance, embedded systems, and microelectronics and communication.

Vinothraj Jagadeesan has a degree in Computer Application from the University of Madras and has extensive testing experience in niche areas including open-source test automation, testing SOA, and Agile implementation across locations. Having successfully completed more than nine certifications, he is an insurance domain expert and is certified in both the US and the UK, by AICPCU and CII respectively. He is also an HP-certified professional in both Quality Center and QuickTestPro. Currently he is overseeing testing for a leading specialist insurer in the UK in implementing a solution using Scrum-of-Scrums.

Balakumar Padmanaban is an ISTQB-certified testing professional and has been working as a Test Analyst at Cognizant Technology Solutions, Chennai, India. He is an inimitable tester who always blends creativity with the testing approach, using efficient tools and methods to solve critical problem statements. He has focused his expertise on automation testing and web services testing, and is adept in identifying the right technology for the right places. He has a strong passion for the insurance domain and is a LOMA-certified professional.


By Prasad Ramanujam, Alisha Bakhthawar & Mathangi Pollur Nott

Demystifying DevOps
Through a Tester's Perspective
Introduction
Having been involved in analyzing DevOps practices across several
projects for various clients, based on our experience, we find that
there is a certain reluctance within the testing community to adopt
DevOps practices. It can be attributed to several reasons, the most
prominent of which is that testers have very little idea of how DevOps
is likely to affect their routine testing activities. However, with DevOps
positioned to become the next step in going agile [1], testing teams
need to overcome their trepidation and embrace DevOps practices.
This can only be achieved through proper understanding of DevOps
from a tester's perspective, which we endeavor to do with this article.


What is DevOps?
The term "DevOps" generally refers to the rising movement that promotes a collaborative working relationship between the Development team – where the term refers not just to developers, but to all the individuals involved in the development life cycle, including testers, business, PMs, and Scrum Masters – and Operations, which includes DB administrators, support analysts, and networking personnel.
DevOps is a process that allows the project team to deliver speedier results in a predictable way; the name is derived from the abbreviation of "Development" and "Operations". It leads to the fast flow of planned
work to production. The concept behind it is to have developers and
operations teams working closely together so that it ultimately benefits the business, with the key idea being to maintain quality while
maximizing velocity.

Why DevOps?
When the code is not moved to production as soon as it is developed,
IT operations are faced with a pile up of deployments, customers do
not get as much value, and the deployments are often chaotic and not
as organized as they should be. Agile practices have made it easier for
the development teams to quickly create changes, but manual procedures and irregularities between the various processes and tools have
resulted in too high a percentage of errors for the operations teams
to confidently deploy safely to production every change they have
developed. Some of the problems they face whilst trying to deploy
continuously include:
Expensive, error-prone manual processes in deployments, often leading to roll-back and re-release
Slow deployments to development and test environments result in
the project teams being left unproductive
Inability of testing teams to keep up with the pace of changes
being made and, even if they are able to, an increasing number of
defects being identified in later stages of the life cycle

Organizations are often unable to match up to the business needs and market trends, with the result that customer demands are not satisfied
Inefficient utilization of automation, both in testing and deployment processes, resulting in too many manual process overheads

These problems – insufficient automation, inefficient testing, slow deployments, and error-prone manual processes – feed back into the application life cycle and ultimately result in unsatisfied customers.

What happens in DevOps?


DevOps involves a set of processes that enables the code to be production-ready as soon as the Development team has finished with
it, and subsequently routing the feedback from production back into
the application life cycle.
Plan → Develop → Test → Release → Feedback (and back to Plan)

Ideally, once the Development team finishes a small feature that is fully
functional, it gets moved to production as soon as possible through
the Operations team. This involves a code that is continually evolving
and continually integrating. But these processes of continuous integration and delivery do not make any sense without parallel continuous
testing, which leads us onto the next question.

What does DevOps mean to a tester?


When any application code is changed, it is only correct that everything
that might depend on it is retested. Software dependencies are typically extremely complex and bugs in features that look to be completely
unrelated are not uncommon. Even the slightest change might require a

retest of the full code and if everything is tested manually, this process
will need a huge amount of resources and time.
Therefore in short cycles that are part of continuous development, each
part of the software is required to be frequently retested as additional
components or features are added to it. With deployments typically
happening every few days, it is quite impossible to test all the features
once every few days manually.
Automated testing offers an ideal solution, since such coded tests can
be run in a short timespan and as many times as required. Only new
stories with more substantial changes need to be tested manually. As
soon as the testing for one story is completed, automated testing for the
same can be created and added to a central repository. Hence, even
though the number of tests increases continually as the project grows,
the number of tests performed manually remains relatively constant.

How to automate everything


In DevOps, ideally every phase of testing is expected to be automated
with the result that there is traceability every step of the way and
predictable results can be achieved time and time again. While some
of the practices below have been in existence for a long time, several
teams in our experience fail to leverage the benefits of using them
and so there are significant gaps that can be rectified.

Automated unit testing


The base of automated testing should be thorough unit test coverage.
Unit tests will test single units, often in a class with no dependency
on other systems. While it has not yet become a mainstream practice,
automated unit tests have nevertheless been in existence for a long
time and are more often than not considered to be the responsibility
of the Development team.

Automated integration tests


Integration tests bridge different units and often access databases
and other systems. These service tests run longer than unit tests and
should be triggered on the build server only. It is usually performed by the Test team in collaboration with the Dev team, in a grey-box-like manner. There are several tools, such as FitNesse and NUnit, each of which caters to a particular technology and can be adopted based on need.

Services testing
The layer of services and components comprises different units. Individual components of the system are integrated and their web services need to be checked by analyzing the responses for set requests.
It is particularly important when performing these tests in sensitive
portions of an application, such as premium validations in insurance

systems and several automated web service validation tools, such as


GreenHat Tester, SOAP UI, SOAP UI Pro, etc., have been in existence
for a long time.
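As a minimal, tool-agnostic sketch of the idea, a service test sends a fixed request and asserts on the response; the endpoint and the expected field below are hypothetical:

using System;
using System.Net.Http;

class PremiumServiceCheck
{
    static void Main()
    {
        using (var client = new HttpClient())
        {
            // A set request with known parameters...
            var response = client
                .GetAsync("http://localhost:8080/api/premium?age=40&cover=standard")
                .Result;
            var body = response.Content.ReadAsStringAsync().Result;

            // ...and assertions on the response: status code plus key payload fields.
            Console.WriteLine((int)response.StatusCode == 200 ? "status OK" : "status FAILED");
            Console.WriteLine(body.Contains("\"premium\"") ? "premium field present" : "premium field MISSING");
        }
    }
}

Dedicated tools wrap exactly this request/assert cycle in a reusable, data-driven form.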

UI testing
UI testing is a black-box testing technique that inherently tests the
application, the middleware, and the infrastructure. These GUI tests
are the most commonly found tests and are expensive to write and
automate. However, for effective releases to production, it is of utmost
importance to perform all system and regression tests automatically.
One of the keys to DevOps is understanding the actors at each level and the expected level of quality at each stage in the above test cycle. For example, consider the following requirement: "When clicking on Submit, an entry should be created in the database." It is virtually impossible to test this in UI tests. However, we need to make sure that there is a unit test in place that covers this scenario. Therefore any failure here can be traced to one of the tests above, and the corresponding actor (e.g., Test team or Development team) can be held accountable.
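A minimal sketch of such a test might look as follows; the types are simplified stand-ins for a real application's service and data access layer:

using System.Collections.Generic;
using NUnit.Framework;

public class Order
{
    public string Id { get; private set; }
    public Order(string id) { Id = id; }
}

public class InMemoryOrderRepository
{
    private readonly List<Order> _orders = new List<Order>();
    public void Add(Order order) { _orders.Add(order); }
    public int Count { get { return _orders.Count; } }
}

public class OrderService
{
    private readonly InMemoryOrderRepository _repository;
    public OrderService(InMemoryOrderRepository repository) { _repository = repository; }
    public void Submit(Order order) { _repository.Add(order); }
}

[TestFixture]
public class OrderSubmissionTests
{
    [Test]
    public void Submit_CreatesExactlyOneEntry()
    {
        var repository = new InMemoryOrderRepository();
        var service = new OrderService(repository);

        service.Submit(new Order("ORD-001"));

        // Pins the requirement at the layer below the UI, where it is cheap
        // and fast to verify on every build.
        Assert.AreEqual(1, repository.Count);
    }
}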

Automated testing in DevOps spans layers – UI, service, integration, and unit testing – with the number of tests growing towards the unit test base.

Other automated processes

While the above represent automation in different testing phases, several other processes in the development and release cycles can be automated, providing DevOps teams with additional flexibility:
Automated and on-demand build deployment based on the most recent code that is checked in.
Automated server and performance monitoring to loop feedback from the system back to the developers/testers to identify potential performance glitches.
Automated reporting and batch processes that interconnect with multiple systems and help geographically distributed teams to run processes based on need or demand.
Automated infrastructure provisioning and setup that reduce the need for dedicated operations personnel or huge capital outlay.

Conclusion
The aim of DevOps is not just to increase the rate at which the change
occurs, but to successfully deploy these changes into production while
quickly detecting and rectifying errors as and when they occur. To do
this, the majority of tests need to be automated with a specific person
or team held accountable for that Quality Gate. Also, DevOps is a work culture that cannot thrive without agility, encouragement from top management, and understanding between the development and operations teams; but, when properly implemented, it has the ability to provide long-term business benefits.

References
[1] www.ibm.com/developerworks/community/blogs/beingagile/entry/devops_building_on_top_of_agile
[2] Hüttermann, Michael: DevOps for Developers – Chapter 2 (Introducing DevOps)

> about the authors

Prasad Ramanujam currently works as Senior Project Manager at Cognizant Technology Solutions, Chennai, India. Being a 12-year veteran in the IT industry and a recognized leader in agile methodology, he is a frequent speaker at industry conferences on the subject of successfully implementing agile testing techniques in organizations. He has supported many multinational clients in various industries in becoming high-performing businesses by establishing a structured and strategic approach to quality assurance.
LinkedIn: in.linkedin.com/pub/prasad-ramanujam/69/114/614

Alisha Bakhthawar is a Test Lead and has been working at Cognizant Technology Solutions, Chennai, India, for several years. She has a passion for improving testing processes to eliminate mundane tasks and is specialized in automation and web service testing. With expertise in the insurance and financial services domain, she has a special interest in the use of GreenHat for component service testing.
Twitter: @alishatester
LinkedIn: www.linkedin.com/in/alishabakhthawar
Blog: www.secretsofquality.com

Mathangi Pollur Nott is a Senior Test Analyst at Cognizant Technology Solutions, Chennai, India. She believes in breaking the monotony that goes with manual validation and has contributed towards enhancing testing through the use of simple but significant tools. She is specialized in web services testing and component service testing using GreenHat.
LinkedIn: in.linkedin.com/pub/mathangi-p-n/48/931/324

By Torsten Zimmermann & Frank Maar

Always Know What's Going On


Successful Quality Management with Visual Studio IntelliTrace
Software development can be nerve-wracking. As a developer, you often think that the work is completed once the software has been checked in. However, this often does not take into account quality assurance, which may then discover many surprising errors in the software after all, even though the subject of examination has already passed its component tests successfully. Since these tests are often undertaken in a black-box procedure, analyzing the causes may become a tedious task for the developer.
Now IntelliTrace, from the new Visual Studio 2013, can unveil the non-transparent functional behavior of these system tests. Additionally, this article presents you with the latest features of the recently released 2013 version. The article will be rounded off by a look at the 2014 Visual Studio version. This way, you can already explain to your colleagues today where Visual Studio's future will take you.
Quality is expensive. Test teams or even test laboratories require an
accordingly large budget. Requests and clarifications initiated by test
engineers keep developers from doing their actual work. The time
frame is threatened by the large numbers of errors and the customer is
easily convinced by the cheapest offer. Decision-makers in the software
industry often bring up these or similar arguments when discussing
software quality or software testing. They sound almost like excuses
to avoid having to introduce extensive quality assurance strategies.
To make it clear from the start: The introduction of a QM system is
fundamentally a cost factor not to be underestimated. Generally, all
processes of a company are affected when introducing a system and
these processes must adapt to the stringent requirements. The actual acquisition and, especially, maintenance costs of test systems are also often underestimated – irrespective of the line of business. This is also true for the software industry.
Thus, Bill Gates (then CEO of Microsoft) explained as early as the December 2002 issue of InformationWeek: "We employ as many testers as we employ developers. […] When we are preparing a new release of Windows […] over half [the budget] goes into quality control." Consequently, Microsoft has been tackling the following questions for a long time:
How can QA processes be optimized in the software development process?
How must QA tools be constituted in order to effectively support quality assurance measures?
How can we establish comprehensive transparency of quality assurance, so that decisions are made on the basis of good information?

These questions have led to the continuous expansion and further development of tools to support quality assurance in the areas of Team
Foundation Server and Visual Studio. The new Visual Studio 2013 offers
quality engineers in particular new options of working effectively in
software testing. With IntelliTrace, entirely new paths may be taken
within quality assurance, which allow developers to recognize the
causes of unintended effects faster. The bug fixing time per defect is
accordingly reduced.

"A fool with a tool is still a fool!"


Surely you know this expression: it emphasizes that a highly developed tool alone does not guarantee success. This is a sentence you should especially remember when thinking about IntelliTrace and its possibilities. It appears to be that famous all-rounder, the Swiss Army knife – the quantum leap in software development! But here, too, the development and implementation of suitable strategies are the keys to actually achieving the quality goals that have been set. Therefore, in the area of software quality it is important to:
Develop suitable test strategies
Achieve sufficiently precise planning that continuously improves
in terms of prediction quality
Introduce effective and efficient verification processes
Build a well-functioning reporting system with meaningful
reports
Microsoft's Visual Studio gives you the necessary tools to support these
points. Highly developed testing tools such as IntelliTrace presuppose a


profound conception and planning: only with this can such a category of tools develop its full potential. However, illustrating this with an
example of a comprehensive test conception and planning would go
beyond the scope of this professional article. Nonetheless, you will find
some further information regarding this topic in several info boxes.

Brief introduction to Visual Studio

Team Foundation Server

As can be seen in the diagram (Figure 1), Team Foundation Server (TFS) is the central system that integrates the various Microsoft systems, such as Visual Studio and SharePoint, and even third-party products.

Figure 1. The various subsystems of a TFS/Visual Studio development environment: planning, work item tracking, SCM, build automation, continuous delivery (Azure), and feedback management, accessed through role-tailored tools for .NET, Java, web, and cloud clients

Team Foundation Server offers the possibility of depicting the complete


process through Application Lifecycle Management. Requirements
and project management are covered by the Planning function.
Version management lies behind the name SCM (Source Code Management). Work Item Tracking is a central point of the Team Foundation Server, referring to the reproducibility that ranges from the
requirements to the task to the source code and testing. This makes it
possible to understand precisely which requirements were completed
by which source code and when requirements were actually tested.
Continuous Delivery is the continual delivery of added value in the
form of new functionality. This makes it possible to respond faster and
more exactly to customer and market demands. Build Animation
involves the automatic creation and testing of the application and is
usually the second component introduced by customers after version
management. Feedback Management means permanently including the evaluation of users into the development process and making
decisions based on this. The functions are applied in the projects as
needed. You do not have to include all components in all projects.

Demo Image of Visual Studio


For interested developers, Microsoft provides a demo image in which the new Visual Studio features can be thoroughly examined. Of course, this also applies to IntelliTrace in particular: if you want to try the example used yourself, you can download a completed demo image for Visual Studio at aka.ms/ALMVMs. The demo image is called "Visual Studio 2013 ALM Virtual Machine and Hands-on-Labs/Demo Scripts" and contains several descriptions for using IntelliTrace. Additionally, you can also download a free test version of Visual Studio Ultimate at www.microsoft.de/visualstudio and test its applications.


Of course, there are already standard process templates available for Scrum, MSF Agile, and MSF CMMI. These may be adapted to fit personal needs and complemented with additional templates, allowing extensive adaptation to the prevalent development landscape. For the developer, Visual Studio or Eclipse can be used as clients. This way, Team Foundation Server is suitable not only for .NET development, but also for other platforms, ranging from web to Android. Even an iOS developer can make use of Team Foundation Server using the Git integration. In addition, project leaders or product owners and testers can store their files on the server. The convenient reporting system via MS Office supports a high degree of project transparency regarding the aspects of time, budget, and quality critical for success, and delivers the performance indicators relevant for decision-making to the management.

Nonetheless, you draw the biggest advantage from Team Foundation Server by using it as consistently as possible.

The new versions offer improvements in many product areas such as the
.NET frameworks (version 4.5.1), programming languages, the ASP.NET,
many modeling tools, team collaboration, and test management.
Many optimizations took place in details compared to the 2012 version.
This way, test results and work items can now be directly associated
with the relevant program code. Thus, the programmer can be informed about new tasks or test results, which are then displayed in the
context of the relevant code. Debugging for asynchronous code was improved, and Git – the decentralized open source version control system – is now also supported in TFS. Furthermore, there are now migration paths from the Microsoft tool supplements Visual Studio Lightswitch and Webmatrix to Visual Studio.
Visual Studio Lightswitch: Lightswitch is a development environment for data-driven business applications that is now included in
the Professional, Premium, and Ultimate editions of Visual Studio.
The tool simplifies the design of applications that focus on the input,
display, and changing of data in the SQL Server, SQL Azure, or SharePoint. Lightswitch was developed, for instance, to support the rapid
prototyping of business applications. Using the templates provided for
Lightswitch, the complexity of a Visual Studio environment is reduced,
i.e., concealed. There are also tools for generating Visual Basic and C#
code so that the developed business logic code can be implemented
on the entire Visual Studio environment in order to then make use of
Visual Studios advanced development tools. This way, Lightswitch can
be used in the framework of a preliminary study for a large project to
gather deeper insight about the projected use. Since it is possible to
switch away from Lightswitch without migrating to the comprehensive
Visual Studio, the works of the preliminary study can continue to be
used in the framework of the actual development project without a
problem. Lightswitch applications may also be integrated with Office
applications such as Excel, Word, and Outlook, and be deployed as an independent executable program.
Web Matrix: This is an IDE for the development of websites. The development environment is geared towards small development teams.
The IDE supports PHP and ASP. There are migration paths to Visual
Studio and SQL Server.

A Comparison of Visual Studio Editions

By now, there are different editions of Visual Studio – Ultimate/MSDN, Premium/MSDN, Test Professional/MSDN, Professional/MSDN, and Online Professional. This overview shows the most important capabilities in which the editions differ:

Create solutions for web, desktop, server, and phone with a uniform IDE
Transfer apps to the cloud, to the Windows Store, and to the Windows Phone Store, with additional services as subscription benefits
Access to earlier and current platforms and tools of Microsoft, as well as the newest releases, including Visual Studio
Host team projects locally or in the cloud
Organize and define test plans with test case management and explorative testing
Create and manage virtual lab environments for testing with consistent configurations
Improve code quality with a peer code review workflow in Visual Studio
Increase developer productivity through interrupting and resuming tasks when multitasking
Automate tests of user interfaces to examine application interfaces
Find and manage duplicate code to improve architecture
Determine the scope of the tested code with code coverage analysis
Reliably catch and repeat errors that occurred during manual and explorative testing to avoid non-reproducible errors
Collect and analyze diagnostic data about running time in production systems
Perform web performance tests and load testing
Design diagrams at an architectural level and review whether the code implements the architecture
Maximum user number for a Visual Studio Online account

The chart in Figure 2 depicts the roadmap of Microsoft for the Visual Studio product series.

New features in Visual Studio 2013

The Visual Studio 2013 version was presented to the public in November
2013 and has since been delivered by Microsoft. For this year, Microsoft
plans to have quarterly updates for the current version of Visual Studio. The new Visual Studio is projected to be released in late 2014. The
schedule plans to deliver Visual Studio 2014 with the .NET Framework 5.
The current Visual Studio introduced the following four central innovations: Plan, Develop, Operate, and Release.
Plan: Agile Portfolio Management enables a company-wide product
backlog. Requirements can be nested into several levels and allocated
to different teams. If a flat list of requirements is not sufficient in
the product backlog, then this function is very helpful. Kanban was
transferred from car production to software development, making a
continuous flow necessary to offer new functionalities to customers.
With the customizable Kanban board, the status of requirements can be
displayed simply and clearly. Which requirements have been approved,
implemented, tested, and delivered? Questions such as this can then
be answered at a glance. Work Item Tagging offers the possibility of
providing keywords for requirements or tasks and searching for these.

Figure 2. Microsoft roadmap for Visual Studio – Visual Studio 2012 with Updates 1–4 and the .NET Framework 4.5; Visual Studio 2013 with quarterly updates and the .NET Framework 4.5.1; and Visual Studio 2014 with the .NET Framework 5 planned (timeline 2012–2016)


The Advantages of IntelliTrace


Compared to classic software tests, software tests with IntelliTrace have already been able to show the following advantages
in practice:
1. Black-box tests in system tests or system integration tests
almost become white-box tests, which makes it quicker to
reproduce and find error causes.
2. The risk of misinterpretation or non-recognition of all causes
for masked errors is reduced.
3. Defects are fixed faster. Without IntelliTrace, developers can
only draw on the exception information from analysis. With
IntelliTrace, the developer can also examine the steps before
an exception is triggered, which often provide the reason for
the malfunction.
4. The time-to-market in the course of software development
is reduced when consistently using IntelliTrace across all
development phases.
5. Through the newly won transparency in the test and production environment, a better system understanding is achieved
in connection with operation and production-relevant
themes.
6. The need to set breakpoints to gather experience from the
runtime behavior of the system is reduced. Activated IntelliTrace recreates the relationship between the code, variable
contents, and the exceptions thrown from the process in
question, even in retrospect.
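To illustrate point 3, consider the following contrived C# fragment: the defect (an article number that is never upper-cased) enters the program several steps before the KeyNotFoundException is finally thrown, and it is exactly this earlier history that an IntelliTrace log preserves:

using System;
using System.Collections.Generic;

class PriceLookupDemo
{
    static readonly Dictionary<string, decimal> Prices =
        new Dictionary<string, decimal> { { "A100", 9.99m } };

    static void Main()
    {
        // The defective value enters the system here...
        string articleNumber = NormalizeArticleNumber(" a100 ");

        // ...but the crash only happens here. A classic debugger shows the
        // KeyNotFoundException; IntelliTrace additionally shows the earlier
        // call and its return value that led up to it.
        decimal price = Prices[articleNumber];
        Console.WriteLine(price);
    }

    static string NormalizeArticleNumber(string raw)
    {
        return raw.Trim(); // forgets to upper-case - the actual defect
    }
}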

Develop: Team members can exchange ideas on work tasks or source code in a "Team Room". Moreover, events such as check-ins or build actions can be logged. This kind of documentation is especially
helpful for reviews. Team Foundation Server 2013 allows you to choose
between the earlier version management system or Git for setting up a
new project. Git is highly popular for distributed version management
and native support avoids unnecessary migration effort when using Git. Similar to a head-up display, Code Information Indicators display information
that is frequently used, such as references or last changes to a method,
in the editor. The .NET Memory Dump Analyzer makes it possible to
narrow down the cause of a crash faster. Load Testing as a Service
enables load testing with Visual Studio Online in the cloud within
5 minutes without implementing elaborate infrastructures of test
agents yourself.
Operate: The term "DevOps" refers to a closer cooperation between the operations and development departments in order to fix problems in the operation of an application faster in development. The integration
the operation of an application faster in development. The integration
of System Center provides an automatic transfer of incidents to Team
Foundation Server with additional information about performance
problems or bugs.
Release: Up to now, the development process finished with the creation of the application and the question was often raised as to how
the application could be transferred, for instance, to the test, integration, or production system. With the new release management, the
distribution processes necessary for this can be defined and monitored.


Important Documents in the Area of Software


Quality for Planning and Organization
The complexity of test processes is often underestimated. Software quality relies on the following documents in the design of
quality assurance:
1. Test strategy: Determines the general regulations and rules
that apply to all test projects of test objects. Note: the term "test strategy" is also mentioned in the document "test plan". It can, however, be understood to be a kind of set of regulations, or a collection of the permitted procedures, that apply to all tests. Here the term refers to the latter definition.
2. Master test plan: In the master test plan, the relevant test
parameters for a test object are determined across multiple
test projects. These represent guidelines for the development of the relevant test concepts. The master test plan is
especially useful for long-running projects (e.g., one year or
more) or in the context of product development in order to
implement long-term quality strategies. Thus, a master test
plan could, for instance, cover multiple product life cycles
of a test object. The test plan is not suitable to cover several
releases of a test object. However, establishing a master test
plan is not recommended for short-running projects.
3. Test plan: The test plan describes the actual planning and
conception of one or more test levels for a test object. Here
all relevant information about test phases can be found,
from planning to completion/archiving of a test project.
4. Test organization: The development of a test organization
is especially recommended in test centers and test factories.
Test organization determines topics such as reporting system, roles and responsibilities, organizational structures, or
central processes like change management, escalation paths,
etc., and, if necessary, refers to other documents.

IntelliTrace fields of application

IntelliTrace is very similar to the black box known from the aerospace industry. The method can be used successfully for .NET-based applications from version 2.0 of the framework onwards. C# or Visual Basic applications based on ASP.NET, WPF, WCF, or Windows Workflow Foundation, through to SharePoint applications, can all make use of IntelliTrace. However, C++, script languages, and other programming languages not mentioned here are not supported.
The topic of cloud, or rather SaaS (Software as a Service), has been hugely hyped over the last few years. With the new Visual Studio 2013, Microsoft also supports developers extensively in the development of such IT and application systems, and IntelliTrace can comprehensively serve these areas in the new 2013 version. For Windows Azure projects, the software developer can activate IntelliTrace in the options for publishing.
The Visual Studio 2010 version was initially intended for closer coordination between the test and development departments. Beginning with Visual Studio 2012, IntelliTrace can also be used in operations and as an essential component of the so-called DevOps initiative to improve coordination between operations and development.

Important Norms and Best Practices

Luckily, in the context of planning, managing, and organizing a test organization, it is not necessary to reinvent the wheel. It is recommended to use some of the numerous norms; this reduces the design and implementation costs, and you will benefit from the experience of numerous experts in this field:
BS 7925-1: Glossary of Software Testing Terms
BS 7925-2: Software Component Testing Standard
IEEE 829: Standard for Software and System Test Documentation
IEEE 1008: Standard for Software Unit Testing
IEEE 1012: Standard for System and Software Verification and Validation
ISO/IEC 9126: Software Engineering Product Quality
ISO/IEC 15504 (et seqq.): SPICE/Software Process Improvement and Capability Determination
ISO/IEC 25000 (et seqq.): Amended and expanded version of the 9126 standard: Software Product Quality Requirements and Evaluation
ISO/IEC/IEEE 29119 (et seqq.): New series of software testing standards; replaces IEEE 829, IEEE 1008, BS 7925-1, and BS 7925-2
ISTQB: Test best practices and standardized training contents on software quality of the ISQI
TestSPICE: Deployment of a process reference model (PRM) and a process assessment model (PAM) for test processes; deployable in test centers and test factories

Figure 3. Functional principle of IntelliTrace (build turns program code into an executable; IntelliTrace records data at runtime)

The command line tools are provided in the form of the file IntelliTraceCollection.cab and require approx. 13 MB of space on the hard drive. A directory for log files is also necessary; in my example, this directory is called LogFileLocation. Since the size of the log file can be limited, this prevents the disk from filling up. After reaching the maximum size, the log file is overwritten on the FIFO principle (first in, first out).
PowerShell is highly popular among administrators, so a small example of how IntelliTrace can be activated should be helpful. In order to use the IntelliTrace commands, import the IntelliTrace PowerShell module into the PowerShell console using the following command:

Import-Module c:\IntelliTrace\Microsoft.VisualStudio.IntelliTrace.PowerShell.dll

IntelliTrace is started using the command Start-IntelliTraceCollection. You can view its parameters using:

Get-Help Start-IntelliTraceCollection

In my example, I start IntelliTrace by calling it with the application pool, the collection plan, and the log directory:

Start-IntelliTraceCollection "FabrikamFiber.Extranet.Web" c:\IntelliTrace\collection_plan.ASP.NET.trace.xml c:\LogFileLocation

With the help of an internet search engine, the relevant information can easily be researched on the web. Questions can also be directed to me.

One question that is often posed is: Which of the many Visual Studio editions contains this functionality, and who needs it? IntelliTrace is only contained in the Ultimate edition of Visual Studio. However, this edition is only necessary for the evaluation of IntelliTrace data, not for data recording.

IntelliTrace functional principle

IntelliTrace can be used to better describe error situations in development as well as in operation, thus accelerating different phases of the development process.
As illustrated in Figure 1, IntelliTrace is turned on at runtime. This can occur, for instance, via the System Center integration or through the command line tools. The test engineer will, however, generally activate it through the Test Manager, and the developer directly through Visual Studio. More on this later.

The settings for IntelliTrace are specified in the file collection_plan.ASP.NET.trace.xml, which is described in more detail in the Options section.
The current status may be viewed using:

Get-IntelliTraceCollectionStatus -ApplicationPool "FabrikamFiber.Extranet.Web"

You can stop IntelliTrace using:

Stop-IntelliTraceCollection "FabrikamFiber.Extranet.Web"

After that, a trace file for evaluation will be available in the LogFileLocation directory. If IT operations already use System Center Operations Manager (SCOM), integration is available through the IntelliTrace Profiling Management Pack.
The Microsoft Test Manager is the standard tool for the tester to plan and execute tests. Here, IntelliTrace is activated in the data collectors of the test settings. In the case of errors, the IntelliTrace file is automatically attached to a bug in Team Foundation Server. The information from the data collectors reduces the developer's questions to the tester about the conditions under which the error occurred, making troubleshooting considerably faster.


IntelliTrace can also be activated in Visual Studio for normal .NET projects in the settings under Tools, Options, IntelliTrace. Why would a developer use IntelliTrace when there is a debugger? With IntelliTrace, software developers can move back and forth in time, and thus move from the error situation to the actual cause. The debugger often makes it necessary for the software developer to recreate the error situation multiple times; especially with sporadically appearing errors, this precondition is usually not met.

IntelliTrace options

The options are the same for all the activation variants presented previously. In working with IntelliTrace, I was always concerned with the question of whether IntelliTrace has a significant impact on application or system performance. As a matter of fact, IntelliTrace can distinctly slow down the system. Luckily, there are two main settings: "IntelliTrace events only" has a smaller impact on performance, while "IntelliTrace events and call information" clearly reduces application speed. The latter setting certainly only makes sense in the development environment. However, it delivers to the developer the extensive process details and information that are necessary for a rapid cause analysis.
In the Advanced dialog, experienced developers in particular can adjust IntelliTrace to their needs with a focus on log information. Here, the maximum size of the log file, the file location, and the symbol and source paths can be adjusted.
In the IntelliTrace Events settings, you can determine which events are to be recorded. Experience has shown that it is usually enough to begin with the standard settings. Nonetheless, it can be useful to limit the logging of events when focusing on a very specific area. The previously mentioned adaptations can be particularly useful when combined with the last section of the options, because here you can indicate the modules whose events are to be logged.

Transfer of the iTrace file

Not all failures are recognized in the development laboratory. A considerable portion of defects is recognized in the test environment, or even in the production environment, and often these environments are not directly accessible to the developer. This presents a particular challenge to developers in understanding and reconstructing bug statuses. With the help of IntelliTrace, this situation becomes considerably easier, as recordings are possible in the previously mentioned environments and this information is then made available to the development team in the form of an iTrace file.
In the simplest case, the iTrace file can be given to the developer as a copy. However, the combination with Team Foundation Server is much more elegant. The tester can, for instance, indicate in the data collectors that an iTrace file should be automatically added as an attachment to a bug. The bug is then assigned to the developer, who opens the iTrace file and starts the debug mode in Visual Studio. Here developers can replay the specific process flow line of code by line of code in their own environment, with the results in question from the production or test environment. It is apparent that this allows a fast analysis of causes, even for complex error situations or so-called masked errors.


Once the causes have been determined, making the changes can often be done quickly. After the code correction has been checked in, the fix flows into the next build and is released accordingly after successful processing through quality assurance.

IntelliTrace evaluation

The IntelliTrace file is opened in Visual Studio and first shows the exceptions and events that occurred. From the events, the developer can conclude which actions were carried out on the user interface, which files were opened, or which database calls were made before the error occurred. Using this information, the developer can usually narrow down the area very quickly and determine the causes.

Figure 4. Example of an IntelliTrace file

In the example, an error message was displayed on the web interface when a service ticket was called. The exception System.NullReferenceException occurs three times in the exception list. When clicking the first exception, you can see in the Autos window that a null value was assigned to the attribute AssignedTo; this leads to the NullReferenceException. A glance at the database shows a null value in the column AssignedTo for this dataset. Thus, using the IntelliTrace file, the developer needs neither customer data nor a running application to fix the problem within a short period of time.

Evaluation of complex problems

IntelliTrace also offers a debug mode. When IntelliTrace is activated with call information, the developer can debug at source code level, view the values of variables, and even move back and forth in time in Visual Studio, as illustrated in Figure 5. In this figure, a red arrow is shown on the left of the centrally positioned editor window; this arrow marks the current position in the program. Using the double arrows pointing upward, the developer can, for instance, move back in time in order to find the cause of the error. To the left lies the IntelliTrace window with the corresponding exceptions and additional information about the relevant event. Underneath the editor window, variables and their contents are displayed for the current cursor position in the editor window. Next to that, the output window shows the screen output for the current cursor position.
It is above all the relationship between the error messages, the current position in the code, and the respective variable contents that delivers new insights into the system's behavior. This information can be deduced without IntelliTrace, but the time required is considerably higher.

Figure 5. IntelliTrace in debug mode

Test Case Derivation on the Basis of Test Coverage and Test Design Technique

Even today, test cases are usually derived "somehow". All of a sudden they are there! At least this is repeatedly my impression when I ask employees from the QA department why a test case was created in precisely this way and not another. Often not even the creator of the test case can comprehensively answer this question. The reason is frequently that the method for test case derivation was not defined during planning and design. In reality, however, the test coverage and the test design technique form two important parameters in shaping the derivation procedure.


Figure 6. Interaction of different test and derivation parameters (test level: component test, component integration test, system test, system integration test, acceptance test; test discipline: operation test, document review, functional test, load test, performance test, security test, interface test; test coverage type: equivalence analysis, checklist, CRUD, decision points, boundary value analysis, path coverage, profile, review; test design technique: use case test, data combination test, data lifecycle test, functional analysis, business process test; test type: black-box test, white-box test, guideline test, real-life test, semantic test; test basis: decision tables, use cases, error messages, data masks, ERD models, interface documentation, operation procedure documentation, infrastructure plans)

Experience has shown that the efficiency of the verification process changes depending on the combination of test coverage and design technique. Evaluation of the efficiency also greatly depends on the defined quality goals. However, this goal definition is not always carried out in test projects. Yet without it, no optimal choice of derivation procedure can be made: there are combinations of test coverage and test design technique that cannot provide information about specific quality goals, and thus only create costs without benefits when applied.

> about the authors

Since 1985, Torsten Zimmermann has been developing software applications for business and administration. After completing his degree as Diplom Wirtschaftsinformatiker (1993), he became familiar with quality management within the software life cycle. Since 1995, he has been contributing to international projects in the area of software quality and quality/test management at various corporations, including BMW, DaimlerChrysler, German Railways, Hewlett-Packard, Hoffmann-La Roche, and Logica. Over the years, he has become an expert at the European level. He has developed a risk-based testing approach, which was published in the professional publication QZ, among others, and is now established as state-of-the-art knowledge in the area of software quality assurance. His accumulated experience has led to the creation of T1 TFT (Test Framework Technologies, 2001) for the advent of a new area of testing systems. Torsten Zimmermann is now developing new approaches to enhanced test concepts and test frameworks as T2 TFT (2004) and T3 TFT (2006). In cooperation with a network of universities, this is creating new solutions for rules- and model-based testing systems. Within this, he is also considering near-shore/off-shore concepts in terms of software development. In his role as speaker at conventions and specialist author in well-known magazines, he periodically presents his conclusions, results, and concepts at national and international level.

In his architect role for Application Lifecycle Management at Microsoft Deutschland GmbH, Frank Maar has been advising customers on improving their processes concerning software testing and software development for more than 15 years. Before Microsoft, he began his career at SQL Datenbanksysteme GmbH and Siemens AG as a developer and software tester.


By Mithun Sridharan

Guidelines for Choosing the Right Mobile Test Automation Tool
According to research conducted by Pinch Media Data in 2009, the average shelf life of a mobile application is only 30 days. The number of mobile applications has since exploded, and shelf lives have become even shorter. These shifting demographic trends require software quality assurance teams to recalibrate their approach to software testing and align it closely with both the mobile app development teams and the customer base.
With the variety of applications available and the growing number of features that users demand from them, ensuring the quality of mobile applications is indispensable to both retaining the existing customer base and acquiring new users. Given the short time frame available for software development and quality assurance (testing), test automation becomes a necessity at some point in a company's lifetime, even though alternative strategies exist. An app development company could decide to automate its testing activities for a myriad of reasons, both internal and external.
Regardless of the underlying reasons, once a company has decided to automate its testing activities, a structured approach is required to identify the tools for the automation process. Success in test automation depends to a high degree on the set of tools employed. Given the variety of automation tools available in today's marketplace, selecting the right set of tools to meet a company's unique testing needs can be a stressful task.
When a product is being developed, it is relatively unstable. During those phases, manual testing is a relevant way to quickly verify that the product works as expected. Software testers should use this phase not only to become acquainted with the product specifications, but also to write test cases for verification and validation (V&V) purposes.


Once the product specifications have been finalized, testers should start thinking about how they could automate the test cases.
Software development companies often have to reconcile investing in select tools for specific short-term client projects with selecting tools for the long-term projects and products they are developing, in order to avoid re-tooling and incurring expensive overheads later. The short shelf life of mobile applications in particular poses a management conundrum when forging a coherent tool strategy. In such cases, a scenario-based approach helps managers undertake a coherent investigation of the requirements, prepare their companies for mobile test automation, and make the right tool investments for both tactical and strategic projects.

1. Supported mobile platforms

With any given requirement specification, you need to select the right set of tools that support not only the target operating systems, such as iOS, Android, and Windows, and their different versions, but also the underlying hardware configurations. Mobile applications present several unique challenges that quality assurance teams need to consider when structuring their test efforts. One of the most fundamental issues is to understand how an application (code base) will perform across different operating systems, interfaces, and form factors. Though the major players in the mobile platform market are Google and Apple, developers still need to take account of Symbian and Windows Phone users as well. Even within a single platform, there can be a permutation of software versions and form factors to consider. It is, therefore, extremely important to check the oldest and the newest supported versions of the platforms. A small sketch of such a coverage check follows below.
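Purely as an illustration of the idea, here is a minimal Ruby sketch of a platform/version coverage check; the platform names, version numbers, and device pool are assumptions, not recommendations:

# Sketch: verify that the oldest and newest supported version of each
# target platform is represented in the device pool.
PLATFORMS = {
  "iOS"     => { oldest: "6.0", newest: "8.0" },
  "Android" => { oldest: "2.3", newest: "4.4" },
}

DEVICE_POOL = [
  { platform: "iOS",     version: "6.0" },
  { platform: "iOS",     version: "8.0" },
  { platform: "Android", version: "2.3" },
  { platform: "Android", version: "4.4" },
]

PLATFORMS.each do |platform, versions|
  [versions[:oldest], versions[:newest]].each do |v|
    covered = DEVICE_POOL.any? { |d| d[:platform] == platform && d[:version] == v }
    puts "#{platform} #{v}: #{covered ? 'covered' : 'MISSING'}"
  end
end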

2. Supported application types

Once an initial set of automation tools has been shortlisted, you need to check the types of application that can be managed using these tools. Most tools are so specific that they do not concurrently support native, hybrid, and web applications. Most mobile testing processes have not been developed for a one-size-fits-all approach. It is therefore highly probable that several tools will need to be selected in the automation process chain. Depending on the type of application under test, at least 80% of testing activities could be automated, following Pareto's Law. However, when factoring in an application's functionality on a range of platforms, some amount of ad hoc manual testing is required. Leveraging the right set of tools can help increase efficiency and reduce costs, while providing an objective environment in which to assess the quality of the application and predict users' experience in the actual environment when the application or service is deployed.

3. Source code requirements

For the best testing quality, native mobile applications should have some tool-specific framework bundled within the installer, so that software testers can send instructions to the device/emulator to perform activities directly within the native application. Most conventional browsers have their own web drivers, so testers can test web applications with the help of these browser-specific web drivers. In most cases, however, mobile applications are not delivered to the testing team with their source code or framework, which would allow them to simulate the same functionality on different mobile platforms. In some cases, solutions like the App Package for iOS are available; this module does not deliver test coverage in the same way as a process with source access, but it does provide more capabilities for testing than the leaner application install file itself. Hence, the source code and platform frameworks are significant points to consider, as it is not always possible to gain access to the source code for testing purposes, especially when the testing activities are outsourced to a third party.

4. Application refactoring requirements

The next obstacle in mobile test automation is the requirement to modify the application, i.e., refactoring it to make it testable by the automation tool. The trick with refactoring is being able to verify that the functionality is retained. A testing professional needs to make sure that all changes are verified before and after refactoring. Though it is not necessary to automate this process, doing so can help during subsequent regressions. Refactoring complex applications or code modules is an art, and automating these elements should be performed with the utmost diligence. The selected tool should meet the scalability requirements needed to deliver the expected results at different levels of granularity; it may be necessary to include third-party libraries in the test project, to build a test version of your product, or to modify the existing app version that is delivered for testing.

5. Test script generation

For mobile applications that require extensive test coverage, creating real-time test scripts can pose a significant challenge. Though test automation greatly improves execution efficiency, these efficiency gains come with significant costs, especially when developing a library of test scripts to meet the testing coverage requirements. Automated test-case script-generation tools can further improve efficiency and broaden test coverage by helping to create scripted test scenarios around operational requirements. For scalability, the tools chosen to automatically generate test scripts should support script parameterization. This approach, however, is usually limited by tool capabilities and cannot deliver the same degree of coverage as a programmatic approach, which leverages the coding power and capabilities of the underlying programming language. The programmatic option is not as fast as the automated test script method, but the outcome is more effective and flexible. It is, therefore, necessary to evaluate the resources available in order to choose one approach over the other in the tool evaluation process. A minimal sketch of script parameterization follows below.
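As an illustration only, here is a minimal sketch of a parameterized test script in Ruby (one of the scripting languages named later in this article); the login_to_app helper and the credential data are assumptions, standing in for whatever API the chosen tool provides:

# Parameterized test script (sketch): the test logic is written once
# and driven by a list of data sets, instead of being copied per input.

# Stub standing in for the real, tool-specific driver call.
def login_to_app(email, password)
  email.start_with?("valid") ? :success : :failure
end

CREDENTIALS = [
  { email: "valid@example.com", password: "secret1", expected: :success },
  { email: "wrong@example.com", password: "oops",    expected: :failure },
]

CREDENTIALS.each do |c|
  result = login_to_app(c[:email], c[:password])
  puts "#{c[:email]}: #{result == c[:expected] ? 'PASS' : 'FAIL'}"
end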

6. Programming language specifications

Broadly speaking, the programming language used in developing the application plays a significant role in the quality assurance process. Testers often choose scripting languages, such as Perl, Python, or Ruby, to create scripts for automating test cases, because these languages are usually easier to learn, do not require compilation (which results in significant time savings), and have a large user base and many libraries to choose from for solving various automation challenges. Object-oriented languages, such as Java, C++, or C#/.NET, are often chosen for automating tests when the test subject has been developed in an object-oriented programming language, which has a significant influence on the solution's architecture. In addition to selecting the right tools and programming languages, test staff allocation is also very important: it is more effective to reuse existing in-house knowledge, experience, and skills than to adopt new technologies.

7. Runtime object recognition

There is a fundamental difference between functional and load-testing tools. Functional testing tools operate at the user interface level, while load-testing tools operate at the protocol level. Runtime object recognition for functional testing tools is almost never 100%. If the object recognition success rate is less than 50%, the test automation team will have to perform so many workarounds that it defeats the objective of test automation. In the case of load-testing tools, this question is less relevant. Application changes and their impact on object recognition in test scripts are a perennial challenge for the test automation team. Having unique object identification greatly reduces the impact of changes and simplifies test script maintenance. You have to understand and evaluate how object recognition is performed at runtime using a given tool and, if possible, gain access to the specific objects so that checks can easily be performed on the recognition properties in the collected object library. The sketch below illustrates the idea.
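As a rough Ruby illustration (not any specific tool's API; find_element, the locator syntax, and the object library are assumptions), identifying objects by a stable, unique property keeps scripts maintainable:

# Sketch: resilient runtime object recognition. Controls are identified
# by a stable, unique id rather than by index or screen position, so
# layout changes do not break the scripts.
def find_element(locator)
  # Stub simulating the tool's lookup in a collected object library.
  object_library = { "id:login_button" => { type: "Button", label: "Login" } }
  object_library[locator] or raise "Object not recognized: #{locator}"
end

button = find_element("id:login_button")   # stable unique id
puts "Recognized #{button[:type]} '#{button[:label]}'"
# An index-based locator such as "index:3" would break whenever the
# layout changes, multiplying script maintenance work.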

8. Data-driven inputs

Most of today's applications are interactive, requiring users to key in something at some point. Knowing how the application responds to various sets of inputs is essential for delivering a stable, quality product to the market. Data-driven testing helps us to understand how an application deals with a range of inputs. Rather than having testers manually enter endless combinations of data, or hard-code specific values into the test script, the testing infrastructure framework automatically pulls values from a data source, enters the fetched data into the application, and verifies that the application responds appropriately before repeating the test with another combination of values. Automated data-driven testing significantly increases test coverage, while simultaneously reducing the need to create more tests with different variables. An important use of data-driven tests is in ensuring that applications are tested for boundary conditions and invalid input. Data-driven tests are often part of model-based tests, which include randomization to cover a wide range of input data. To enable test execution with different combinations of data, the data sources should be properly managed. The chosen test automation tool should include drivers for, and support, a range of data formats, such as flat files, spreadsheets, and database stores. A small sketch follows below.
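A minimal Ruby sketch of the idea, using the standard CSV library; the inline data and the enter_amount stub are assumptions used purely for illustration:

require "csv"

# Data-driven testing (sketch): inputs and expected outcomes are pulled
# from a data source instead of being hard-coded in the script.
def enter_amount(value)
  # Stub standing in for the real application interaction.
  Float(value) >= 0 ? :accepted : :rejected
end

rows = CSV.parse(<<~DATA, headers: true)
  amount,expected
  100.00,accepted
  -5.00,rejected
DATA

rows.each do |row|
  actual = enter_amount(row["amount"])
  puts "#{row['amount']}: #{actual == row['expected'].to_sym ? 'PASS' : 'FAIL'}"
end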

9. Result and error logging

During test case development and execution, it is often necessary to log messages containing developer-specific information; for a test manager, however, it usually suffices to know whether a particular test has passed or failed. Depending on the requirements, it may be necessary to automatically capture screenshots or videocasts for a failed test process, to make it easier for developers to reproduce the issue and identify the root cause of the problem. The automation tool should also have the necessary filters to mine log messages by type, text, priority, time, and other important attributes. Tools that allow log summaries to be compared from one automated test run to another across timelines, and that allow report formats to be configured, are also worth considering when choosing a test automation tool. A sketch of this kind of logging follows below.
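As a sketch of the two levels of detail (a pass/fail summary for the manager, diagnostics plus attached evidence for the developer), using Ruby's standard Logger; capture_screenshot is a hypothetical hook standing in for the tool's own facility:

require "logger"

# Result and error logging (sketch): pass/fail summary plus detailed
# diagnostics and attached evidence on failure.
LOG = Logger.new($stdout)

def capture_screenshot(name)
  LOG.info("screenshot attached: #{name}.png")  # placeholder for the tool's API
end

def report_step(description)
  yield
  LOG.info("PASS: #{description}")
rescue StandardError => e
  LOG.error("FAIL: #{description} (#{e.message})")
  capture_screenshot(description.gsub(/\W+/, "_"))  # evidence for reproduction
  raise
end

report_step("User can open the login page") { true }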

10. Continuous testing

The central idea behind a continuous testing approach is to promote code changes frequently and rapidly get feedback about the impact these changes have on the existing system. A strong test automation framework should be able to support teamwork and the integration of automated testing infrastructure components, such as the integrated development environment (IDE), test framework, revision control, test configuration management, issue tracking, report generation, etc. Continuous testing and integration using the existing quality assurance tools and technologies are of paramount importance for the efficiency of QA processes. Not performing continuous testing leaves too much room for defects to creep in: by the time a defect is identified, more code has been layered on top of it, making defects discovered later harder and more expensive to fix. Testing changes right away dramatically reduces the cost of addressing defects, so the automation tools should trigger a build with each commit and execute the relevant tests automatically, or at scheduled intervals throughout the day. In addition, decomposing a test suite into smaller batches, running test cases in parallel, and automatically dispatching defects to the developers working on the code branch where the defect was identified is the cheapest and fastest way to achieve quality outcomes. This approach also gives developers room to experiment, while simultaneously protecting the master code base from regressions. As a result, each code branch is tested as rigorously as the master. Applying continuous integration to new branches as soon as they are created helps uncover compatibility problems and eases the final integration with the master. A small sketch of batched, parallel execution follows below.
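Purely as an illustration of decomposing a suite into parallel batches (run_batch and the file layout are assumptions; a real setup would delegate execution to the CI server):

# Continuous testing (sketch): split the suite into batches and run
# them in parallel so every commit gets fast feedback.
test_files = Dir.glob("specs/**/*_spec.rb")

def run_batch(files)
  files.each { |f| puts "running #{f}" }   # placeholder for real execution
  :passed
end

batches = test_files.each_slice([test_files.size / 4, 1].max).to_a
threads = batches.map { |batch| Thread.new { run_batch(batch) } }
results = threads.map(&:value)
puts results.all? { |r| r == :passed } ? "commit is green" : "dispatch defects to the branch owners"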

11. Pricing models

One of the main reasons that test automation is often perceived as expensive is that the automation activities are done in silos, entirely disconnected from the core development effort. Effectively shielded from the ramifications of design decisions that hamper testability, developers continue to create software that is almost impossible to automate. Effective Agile teams break down these silos: every developer on the team is involved in automating the tests, and the automated tests go into a common repository against which the developers can check their code. As a result, the cost of test automation decreases dramatically.
There are several free open-source and proprietary test tools that are candidates for evaluation. When open-source tools are selected, it is important to check how stable the tool's evolution is and how fast the tools are upgraded to support the latest changes in technologies. As for proprietary solutions, the price of the tools is one of the key factors to be considered in justifying the investment and in the ROI calculations. It is also important to check the licensing models, such as pay-per-use, per node, site license, period of validity, etc. Another important consideration is the availability of add-ons, support, and updates, and whether these features cost extra. And, last but not least, the chosen tool's ease of use trumps all other considerations: the tool's complexity should be in line with the test team's ability to adopt new tools and the programming talent available to the company.

> about the author

Mithun Sridharan is the Managing Director of Blue Ocean Solutions (BlueOS) PLC, a Germany-based inbound marketing and digital transformation company focusing on technology companies. He brings with him over ten years of international experience in business development, marketing, global delivery, and consulting. He holds a Master of Business Administration (MBA) from ESMT European School of Management and Technology, Berlin, and a Master of Science (MSc) from Christian-Albrechts-Universität zu Kiel, Germany. He is a Harvard ManageMentor Leadership Plus graduate, a Project Management Professional (PMP), and a Certified Information Systems Auditor (CISA). He also served as the Communication Chair for the German Outsourcing Association in 2013 and is based in Eschborn, Germany.
Twitter: @jixtacom
LinkedIn: de.linkedin.com/in/mithun
Blog: korporate.blogspot.com
Website: www.blueos.in

By Dr. Jens Calam, Parag Kulkarni & Sven Euteneuer

Multi-Platform Mobile Test Automation for the Financial Sector
Financial institutions now serve their customers with a multi-channel approach and are making increasing use of mobile technologies. Serving customers in financial services requires the underlying software to be highly reliable. Mobile software development is challenged by the need to keep pace with increasing customer expectations and short release cycles, while still delivering high-quality software, especially as poor quality is immediately visible in app stores and can have an impact on the bank's reputation. For this reason testing, and especially regression testing, is one of the most important activities during app development. But how do you meet these expectations without giving your customers the bank's standard smartphone? Mobile test automation with a multi-platform approach is a way to tackle this challenge. In this article, we give you details of an example project where we successfully used this approach.


Project context

An Austrian bank was looking for a mobile automation solution to test their multiple apps on multiple platforms, namely Android and iOS. They also wanted good coverage of the phones and tablets most used by their customers. Based on this information, we developed a custom mobile test strategy to answer the following questions:
What kinds of apps must be tested?
Which device types and models have to be covered?
How many test cases have to be covered?
How much effort has regression testing generated up to now, and how does this effort fit into the framework of this project?
The task we faced was to pilot mobile test automation covering two types of app. The first one, a web app, had to be tested on one phone and one tablet, each on Android and iOS. The second one was a hybrid app, to be tested on iOS and Android again, but on four phones and four tablets each. Why the difference in devices? A web app relies on the web browser only, whereas a hybrid app takes more features of the underlying system into account, so it has to be tested more thoroughly on a wider range of device models. The devices were chosen from an analysis by our customer of the range of devices used by its clients.

Challenge

We were dealing with a regression test set of 60 test cases in total, of which two-thirds covered the web app and the remaining third covered the hybrid app. The regression test is executed four times per year, i.e. 2 apps, 60 test cases, 4 times a year. This alone does not mean you would automate those test cases from the very beginning; it could mean the exact opposite, i.e. stay manual and perform crowd-based testing, if there weren't two "buts":
1. We are talking about a bank. Banks are not keen to outsource their testing to everyone in the cloud or the crowd. One single test service provider is sufficient.
2. The pilot covered a mere 60 test cases run once per quarter, but it was a pilot. In total, we would be able to reuse our test code for ten more apps. Reusability was the key!
Regarding the technologies covered, we had to test apps on both the iOS and the Android platform, in a cost-effective way, of course.

Plan

Performing a pilot does not just mean automating tests. It also means performing a carefully designed experiment, whose outcome would form the basis for a year-long successful test automation. For this reason, our project plan followed four phases throughout the project:

Figure 1. The four pilot phases: tool selection; framework setup and first POC; automation and one execution cycle; four execution cycles

Tool selection is a primarily theoretical step, in which a multi-factor analysis is performed to select a single tool or a shortlist. The most suitable tool(s) for the specific project or customer context are then evaluated practically in the second phase, the setup and first proof of concept. Having proven that the tool is technically capable of being used in the project, the actual automation takes place for the pilot. While only a selection of test cases is covered in the first proof of concept, the full set of test cases is covered in the automation phase. Automation means software engineering, and there is no software engineering without testing (at least this is how things should be), so the automation phase includes a first execution cycle of the automated test cases. The last step finally proves whether the approach is valid for a managed service approach, which requires software testing to be industrialized, i.e. standardized and executable in a predictable manner.
Our specific plan looked as follows:

Figure 2. The pilot schedule: 7 tools evaluated; 9 test cases in 3 weeks; 60 test cases in 4 weeks; 4 cycles in 4 weeks


We narrowed down a pre-selection of seven tools to a single tool that, in theory, fully satisfied the customer's needs. To prove the theory in practice, we took 9 representative test cases out of the 60, set up the test environment, and performed the proof of concept (PoC). Even apart from the setup activities, a proof of concept in a customer context is always partly an experiment, which takes quite some time. You can easily see the learning effect taken from the PoC: without having to perform a second environment setup, and with our experience from the PoC, we could easily ramp up from 9 test cases in 3 weeks to 60 test cases in 4 weeks. This would also be the automation speed in the actual project context. The last four weeks of the pilot were dedicated to standard one-week test cycles.

Implementation

We had seven tools shortlisted when we started the pilot phase, and we finally decided to use Calabash. Calabash is an open-source test framework based on the idea of behavior-driven development (BDD). BDD separates the actual functional test case from the code behind it. Doing so allows testing activities to be divided between two roles:
Subject Matter Experts: SMEs know a lot about the functionality of an app(lication), but in most cases hardly anything about coding.
Test Automation Experts: TAEs know a lot about coding, but in many cases significantly less about the app(lication)'s expected behavior.
With BDD tools, subject matter experts can define the functional aspects of a test case in human-readable form. Such a story has a form like this:

Feature: Login with correct email and correct password credentials

Scenario Outline: As a valid user having email and password I can log into my account
  Given that I have opened the browser
  Then I enter "https://www.example.com/oauth" into "URL" field
  And I press the "Go" button
  Then I wait for "login" page to load
  When I enter <email> into "user" field
  And I enter <password> into "password" field
  And I click on "Login" button
  Then I wait for 5 seconds
  Then I should get redirected to "www.example.com/index.php" web page

  Examples:
  | email                 | password  |
  | "test123@example.com" | "test123" |

In this example, we can see the hybrid approach to test automation that is recommended for test automation in general. Hybrid approach means that the test automation approach of Calabash is a blend of action word-based testing and data-driven testing. Each line in the example can be seen as an action word that describes an activity, like "I press the Go button", at a level which is non-technical enough for a subject matter expert to understand. Data-driven means that we do not need to script a new test case for each data combination we want to test. Instead, we use parameters in the test case itself, like the line 'When I enter <email> into "user" field', and a separate data table defining the considered data combinations (the table below Examples).

Figure 3. Mobile test automation architecture with Calabash (Calabash scripts with test data are run from the console against the Calabash agent, drawing on reusable and built-in step definitions and producing an HTML report)

The test case will, of course, not run from the storyline described above alone. In a second step, test automation experts take care of the actual implementation: they write technical code that is triggered by an action word, steers the app(lication) on a technical level by sending messages or clicking GUI elements, collects the app(lication)'s reaction, and evaluates this reaction. As an example, this approach looks as follows:

When /^I enter "([^\"]*)" into "([^\"]*)" field$/ do |text, textField|
  sleep(3)                                       # synchronize with the web app
  if textField.include? "user"
    set_text "webView css:#user", "#{text}"      # fill the field in the embedded web view
  elsif textField.include? "password"
    set_text "webView css:#password", "#{text}"
  elsif textField.include? "number"
    set_text "webView css:#verfueger", "#{text}"
  end
end

When interpreting the test story, Calabash tries to find the matching code behind each step using regular expression matching, and executes it. The piece of code above is triggered by the line 'When I enter <email> into "user" field'. For the first test run, <email> has been replaced by the first data value for email addresses from the test data table, "test123@example.com". When the code is invoked, the variable text is set to this value (test123@example.com) and the variable textField to the name of the text field we want to access (user). The code then waits for three seconds to synchronize with the web app and sets the text of the user field to test123@example.com. Calabash then goes on to the next line in the test story.
In order to deliver cost-effective test automation for a vast number of combinations of test cases and mobile devices, it is necessary to build a concise and sufficiently generic test automation library (see Figure 3). This library contains all the reusable step definitions, and care must be taken that these step definitions are reusable across the range of devices, so that they can be used transparently from the test stories. This avoids the overhead of having to write and maintain the same test cases multiple times for different devices. Furthermore, the reusable step library should be sufficiently modularized to distinguish between product-specific, product-line-specific, and branch-specific action words. Modularizing the automation library in this way allows parts of the same library to be reused in different projects, and again minimizes the development overhead for the test automation. A sketch of such a device-transparent step definition follows below.
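Purely as an illustration of the idea (the logical-name table and the ios? platform check are assumptions, not project code), a device-transparent step definition could look like this:

# Maps logical control names to platform-specific Calabash queries, so
# the same test story runs unchanged on iOS and Android.
BUTTON_QUERIES = {
  "Login" => { ios:     "UIButton marked:'Login'",
               android: "android.widget.Button marked:'Login'" },
}

When /^I click on "([^\"]*)" button$/ do |name|
  platform = ios? ? :ios : :android              # ios? is an assumed platform helper
  touch(BUTTON_QUERIES.fetch(name)[platform])    # Calabash touch action
end

The story keeps its single, non-technical line ('And I click on "Login" button'), while the platform-specific knowledge lives in one place.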

Calabash covered all of our requirements regarding cost-effectiveness (it is an open-source tool, so free from the very beginning) and the supported platforms (iOS and Android, as required). It is also easy to extend with custom functionality. This turned out to be important for us, because Calabash was the best solution, but not a perfect one from the very beginning: Calabash supports only native and hybrid apps. To extend its support to web apps, we developed a web browser which we could steer from within Calabash. This meant we were able to fully cover the customer's needs.

Lessons learnt

We proposed an overall project plan of four stages, which we successfully performed. The most important phase was the piloting of the chosen solution, a test automation framework based on Calabash. We did indeed discover some obstacles, but could overcome them relatively easily by making use of Calabash's simple extensibility.
Reusability became an equally important issue to our customer as cost-effectiveness. With Calabash it was possible to write automated test cases for both iOS and Android apps, reusing 80% of the test code identically for both platforms, with only 20% of the code needing to be adapted for the respective platform. The code for web objects could also be reused for both the web app and the hybrid app, which again significantly reduced the automation effort.

> about the authors

Dr. Jens Calam has been working with SQS AG (www.sqs.com) since 2008 as a test automation specialist. He supports his customers by planning, implementing, and coaching test automation solutions. Within SQS, Jens is responsible for the development of the Mobile Testing service for Germany.

Parag Kulkarni has been with SQS India since 2008 as a core member of the Test Automation team. His area of expertise is automation using a variety of tools. His core competencies include automation framework design and mobile automation. He has worked extensively in the CRM, publishing, banking, and telecoms domains over the past eight years.

Sven Euteneuer has been working with SQS AG since 2007. He is responsible for the technical and non-functional services at SQS, including test automation and mobile testing.


Masthead

Editor: Díaz & Hilterscheid Unternehmensberatung GmbH, Kurfürstendamm 179, 10707 Berlin, Germany
Phone: +49 (0)30 74 76 28-0, Fax: +49 (0)30 74 76 28-99
Email: info@diazhilterscheid.com, Website: www.diazhilterscheid.com
Editorial: José Díaz
Layout & Design: Lucas Jahn, Konstanze Ackermann
Articles & Authors: editorial@testingexperience.com
Marketing & Sales: Annett Schober, sales@testingexperience.com
Website: www.testingexperience.com
Subscribe: subscribe.testingexperience.com
Price: online version free of charge; print version €8.00 (plus shipping) via www.testingexperience-shop.com
ISSN 1866-5705
Díaz & Hilterscheid is a member of Verband der Zeitschriftenverleger Berlin-Brandenburg e.V.

In all of our publications at Díaz & Hilterscheid Unternehmensberatung GmbH, we make every effort to respect all copyrights of the chosen graphic and text materials. In the case that we do not have our own suitable graphic or text, we utilize those from public domains. All brands and trademarks mentioned, where applicable, registered by third parties are subject without restriction to the provisions of ruling labelling legislation and the rights of ownership of the registered owners. The mere mention of a trademark in no way allows the conclusion to be drawn that it is not protected by the rights of third parties. The copyright for published material created by Díaz & Hilterscheid Unternehmensberatung GmbH remains the author's property. No material in this publication may be reproduced in any way or form without permission from Díaz & Hilterscheid Unternehmensberatung GmbH, including in other electronic or printed media. The opinions expressed within the articles and contents herein do not necessarily express those of the publisher. Only the authors are responsible for the content of their articles.

Picture Credits
iStockphoto.com/exdez (C1)

Index of Advertisers
Agile Testing Days (C2), CMAP Certified Mobile App Professional (47), Mobile App Bundle (C4), Mobile App Europe (5), Ranorex (3), Rocky Nook, Inc. (26), Software Testing World Cup (C3), Testing Experience (8, 13, 51)

Editorial Board
Thanks to the members of the Testing Experience editorial board for helping us select articles for this issue: Erik van Veenendaal, Graham Bath, Maik Nogens, and Arjan Brands.

Don't miss in the next issue (December 2014):
A Unified Framework For All Automation Needs, Part III, by Vladimir Belorusets, PhD
Test-Driven Developments are Inefficient; Behavior-Driven Developments are a Beacon of Hope? The StratEx Experience, Part II, by Rudolf de Schipper and Abdelkrim Boujraf
Columns by Erik van Veenendaal and Alex Podelko

Software Testing World Cup 2014

An international sportive software testing competition, dedicated to the worldwide community of software testing professionals: 1,000 participating teams and more than 2,500 registered testers from around the globe. On average, each team detects 15 bugs during the competition; in total we expect 15,000 logged defects.
Start making better software and let world champions test your product! Become an SUT (Software Under Test) sponsor for the finals and get your software tested by the greatest testing teams from around the world. Finals package price: €250 per tester, for a maximum of 28 testers (€7,000). All prices are exclusive of VAT.
info@softwaretestingworldcup.com
www.softwaretestingworldcup.com

Mobile App Bundle Offer

Learn how to create better mobile apps! Book both the Mobile App Europe conference (Sep 29 to Oct 1, 2014, Berlin/Potsdam, www.mobileappeurope.com) and the CMAP Certified Mobile App Professional training (cmap.diazhilterscheid.com) and get the cheaper one for 50% off. If you would like to book this bundle, please contact us at info@diazhilterscheid.de.
