Testing Experience 27/2014
Dear readers,
After a great summer we are moving towards late summer, another
beautiful time of the year. Most of the people here are back from their
holidays and we have already entered the busy and hot phase of the
year before the winter comes.
Life and work is determined by continuous change or improvement. I
prefer the good and positive changes in life, which is why I am really
happy to present you with the new issue of Testing Experience, including lots of new improvements and changes. The magazine has been
in existence for six years and we do not stand still: we always try to
satisfy our readers and we found out that your requirements have
changed. We have reacted and included a wider range of content. From
this issue, we won't be having a main topic per issue anymore. Instead,
we have decided to give you a more diverse range of articles on topics
such as mobile, security, performance, and Agile. Another new thing
is that we will be regularly publishing columns written by renowned
authors. In the current issue we have started with Alex Podelko writing
about Performance and Manu Cohen-Yashar with a column about Security. More columns on different topics are planned for the upcoming
issues. Our latest renewal is the Book Corner with reviews and hints
on new book releases and publications relevant to software testing.
Speaking of changes: if your company has started to work with mobile app projects, then I can really recommend the Mobile App Europe
conference (www.mobileappeurope.com) and the new CMAP training
(cmap.diazhilterscheid.com) dedicated to mobile app professionals.
If your company is in transition to Agile, then Europe's greatest event
for the Agile testing community is the place to be! The sixth, and by far
the best, edition of the Agile Testing Days (www.agiletestingdays.com)
will take place in November. This year's highlight is the first European
Car Build Party. Together with the conference attendees we will build
a real car using the Scrum methodology.
What can I say? The prospect of autumn 2014 is absolutely awesome
for any testing professional who is ready for a change!
To quote Andy Warhol: "They always say time changes things, but you actually have to change them yourself."
Enjoy reading!
By Daniel Knott
Doing all those tasks manually means lots of work and is time-consuming. In most cases, those test scenarios cannot be done manually,
because it is very hard to simulate fast and multiple user inputs with
one or two hands. But it can be done with the help of tools and it is
really easy to integrate into the development and testing process.
Android Monkey
For Android apps, the Monkey tool [MON01] can be used, which is part
of the Android SDK. Monkey is a tool that is able to run either on the
physical device or the emulator. While running, it generates pseudorandom user events such as touch, click, rotate, swipe, mute the phone,
shut down the internet connection, and many more, to stress test the
app and to see how the app handles all those inputs and interrupts.
You need the package name of the Android apk file to execute the
Monkey tool; otherwise the tool will execute its random commands against the entire phone instead of the app under test.
With access to the app code, the package name can be found in the
AndroidManifest.xml. If only the compiled apk file is available, mobile
testers can use the Android Asset Packaging Tool [AAP02] (aapt), to
get the package name from the app. aapt is located in the build-tools
folder of the installed Android SDK version.
The path to aapt can look like this:
/../daniel/android/sdk/build-tools/android-4.4/
With the following command, the package name can be read out
from the apk file:
./aapt d badging /daniel/myApp/myApp.apk | grep 'pack'
...
package: name='com.myApp' versionCode='' versionName=''
...
When the package name (in this case com.myApp) is available, execute
Monkey with the help of adb (Android Debug Bridge) [ADB03].
The following command will start Monkey:
./adb shell monkey -p com.myApp -v 2000
UI AutoMonkey
For iOS apps there is a similar tool available, called UI AutoMonkey
[UIA04]. UI AutoMonkey is also able to generate multiple commands
in order to stress test an iOS app. To use UI AutoMonkey, a UIAutomation Instruments template must be configured within Xcode. After the
template has been configured, a JavaScript file needs to be written
to tell the tool how many and which commands should be executed
during the stress-testing session.
An excerpt from the configuration file:

config: {
    numberOfEvents: 2000,
    ...
    eventWeights: {
        tap: 300,
        drag: 12,
        flick: 15,
        ...
References
[MON01]: Android Monkey
developer.android.com/tools/help/monkey.html
[AAP02]: Android Tool aapt
developer.android.com/tools/building/index.html
[ADB03]: Android Debug Bridge
developer.android.com/tools/help/adb.html
[UIA04]: UI Auto Monkey
github.com/jonathanpenn/ui-auto-monkey
LinkedIn: www.linkedin.com/pub/daniel-knott/1a/925/993
Blog: www.adventuresinqa.com
By Patrick Prill
TEST DESIGN
Now to the different colors. The blue hat is about managing and moderating the whole discussion of the other hats. The blue hat is objective
and should help to focus the discussion. If you are using the approach
on your own, you always have to wear the blue hat so you do not lose
track. When splitting the hats between different team members, this
could be given to the test manager or team lead.
The white hat stands for objective information and analytical thinking. This hat focuses on the requirements and how to achieve them.
In test design the white hat helps to create a model of the application.
Wearing the white hat involves executing a test case as intended and
concentrating on the facts. This person's task is to collect facts in order
to inform the ongoing discussion, value-free.
The red hat symbolizes emotional thinking, both positive and negative.
This hat should help you to observe your own emotions. During testing
you develop emotions about the software under test. In my view, this
also, to a great extent, involves the hard-to-measure characteristic
of charisma. Do I like to use the software, is it annoying to use, or
much too complicated? Such information is often hard to put into a
bug report, but it should be reported at least to the stakeholder so
they have a chance to react. Software that upsets you when you use
it might be functional and technically correct, but the user will not
view it as high quality.
The yellow hat stands for an optimistic response. It is all about the best
case. This hat only sees the good things and benefits in the software,
so it is a good hat for happy path testing. The yellow hat is for a sunny
day experience, but you should be very careful if the yellow hat does
not add much information, as this is a bad sign!
The black hat is all about critical and pessimistic thinking, about discernment. This hat is the little devil on your shoulder and is excellent
at identifying defects and risks. The black hat is skeptical and critical.
Listen carefully to the black hat, as it can find many new error scenarios
or unknown risks.
The green hat, last but not least, symbolizes creative thinking. This
hat develops new ideas and thinks differently. In testing, the green
hat can find new ways of testing or using a function. The green hat is
creative in helping to optimize the software and you can also use it
to find workarounds. My tip is to try thinking like a child. Children use
things in a lot of different ways that grown-ups are no longer capable
of imagining because of their fixed ideas. Try using the green hat to
get rid of your ingrained thinking habits. This is difficult, especially
in the beginning, but you will come across a lot of interesting ideas.
Some you will try to put aside in the beginning, but it is best to write
them down and come back to them later.
When using the six thinking hats, you create a great deal of potential
for collecting information. Your project environment should be ready
to accept information, not just bugs; otherwise that would really
be a waste of creativity and feedback.
Some of the hats can be used at the same time during test execution.
For example, the yellow, green, and red hats can be combined if the
red hat is giving positive input. If the output of the red hat is more
negative, then it should be combined with the black hat to find more
risks and problems. It is important to always combine them with the
blue hat to keep the sources of information apart and to have some
structure in your process.
You can collect your information in a mind map (see example from
XMind), which helps to provide structure and present all the information together.
Personas
"Quality is value to some person who matters." (Jerry Weinberg, extended by James Bach)
Personas is an approach to defining several groups of users of the
software by creating fictional representatives of those groups. This
method or approach is far more than role testing or using user stories.
You focus not on the job or tasks, but on the person as a human being
and you create a profile of sample users that captures as many facets
of the user as possible. This is similar to describing and creating a
movie character for the actress or actor.
This approach is especially good for testers who test software that will
be used by lots of different users. In business software, the user is given
training or at least a short introduction to the system. This is not possible for many types of software, so the software needs to be intuitive
and to provide simple help texts or self-explanatory forms and flows.
You, as a tester, have been working with that product for weeks and
know every inch of the specification. You have found a lot of tricks, hints,
and dodges. For you, it is easy to work with that software. But how do
you get rid of everything you know? Alcohol and drugs are no solution
here, because you should not lose what you know completely, just put
it aside for a test scenario or two. That is the point at which personas
try to help. You play a role, you try to put aside as much knowledge as
you need to, you try to act out of character, and you will see and learn
new aspects of the software. One of the first issues you might find is
the basic knowledge you expect your users to have.
It is important to stay in or return to the role at every point of your test
session. For example, get into the role of Frank, 67 years old, a retired
millwright who is a bit short-sighted. Frank used computers during his last few years at work, but that was several years ago
now, and he does not have one at home. Think of a screen where it is
not obvious what you should do next, or not described. Do not push
that button down there because you know that is the button to get to
the next page. What would Frank do? Is there something missing that
would show where the button is?
It is not easy to group your users and it is impossible to take care of all
your users' problems. You have to find the right mix of characters and
try out the necessary depth of definition of your persona. The type of
software and its distribution determine how important it is to put
personas in your test plan. Business software has a different set of
requirements here than, for example, the software used in a ticket
machine for commuter trains.
The last example is an especially good opportunity to see why you
should use personas. Go to a train station and observe the users of a
ticket machine in the wild. Who are those people, what is their background? How easy is it for them to see where to go next? Is the person
reading a lot of the text displayed on the screen?
Using the right persona, you can find out that timeouts might be too
short, because you do not have the time to slowly read all the help
text on the page. That timeout scenario might be described in the
specification and also in some sort of test case. But usually that thing
works against the clock, and not in terms of whether it is possible to
read every piece of information on the screen slowly and thoroughly.
When DHL introduced those big yellow boxes into Germany, where you can send and collect your parcels any time you want, I personally
thought that the user menu was one of the best I had ever seen. But
when you wait in line and observe the problems others have with the
system, it makes you think about what you need to improve in order
to create a good user experience for them, so they like using that box.
Conclusion
It is very important not to test only from your own point of view.
Whether methods and approaches like the two I have just described
help your test design and help to gather new and important information depends strongly on the project context. But knowing those
approaches and methods, and using them in the right context, should
be part of every good tester's toolbox.
How the project uses the information you found, besides the bugs of
course, is another kettle of fish. But collecting and presenting information is part of a tester's task.
Missed Part I? Read it in issue No. 26!

A Unified Framework for All Automation Needs – Part II
By Vladimir Belorusets
Introduction
In the first part of this article [1], I described the main principles applied
in the development of a unified test automation (UTA) framework that
serves as the foundation for testing multiple application interfaces.
The UTA was built on JUnit and JUnitParams. We covered test data
management, data-driven testing, and automated updating of the
test results in the test case management system. In the second part,
I will describe the details of implementing the automated testing of
browser GUI and REST API.
We can group the testing of web applications and the REST API into one category because in both cases the server returns pages (HTML, XML, or JSON) that can be mapped to Java classes.

Synchronization

WebDriver may not wait for the page to load. In some circumstances, WebDriver may return control before the page has finished, or even started, loading. In this case, the next action on the page fails. To ensure robustness, you need to wait for the element(s) to exist in the page before continuing with operations, for example:

try {
    // reconstructed: only the try/catch skeleton and the title check
    // survived extraction; the wait condition is an assumption
    new WebDriverWait(driver, 10).until(ExpectedConditions.titleIs(title));
} catch (TimeoutException e) {
    fail(String.format("Expected title '%s' but found '%s'",
            title, driver.getTitle()));
}
Page Objects
The Page Object design pattern is an extension of the idea implemented
in the best test automation tools, such as WinRunner, in the 1990s. One
of my favorite questions when interviewing test engineers at that time was: "You have a thousand debugged and running scripts. Each script has the statement button_press("OK"). The developer has decided to change the button label from OK to Done. Do you need to edit one thousand scripts to make them run without errors?"
All pages under test are modeled as page classes. These classes describe the page elements and services/operations they provide by
following the Page Object design pattern. We will discuss this pattern
in detail in the next section. To avoid synchronization failures, each
page class is a subclass of the generic BasePageObject class. Instead
of using direct operations on web elements, this class contains wrapper methods, like clickElement, which guarantee that links, buttons,
radio buttons, checkboxes, and other elements are visible before you
operate on them (Listing 1).
protected void clickElement(WebElement element) {
    // reconstructed: only the wait condition and the click survived extraction
    wait.until(ExpectedConditions.visibilityOf(element));
    element.click();
}
Listing 1. The clickElement wrapper method
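The article does not show how a page service consumes these wrappers. As a minimal sketch (loginAs, typeText, and the field names are assumptions, not taken from the article), a service method inside a page class might look like this:

// Sketch of a page service built on the BasePageObject wrappers.
// typeText is an assumed sibling wrapper to clickElement.
public void loginAs(String username, String password) {
    typeText(fieldUsername, username);
    typeText(fieldPassword, password);
    clickElement(buttonLogin);   // waits for visibility, then clicks
}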
The answer lay in a concept called the GUI Map. WinRunner used a logical name for each GUI object and stored its physical description in the GUI Map, so the label change required editing a single map entry rather than a thousand scripts.
public class LoginPage extends BasePageObject {
    // reconstructed from surviving fragments; the class name and the first
    // two field names are assumptions

    @FindBy(name = "Username")
    private WebElement username;

    @FindBy(name = "Password")
    private WebElement password;

    @FindBy(name = "UserLoginButton")
    private WebElement buttonLogin;

    @FindBy(xpath = "//td[3]/font/div")
    private WebElement errorMessage;

    // .....

    // constructor
}

Test scripts and page services are robust to page element changes since they do not use any physical descriptions.
Another important point is that the page class should contain only
those elements that participate in the tests. You do not need to mimic
the full page to replicate the developer's work.
@Before
public void setUp() {
    // reconstructed: only the two driver lines survived extraction;
    // the method name is an assumption
    driver = BasePageObject.getWebDriver(browser);
    driver.get(loginPage);
}

@Test
public void testLogin() {
    // reconstructed: of the original body only "password);" survived,
    // suggesting a login call on a page object
    page.login(username, password);
    ...
}
Listing 4. Using page objects
Spring's RestTemplate class serves as the single entry point for all HTTP requests. The simplest code for executing a GET request is presented in Listing 5, where the FacebookResponsePage class is a response page object.

// reconstructed around the surviving URL and class-literal arguments
FacebookResponsePage page = restTemplate.getForObject(
        "http://graph.facebook.com/safenet-inc",
        FacebookResponsePage.class);
Listing 5. A REST GET request

I found the Firefox plug-in RESTClient [5] very useful for testing the REST API and for developing page objects. It supports all HTTP methods and displays the response pages in XML or JSON format.

The beauty of the Spring Framework is that one page object class recognizes both formats. You can apply JSON and XML annotations to the same elements simultaneously, and the content of the page object will be based on the format of the response page.

An example of a POST with request headers and entity body does not look much more complex (Listing 6). Here the entity body contains parameters in the JSON format. However, you specify them as key/value pairs in a Map, and the RestTemplate converts them to JSON.

String url = "http://www.htmlgoon.com/api/POST_JSON_Service.php";
// reconstructed: the declarations of requestHeaders and map were lost
HttpHeaders requestHeaders = new HttpHeaders();
requestHeaders.setContentType(MediaType.APPLICATION_JSON);
requestHeaders.setAccept(Arrays.asList(MediaType.APPLICATION_JSON));
requestHeaders.set("MyRequestHeader", "MyValue");
Map<String, String> map = new HashMap<String, String>();
HttpEntity<Map<String, String>> requestEntity =
        new HttpEntity<Map<String, String>>(map, requestHeaders);
String result = restTemplate.postForObject(url,
        requestEntity, String.class);
Listing 6. A REST POST request with the headers and entity body

<GeocodeResponse>
  <status>OK</status>
  <result>
    <type>street_address</type>
    <formatted_address>...</formatted_address>
    <address_component>...</address_component>
    <!-- six further address_component elements, collapsed in the original -->
    <geometry>...</geometry>
  </result>
</GeocodeResponse>
Listing 7. An example of the REST XML response page
However, different names are sometimes used for the same elements depending on the page format. For example, the
response page in JSON corresponding to Listing 7 contains the results
element instead of result in the XML format. In this case, you need to
apply the explicit @JsonProperty annotation to the element variable.
In addition, any page object class must define getters and setters for
the elements under test.
@XmlRootElement(name = "GeocodeResponse")
public class GeocodeResponsePage {
    // reconstructed: the class name and field declarations are assumptions
    // inferred from the surviving annotations and accessors

    private String status;

    @JsonProperty("results")
    private Results[] result;

    public String getStatus() {
        return status;
    }
    public void setStatus(String status) {
        this.status = status;
    }
    public Results[] getResults() {
        return result;
    }
    public void setResults(Results[] results) {
        this.result = results;
    }
}
Listing 8. An example of a page object for REST API
If a complex element contains other complex elements, then those elements must be mapped to their own page object classes. In Listing 7, the
result element contains formatted_address, address_component,
and geometry elements that are represented by the separate page
object classes.
Summary
In the second part of this article, I described how to test the browser
GUI and REST API in the UTA framework using the open source Selenium
WebDriver and Spring Framework. I covered synchronization issues
for web applications and explored the commonality between HTML,
XML, and JSON pages returned by the server. Those pages are mapped
to Java classes following the Page Object design pattern.
Read Part III of this article in the December issue (No. 28)!
References
[1] Vladimir Belorusets. A Unified Framework for All Automation Needs – Part I. Testing Experience, Issue No. 26, 2014, pp. 66–70.
[2] Selenium:
seleniumhq.org
[3] Spring Framework:
projects.spring.io/spring-framework
[4] SWD Page Recorder:
swd-tools.com
[5] RESTClient:
restclient.net
Security
Identity Management
By Manu Cohen-Yashar
Identity, authentication, and authorization are key requirements in
almost every web application and API. Applications need to know the
user's identity and then use that identity to decide whether to allow access to a given resource based on an authorization policy. While simple
in concept, identity management is a complicated task. Identity is a collection of attributes that describe an entity, such as personal information, group membership, contact information, business information,
and so on. Such information is sensitive by nature and must be handled
with care. In some cases, this information is managed by an external
party rather than being directly managed by the application's owner.
Managing identities involves challenges such as securely storing identity information, securely handling credentials, federating identities
between organizations, revocation of identities when required, implementing an authentication protocol, and the list goes on.
Identity management is also a burden for clients. For each service that
manages identity, clients often have to create a separate identity with
a unique set of credentials. In many cases this is too much of a burden
for users, so it is not uncommon for a user to share credentials across
applications (i.e. they use the same username and password for multiple systems). This is not best practice and most clients are unaware
that their identity is only as secure as the weakest application managing it.
In large and well-managed enterprises, these challenges are met
through the use of so-called single sign-on (SSO) solutions. These
allow a user to log in once, typically via their workstation network
login, and authenticate to all services. In order for this to be seamless
and secure, a complicated set of technologies is involved. This works
well within an organization as it is within a managed IT infrastructure
and on the secured corporate network. However, the same solution
does not translate well to web services available on the internet.
To address these problems and create a solution that works well both
within a corporate network and on the internet, identity management
standards and infrastructures have been developed and standardized.
One of the key aspects of these solutions is the extraction of identity
management from applications and services. Instead, applications and
services outsource the task of identity management to a third party.
Identity is managed by this central identity provider, which is trusted
by the relevant services, applications, and users. Users authenticate
with their identity provider which, in response, provides a token that
users present to applications. This token is evidence of their identity
and may optionally include additional claims.
Applications validate these tokens using cryptographic keys that are
used to establish trust with the identity provider. This allows an application to determine who the user is and what they are permitted to do.
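The article describes token validation only conceptually. As a minimal sketch of the idea, assuming a JWT-style token (header.payload.signature) signed with HMAC-SHA256 and a key shared with the identity provider, and ignoring the claim checks a real validator must also perform:

import java.security.MessageDigest;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Minimal sketch: verify the signature of a JWT-style token.
// Real code must also validate issuer, audience, and expiry claims.
public final class TokenValidator {

    public static boolean isSignatureValid(String token, byte[] sharedKey) throws Exception {
        String[] parts = token.split("\\.");
        if (parts.length != 3) {
            return false;               // not a header.payload.signature token
        }
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(sharedKey, "HmacSHA256"));
        byte[] expected = mac.doFinal((parts[0] + "." + parts[1]).getBytes("UTF-8"));
        byte[] actual = Base64.getUrlDecoder().decode(parts[2]);
        return MessageDigest.isEqual(expected, actual);   // constant-time comparison
    }
}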
OpenID
OpenID was designed to handle authentication on the web. With
OpenID, a web site can obtain a signed token from a trusted identity
provider with information about the client that issued the request.
OpenID allows users to use an existing account to sign in to multiple
websites, without needing to create new passwords. Users may choose
to associate information to their OpenID that can be shared with the
websites they visit, such as a name or email address. With OpenID,
users can control how much of that information is shared with the
websites they visit. In OpenID, credentials are only given to the identity
provider, and that provider then confirms the user's identity to the
websites he or she visits. Other than the provider, no website ever sees
the password, so users do not need to worry about an unscrupulous
or insecure website compromising their identity.
OpenID is decentralized and is not owned by anyone. Anyone can choose
to use an OpenID or become an OpenID provider for free without having to register or be approved by any organization.
OAuth 2.0
OAuth 2.0 was designed for delegated access on the web and not for
authentication per se. With OAuth 2.0, websites and web services can
obtain access to a client's resources stored somewhere on the web (e.g.,
Facebook Friends). To get access to a resource, an application has to
obtain an access token from the OAuth identity provider. Once the application has the access token, it puts it in the Authorization header of all HTTP requests for protected resources.
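The article does not include client code for this step. As a minimal sketch (the URL is a placeholder, and accessToken is assumed to have been obtained from the identity provider):

import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Scanner;

// Minimal sketch: call a protected resource with an OAuth 2.0 bearer token.
public static String fetchProtectedResource(String accessToken) throws Exception {
    URL url = new URL("https://api.example.com/resource");   // placeholder URL
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestProperty("Authorization", "Bearer " + accessToken);
    try (Scanner s = new Scanner(conn.getInputStream(), "UTF-8").useDelimiter("\\A")) {
        return s.hasNext() ? s.next() : "";
    }
}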
OpenID Connect
OpenID Connect is an identity layer built on top of OAuth 2.0, adding the authentication that OAuth 2.0 alone was not designed to provide.

Summary
Identity management is a serious matter. In the last decade, various
standards and technologies have been developed to address this issue. Today we see SAML-based solutions in enterprise applications
and OAuth 2.0/OpenID Connect solutions in web and mobile applications that require simplicity and interoperability. Choosing the right
standard is important when implementing simple and safe identity
management.
- Web applications will use the OAuth 2.0 Authorization Code grant flow.
- JavaScript clients running inside a browser (such as HTML5 single-page applications), native clients, and mobile clients will use the OAuth 2.0 Implicit Flow.
- Trusted clients with no user interface will use the OAuth 2.0 Resource Owner Password Credentials Flow.
Manu Cohen-Yashar is known as one of the top identity management, Azure, WCF, and WF experts in Israel. He has written a few of the official Microsoft Courses (MOC) and conducts lectures and workshops for developers and enterprises that want to specialize in these areas. Because Manu Cohen-Yashar is one of the leading experts in application security and identity management in Israel, he was chosen to work on infrastructure for the government of Israel. He is a member of one of the leading big data teams in the Israeli Army and leads the architecture of large-scale systems using databases such as Cassandra, MongoDB, RavenDB, and Couchbase. He currently consults for various enterprises in Israel and worldwide, architecting, developing, and testing distributed applications using a wide range of technologies. Manu Cohen-Yashar writes a popular blog about application security, cloud computing, and big data at blogs.microsoft.co.il/applisec.
By Nicolaas Kotze
Internationalization and
Localization Testing
Introduction
How many businesses really know what their profits can be when
targeting other countries? The digital market is very different to what
it was 15 years ago. The mobile phone and the internet have especially
contributed to the complexities of marketing and software development since the 1980s. Ironically, even after 30 years there continue to
be organizations and well-known brands that fail in targeting these
new markets by not properly doing research to understand why
other organizations struggled in the past. These failures result in
high financial loss and probably negatively impact brand confidence
just as much. These days, retail internationalization has also become
a subject of much academic study.
Well-known brands such as Nike attempted to target the Asian youth
market by featuring LeBron James and a kung fu master. But it failed
because it was found offensive in the Chinese market [1]. Puffs tissues
tried to enter the German market, only to learn that "Puff" in colloquial German means "brothel".
Gerber tried to sell baby food in Africa and use the same packaging as
in the US with a Caucasian baby on the label. The problem was that,
in Africa, companies prefer to put pictures on the labels of what the
packaging contains, since many people cannot read.
Often software projects start with no intention of targeting international markets, thus funds are not allocated accordingly. So it is of
no surprise that many project plans consider these testing activities
late in the development life cycle. Introducing localization late into a
development cycle is sure to provide managers with a nerve-racking
surprise when it comes to budget, resources, processes, and skills.
Generally, stakeholders are not aware that localization is not just about
testing the supported languages. Internationalization should be implemented before considering localization, unless the team customizes the
system in such a way that it supports this from the beginning. Many
teams do not know or understand this, but there is a clear difference
between internationalization and localization, and the types of defects
that will be identified are different.
Internationalization/globalization (i18n)
The abbreviation i18n is widely used and derived from the fact that there are 18 letters between the i and the n in "internationalization".
Localization (l10n)
The abbreviation l10n is widely used and derived from the fact that there are 10 letters between the l and the n in "localization". Localization is the translation of the actual content to another language.
Typical issues in a specific language include grammatical errors, non-translated strings, missing localization (such as incorrect URLs being shown), mistranslations, and terminology errors.
The Localization Industry Standards Association (LISA) closed down around 2011 and in its place today is the Language Terminology/Translation and Authoring Consortium (LTAC) [6]. For interest, read up on TermBase eXchange (TBX), an XML-based standard for exchanging structured terminological data that has been approved by LISA and published by ISO as ISO 30042 (Systems to manage terminology, knowledge and content) [7].
Internationalization
- Formats for numbers, dates, times, addresses, and phone numbers
- International paper sizes
- Do not use language to assume a user's location, and do not use location to assume a user's language

Localization
- Localizable resource files
Good practices can be like the time when everyone finds out you will
be a parent soon. Suddenly, everyone is an expert on preparing for a
baby, how to raise it, and which colleges to target when they become a
young adult. As most of you know, it is always good to take the advice
with a pinch of salt. The following supported me when I was involved
with internationalization and localization.
1. Consider the global strategy from the start and prioritize your
language sets and languages.
2. Do not assume things, because when something goes wrong, it
makes an ass of u and me.
11. Identify any areas where there will be little or no room for
strings to fit nicely. If the product needs to support German or
Chinese there will be quite a few areas like this and UI designers
will need to come up with clever ways to work around this. Try
pseudo-localization testing to prevent common internationalization defects. This is a process that creates a localized product
in an artificial language that is identical to English, except that
each character is written with a different character that visually
resembles the English character. This should be entirely machine-generated to save time (a minimal sketch of such a generator follows this list), and the pseudo-localized builds should
be created in exactly the same way as the localized builds. Even
monolingual English software developers and testers can read
pseudo-localized text and this has proven to be an excellent
way of discovering internationalization problems early in the
development cycle.
Early on in development, make it a priority to find internationalization defects by concentrating first on five languages including
English. Experience has shown that we are most likely to find
specific defects in the following languages:
- German: It contains long words that can reveal dialog size and alignment defects better than other languages.
- Japanese: With tens of thousands of characters, multiple non-Latin scripts, alternative input method engines, and an especially complex orthography, Japanese is a great way to find defects that affect many East Asian languages.
- Arabic: It is written right-to-left and has contextual shaping, where a character's shape depends on the adjacent characters.
- Hindi: It will help find legacy, non-Unicode defects that affect all such languages.
12. If you know the libraries used to develop the products, do some
research on their forums and bug tracking system to find limitations or issues. These are good sources for ideas on designing
tests.
13. Identify areas where there are sorting capabilities. Using internet browsers to sort tends to be problematic.
14. Test automation is your friend. Use data-driven test automation
as much as possible, but take note that there might be limitations as to what the test automation framework can support.
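Picking up the pseudo-localization idea from item 11 above, here is a minimal sketch of such a generator (the character mapping is a small, invented subset; a real tool would cover the whole alphabet and also expand string lengths):

// Minimal sketch of machine-generated pseudo-localization: each mapped ASCII
// letter is replaced by an accented character that visually resembles it.
public final class PseudoLocalizer {

    private static final String PLAIN    = "AaEeIiOoUuCcNn";
    private static final String ACCENTED = "ÅàÉéÎíÖóÛüÇçÑñ";

    public static String pseudoLocalize(String text) {
        StringBuilder sb = new StringBuilder("[!! ");   // markers expose truncated strings
        for (char c : text.toCharArray()) {
            int i = PLAIN.indexOf(c);
            sb.append(i >= 0 ? ACCENTED.charAt(i) : c);
        }
        return sb.append(" !!]").toString();
    }

    public static void main(String[] args) {
        System.out.println(pseudoLocalize("Enter your name"));  // [!! Éñtér yóür ñàmé !!]
    }
}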
Managing defects
If defects are logged by category using the quality characteristics of ISO/IEC 9126-1 [12] and ISO/IEC 9126-2 [13], now replaced by ISO/IEC 25010 [14], internationalization could fall under Portability and localization could fall under Usability. Categorizing defects provides
valuable statistics to the Quality Assurance (QA) team to help them
evaluate the success of the current processes in place throughout the
SDLC and build on lessons learnt for future projects.
Also, make sure the defect tracking system the team uses supports all
the supported languages. Switching over to spreadsheets halfway
through development just adds unnecessary manual administration
and complications. It is also recommended to use a single management system for all supported languages as this will streamline and
improve the reporting efficiency.
Once test reports for the different languages stream in, it can become
quite a challenge to manage this efficiently. Do not try to handle multilingual bug reports using manual processes by examining individual bugs and then manually translating them one by one for appropriate follow-up by the feature team that owns the affected component. This is a
time-consuming and error-prone exercise that scales poorly to large
and diverse programs. Create a language detection API for the defect
tracking tool that is able to automatically detect the language of the
customer defects as they are reported. Or just have an option available
for the reporter to select the language. If the defect tracking tool is able
Severity levels:
1. Critical
2. Major/High
3. Minor/Medium
4. Trivial/Low
Closure
Localizing a product for an international audience is not always as easy and cheap as some might believe. Getting it wrong can lead to serious brand damage that might make you a laughing stock for years to come.
Our role as testers is to get involved very early and convince stakeholders to plan carefully and allocate sufficient funds to see this through successfully.
Nicolaas Kotze was introduced to testing in the games industry while in London, UK, working on numerous AAA titles. A career in testing formally started on returning to South Africa, testing GIS software systems that utilise Google Maps in the public service delivery domain for clients in the Netherlands, and later he moved on to the busy retail credit and financial services sector. He chose testing as a career path because it enables people to blend creative thinking around formal processes or regulations and still have the exhilarating pleasure of breaking things. The fact that testing intertwines with so many other disciplines and professions is a primary driver keeping things interesting and riddled with challenges. Lately, his responsibilities with Dynamic Visual Technologies (DVT) Cape Town as SQA Competency Lead are to direct his enthusiasm and energy towards mentoring, motivating, and making people aware of the benefits of testing and improving processes for more effective testing. Having gained experience in printing, digital video production/sales/support/training, and special effects, as well as being an office automation field technician before being introduced to testing, grants him the skills to understand the problems that frustrate people but also what is required to support people effectively.
LinkedIn: za.linkedin.com/in/nicolaasjkotze
Blog: njkotze.wordpress.com
References
[6] LTAC: www.ltacglobal.org
[7] Systems to manage terminology, knowledge and content – TermBase eXchange (TBX): www.iso.org/iso/catalogue_detail.htm?csnumber=45797
[8] W3C Internationalization Activity: www.w3.org/International/about
[9] GALA: www.gala-global.org
[10] TerminOrgs: www.terminorgs.net
By Christian Kopsch
Localization Testing
How far would Chinese censorship limit and influence our work?
Main issues
From the project point of view, the following main issues needed to
be reviewed and specified more concretely:
Preparation
In our first brainstorming meeting we discussed various scenarios in
order to examine possible implications in the context of software testing. In these discussions we found that development and testing are
difficult to separate from one another. Basic questions arise in terms of the following points:
3. Number formats
What additional number formats might be required? Of significant
concern are different formats of dates, times, measures, weights, and
numbers (e.g., decimal places), or zip codes and telephone numbers.
Consequently, would we need additional input fields?
Primarily, our software is concerned with the formats of dates. At different points in our application we use the date for labeling or sorting.
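The standard JDK already exposes such differences; as a quick sketch (not from the article), printing the same date and number for Germany and China shows what the tests have to account for:

import java.text.DateFormat;
import java.text.NumberFormat;
import java.util.Date;
import java.util.Locale;

// Minimal sketch: the same date and number rendered for two target locales.
public final class LocaleFormats {
    public static void main(String[] args) {
        Date now = new Date();
        for (Locale locale : new Locale[] { Locale.GERMANY, Locale.CHINA }) {
            DateFormat df = DateFormat.getDateInstance(DateFormat.LONG, locale);
            NumberFormat nf = NumberFormat.getNumberInstance(locale);
            System.out.println(locale + ": " + df.format(now) + " | " + nf.format(1234567.89));
        }
    }
}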
Localization testing
At the beginning of the project our team worked in Scrum. Regular
releases were developed and delivered, but the essential basic functionality of the original application was still included. In addition, some
new features specially required for the Chinese market were developed.
After several sprints we recognized that, in some cases, Scrum was too
static for us. New features and user stories were often too complex to
define and implement efficiently in a single sprint.
Later the project switched to Kanban. Fixed sprints and commitments
became more fluid and our team became more agile in the whole
process. Besides the actual localization of our software, new requirements from our Chinese customer were sent to us, then specified,
developed, and tested by us immediately.
The whole localization and development process, as well as testing at
all test levels, was conducted in Germany. Production also remained
in Germany; however we needed to establish a process of transferring
software and data to the Far East.
After project completion and delivery to China, a user acceptance
test would be performed in China by our customers. Upon successful
completion of the acceptance test, it would go live. Afterwards, our
software would be rolled out to the Chinese market and our application would be hosted locally by a Chinese service partner in China.
At the beginning of our testing we were confronted with questions
about test systems and test environments. Primarily, we re-used our
pre-existing test systems with Chinese language packages.
We decided to execute our tests on the browsers most popular in the German market (Firefox, Chrome, and IE 8–11) because we would like to be able to support them later in the Chinese market. Additionally, we
were testing in the Baidu browser which is widely used in China and
is based on the Chrome engine.
For a testing environment we first used different local test instances. At
the same time we were building up a staging system that was hosted
in Germany. Later we improved our testing capabilities and began to
run performance tests in this environment.
Initially we had some small issues in developing and executing the HP LoadRunner scripts, due to the parameter files being incompatible with
the Chinese character set. We quickly solved the limitations and found
some workarounds so we could continue working with the scripts.
Despite all these obstacles in the main project layer, the team overcame
them. I was working out of my comfort zone on test-specific questions
and issues in my role as a tester and test manager.
This caused further questions and increased the need for better communication between the project team, the customer, and our colleagues in China.
Due to the six-hour time difference between Germany and China, there
was only a small time window for daily verbal communication via
telephone, for example. Alternatively, we had to wait one day to receive
e-mail feedback. However, we successfully overcame these challenges.
The testers were rarely involved in direct communication with the
Chinese customer. The responsibility for direct communication rested
with the lead project manager and some selected developers who
sometimes worked closely with our Chinese colleagues. The most
important aspect was the regular communication and dialogue with
all the participants and stakeholders in the project team.
Functional testing
Our functional tests were primarily executed manually. We combined
them with exploratory testing. As our GUI developed more and more
Chinese labels and texts, the question arose as to how to differentiate labels, buttons, and their associated functions in our application
from each other, because nobody in our core team spoke Chinese. We
examined various possibilities.
One possibility was to consult an interpreter. Fortunately, this problem
was solved sooner than expected: one colleague was a native speaker
and supported our core team with all the necessary translations and
text changes. Additionally, we built up an English-language reference
product which was identical to our Chinese product.
This simplified the test activities, especially for the external colleagues
and staff members supporting the local QA team, and particularly with
the identification of the Chinese buttons, labels, and texts.
In the expanded GUI testing of all our supported browsers, we found
a lot of styling bugs as expected. These bugs were fixed or the style of
the GUI was adjusted accordingly.
Lessons learned
All these challenges occurred last year. We rapidly adapted to the
difficulties and we overcame these obstacles to ensure optimal implementation. After a twelve-month project phase we will hand over the
finished localized software to our Chinese customer. After an acceptance test in China, our software will be rolled out to our new Asian
customers in the Chinese market.
Looking back over the last year, my conclusion is that both software
testing and software development are confronted by similar challenges. It is almost impossible to look at software testing and software
development separately; viewed objectively, in many cases they are synonymous.
Especially at the beginning of our project, we regarded software testing as an isolated activity. Continuous communication was the most
important aspect of the project; the more closely the team cooperated
in our agile project, the more immediately we achieved an improvement in the quality of the project.
It is not yet clear whether the project will continue in maintenance
mode or whether new features will be required after our handover,
because we have to see whether our potential Chinese customer will
accept our software or, indeed, whether it will function in their current market.
This is our first attempt at entering this market and we will have to
see whether our investment over the past year results in a satisfied
customer.
So far we can see that localization testing is not limited to verifying the
translation because it also includes many other things to test. However,
you may only want to evaluate the translation of the application into
different languages. If the number of languages is significant, these
tests are candidates for automation. The great advantage of automation in these cases is that you can use a single script for each scenario.
This script is written once and run many times as needed according to
the languages you want to test. This script will reference a data pool
that contains the corresponding labels according to the language you
want to evaluate, so the translation will be in a separate data file.
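As a minimal sketch of this single-script, data-driven idea (the labels_*.properties data files and the UiReader hook into the application are assumptions for illustration):

import java.util.Locale;
import java.util.ResourceBundle;

// Minimal sketch: one script, many languages. Expected translations live in
// data files (labels_de.properties, labels_zh.properties, ...).
public final class LabelChecker {

    public interface UiReader {            // hypothetical hook into the app under test
        String readUiLabel(String key);
    }

    public static void verifyLabels(UiReader ui, String[] languageTags, String[] keys) {
        for (String tag : languageTags) {
            ResourceBundle expected =
                    ResourceBundle.getBundle("labels", Locale.forLanguageTag(tag));
            for (String key : keys) {
                String actual = ui.readUiLabel(key);
                if (!expected.getString(key).equals(actual)) {
                    System.out.printf("[%s] %s: expected '%s' but found '%s'%n",
                            tag, key, expected.getString(key), actual);
                }
            }
        }
    }
}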
In general, automation tools were not created to support localization
testing, but they are useful in this process. Either way, the best strategy
for addressing localization testing is to create a mix of manual and
automated testing.
Column
Friends or Foes?
By Erik van Veenendaal

Currently I am leading a project to describe the implementation of test process improvement in Agile environments. Many organizations have problems here and are looking for concrete answers. Using a test process improvement model can add value; however, it is not a silver bullet that will solve all our quality problems (see Figure 1). In this column I will briefly discuss some of the issues involved.
Of course, using the Agile life cycle model has a decisive influence
on the way in which test process improvement is approached. The
improvement culture here is closely aligned to the iterations and can
be characterized as follows:
Within projects that use Agile life cycle models, improvements generally take place in frequent feedback loops that enable test process
improvements to be considered frequently, e.g., when applying Scrum,
at the end of a sprint, or even as part of a daily stand-up meeting.
Retrospectives are a standard and important tool that will drive (test)
improvements. A team-based improvement focus is already embedded
in Agile. As a test improver, the challenge is to make use of this improvement cycle, take the improvements to another level (e.g., facilitate
cross project learning), and institutionalize them where necessary.
Improvement methods
Support from test process improvement models
Because the scope is often limited to the previous sprint, small but
frequent improvements are made that focus mainly on solving specific project problems. The focus of these improvements is often not
on cross-project learning and institutionalization of improvements.
Looking at the organization of test improvement, we find that there is
likely to be less focus on test process improvement at an organizational
level and more emphasis on the self-management of teams within the
project. These teams generally have the mandate to change the testing process.

[Figure 1: chart plotting Defects against Test Defects]
The TMMi website provides case studies and other material on using
TMMi in Agile projects. I have personally provided consulting services
to a small financial institution while achieving TMMi level 2 and to a
medium-sized embedded software company while achieving TMMi
level 3, both employing the Agile (Scrum) life cycle using the standard
TMMi model. Note that within TMMi, only the goals are mandatory,
not the practices.
As stated, within TMMi a special project has been launched to develop a special derivative that focuses on TMMi in Agile environments. The main underlying principle is that TMMi is a generic model applicable to various life cycle models and various environments. Most (specific) goals and (specific) practices as defined by the TMMi have been shown to apply in Agile environments as well.
Erik van Veenendaal has written numerous papers and a number of books, including Practical Risk-Based Testing: The PRISMA Approach and ISTQB Foundations
of Software Testing. He is one of the core developers of the TMap
testing methodology and a participant in working parties of the
International Requirements Engineering Board (IREB). Erik is also a
former part-time senior lecturer at the Eindhoven University of
Technology, vice-president of the International Software Testing
Qualifications Board (2005–2009) and currently a board member of
the TMMi Foundation.
Twitter: @ErikvVeenendaal
Website: www.erikvanveenendaal.nl
By Felix Krüger
Field Report:
Test Automation and Quality Assurance in the
Context of Multi-Platform Mobile Development
The word "app" still suggests that we are dealing with little applications. While that may be true in some cases, this field report is about a
pretty big app that is used to remotely control and monitor the statuses
of different parts of a machine, such as light, air flow, and position.
The machine uses a mobile communications network to be accessible
by a backend server, which our app accesses via the internet. In all, the
complexity is comparable to a desktop application.
One major aspect of this app is variant management. Different customer groups receive different feature sets and different machine
types require specific data presentation. This results in a very dynamic
app both in terms of its composition during the build and also at
runtime, depending on which machine type we want to use.
So this is anything but a small project. It is not a mobile app that
accompanies an existing business application; it is the only solution.
Because of the two teams, the product backlog contains most user
stories twice one version for each supported platform. For most
user stories, both versions are planned in the same sprint, depending
on the development progress, which does differ for iOS and Android.
When a story is implemented, the result is compared with the other
platforms app. During the sprint review, we prefer to present a feature
in parallel for Android and iOS. By doing this, we can ensure that we
achieve feature-identical apps and a very similar user experience for
both target platforms.
Beyond the user experience, the Android and iOS apps have very similar
software architecture, despite being implemented independently. The
data model, layered design, screen flow management, variant management, and domain-specific algorithms are specified in a common
software architecture document. So, when implementing a function
for the second platform there is a template which is easy to understand
because it is implemented on the same basis. This does not work for
view implementation due to different widgets and user interaction
concepts here the development is completely platform-specific.
Automated testing
The challenge for our quality assurance in terms of testing is to have
tests for each app, at multiple levels (unit, integration, and acceptance
(UI tests). Since we want to automate as much as possible, we have
a QA consultant who is a part of the team and who drives our test
automation. He is responsible for test specification and review of test
implementations. The actual implementation of automated tests is
done by the developers, triggered by a test task that is generated by
default for each user story.
Depending on the implemented feature, there are acceptance tests
(automated UI tests), unit tests, and integration tests. Tests are always
implemented platform-specifically. In the case of the UI tests, they
are synchronized by using the same acceptance criteria. For each acceptance criterion defined in a (UI-related) user story there is at least
one UI test.
To implement all these different automated tests, we use platform-specific frameworks. Our lower-level tests are implemented using
SenTest or JUnit respectively. On iOS, additional libraries like nocilla
and JRSwizzle are used for mocking. For UI tests we use KIF for iOS and
Robotium for Android. In order to get more stable Android tests and eliminate false-negative results, the Robotium Recorder (commercial) is also used.
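As a hedged illustration of what such a Robotium UI test can look like (the activity, the field order, the button label, and the expected text are assumptions; the article does not show its test code):

import android.test.ActivityInstrumentationTestCase2;
import com.robotium.solo.Solo;

// Minimal sketch of a UI test derived from an acceptance criterion
// ("a successful login shows the dashboard"). MainActivity is assumed.
public class LoginUiTest extends ActivityInstrumentationTestCase2<MainActivity> {

    private Solo solo;

    public LoginUiTest() {
        super(MainActivity.class);
    }

    @Override
    protected void setUp() throws Exception {
        super.setUp();
        solo = new Solo(getInstrumentation(), getActivity());
    }

    public void testSuccessfulLoginShowsDashboard() {
        solo.enterText(0, "demo-user");       // first edit field: username
        solo.enterText(1, "demo-password");   // second edit field: password
        solo.clickOnButton("Login");
        assertTrue(solo.waitForText("Dashboard"));
    }

    @Override
    protected void tearDown() throws Exception {
        solo.finishOpenedActivities();
        super.tearDown();
    }
}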
For this project, we have around 10% unit tests, 40% integration tests
(mocked and against the real backend), and 50% UI tests. The amount
of integration tests is only that high because of quality issues (poor
interface specification) in the product we received from the independent backend supplier.
Continuous integration
We use Git with a basic branching model as our version control system.
It defines a master branch, release branches, and branches for each
feature and bug fix. The developer merges a feature as a whole into
the master branch when the user story is done. To ensure that no
incomplete features are merged into the master branch, default tasks
are generated for each user story. There are default tasks for acceptance
(reviews by product owner and QA consultant) and code quality (code
review, static code analysis, and automated tests).
The basis for continuous integration is the master branch, because
it should always contain a project ready to release. Each commit
(merged feature) triggers a full build cycle consisting of the following jobs:
- Updating/building dependencies
- Building the app (currently three different build variants)
- Static analysis
- Unit tests
- UI tests (one job for each build time variant)
- App distribution via intranet
Figure 2. Test level ratio (ideal, typical mobile development project, this project)
We use Jenkins for iOS and Android, as it is the company default and
well supported by the IT department. Especially in the iOS development,
we had teething problems with Jenkins that could have been avoided
had we been able to choose the Xcode server. However, additional
plugins in Jenkins did eventually make it possible to integrate our
iOS systems into the CI, for example the Clang Analyzer, and plugins
to manage environment variables or share workspaces between jobs.
[Figure: the test case life cycle – initial assessment, requirements identification, case creation, defining the techniques, classification and focus, tools and setup, adaptability, association, execution, evolution and optimization]
To start with, you need to identify and define the life cycle stages
the test case will go through, so that all efforts are focused on fully
developing its components. Before we start writing the test case, it is
imperative to make sure that we have identified all of the test object's
requirements, which will give us an initial idea of how to create the
cases. Once we have this new knowledge, the best possible design
technique can be chosen. Later it will be necessary to set a focus and
a classification for the cases that need to be set up.
After that, the formal set-up process can begin. All the set-up cases
need to adapt easily to whatever stage the project is in, securing all
necessary coverage. Within the project it is necessary to identify tools
that support both design and execution. Cases that test the same object need to have a clear association that allows them to be optimally
managed. After that the test execution begins and this is where we
assess their quality and how effective they are. Finally, the cases go
into a repetitive evolution and optimization stage that runs alongside
the test object's evolution.
Requirement identification
In most projects nowadays, a clear document design and requirements
methodology is followed. However, depending on the size of the project, available time, approved budget, and human and technological
resources allotted to the project, a strong and organized methodology
that allows you to easily establish the requirements and derive the
test cases might not be followed. This would make a more in-depth
analysis necessary and you might need to choose a different way to
design test cases, because it would not be possible to use a known
technique for the task at hand.
Test requirements can be generated in a formal and detailed document, which should have gone through a cycle of stand-still (static) testing.
They can be the result of a technical and functioning team or they can
be a punctual request for change or maintenance on the test object.
Regardless of the source, it needs to be clear that all the materials to
identify what is being tested are being looked for. In the same way,
the team looks for a "how": a validation of the "what" by following a step-by-step procedure. It is also necessary to establish the expected result,
which the test case should arrive at by following each of the steps. An
important thing to bear in mind when identifying the materials is
the need to establish what should not be done so that the team gets
the raw materials to design both positive and negative scenarios which
apply the requirements in full context.
Case creation
When the requirements have been identified and the materials clarified, both start coming together to create a test case. During this first
step of the creation process, it is important to bear in mind that both
set up and design of cases look to guarantee full test coverage of the
object whose requirements are being validated. It is also necessary
to bear in mind that test cases improve their productivity during the execution stage by decreasing the time lost when functions are not understood. This makes it necessary to be very clear when writing them, using simple, easy-to-understand language and a voice that guides the executing analyst. It is good to start each step with a verb that indicates the task, and the expected result needs to be written with future-tense verbs that describe the state after the case execution is over.
Test cases must have a series of components that let us be sure that each of them uses the identified materials and fully applies the requirement it is designed for. For this you need a base template like the one below, which shows the important elements that every test case needs:
Every test case needs at least the following elements: Case ID, Case name, Requirement (name or number), Objective (the "what"), Description, Assumptions and preconditions, Conditions for execution, Steps, Expected result, and Status.
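To make the template concrete, here is a minimal sketch in Python of the base structure as a data type; the field names mirror the elements above, and all default values are merely illustrative:

from dataclasses import dataclass, field
from typing import List

@dataclass
class TestCase:
    case_id: str
    case_name: str
    requirement: str              # requirement name or number
    objective: str                # the "what" being validated
    description: str
    assumptions_preconditions: List[str] = field(default_factory=list)
    execution_conditions: List[str] = field(default_factory=list)
    steps: List[str] = field(default_factory=list)
    expected_result: str = ""
    status: str = "NOT RUN"       # e.g., NOT RUN, SUCCESS, FAILED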
Figure 2. Example classification and focus structure for a website test object: functionalities or components (login, inquiries, transactions) organized into functional test suites (black-box and white-box techniques, with cases covering GUI, logic, messages, and log, for example validating username and password access) and non-functional test suites (load, stress, volume, and security testing, including access controls and ethical hacking).
Adaptability
During this stage it is important to evaluate the adaptability of the test
cases to the current stage of the project. Cases have to go through a
stand-still validation before being run in order to guarantee that they
are complete and that they cover all the functionalities for the specified
components. Another validation stage takes place while the tests are
being run and it checks that the cases do contain all the interaction
flow between the test object and the testing analyst, providing him
with a step-by-step orientation to achieve the expected result.
Association
In order for the structure for the cases defined in the Classification
and Focus section to be complete and for it to be manageable and
scalable, it is necessary to identify all the relationships in the test cases,
whether they are direct or indirect. A direct relationship is the logical
order of test execution. For example, you would not be able to test
the result of a balance query without having logged into the website.
An indirect relationship is one that tests a different element of the
test object and guarantees that all of the objects components will
work correctly when put together, complying with all of the specified
requirements. For example, the GUI and Logic cases that validate
login can be run, but it will be necessary to run the loading and
security tests to finally OK full functionality.
Taking all this into account, it is important to include two more elements in the base structure (case creation) of the cases: Dependencies
and Completion. In the former we indicate the Case or Cases ID of
the associated cases that must be run before others, and in the latter
all cases from other testing sets that complete validations of all the
elements that comprise the test object are connected.
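As an illustration of the Dependencies element, the sketch below (hypothetical case IDs, Python 3.9+) resolves direct relationships into a valid execution order, so that login always runs before the balance query:

import graphlib

# Each case lists the cases that must run before it (its Dependencies).
dependencies = {
    "TC-02-balance-query": {"TC-01-login"},
    "TC-03-logout": {"TC-01-login"},
    "TC-01-login": set(),
}

order = list(graphlib.TopologicalSorter(dependencies).static_order())
print(order)  # e.g., ['TC-01-login', 'TC-02-balance-query', 'TC-03-logout']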
Execution
At this stage it is important to identify the tools that will support the
process itself, to set up automatic test execution and to support the
analysis of the results depending on the case focus. To run the cases, you
might need a particular tool that allows you to access a database, or to
analyze the communication between the front end and the back end.
It is important to be clear about how versioning will work for the documentation of the tests run in each cycle. This definition must go hand-in-hand with the structure defined for the classification and focus given to the test cases. It is also important to have traceability of the component status or functionality across all the testing sets, to ensure that all the requirements of each of them have been tested. To do this, you can create a keyword that identifies a component across all the testing sets containing cases that apply its functionality or one associated with it. This keyword becomes part of the base structure of a test case, as explained in Case creation. Traceability of the case status should be supported by a control board that shows the status flow the case has gone through during all the cycles that have been run. This is key to flagging up early any regression in components that were previously working correctly. As can be seen in Figure 3, test cases 1 and 2 achieved a successful outcome on versions 1.0 and 1.1 respectively, but were affected on version 2.1, where the outcome was unsuccessful.
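A minimal sketch of such a control board in Python (the execution records are hypothetical) that flags cases which passed in an earlier release but fail in the latest one:

from collections import defaultdict

# Each record: case ID, release, execution date, result.
executions = [
    ("TC-1", "1.0", "15/05/2014", "SUCCESS"),
    ("TC-1", "1.1", "10/07/2014", "SUCCESS"),
    ("TC-1", "2.1", "15/09/2014", "FAILED"),
]

board = defaultdict(dict)
for case_id, release, date, result in executions:
    board[case_id][release] = (date, result)

# A case that passed before but fails in the latest release points
# at possibly damaged functionality.
for case_id, history in board.items():
    results = [history[r][1] for r in sorted(history)]
    if "SUCCESS" in results and results[-1] == "FAILED":
        print(f"{case_id}: possible regression in latest release")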
Figure 3. Test case status control board:

ID Test Case | Release | Date Execution | Result
1 | 1.0 | 15/05/2014 | SUCCESS
1 | 1.1 | 10/07/2014 | SUCCESS
1 | 2.1 | 15/09/2014 | FAILED
2 | 1.0 | 16/05/2014 | FAILED
2 | 1.1 | 15/07/2014 | SUCCESS
2 | 2.1 | 15/09/2014 | FAILED
3 | 1.0 | 15/05/2014 | FAILED
3 | 1.1 | 15/07/2014 | FAILED
3 | 2.1 | 15/09/2014 | SUCCESS

Conclusions
Test case design needs to be strategic, guaranteeing that while the project advances, the team will acquire all the elements necessary to achieve full test cases. This means they can be understood by everyone involved, are easily adaptable, can be grown and completed, and ensure total coverage of all requirements.
Through the project's life cycle, test cases should evolve in synchrony with the project. They have to grow and improve as the documentation matures or new knowledge is acquired by the testing team. Direct and indirect relationships between cases can be improved, and it is even possible to create associations from the testing sets to indicate what has to be tested first. For example, if a software component's navigation and logic tests have not run through successfully, it is not worth moving on to the remaining test suites.
ADVERTORIAL
By Emil Simeonov
New and exciting software testing trends are emerging, which can help
testing teams tackle the challenges of mobile testing to some extent.
Crowd testing, device sharing, and the like are effective and fun. Of
course, they are already an improvement, but the cost of resolving
even trivial issues during the post-production testing phase of any
project is still too high. Hence, only issues with a really critical impact
usually get resolved.
And what if most issues that end users will hit could be detected and
resolved earlier? What if test harnesses with a growing number of
automated tests could guarantee regression-free mobile applications?
Of course, the need for exploratory testing would still be there, but
then such activities could be focused on figuring out corner cases not
covered by the automated testing suite. They would also provide input
for new automated tests. Both development and post-production testing cycles could be shorter and more effective. This is not an imaginary
situation anymore, and therefore the field of mobile test automation
has been explored with great interest.
1. Essentially, test automation is programming, so a basic knowledge of common programming languages such as Java, C#, JavaScript, etc. is required. This ultimately raises the entry barrier
for newcomers.
2. The test automation tools should be really close to (if not the same as) the ones used by the development teams producing the tested mobile applications. However, this is not the case with most of the mobile test automation tools out there: they are designed to run either in web browsers or as standalone desktop applications. What about integrated development environments (IDEs)? There are a couple of issues. First, the diversity of mobile platforms already implies the same variety in terms of development and testing infrastructure. Second, there are some commercial offerings betting on IDEs, but their user experience is flawed by proprietary ideas and concepts inapplicable in any other context. This makes it even harder for test automation engineers to ramp up and use these tools effectively.
3. Proven agile development and testing practices mandate that automated tests are most beneficial to organizations when updated and run regularly, as close as possible to the involved development teams. In this way, any detected issues can get timely resolution at the lowest possible cost. Unfortunately, the testing solutions out there still encourage mostly post-production testing cycles. Continuous integration (CI) is a chimera: open source test automation tools do not take it into consideration, and commercial offerings fail to provide an easy and intuitive approach to CI, mostly due to the proprietary concepts implemented in this kind of software.
4. Multi-level testing is another important component needed for effective mobile test automation. There is basically a logical (and often technological) separation between functional tests and tests ensuring that a number of non-functional requirements are going to be consistently met by a tested mobile application. In this sense, depending on the specific requirements for any given application, it may be necessary to pay special attention to security, performance, accessibility, API, etc. and create the necessary tests. The multi-level testing concept increases the reliability and maturity of tests. It also gives them sharper focus and makes it easier for both the development and testing teams to understand and react to test failures, thus increasing the quality of the application under test.
LinkedIn: www.linkedin.com/company/4986424
Blog: www.tenkod.com/category/tenkod
Website: www.tenkod.com
By Ravi Kumar BN
If you are still using QR codes without testing them, it is time for you
to start testing your codes. Testing your codes simply means trying to
use them after you create them and just before you take them public.
Until you are able to read a QR code just by looking at it, you should
always test the proofs with a variety of smartphones and scanning
applications before you release a campaign. This is the simplest way
to spot scanning problems.
The phone: You need a phone that can download and install applications, can access the internet, and has a camera. These types of phones are loosely referred to as smartphones; the most common examples are iPhones, Blackberries, and Android phones.
The application: There are a number of applications which can be
used to decode a QR code, all of which work in similar ways. Here
are a few:
Red Laser (iPhone)
ScanLife (Blackberry)
Quick Mark (Android)
For example, you can use ScanLife, a free app which has versions for a wide range of phones. Going to the website www.getscanlife.com on a phone's internet browser will automatically detect the type of phone and guide you to the appropriate version of the application.
The connection: Because the QR code is a link to online content,
you need to be able to connect to the internet in the location
where the codes are placed. Smartphones can connect to the Internet in two ways: through a 3G data connection, or through WiFi.
[Figure: the tester or end user uses a smartphone with a code reader to test the QR codes under test, referring to the defined test scenarios.]
[Figure: QR code test scenarios: internet connection, placement on media, visibility, prominence, scanning time, functionality (linking directly to a URL), aesthetics (foreground and background color, contrast accuracy), usage instructions, and user experience.]
You need to test internet accessibility in the area where you plan on
placing your code. This is because poor internet reception may cause
issues when scanning the code. Find places with good internet access
to ensure that people have an easier time scanning your code.
4.1.4 Placement/prominence/visibility
Placement checks the position (top, bottom, left, right) of the code in the media and its relevance in the context of the business purpose. Visibility focuses on the existence of the code in the communications media: for instance, can the consumer make out whether the medium carries a QR code or not? Prominence checking establishes whether the code is evident and obvious, or hidden deep in the media among other information.
4.1.5 Aesthetics
An ideal QR code would be dark in color on a white/light background
(contrast is imperative). Request a final proof, if possible, from the
printer to ensure color and contrast accuracy.
The code's colors can be adjusted for contrast (lighter than a black code and darker than a white background), or you can take steps to make sure the code is displayed in an area with the right amount of light.
Make sure the code can be used on multiple devices. Scan the code with as many different types of devices (old and new) and QR code readers as possible to make sure it works. Importantly, make sure the QR code can be scanned with older phones.
You also need to consider where you plan on placing the code and ensure that the distance does not affect its scannability. Scan recognition should not require significant distance adjustment: the QR code should successfully scan at the distance people will normally be from it (the billboard effect).
If you are using smartphones to scan the QR codes, you should test with different phone cameras that vary in quality and resolution. A small placement (less than an inch) will often be too dense to scan if you have encoded a longer URL. For example, if you use a long URL you will be unable to successfully scan a 0.5 inch QR code, but when you blow the code up to 1 inch, you will be able to scan it.
Scan the QR code with multiple scanner applications. Check scanning on an old phone, with the worst QR code app, in poor lighting conditions.
If you do not have a short URL, you can shorten it using bit.ly or goo.gl. If you have shortened the URL, you will be able to scan the code at 0.5 inches. This illustrates how much URL length matters to QR code density, and how important it is to test QR codes with short vs. long URLs.
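The effect is easy to reproduce. The following sketch uses the open-source qrcode Python package (the URLs are made up) to show how a longer URL forces a higher code version, i.e. a denser module grid that needs more physical space to stay scannable:

import qrcode  # pip install qrcode

def smallest_version(url):
    qr = qrcode.QRCode(error_correction=qrcode.constants.ERROR_CORRECT_M)
    qr.add_data(url)
    qr.make(fit=True)   # pick the smallest version that fits the data
    return qr.version   # 1 = 21x21 modules ... 40 = 177x177 modules

print(smallest_version("https://bit.ly/abc123"))  # low version, coarse modules
print(smallest_version(
    "https://www.example.com/campaigns/2014/autumn/qr-landing?src=print&id=12345"
))  # higher version, harder to scan when printed small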
[Figure: QR code test dimensions. Environment: light conditions, scanning distances. Communication media: billboard, someone's T-shirt, print advertising, physical products, building wall, magazine advert, signage, monitor resolutions. Scan source: smartphones, camera resolutions, scanner variants such as the ZXing online decoder, Chrome QReader, and smartphone QR code readers. Technical: URL type (long vs. short), code version (version 1 = 21×21 modules, version 2 = 25×25, version 3 = 29×29, up to version 40 = 177×177), error correction level (M ≈ 15%, Q ≈ 25%, H ≈ 30%), wounded codes, security scams.]
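The error correction levels in the figure can be explored the same way. This sketch (same qrcode package, made-up URL) shows that a higher level needs a larger code version for identical data:

import qrcode
from qrcode.constants import ERROR_CORRECT_M, ERROR_CORRECT_Q, ERROR_CORRECT_H

data = "https://www.example.com/campaign"
for name, level in (("M", ERROR_CORRECT_M), ("Q", ERROR_CORRECT_Q), ("H", ERROR_CORRECT_H)):
    qr = qrcode.QRCode(error_correction=level)
    qr.add_data(data)
    qr.make(fit=True)
    print(name, qr.version)  # more correction capacity -> more modules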
The only way to know what a QR code contains is to scan it and find out whether it is a web link, a coupon, or a code for free products or some other goodie. Many people will readily scan any code they find in the hope that it is associated with a prize of some sort.
Most scanning applications will recognize that the decoded message is a link, automatically launch your smartphone's web browser, and open up the link. This saves you the hassle of having to type the web address on your phone's tiny keyboard. This is also the point where a hacker can take advantage.
Hackers have discovered that they can also use QR codes to infect your smartphone with malware, trick you into visiting a phishing site, or steal information directly from your mobile device.
All a hacker has to do is encode their malicious payload or web address into QR code format using free encoding tools found on the internet, print out the QR code on some adhesive paper, and affix the malicious code over the top of a legitimate one (or e-mail it to you). Since QR encoding is not human readable, the victim who scans the malicious QR code will not know that they are scanning a malicious link until it is too late.
Conclusion
Test QR codes over and over again with multiple apps and phones before releasing them to the public. The bottom line is test, test, test, as closely as possible to where, when, and how regular people with ordinary technology will be scanning the QR code.
Since QR codes feature up to a 30% error correction rate, there is flexibility for creative branding and tweaks. But if the designer accidentally overdoes it, test-scanning is an easy way to catch those issues.
ITIL Foundation Level Certified, and Six Sigma Green Belt Certified
Book Corner
Book Review:
Personal Kanban:
Mapping Work | Navigating Life
Authored by Jim Benson, Tonianne DeMaria Barry
Published by CreateSpace Independent Publishing Platform. 2011. 216 pages. Soft Cover. US$24.95
This is one of those small, readable books that has
great mileage. The two authors do a great job in
New Releases:
Testing in Scrum:
A Guide for Software Quality Assurance in the Agile World
Authored by Tilo Linz
Published by Rocky Nook. 2nd Edition. 2014. 560 pages. Soft Cover. US$49.95
So we concluded that any testing methodology that requires extensive re-engineering of what is basically a workable and dependable
architecture should be looked upon with a certain suspicion. As a
consequence, we do not use unit tests that require our code to be
able to work without a database. We do not use mocking with all
its complexities and we do not spend time on making our classes and
objects independent of each other. It does not make for the most correct code, we know. However, we do not mind. If tomorrow we find
a better way to do it, we will change our code templates and simply
re-generate our application code (well, most of it). So we are not overly
worried about having the right architecture to begin with in fact we
have already changed it twice, but that is another blog.
So does this mean the end of test-driven development? Not at all. There
are things you can test very well with unit tests. Plus, there is a new descendant that we have also investigated, and which shows promise for
other areas we would want to test: behavior-driven development (BDD).
BDD has the same initial outset as TDD, in the sense that it starts the
development process with the definition of the tests the future application will need to pass to be accepted. But BDD is more appropriate for
this task because it seems to focus more on the functionality of a system
than on how it should be built. So it is less prone to the criticism we
have about TDD. For one thing, it provides for a way to bridge the gap
between users and developers by using a specific language in which
to specify tests (or acceptance criteria, if you wish). This language,
Gherkin (github.com/cucumber/cucumber/wiki/Gherkin), is so simple that the learning curve is as flat as it comes, meaning that everyone can be taught to understand it in record time. Writing proper Gherkin requires a bit more time.
For us, its main advantage is that Gherkin provides for a way to communicate the functionality of a system at a level that is understandable
to a developer. Its main downside is that you will end up with A LOT of
Gherkin to fully describe a system of a reasonable size.
In the end, this is the main criticism we have about most of these methodologies (UML included). If you have a system that goes beyond a simple calculator (the usual example), no modeling language (as they all are, in a way) is powerful enough to describe a full and complete system in such a way that you can understand and describe it more quickly than by looking at the screens and the code that implements those screens.
So the search goes on
LinkedIn: be.linkedin.com/in/abdelkrimboujraf
Performance
Column by Alex Podelko
The Skills Performance Testers Need and How to Get Them
Periodically I see pretty vigorous discussions about the skills needed
by performance testers. It looks like most experts agree that performance testing requires more skills and knowledge than just creating
and running scripts using a particular load testing tool. While it is
still possible to imagine a performance tester in a large corporation
who only creates scripts and mechanically runs them while other
performance experts monitor the system and analyze results, I do not
think there are many prospects for this person, nor for the approach.
Systems have now become so complicated that the sum of the views
of specialized experts does not give the whole performance picture.
Thinking about the skills needed for performance testing, the following
areas come to mind as a minimum in addition to load testing proper:
What is going on with the system? Monitoring and performance analysis.
We see an issue. What should we do? Diagnostics, tuning, and system performance engineering.
Tuning doesn't help; is there something wrong with the application? Software performance engineering.
What if? Modeling and capacity planning.
And, of course, how can we get it all done? Communication, presentation, and project management.
You probably need to know something about all these areas to be a good performance tester (the more qualified professionals in this area are often referred to as performance engineers or performance architects, even if performance testing remains their main focus, although the use of these terms varies). You do not need to be an expert in, for example, database tuning (most companies have DBAs for that), but you do need to be able to speak to a DBA in his or her language to coordinate efforts effectively, or to raise concerns about the performance consequences of the current application design. Unfortunately this is not easy: you need to know enough to understand what is going on and to communicate effectively.
The question is how to get such skills. Through constant self-learning
and gaining experience gradually? Yes, of course, but that takes a lot of
time. Moreover, many areas are pretty hard to jump into from scratch.
You need to gain some basic understanding before you will be comfortable enough to learn further on your own. Go to a class? Definitely go
to a class for performance testing and for your main tool. But what
about the many other different products you are working with? This
might mean several week-long performance-related classes for each
product. But these are developed for specialists making a living tuning
these particular products and you do not have time to go to all these
classes and do not normally need to go into so much depth. Talk to an
expert? Sure, if you find one around. Performance experts are scarce
and busy, so you had better have some well-prepared questions, which
is hard to do if you only know a little about the subject.
When you have gone far enough along the road, you will fall into
another trap. You already know enough that basic training will not be
beneficial, but there are almost no advanced classes at all for performance testers. When you go beyond the basics, things such as details
of environments, tools, systems, applications, etc. become so different
that it makes no sense to create a class for specific combinations. You
know areas where you need more information, you need to verify
your approaches and practices against other experts, you need more
advanced tips and tricks, and you need to find somebody you can
discuss your problems with.
I believe that a good conference is a solution in both cases. Somebody
digests information and presents it back to you. Not that it is absolutely
Mobile test automation methods
[Figure: mobile test automation methods: simulator/emulator-based automation, remote device-based automation using the cloud, and real device-based automation using bots.]
add-on mechanisms such as HP QTP and IBM RFT are also available for test engineers who are familiar with industry-wide products.
As we saw above, no method is complete, and each has its own advantages and disadvantages. It is very important to plan for the right methods at the right phases of testing.
Figure 2. Mobile test automation methods: comparison of key parameters (virtualization, closeness to real devices, ease of automation, cost of testing, speed of automation, quality assurance) across browser add-on automation, simulator/emulator automation, remote device automation, and real device automation.
However, the cost of testing will not be at all comparable with any of the above methods.
[Table: mapping of test phases (unit testing, functional testing, regression testing, interruption testing, integration testing, performance and security testing) to the suitable automation methods: browser add-on automation, simulator/emulator automation, and remote device automation.]
Conclusion
Technology strives to grow equally at its opposite extremes. While the functional complexity of apps increases day by day, ease of use has played a key role in the success of various mobile apps. Correspondingly, as advancements in virtualization keep increasing, efforts to maintain realism have kept up an equal pace. While automating on a virtual device saves significant time and money, nothing gives as much quality assurance as real device automation. Every mobile automation strategy must be tuned to multiply the advantages of using virtual and real automation methods, while collectively negating the disadvantages of both. A well-orchestrated automation strategy significantly reduces effort and accelerates the time-to-market.
Venkatesh Ramasamy is an ISTQB-certified testing professional and has been working on complex business engagements, which has been a driving force in improving testing. He has come up with many innovative ideas for the improvement of products for performing end-to-end test management activities, which optimize testing costs and improve the quality of the application. He always blends creativity with the testing approach, using efficient tools and methods to solve complex problems. He has presented around 16 research papers in various technologies, such as information technology, quality engineering and assurance, embedded systems, microelectronics, and communication.
Vinothraj Jagadeesan has a degree in Computer Application from the University of Madras and has extensive testing experience in niche areas including open-source test automation, SOA testing, and Agile implementation across locations. Having successfully completed more than nine certifications, he is an insurance domain expert and is certified in both the US and the UK, by AICPCU and CII respectively. He is also an HP-certified professional in both Quality Center and QuickTest Professional.
Acknowledgement
We wish to extend our gratitude to Mr. Prasad Ramanujam, Senior Project Manager, Cognizant Technology Solutions, for his constant guidance and continued support, which have helped to shape this article and bring it to fruition.
Demystifying DevOps Through a Tester's Perspective
Introduction
Having been involved in analyzing DevOps practices across several
projects for various clients, based on our experience, we find that
there is a certain reluctance within the testing community to adopt
DevOps practices. It can be attributed to several reasons, the most
prominent of which is that testers have very little idea of how DevOps
is likely to affect their routine testing activities. However, with DevOps
positioned to become the next step in going agile [1], testing teams
need to overcome their trepidation and embrace DevOps practices.
This can only be achieved through proper understanding of DevOps
from a testers perspective, which we endeavor to do with this article.
What is DevOps?
The term DevOps generally refers to the rising movement that promotes a collaborative working relationship between the Development team (where the term refers not just to developers, but to all the individuals involved in the development life cycle, including testers, business, PMs, and Scrum Masters) and Operations (which includes DB administrators, support analysts, and networking personnel). DevOps, derived from the abbreviation of Development and Operations, is a process that allows the project team to deliver results faster and in a predictable way. It leads to the fast flow of planned work into production. The concept behind it is to have developers and operations teams working closely together so that the business ultimately benefits, with the key idea being to maintain quality while maximizing velocity.
Why DevOps?
When code is not moved to production as soon as it is developed, IT operations face a pile-up of deployments, customers do not get as much value, and the deployments are often chaotic and not as organized as they should be. Agile practices have made it easier for development teams to create changes quickly, but manual procedures and irregularities between the various processes and tools have resulted in too high an error rate for operations teams to confidently and safely deploy every change to production. Some of the problems teams face whilst trying to deploy continuously include:
Expensive, error-prone manual deployment processes, often leading to roll-back and re-release
Slow deployments to development and test environments, leaving the project teams unproductive
The inability of testing teams to keep up with the pace of changes being made and, even if they can, an increasing number of defects being identified in the later stages of the life cycle
[Figure: the application life cycle (plan, test, release, feedback) slowed down by error-prone manual processes, resulting in unsatisfied customers.]
Ideally, once the Development team finishes a small feature that is fully functional, it gets moved to production as soon as possible through the Operations team. This involves code that is continually evolving and continually integrating. But these processes of continuous integration and delivery do not make any sense without parallel continuous testing, which leads us on to the next question.
Every new increment requires a retest of the full code, and if everything is tested manually, this process will need a huge amount of resources and time.
Therefore in short cycles that are part of continuous development, each
part of the software is required to be frequently retested as additional
components or features are added to it. With deployments typically
happening every few days, it is quite impossible to test all the features
once every few days manually.
Automated testing offers an ideal solution, since such coded tests can be run in a short timespan and as many times as required. Only new stories with more substantial changes need to be tested manually. As soon as the testing for one story is completed, automated tests for it can be created and added to a central repository. Hence, even though the number of tests increases continually as the project grows, the number of tests performed manually remains relatively constant.
Services testing
The layer of services and components comprises different units. Individual components of the system are integrated, and their web services need to be checked by analyzing the responses to set requests. Performing these tests is particularly important in sensitive portions of an application, such as premium validations in insurance applications.
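As a sketch of what such a service-level check can look like (the endpoint, payload, and response field are all hypothetical), a test can call the premium validation service directly and assert on its response:

import requests

def test_premium_validation_service():
    # Call the service under test directly, below the UI layer.
    response = requests.post(
        "https://test.example.com/api/premium/validate",  # hypothetical endpoint
        json={"age": 42, "coverage": "full"},
        timeout=10,
    )
    assert response.status_code == 200
    assert response.json()["premium"] > 0  # hypothetical response field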
UI testing
UI testing is a black-box testing technique that inherently tests the
application, the middleware, and the infrastructure. These GUI tests
are the most commonly found tests and are expensive to write and
automate. However, for effective releases to production, it is of utmost
importance to perform all system and regression tests automatically.
One of the keys to DevOps is understanding the actors at each level and the expected level of quality at each stage in the above test cycle. For example, consider the following requirement: when clicking on submit, an entry should be created in the database. It is virtually impossible to test this in UI tests. However, we need to make sure that there is a unit test in place that covers this scenario. Any failure can therefore be traced to one of the tests above, and the corresponding actor (e.g., test team, development team) can be held accountable.
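A minimal sketch of such a unit test in Python (the submit handler and schema are hypothetical) exercises the requirement below the UI:

import sqlite3
import unittest

def handle_submit(conn, payload):
    # The handler the Submit button would invoke (hypothetical).
    conn.execute("INSERT INTO entries (text) VALUES (?)", (payload,))
    conn.commit()

class SubmitCreatesEntryTest(unittest.TestCase):
    def setUp(self):
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE entries (text TEXT)")

    def test_submit_creates_db_entry(self):
        handle_submit(self.conn, "hello")
        count = self.conn.execute("SELECT COUNT(*) FROM entries").fetchone()[0]
        self.assertEqual(count, 1)

if __name__ == "__main__":
    unittest.main()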
Conclusion
The aim of DevOps is not just to increase the rate at which the change
occurs, but to successfully deploy these changes into production while
quickly detecting and rectifying errors as and when they occur. To do
this, the majority of tests need to be automated with a specific person
or team held accountable for that quality gate. DevOps is also a work culture that cannot thrive without agility, encouragement from top management, and understanding between the development and operations teams, but when properly implemented it has the ability to provide long-term business benefits.
References
[1] www.ibm.com/developerworks/community/blogs/beingagile/entry/devops_building_on_top_of_agile
[2] Hüttermann, Michael: DevOps for Developers, Chapter 2 (Introducing DevOps)
[Figure: the test pyramid: unit testing at the base, then integration, service, and UI tests.]
LinkedIn: www.linkedin.com/in/alishabakhthawar
Blog: www.secretsofquality.com
These questions lead to the continuous expansion and further development of tools to support quality assurance around Team Foundation Server and Visual Studio. The new Visual Studio 2013 offers quality engineers in particular new options for working effectively in software testing. With IntelliTrace, entirely new paths may be taken within quality assurance, allowing developers to recognize the causes of unintended effects faster. The bug-fixing time per defect is reduced accordingly.
Such tool support requires profound conception and planning: only with this can such a category of tools develop its full potential. However, illustrating this with an example of comprehensive test conception and planning would go beyond the scope of this article. Nonetheless, you will find some further information on this topic in several info boxes.
[Figure: Team Foundation Server | Service as the central hub for work item tracking, planning, SCM, build automation, feedback management, and continuous delivery (Azure), with role-tailored clients and support for processes such as Scrum, Kanban, and CMMI and platforms such as .NET, Java, Android, iOS, web, cloud, and SharePoint.]
The new versions offer improvements in many product areas, such as the .NET Framework (version 4.5.1), the programming languages, ASP.NET, many modeling tools, team collaboration, and test management. Compared to the 2012 version, many details have been optimized. Test results and work items can now be directly associated with the relevant program code, so the programmer can be informed about new tasks or test results, which are then displayed in the context of the relevant code. Debugging for asynchronous code was improved, and TFS now also supports Git, the decentralized open-source version control system. Furthermore, there are now migration paths from the Microsoft tool supplements Visual Studio LightSwitch and WebMatrix to Visual Studio.
Visual Studio LightSwitch: LightSwitch is a development environment for data-driven business applications that is now included in the Professional, Premium, and Ultimate editions of Visual Studio. The tool simplifies the design of applications that focus on entering, displaying, and changing data in SQL Server, SQL Azure, or SharePoint. LightSwitch was developed, for instance, to support the rapid prototyping of business applications. Using the templates provided for LightSwitch, the complexity of a Visual Studio environment is reduced, i.e., concealed. There are also tools for generating Visual Basic and C# code, so that the developed business logic can be carried over to the full Visual Studio environment in order to then make use of Visual Studio's advanced development tools. This way, LightSwitch can be used in the framework of a preliminary study for a large project to gather deeper insight into the projected use. Since it is possible to switch away from LightSwitch without migrating to the comprehensive Visual Studio, the work from the preliminary study can continue to be used in the actual development project without a problem. LightSwitch applications may also be integrated with Office applications such as Excel, Word, and Outlook, and be built as independent executable programs.
WebMatrix: This is an IDE for the development of websites, geared towards small development teams. The IDE supports PHP and ASP. There are migration paths to Visual Studio and SQL Server.
[Chart: Microsoft's roadmap for the Visual Studio product series: Visual Studio 2012 with Updates 1-4 (2012-2013, .NET Framework 4.5); Visual Studio 2013 with quarterly updates (2013-2014, .NET Framework 4.5.1, released); Visual Studio 2014 with .NET Framework 5 (planned, 2014-2015); available in the Professional, Premium/MSDN, Test Professional/MSDN, and Online editions.]
The Visual Studio 2013 version was presented to the public in November 2013 and has since been delivered by Microsoft. For this year, Microsoft plans quarterly updates for the current version of Visual Studio. The next Visual Studio is projected to be released in late 2014, and the schedule plans to deliver Visual Studio 2014 with the .NET Framework 5.
The current Visual Studio introduced four central innovation areas: Plan, Develop, Operate, and Release.
Plan: Agile Portfolio Management enables a company-wide product
backlog. Requirements can be nested into several levels and allocated
to different teams. If a flat list of requirements is not sufficient in
the product backlog, then this function is very helpful. Kanban was
transferred from car production to software development, making a
continuous flow necessary to offer new functionalities to customers.
With the customizable Kanban board, the status of requirements can be
displayed simply and clearly. Which requirements have been approved,
implemented, tested, and delivered? Questions such as this can then
be answered at a glance. Work Item Tagging offers the possibility of
providing keywords for requirements or tasks and searching for these.
[Figure: during a build, program code becomes an executable; IntelliTrace collects runtime data from it.]
The command line tools are provided in the form of the file IntelliTraceCollection.cab and require approx. 13 MB of space on the hard drive. A directory for log files is also necessary; in my example, this directory is called LogFileLocation. The size of the log file can be limited, which prevents the disk from filling up: after reaching the maximum size, the log file is overwritten using the FIFO principle (first in, first out).
The PowerShell is highly popular among administrators and thus a
small example of how IntelliTrace can be activated should be helpful. In
order to use IntelliTrace commands, import the IntelliTrace PowerShell
module into the PowerShell console using the following command:
Import-Module c:\IntelliTrace\Microsoft.VisualStudio.
IntelliTrace.PowerShell.dll
Collection can then be started with the Start-IntelliTraceCollection cmdlet, passing the name of the IIS application pool (here the default "DefaultAppPool" serves as an example), the collection plan, and the log file directory:

Start-IntelliTraceCollection "DefaultAppPool" c:\IntelliTrace\collection_plan.ASP.NET.trace.xml c:\LogFileLocation
After that, a trace file for evaluation will be available in the directory LogFileLocation. If the IT operation already uses the System Center Operations Manager (SCOM), integration is available through the IntelliTrace Profiling Management Pack. With the help of an internet search engine, the relevant information can easily be researched on the web; questions can also be directed to me.
The Microsoft Test Manager is the standard tool for the tester to plan
and implement tests. Here, IntelliTrace is activated in the data collectors of the test settings. In the case of errors, the IntelliTrace file will
be automatically attached to a bug in Team Foundation Server. The
information of the data collectors reduces the questions from the
developer to the tester regarding under what conditions the error
occurred, making troubleshooting considerably faster.
IntelliTrace options
The options are the same for all the activation variants presented previously. In working with IntelliTrace, I was always concerned about whether IntelliTrace has a significant impact on application or system performance. As a matter of fact, IntelliTrace can distinctly slow down the system. Luckily, there are two main settings: "IntelliTrace events only" has a smaller impact on performance, while "IntelliTrace events and call information" clearly reduces application speed. The latter setting certainly only makes sense in the development environment. However, it delivers extensive process details and information to the developer, which are necessary for a rapid cause analysis.
In the "Advanced" dialog, experienced developers in particular can adjust IntelliTrace to their needs with a focus on log information. This way, the maximum size of the log file, the file location, and the symbol and source paths can be adjusted.
In the "IntelliTrace Events" settings, you can determine which events are recorded.
Code changes can often be made quickly. After checking in the correction, the fix flows into the next build and is released accordingly after successful processing through quality assurance.
IntelliTrace evaluation
The IntelliTrace file is opened in Visual Studio and first shows the exceptions and events that occurred. From the events, the developer can deduce which actions were carried out on the surface, which files were opened, or which database calls were made before the error occurred. Using this information, the developer can usually narrow down the area very quickly and determine the causes.
Using the navigation arrows, the developer can, for instance, move back in time in order to find the cause of the error. To the left lies the IntelliTrace window with the corresponding exceptions and additional information about the relevant event. Underneath the editor window, variables and their contents are displayed for the current position of the cursor in the editor window. Next to that, the output window shows the screen output for the current cursor position in the editor window.
In particular, relating error messages to the current position in the code and the respective variable contents delivers new insights into the system's behavior. This information can be deduced without IntelliTrace, but the time required is considerably higher.
Figure 6. Interaction of different test and derivation parameters: test levels (component test, component integration test, system test, acceptance test); test disciplines (operation test, document review, functional test, load test, performance test, security test, interface test); test design techniques (decision points, boundary value analysis, path coverage, profile, review, functional analysis); test types (black-box test, white-box test, guideline test, real-life test, semantic test); and test bases (decision tables, use cases, error messages, data masks, ERD models, interface documentation, operation procedure documentation, infrastructure plans).
Experience has shown that the efficiency of the verification process changes depending on the combination of the degree of overlap and the design technique. Evaluating this efficiency also greatly depends on the defined quality goals. However, this goal definition is not always carried out in test projects. Yet without it, no optimal choice of derivation processes can be made: there are combinations of degree of overlap and test design technique that cannot provide information about specific quality goals, and thus only create costs without benefit when applied.
The author has worked for companies such as Hewlett-Packard, Hoffmann-La Roche, and Logica. Over the years, he has become an expert at the European level. He has developed a risk-based testing approach which has been published.
By Mithun Sridharan
8. Data-driven inputs
Most of today's applications are interactive, requiring users to key in something at some point. Knowing how the application responds to various sets of inputs is essential to delivering a stable, quality product to the market. Data-driven testing helps us to understand how an application deals with a range of inputs. Rather than having testers manually enter endless combinations of data or hard-code specific values into the test script, the testing framework automatically pulls values from a data source, enters the fetched
data into the application, and verifies that the application responds
appropriately before repeating the test with another combination
of values. Automated data-driven testing significantly increases test
coverage, while simultaneously reducing the need to create more tests
with different variables. An important use of data-driven tests is in
ensuring that applications are tested for boundary conditions and
invalid input. Data-driven tests are often part of model-based tests,
which include randomization to cover a wide range of input data. To
enable test execution with different combinations of data, the data
sources should be properly managed. The chosen test automation tool
should include drivers and support a range of data formats, such as
flat files, spreadsheets, and database stores.
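As a sketch, data-driven testing with pytest can look like this (the login function and the data rows are hypothetical; in practice the rows would be pulled from a flat file, spreadsheet, or database):

import pytest

# Inline data source; in practice the rows come from an external store.
ROWS = [
    ("test123@example.com", "s3cretpass", "ok"),
    ("no-at-sign.example.com", "s3cretpass", "rejected"),
    ("test123@example.com", "short", "rejected"),  # boundary: too-short password
]

def login(email, password):
    # Hypothetical application entry point under test.
    return "ok" if "@" in email and len(password) >= 8 else "rejected"

@pytest.mark.parametrize("email,password,expected", ROWS)
def test_login_with_data_row(email, password, expected):
    assert login(email, password) == expected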
Mithun Sridharan works for Blue Ocean Solutions (BlueOS) PLC, a Germany-based automation company focusing on technology companies. He brings with him over ten years of international experience in business development, marketing, global delivery, and consulting. He holds a Master of Business Administration (MBA) from ESMT European School of Management and Technology, Berlin, and a Master of Science (MSc) from Christian-Albrechts-Universität zu Kiel, Germany. He is a Harvard ManageMentor Leadership Plus graduate, a Project Management Professional (PMP), and a Certified Information Systems Auditor (CISA). He also served as the Communication Chair for the German Outsourcing Association in 2013 and is based in Eschborn, Germany.
Twitter: @jixtacom
LinkedIn: de.linkedin.com/in/mithun
Blog: korporate.blogspot.com
Website: www.blueos.in
Plan
Project context
An Austrian bank was looking for a mobile automation solution to
test their multiple apps on multiple platforms such as Android and
iOS. They also wanted to have good coverage of the phones and tablets
that are mostly used by their customers. Based on this information, we developed a custom mobile test strategy to answer the customer's key questions.
Performing a pilot does not just mean automating tests. It also means performing a carefully developed experiment whose outcome forms the basis for long-term successful test automation. For this reason, our project plan followed four phases throughout the project:

Figure 1. Project phases: tool selection; framework setup and first POC; automation and one execution cycle; four execution cycles.

Figure 2. Pilot scope and timeline: tool selection with 7 tools and 9 test cases in 3 weeks; automation of 60 test cases in 4 weeks; 4 execution cycles in 4 weeks.

Challenge
We were dealing with a regression test set of a total of 60 test cases, of which two-thirds covered the web app and the remaining third covered the hybrid app. The regression test is executed four times per year, i.e. 2 apps, 60 test cases, 4 times a year. But this alone does not mean you would automate those test cases from the very beginning: it could just as well turn out that automation does not pay off.
Implementation
With BDD tools, subject matter experts can define the functional aspects of a test case in human-readable form. Such a story has a form like this:

Scenario Outline: Login
  When I enter <email> into "user" field
  And I enter <password> into "password" field

  Examples:
  |email|password|
  |"test123@example.com"|"test123"|

Each line in the example can be seen as an action word that describes a single interaction with the app(lication). Data-driven testing is achieved through placeholders, like the line 'When I enter <email> into "user" field', and a separate data table defining the considered data combinations (the table below Examples).
Figure 3. Mobile test automation architecture with Calabash: Calabash scripts with test data are executed from the console; the Calabash agent drives the app using reusable and built-in step definitions and produces an HTML report.
All relevant elements of the apps were objects we could steer from within Calabash. This meant we were able to fully cover the customer's needs.
The test case will, of course, not run from just the storyline described above. In a second step, test automation experts take care of the actual implementation: they write technical code that is triggered by an action word, steers the app(lication) on a technical level by sending messages or clicking GUI elements, collects the app(lication)'s reaction, and evaluates this reaction. As an example, this approach looks as follows:
When(/^I enter "([^"]*)" into "([^"]*)" field$/) do |text, textField|
  sleep(3)                       # wait for the web app to synchronize
  if textField.include? "user"
    # fill the user field in the web view (the query string is app-specific)
    enter_text("webView css:'#user'", text)
  end
end
When interpreting the test story, Calabash tries to find matching code behind each line using regular expression matching, and executes it. The piece of code above is triggered by the line 'When I enter <email> into "user" field'. For the first test run, <email> has been replaced by the first data value for email addresses from the test data table, "test123@example.com". When invoking the above code, the variable text is set to this value (test123@example.com) and the variable textField to the name of the text field which we want to access (user). The code then waits for three seconds to synchronize with the web app and sets the text of the user field to test123@example.com. Then Calabash goes on with the next line in the test story.
Lessons learnt
In order to deliver cost-effective test automation for a vast number of combinations of test cases and mobile devices, it is necessary to build a concise and sufficiently generic test automation library (see Figure 3). This library contains all the reusable step definitions, and care must be taken that those step definitions are reusable across the range of devices, so that they can be used transparently from the test stories. This avoids the overhead of having to write and maintain the same test cases multiple times for different devices. Furthermore, the reusable step library should be sufficiently modularized to distinguish between product-specific, product-line-specific, and branch-specific action words. Modularizing the automation library in this way allows parts of the same library to be reused in different projects and again minimizes the development overhead for the test automation.
We proposed an overall project plan of four stages, which we successfully performed. The most important phase was the piloting of the chosen solution, a test automation framework based on Calabash. We did discover some obstacles, but could overcome them relatively easily by making use of Calabash's simple extensibility.
Reusability became as important an issue to our customer as cost-effectiveness. With Calabash it was possible to write automated test cases for both iOS and Android apps, reusing 80% of the test code identically for both platforms, with only 20% of the code needing to be adapted for the respective platform. The code for web objects could also be reused for both the web app and the hybrid app, which again significantly reduced the automation effort.
Masthead
Editor: Díaz & Hilterscheid Unternehmensberatung GmbH, Kurfürstendamm 179, 10707 Berlin, Germany
Phone: +49 (0)30 74 76 28-0
Fax: +49 (0)30 74 76 28-99
Email: info@diazhilterscheid.com
Website: www.diazhilterscheid.com
Editorial: José Díaz
Layout & Design: Lucas Jahn, Konstanze Ackermann
Website: www.testingexperience.com
Subscribe: subscribe.testingexperience.com
Articles & Authors: editorial@testingexperience.com
Price: online version free of charge
ISSN 1866-5705
Díaz & Hilterscheid is a member of the Verband der Zeitschriftenverleger Berlin-Brandenburg e.V.
Columns: by Erik van Veenendaal and Alex Podelko
Picture Credits: iStockphoto.com/exdez (C1)
Index of Advertisers: Agile Testing Days (C2)
Thanks to the members of the Testing Experience editorial board for helping us select articles for this issue: Erik van Veenendaal, Graham Bath, Maik Nogens, and Arjan Brands.