Certified Tester
Foundation Level Syllabus

Released Version 2011

International Software Testing Qualifications Board
Copyright Notice
This document may be copied in its entirety, or extracts made, if the source is acknowledged.

Copyright Notice © International Software Testing Qualifications Board (hereinafter called ISTQB®)
ISTQB is a registered trademark of the International Software Testing Qualifications Board.

Copyright © 2011 the authors for the update 2011 (Thomas Müller (chair), Debra Friedenberg, and the ISTQB WG Foundation Level)
Copyright © 2007 the authors for the update 2007 (Thomas Müller (chair), Dorothy Graham, Debra Friedenberg and Erik van Veenendaal)
Copyright © 2005, the authors (Thomas Müller (chair), Rex Black, Sigrid Eldh, Dorothy Graham, Klaus Olsen, Maaret Pyhäjärvi, Geoff Thompson and Erik van Veenendaal).

The authors hereby transfer the copyright to the International Software Testing Qualifications Board (ISTQB). The authors (as current copyright holders) and ISTQB (as the future copyright holder) have agreed to the following conditions of use:
1) Any individual or training company may use this syllabus as the basis for a training course if the authors and the ISTQB are acknowledged as the source and copyright owners of the syllabus and provided that any advertisement of such a training course may mention the syllabus only after submission for official accreditation of the training materials to an ISTQB recognized National Board.
2) Any individual or group of individuals may use this syllabus as the basis for articles, books, or other derivative writings if the authors and the ISTQB are acknowledged as the source and copyright owners of the syllabus.
3) Any ISTQB-recognized National Board may translate this syllabus and license the syllabus (or its translation) to other parties.
Version 2011    Page 2 of 78    31-Mar-2011
© International Software Testing Qualifications Board
Revision History

Version       Date                    Remarks
ISTQB 2011    Effective 1-Apr-2011    Certified Tester Foundation Level Syllabus Maintenance Release – see Appendix E – Release Notes
ISTQB 2010    Effective 30-Mar-2010   Certified Tester Foundation Level Syllabus Maintenance Release – see Appendix E – Release Notes
ISTQB 2007    01-May-2007             Certified Tester Foundation Level Syllabus Maintenance Release
ISTQB 2005    01-July-2005            Certified Tester Foundation Level Syllabus
ASQF V2.2     July-2003               ASQF Syllabus Foundation Level Version 2.2 “Lehrplan Grundlagen des Software-testens“
ISEB V2.0     25-Feb-1999             ISEB Software Testing Foundation Syllabus V2.0, 25 February 1999
Table of Contents

Acknowledgements ......................................................................... 7
Introduction to this Syllabus ........................................................... 8
  Purpose of this Document ............................................................... 8
  The Certified Tester Foundation Level in Software Testing ............................. 8
  Learning Objectives/Cognitive Level of Knowledge ..................................... 8
  The Examination ........................................................................ 8
  Accreditation .......................................................................... 8
  Level of Detail ........................................................................ 9
  How this Syllabus is Organized ........................................................ 9
1. Fundamentals of Testing (K2) ......................................................... 10
  1.1 Why is Testing Necessary (K2) ..................................................... 11
    1.1.1 Software Systems Context (K1) ................................................. 11
    1.1.2 Causes of Software Defects (K2) ............................................... 11
    1.1.3 Role of Testing in Software Development, Maintenance and Operations (K2) ..... 11
    1.1.4 Testing and Quality (K2) ...................................................... 11
    1.1.5 How Much Testing is Enough? (K2) .............................................. 12
  1.2 What is Testing? (K2) ............................................................. 13
  1.3 Seven Testing Principles (K2) ..................................................... 14
  1.4 Fundamental Test Process (K1) ..................................................... 15
    1.4.1 Test Planning and Control (K1) ................................................ 15
    1.4.2 Test Analysis and Design (K1) ................................................. 15
    1.4.3 Test Implementation and Execution (K1) ........................................ 16
    1.4.4 Evaluating Exit Criteria and Reporting (K1) ................................... 16
    1.4.5 Test Closure Activities (K1) .................................................. 16
  1.5 The Psychology of Testing (K2) .................................................... 18
  1.6 Code of Ethics ..................................................................... 20
2. Testing Throughout the Software Life Cycle (K2) ...................................... 21
  2.1 Software Development Models (K2) .................................................. 22
    2.1.1 V-model (Sequential Development Model) (K2) ................................... 22
    2.1.2 Iterative-incremental Development Models (K2) ................................. 22
    2.1.3 Testing within a Life Cycle Model (K2) ........................................ 22
  2.2 Test Levels (K2) ................................................................... 24
    2.2.1 Component Testing (K2) ........................................................ 24
    2.2.2 Integration Testing (K2) ...................................................... 25
    2.2.3 System Testing (K2) ........................................................... 26
    2.2.4 Acceptance Testing (K2) ....................................................... 26
  2.3 Test Types (K2) .................................................................... 28
    2.3.1 Testing of Function (Functional Testing) (K2) ................................. 28
    2.3.2 Testing of Non-functional Software Characteristics (Non-functional Testing) (K2) ... 28
    2.3.3 Testing of Software Structure/Architecture (Structural Testing) (K2) ......... 29
    2.3.4 Testing Related to Changes: Re-testing and Regression Testing (K2) ........... 29
  2.4 Maintenance Testing (K2) .......................................................... 30
3. Static Techniques (K2) ............................................................... 31
  3.1 Static Techniques and the Test Process (K2) ....................................... 32
  3.2 Review Process (K2) ............................................................... 33
    3.2.1 Activities of a Formal Review (K1) ............................................ 33
    3.2.2 Roles and Responsibilities (K1) ............................................... 33
    3.2.3 Types of Reviews (K2) ......................................................... 34
    3.2.4 Success Factors for Reviews (K2) .............................................. 35
  3.3 Static Analysis by Tools (K2) ..................................................... 36
4. Test Design Techniques (K4) .......................................................... 37
  4.1 The Test Development Process (K3) ................................................. 38
  4.2 Categories of Test Design Techniques (K2) ......................................... 39
Acknowledgements

International Software Testing Qualifications Board Working Group Foundation Level (Edition 2011): Thomas Müller (chair), Debra Friedenberg. The core team thanks the review team (Dan Almog, Armin Beer, Rex Black, Julie Gardiner, Judy McKay, Tuula Pääkkönen, Eric Riou du Cosquier, Hans Schaefer, Stephanie Ulrich, Erik van Veenendaal) and all National Boards for the suggestions for the current version of the syllabus.
Introduction to this Syllabus

Purpose of this Document

This syllabus forms the basis for the International Software Testing Qualification at the Foundation Level. The International Software Testing Qualifications Board (ISTQB) provides it to the National Boards for them to accredit the training providers and to derive examination questions in their local language. Training providers will determine appropriate teaching methods and produce courseware for accreditation. The syllabus will help candidates in their preparation for the examination.

Information on the history and background of the syllabus can be found in Appendix A.

The Certified Tester Foundation Level in Software Testing

The Foundation Level qualification is aimed at anyone involved in software testing. This includes people in roles such as testers, test analysts, test engineers, test consultants, test managers, user acceptance testers and software developers. This Foundation Level qualification is also appropriate for anyone who wants a basic understanding of software testing, such as project managers, quality managers, software development managers, business analysts, IT directors and management consultants. Holders of the Foundation Certificate will be able to go on to a higher-level software testing qualification.
Learning Objectives/Cognitive Level of Knowledge

Learning objectives are indicated for each section in this syllabus and classified as follows:
o K1: remember
o K2: understand
o K3: apply
o K4: analyze

Further details and examples of learning objectives are given in Appendix B.

All terms listed under “Terms” just below chapter headings shall be remembered (K1), even if not explicitly mentioned in the learning objectives.
The Examination

The Foundation Level Certificate examination will be based on this syllabus. Answers to examination questions may require the use of material based on more than one section of this syllabus. All sections of the syllabus are examinable.

The format of the examination is multiple choice.

Exams may be taken as part of an accredited training course or taken independently (e.g., at an examination center or in a public exam). Completion of an accredited training course is not a pre-requisite for the exam.
Accreditation

An ISTQB National Board may accredit training providers whose course material follows this syllabus. Training providers should obtain accreditation guidelines from the board or body that performs the accreditation. An accredited course is recognized as conforming to this syllabus, and is allowed to have an ISTQB examination as part of the course.

Further guidance for training providers is given in Appendix D.
Level of Detail

The level of detail in this syllabus allows internationally consistent teaching and examination. In order to achieve this goal, the syllabus consists of:
o General instructional objectives describing the intention of the Foundation Level
o A list of information to teach, including a description, and references to additional sources if required
o Learning objectives for each knowledge area, describing the cognitive learning outcome and mindset to be achieved
o A list of terms that students must be able to recall and understand
o A description of the key concepts to teach, including sources such as accepted literature or standards

The syllabus content is not a description of the entire knowledge area of software testing; it reflects the level of detail to be covered in Foundation Level training courses.
How this Syllabus is Organized

There are six major chapters. The top-level heading for each chapter shows the highest level of learning objectives that is covered within the chapter and specifies the time for the chapter. For example:

2. Testing Throughout the Software Life Cycle (K2)    115 minutes

This heading shows that Chapter 2 has learning objectives of K1 (assumed when a higher level is shown) and K2 (but not K3), and it is intended to take 115 minutes to teach the material in the chapter. Within each chapter there are a number of sections. Each section also has the learning objectives and the amount of time required. Subsections that do not have a time given are included within the time for the section.
1. Fundamentals of Testing (K2)    155 minutes

Learning Objectives for Fundamentals of Testing

The objectives identify what you will be able to do following the completion of each module.

1.1 Why is Testing Necessary? (K2)
LO-1.1.1 Describe, with examples, the way in which a defect in software can cause harm to a person, to the environment or to a company (K2)
LO-1.1.2 Distinguish between the root cause of a defect and its effects (K2)
LO-1.1.3 Give reasons why testing is necessary by giving examples (K2)
LO-1.1.4 Describe why testing is part of quality assurance and give examples of how testing contributes to higher quality (K2)
LO-1.1.5 Explain and compare the terms error, defect, fault, failure, and the corresponding terms mistake and bug, using examples (K2)

1.2 What is Testing? (K2)
LO-1.2.1 Recall the common objectives of testing (K1)
LO-1.2.2 Provide examples for the objectives of testing in different phases of the software life cycle (K2)
LO-1.2.3 Differentiate testing from debugging (K2)

1.3 Seven Testing Principles (K2)
LO-1.3.1 Explain the seven principles in testing (K2)

1.4 Fundamental Test Process (K1)
LO-1.4.1 Recall the five fundamental test activities and respective tasks from planning to closure (K1)

1.5 The Psychology of Testing (K2)
LO-1.5.1 Recall the psychological factors that influence the success of testing (K1)
LO-1.5.2 Contrast the mindset of a tester and of a developer (K2)
Terms
Bug, defect, error, failure, fault, mistake, quality, risk

1.1.1 Software Systems Context (K1)

Software systems are an integral part of life, from business applications (e.g., banking) to consumer products (e.g., cars). Most people have had an experience with software that did not work as expected. Software that does not work correctly can lead to many problems, including loss of money, time or business reputation, and could even cause injury or death.
1.1.2 Causes of Software Defects (K2)

A human being can make an error (mistake), which produces a defect (fault, bug) in the program code, or in a document. If a defect in code is executed, the system may fail to do what it should do (or do something it shouldn’t), causing a failure. Defects in software, systems or documents may result in failures, but not all defects do so.

Software testing may also be required to meet contractual or legal requirements, or industry-specific standards.
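The error → defect → failure chain described above can be illustrated with a minimal Python sketch; the function and the input values are invented for illustration and are not taken from the syllabus:

```python
def average(values):
    """Intended to return the arithmetic mean of a list of numbers."""
    # Defect (fault, bug): a human error (mistake) put len(values) + 1
    # here instead of len(values).
    return sum(values) / (len(values) + 1)

# Executing the defect causes a failure: the correct mean of [2, 4, 6]
# is 4.0, but the defective code returns 3.0.
print(average([2, 4, 6]))

# Not all defects lead to failures: for inputs that sum to zero, the
# wrong divisor happens to produce the correct result anyway.
print(average([-2, 2]))
```

Here the mistake is the slip the programmer made, the defect is the wrong divisor sitting in the code, and the failure is the observable wrong output once that code is executed with an input that exposes it.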
Testing can give confidence in the quality of the software if it finds few or no defects. A properly designed test that passes reduces the overall level of risk in a system. When testing does find defects, the quality of the software system increases when those defects are fixed.

Lessons should be learned from previous projects. By understanding the root causes of defects found in other projects, processes can be improved, which in turn should prevent those defects from reoccurring and, as a consequence, improve the quality of future systems. This is an aspect of quality assurance.

Testing should be integrated as one of the quality assurance activities (i.e., alongside development standards, training and defect analysis).
Testing should provide sufficient information to stakeholders to make informed decisions about the release of the software or system being tested, for the next development step or handover to customers.
Terms
Debugging, requirement, review, test case, testing, test objective

Background

A common perception of testing is that it only consists of running tests, i.e., executing the software. This is part of testing, but not all of the testing activities.

Both dynamic testing and static testing can be used as a means for achieving similar objectives, and will provide information that can be used to improve both the system being tested and the development and testing processes.
Testing can have the following objectives:
o Finding defects
o Gaining confidence about the level of quality
o Providing information for decision-making
o Preventing defects

Debugging and testing are different. Dynamic testing can show failures that are caused by defects. Debugging is the development activity that finds, analyzes and removes the cause of the failure. Subsequent re-testing by a tester ensures that the fix does indeed resolve the failure. The responsibility for each activity is usually: testers test and developers debug.
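That division of work can be sketched as a sequence: dynamic testing exposes a failure, debugging removes its cause, and re-testing confirms the fix. A minimal sketch, with a hypothetical specification and invented function names:

```python
def discount_cents(price_cents):
    # Defect: the (hypothetical) spec says 15% off, i.e., * 85 // 100.
    return price_cents * 90 // 100

def test_discount():
    # The tester's check: 10000 cents with 15% off should be 8500.
    return discount_cents(10_000) == 8_500

print(test_discount())  # False -> dynamic testing shows a failure

# Debugging (a developer activity): analyze the failure, remove its cause.
def discount_cents(price_cents):
    return price_cents * 85 // 100

print(test_discount())  # True -> re-testing confirms the fix resolves it
```

The same check runs before and after the fix; what changes between the two runs is the code under test, not the test.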
Terms
Exhaustive testing

Principles

A number of testing principles have been suggested over the past 40 years and offer general guidelines common for all testing.

Principle 7 – Absence-of-errors fallacy
Finding and fixing defects does not help if the system built is unusable and does not fulfill the users’ needs and expectations.
1.4 Fundamental Test Process (K1)    35 minutes

Terms
Confirmation testing, re-testing, exit criteria, incident, regression testing, test basis, test condition, test coverage, test data, test execution, test log, test plan, test procedure, test policy, test suite, test summary report, testware

Background

The most visible part of testing is test execution. But to be effective and efficient, test plans should also include time to be spent on planning the tests, designing test cases, preparing for execution and evaluating results.

Test planning and control tasks are defined in Chapter 5 of this syllabus.

The test analysis and design activity has the following major tasks:
o Reviewing the test basis (such as requirements, software integrity level¹ (risk level), risk analysis reports, architecture, design, interface specifications)
o Evaluating testability of the test basis and test objects
o Identifying and prioritizing test conditions based on analysis of test items, the specification, behavior and structure of the software
o Designing and prioritizing high level test cases
o Identifying necessary test data to support the test conditions and test cases
o Designing the test environment setup and identifying any required infrastructure and tools
o Creating bi-directional traceability between test basis and test cases

¹ The degree to which software complies or must comply with a set of stakeholder-selected software and/or software-based system characteristics (e.g., software complexity, risk assessment, safety level, security level, desired performance, reliability, or cost) which are defined to reflect the importance of the software to its stakeholders.
Test implementation and execution has the following major tasks:
o Finalizing, implementing and prioritizing test cases (including the identification of test data)
o Developing and prioritizing test procedures, creating test data and, optionally, preparing test harnesses and writing automated test scripts
o Creating test suites from the test procedures for efficient test execution
o Verifying that the test environment has been set up correctly
o Verifying and updating bi-directional traceability between the test basis and test cases
o Executing test procedures either manually or by using test execution tools, according to the planned sequence
o Logging the outcome of test execution and recording the identities and versions of the software under test, test tools and testware
o Comparing actual results with expected results
o Reporting discrepancies as incidents and analyzing them in order to establish their cause (e.g., a defect in the code, in specified test data, in the test document, or a mistake in the way the test was executed)
o Repeating test activities as a result of action taken for each discrepancy, for example, re-execution of a test that previously failed in order to confirm a fix (confirmation testing), execution of a corrected test and/or execution of tests in order to ensure that defects have not been introduced in unchanged areas of the software or that defect fixing did not uncover other defects (regression testing)
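Several of the tasks above (executing in the planned sequence, logging outcomes together with the identity and version of the software under test, comparing actual with expected results, and reporting discrepancies as incidents) can be sketched in a few lines. All names and data here are invented for illustration:

```python
SOFTWARE_UNDER_TEST = ("inventory-service", "1.4.2")  # hypothetical identity/version

def run_suite(test_suite):
    """Execute test procedures in the planned sequence, log each outcome,
    and report every actual-vs-expected discrepancy as an incident."""
    log, incidents = [], []
    for name, procedure, expected in test_suite:
        actual = procedure()
        passed = (actual == expected)
        # Log the outcome with the identity/version of the software under test.
        log.append((SOFTWARE_UNDER_TEST, name, actual, expected, passed))
        if not passed:
            incidents.append(f"{name}: expected {expected!r}, got {actual!r}")
    return log, incidents

suite = [
    ("adds item",    lambda: 2 + 3, 5),
    ("applies rate", lambda: 10 * 3, 35),  # deliberate discrepancy
]
log, incidents = run_suite(suite)
print(incidents)  # ['applies rate: expected 35, got 30']
```

Each reported incident would then be analyzed to establish its cause: the discrepancy may equally point to a defect in the code, a wrong expected result in the test data, or a mistake in how the test was executed.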
1.4.4 Evaluating Exit Criteria and Reporting (K1)

Evaluating exit criteria is the activity where test execution is assessed against the defined objectives. This should be done for each test level (see Section 2.2).
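As a minimal sketch of assessing execution against defined objectives, the check below compares results for one test level with two example criteria; the pass-ratio and open-defect thresholds are invented for illustration, not values from the syllabus:

```python
def exit_criteria_met(results, min_pass_ratio=0.95, max_open_defects=0):
    """Assess one test level's execution results against its exit criteria."""
    executed = len(results["tests"])
    passed = sum(1 for t in results["tests"] if t["passed"])
    pass_ratio = passed / executed if executed else 0.0
    return pass_ratio >= min_pass_ratio and results["open_defects"] <= max_open_defects

level_results = {
    "tests": [{"passed": True}] * 19 + [{"passed": False}],
    "open_defects": 1,
}
# 19/20 = 95% meets the pass-ratio criterion, but one open defect does not.
print(exit_criteria_met(level_results))  # False
```

In practice the criteria themselves come from the test plan, and the outcome of this assessment feeds the test summary report for stakeholders.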
Terms
Error guessing, independence

Background

The mindset to be used while testing and reviewing is different from that used while developing software. With the right mindset developers are able to test their own code, but separation of this responsibility to a tester is typically done to help focus effort and provide additional benefits, such as an independent view by trained and professional testing resources. Independent testing may be carried out at any level of testing.

A certain degree of independence (avoiding the author bias) often makes the tester more effective at finding defects and failures. Independence is not, however, a replacement for familiarity, and developers can efficiently find many defects in their own code. Several levels of independence can be defined as shown here from low to high:
o Tests designed by the person(s) who wrote the software under test (low level of independence)
o Tests designed by another person(s) (e.g., from the development team)
o Tests designed by a person(s) from a different organizational group (e.g., an independent test team) or test specialists (e.g., usability or performance test specialists)
o Tests designed by a person(s) from a different organization or company (i.e., outsourcing or certification by an external body)

People and projects are driven by objectives. People tend to align their plans with the objectives set by management and other stakeholders, for example, to find defects or to confirm that software meets its objectives. Therefore, it is important to clearly state the objectives of testing.

Identifying failures during testing may be perceived as criticism against the product and against the author. As a result, testing is often seen as a destructive activity, even though it is very constructive in the management of product risks. Looking for failures in a system requires curiosity, professional pessimism, a critical eye, attention to detail, good communication with development peers, and experience on which to base error guessing.

If errors, defects or failures are communicated in a constructive way, bad feelings between the testers and the analysts, designers and developers can be avoided. This applies to defects found during reviews as well as in testing.
1.6 Code of Ethics    10 minutes

CLIENT AND EMPLOYER - Certified software testers shall act in a manner that is in the best interests of their client and employer, consistent with the public interest
PRODUCT - Certified software testers shall ensure that the deliverables they provide (on the products and systems they test) meet the highest professional standards possible
PROFESSION - Certified software testers shall advance the integrity and reputation of the profession consistent with the public interest
COLLEAGUES - Certified software testers shall be fair to and supportive of their colleagues, and promote cooperation with software developers
SELF - Certified software testers shall participate in lifelong learning regarding the practice of their profession and shall promote an ethical approach to the practice of the profession
References
1.1.5 Black, 2001, Kaner, 2002
1.2 Beizer, 1990, Black, 2001, Myers, 1979
1.3 Beizer, 1990, Hetzel, 1988, Myers, 1979
1.4 Hetzel, 1988
1.4.5 Black, 2001, Craig, 2002
1.5 Black, 2001, Hetzel, 1988
2. Testing Throughout the Software Life Cycle (K2)    115 minutes

Learning Objectives for Testing Throughout the Software Life Cycle

The objectives identify what you will be able to do following the completion of each module.
2.2 Tes
st Levels (K2)
(
LO-2.2.1
1 Compare e the differen
nt levels of teesting: majorr objectives, typical objeccts of testing,,
typical ta
argets of testting (e.g., fun
nctional or sttructural) and
d related worrk products, people
who testt, types of deefects and failures to be id dentified (K2
2)
2.3 Test Types (K2)
LO-2.3.1 Compare four software test types (functional, non-functional, structural and change-related) by example (K2)
LO-2.3.2 Recognize that functional and structural tests occur at any test level (K1)
LO-2.3.3 Identify and describe non-functional test types based on non-functional requirements (K2)
LO-2.3.4 Identify and describe test types based on the analysis of a software system’s structure or architecture (K2)
LO-2.3.5 Describe the purpose of confirmation testing and regression testing (K2)
2.4 Maintenance Testing (K2)
LO-2.4.1 Compare maintenance testing (testing an existing system) to testing a new application with respect to test types, triggers for testing and amount of testing (K2)
LO-2.4.2 Recognize indicators for maintenance testing (modification, migration and retirement) (K1)
LO-2.4.3 Describe the role of regression testing and impact analysis in maintenance (K2)
2.1 Software Development Models (K2)    20 minutes

Terms
Commercial Off-The-Shelf (COTS), iterative-incremental development model, validation, verification, V-model

Background
Testing does not exist in isolation; test activities are related to software development activities. Different development life cycle models need different approaches to testing.

In practice, a V-model may have more, fewer or different levels of development and testing, depending on the project and the software product. For example, there may be component integration testing after component testing, and system integration testing after system testing.
2.1.2 Iterative-incremental Development Models (K2)
Iterative-incremental development is the process of establishing requirements, designing, building and testing a system in a series of short development cycles. Examples are: prototyping, Rapid Application Development (RAD), Rational Unified Process (RUP) and agile development models. A system that is produced using these models may be tested at several test levels during each iteration. An increment, added to others developed previously, forms a growing partial system, which should also be tested. Regression testing is increasingly important on all iterations after the first one. Verification and validation can be carried out on each increment.

Test levels can be combined or reorganized depending on the nature of the project or the system architecture. For example, for the integration of a Commercial Off-The-Shelf (COTS) software product into a system, the purchaser may perform integration testing at the system level (e.g., integration to the infrastructure and other systems, or system deployment) and acceptance testing (functional and/or non-functional, and user and/or operational testing).
2.2 Test Levels (K2)    40 minutes

Terms
Alpha testing, beta testing, component testing, driver, field testing, functional requirement, integration, integration testing, non-functional requirement, robustness testing, stub, system testing, test environment, test level, test-driven development, user acceptance testing

Background
For each of the test levels, the following can be identified: the generic objectives, the work product(s) being referenced for deriving test cases (i.e., the test basis), the test object (i.e., what is being tested), typical defects and failures to be found, test harness requirements and tool support, and specific approaches and responsibilities.
2.2.1 Component Testing (K2)
Typical test objects:
o Components
o Programs
o Data conversion / migration programs
o Database modules
One approach to component testing is to prepare and automate test cases before coding. This is called a test-first approach or test-driven development. This approach is highly iterative and is based on cycles of developing test cases, then building and integrating small pieces of code, and executing the component tests, correcting any issues and iterating until they pass.
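The test-first cycle described above can be sketched in a few lines. This is a minimal illustration, not part of the syllabus; the leap-year function and its tests are hypothetical examples.

```python
# Step 1 of the test-first cycle: write the component tests before the
# code exists (the function under test is a hypothetical example).

def test_is_leap_year():
    assert is_leap_year(2000) is True    # divisible by 400
    assert is_leap_year(1900) is False   # divisible by 100 but not 400
    assert is_leap_year(2012) is True    # divisible by 4
    assert is_leap_year(2011) is False   # not divisible by 4

# Step 2: build the smallest piece of code that could make the tests pass.
def is_leap_year(year: int) -> bool:
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Step 3: execute the component tests, correcting and iterating until they pass.
test_is_leap_year()
```

In practice each cycle adds one small test, then just enough code to make it pass, before moving on.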
2.2.2 Integration Testing (K2)
Test basis:
o Software and system design
o Architecture
o Workflows
o Use cases
Typical test objects:
o Subsystems
o Database implementation
o Infrastructure
o Interfaces
o System configuration and configuration data

Testing of specific non-functional characteristics (e.g., performance) may be included in integration testing as well as functional testing.
At each stage of integration, testers concentrate solely on the integration itself. For example, if they are integrating module A with module B they are interested in testing the communication between the modules, not the functionality of the individual module as that was done during component testing. Both functional and structural approaches may be used.

Ideally, testers should understand the architecture and influence integration planning. If integration tests are planned before components or systems are built, those components can be built in the order required for most efficient testing.
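A small sketch of the point above: an integration test that exercises only the communication between two modules, with the second module replaced by a stub. The module names and interface are hypothetical, chosen purely for illustration.

```python
# Integration-test sketch: module B (a payment gateway) is replaced by a
# stub so the test checks only the A-to-B communication, not B's logic.

class PaymentGatewayStub:
    """Stub standing in for the real payment module (module B)."""
    def __init__(self):
        self.received = []              # record what module A sends across

    def charge(self, amount_cents):
        self.received.append(amount_cents)
        return {"status": "ok"}         # canned response, no real processing

class OrderService:
    """Module A under integration: talks to a payment gateway."""
    def __init__(self, gateway):
        self.gateway = gateway

    def place_order(self, amount_cents):
        result = self.gateway.charge(amount_cents)
        return result["status"] == "ok"

# The test asserts on the interface traffic, not on module B's functionality:
stub = PaymentGatewayStub()
service = OrderService(stub)
assert service.place_order(1999) is True
assert stub.received == [1999]   # A passed the amount across the interface
```

The functionality of the real payment module would already have been verified during component testing.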
2.2.3 System Testing (K2)
Typical test objects:
o System, user and operation manuals
o System configuration and configuration data

System testing is concerned with the behavior of a whole system/product. The testing scope shall be clearly addressed in the Master and/or Level Test Plan for that test level.

In system testing, the test environment should correspond to the final target or production environment as much as possible in order to minimize the risk of environment-specific failures not being found in testing.
System testing may include tests based on risks and/or on requirements specifications, business processes, use cases, or other high level text descriptions or models of system behavior, interactions with the operating system, and system resources.

System testing should investigate functional and non-functional requirements of the system, and data quality characteristics. Testers also need to deal with incomplete or undocumented requirements. System testing of functional requirements starts by using the most appropriate specification-based (black-box) techniques for the aspect of the system to be tested. For example, a decision table may be created for combinations of effects described in business rules. Structure-based techniques (white-box) may then be used to assess the thoroughness of the testing with respect to a structural element, such as menu structure or web page navigation (see Chapter 4).

An independent test team often carries out system testing.
2.2.4 Acceptance Testing (K2)
Test basis:
o User requirements
o System requirements
o Use cases
o Business processes
o Risk analysis reports
Typical test objects:
o Business processes on fully integrated system
o Operational and maintenance processes
o User procedures
o Forms
o Reports
o Configuration data
Acceptance testing may assess the system’s readiness for deployment and use, although it is not necessarily the final level of testing. For example, a large-scale system integration test may come after the acceptance test for a system.

Typical forms of acceptance testing include the following:
2.3 Test Types (K2)
Terms
Black-box testing, code coverage, functional testing, interoperability testing, load testing, maintainability testing, performance testing, portability testing, reliability testing, security testing, stress testing, structural testing, usability testing, white-box testing
Background
A group of test activities can be aimed at verifying the software system (or a part of a system) based on a specific reason or target for testing.

A test type is focused on a particular test objective, which could be any of the following:
o A function to be performed by the software
o A non-functional quality characteristic, such as reliability or usability
o The structure or architecture of the software or system
o Change related, i.e., confirming that defects have been fixed (confirmation testing) and looking for unintended changes (regression testing)
2.3.1 Testing of Function (Functional Testing) (K2)
The functions that a system, subsystem or component are to perform may be described in work products such as a requirements specification, use cases, or a functional specification, or they may be undocumented. The functions are “what” the system does.

Specification-based techniques may be used to derive test conditions and test cases from the functionality of the software or system (see Chapter 4). Functional testing considers the external behavior of the software (black-box testing).
2.3.2 Testing of Non-functional Software Characteristics (Non-functional Testing) (K2)
Non-functional testing includes, but is not limited to, performance testing, load testing, stress testing, usability testing, maintainability testing, reliability testing and portability testing. It is the testing of “how” the system works.
These tests can be referenced to a quality model such as the one defined in ISO 9126. Non-functional testing considers the external behavior of the software and in most cases uses black-box test design techniques to accomplish that.
2.3.3 Testing of Software Structure/Architecture (Structural Testing) (K2)
Structural (white-box) testing may be performed at all test levels. Structural techniques are best used after specification-based techniques, in order to help measure the thoroughness of testing through assessment of coverage of a type of structure.
2.3.4 Testing Related to Changes: Confirmation Testing (Re-testing) and Regression Testing (K2)
After a defect is detected and fixed, the software should be re-tested to confirm that the original defect has been successfully removed. This is called confirmation. Debugging (locating and fixing a defect) is a development activity, not a testing activity.
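The distinction between confirmation and regression can be sketched concretely. The discount function below is a hypothetical example, not from the syllabus: after a fix, the test that originally exposed the defect is re-run (confirmation), and the pre-existing tests are re-run to look for unintended changes (regression).

```python
# Hypothetical fixed function: an earlier version subtracted `percent`
# directly instead of the corresponding share of the price.
def discount(price: float, percent: float) -> float:
    return round(price - price * percent / 100, 2)

# Confirmation test: re-run the case that originally exposed the defect.
assert discount(200.0, 10.0) == 180.0

# Regression tests: confirm that unchanged behavior still holds.
assert discount(100.0, 0.0) == 100.0
assert discount(100.0, 100.0) == 0.0
```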
2.4 Maintenance Testing (K2)

Terms
Impact analysis, maintenance testing
Background
Once deployed, a software system is often in service for years or decades. During this time the system, its configuration data, or its environment are often corrected, changed or extended. The planning of releases in advance is crucial for successful maintenance testing. A distinction has to be made between planned releases and hot fixes. Maintenance testing is done on an existing operational system, and is triggered by modifications, migration, or retirement of the software or system.

Modifications include planned enhancement changes (e.g., release-based), corrective and emergency changes, and changes of environment, such as planned operating system or database upgrades, planned upgrade of Commercial-Off-The-Shelf software, or patches to correct newly exposed or discovered vulnerabilities of the operating system.

In addition to testing what has been changed, maintenance testing includes regression testing to parts of the system that have not been changed. The scope of maintenance testing is related to the risk of the change, the size of the existing system and to the size of the change. Depending on the changes, maintenance testing may be done at any or all test levels and for any or all test types.

Determining how the existing system may be affected by changes is called impact analysis, and is used to help decide how much regression testing to do. The impact analysis may be used to determine the regression test suite.
References
2.1.3 CMMI, Craig, 2002, Hetzel, 1988, IEEE 12207
2.2 Hetzel, 1988
2.2.4 Copeland, 2004, Myers, 1979
2.3.1 Beizer, 1990, Black, 2001, Copeland, 2004
2.3.2 Black, 2001, ISO 9126
2.3.3 Beizer, 1990, Copeland, 2004, Hetzel, 1988
2.3.4 Hetzel, 1988, IEEE STD 829-1998
2.4 Black, 2001, Craig, 2002, Hetzel, 1988, IEEE STD 829-1998
3. Static Techniques (K2)    60 minutes

Learning Objectives for Static Techniques
The objectives identify what you will be able to do following the completion of each module.
3.1 Static Techniques and the Test Process (K2)
LO-3.1.1 Recognize software work products that can be examined by the different static techniques (K1)
LO-3.1.2 Describe the importance and value of considering static techniques for the assessment of software work products (K2)
LO-3.1.3 Explain the difference between static and dynamic techniques, considering objectives, types of defects to be identified, and the role of these techniques within the software life cycle (K2)
3.2 Review Process (K2)
LO-3.2.1 Recall the activities, roles and responsibilities of a typical formal review (K1)
LO-3.2.2 Explain the differences between different types of reviews: informal review, technical review, walkthrough and inspection (K2)
LO-3.2.3 Explain the factors for successful performance of reviews (K2)
3.3 Static Analysis by Tools (K2)
LO-3.3.1 Recall typical defects and errors identified by static analysis and compare them to reviews and dynamic testing (K1)
LO-3.3.2 Describe, using examples, the typical benefits of static analysis (K2)
LO-3.3.3 List typical code and design defects that may be identified by static analysis tools (K1)
3.1 Static Techniques and the Test Process (K2)
Background
Unlike dynamic testing, which requires the execution of software, static testing techniques rely on the manual examination (reviews) and automated analysis (static analysis) of the code or other project documentation without the execution of the code.

Benefits of reviews include early defect detection and correction, development productivity improvements, reduced development timescales, reduced testing cost and time, lifetime cost reductions, fewer defects and improved communication. Reviews can find omissions, for example, in requirements, which are unlikely to be found in dynamic testing.

Typical defects that are easier to find in reviews than in dynamic testing include: deviations from standards, requirement defects, design defects, insufficient maintainability and incorrect interface specifications.
3.2 Review Process (K2)    25 minutes

Terms
Entry criteria, formal review, informal review, inspection, metric, moderator, peer review, reviewer, scribe, technical review, walkthrough

Background
The different types of reviews vary from informal, characterized by no written instructions for reviewers, to systematic, characterized by team participation, documented results of the review, and documented procedures for conducting the review. The formality of a review process is related to factors such as the maturity of the development process, any legal or regulatory requirements or the need for an audit trail.
3.2.1 Activities of a Formal Review (K1)
A typical formal review has the following main activities:
1. Planning
• Defining the review criteria
• Selecting the personnel
• Allocating roles
• Defining the entry and exit criteria for more formal review types (e.g., inspections)
• Selecting which parts of documents to review
• Checking entry criteria (for more formal review types)
2. Kick-off
• Distributing documents
• Explaining the objectives, process and documents to the participants
3. Individual preparation
• Preparing for the review meeting by reviewing the document(s)
• Noting potential defects, questions and comments
4. Examination/evaluation/recording of results (review meeting)
• Discussing or logging, with documented results or minutes (for more formal review types)
• Noting defects, making recommendations regarding handling the defects, making decisions about the defects
• Examining/evaluating and recording issues during any physical meetings or tracking any group electronic communications
5. Rework
• Fixing defects found (typically done by the author)
• Recording updated status of defects (in formal reviews)
6. Follow-up
• Checking that defects have been addressed
• Gathering metrics
• Checking on exit criteria (for more formal review types)
3.2.2 Roles and Responsibilities (K1)
A typical formal review will include the roles below:
o Manager: decides on the execution of reviews, allocates time in project schedules and determines if the review objectives have been met.
3.2.3 Types of Reviews (K2)
Informal Review
o No formal process
o May take the form of pair programming or a technical lead reviewing designs and code
o Results may be documented
o Varies in usefulness depending on the reviewers
o Main purpose: inexpensive way to get some benefit
Walkthrough
o Meeting led by author
o May take the form of scenarios, dry runs, peer group participation
o Open-ended sessions
• Optional pre-meeting preparation of reviewers
• Optional preparation of a review report including list of findings
o Optional scribe (who is not the author)
o May vary in practice from quite informal to very formal
o Main purposes: learning, gaining understanding, finding defects
Technical Review
o Documented, defined defect-detection process that includes peers and technical experts with optional management participation
o May be performed as a peer review without management participation
o Ideally led by trained moderator (not the author)
o Pre-meeting preparation by reviewers
o Optional use of checklists
o Preparation of a review report which includes the list of findings, the verdict whether the software product meets its requirements and, where appropriate, recommendations related to findings
o May vary in practice from quite informal to very formal
o Main purposes: discussing, making decisions, evaluating alternatives, finding defects, solving technical problems and checking conformance to specifications, plans, regulations, and standards
Inspection
o Led by trained moderator (not the author)
o Usually conducted as a peer examination
o Defined roles
o Includes metrics gathering
o Formal process based on rules and checklists
o Specified entry and exit criteria for acceptance of the software product
o Pre-meeting preparation
o Inspection report including list of findings
o Formal follow-up process (with optional process improvement components)
o Optional reader
o Main purpose: finding defects
3.2.4 Success Factors for Reviews (K2)
Success factors for reviews include:
o Each review has clear predefined objectives
o The right people for the review objectives are involved
o Testers are valued reviewers who contribute to the review and also learn about the product which enables them to prepare tests earlier
o Defects found are welcomed and expressed objectively
o People issues and psychological aspects are dealt with (e.g., making it a positive experience for the author)
o The review is conducted in an atmosphere of trust; the outcome will not be used for the evaluation of the participants
o Review techniques are applied that are suitable to achieve the objectives and to the type and level of software work products and reviewers
o Checklists or roles are used if appropriate to increase effectiveness of defect identification
o Training is given in review techniques, especially the more formal techniques such as inspection
o Management supports a good review process (e.g., by incorporating adequate time for review activities in project schedules)
o There is an emphasis on learning and process improvement
3.3 Static Analysis by Tools (K2)
Terms
Compiler, complexity, control flow, data flow, static analysis
Background
The objective of static analysis is to find defects in software source code and software models. Static analysis is performed without actually executing the software being examined by the tool; dynamic testing does execute the software code. Static analysis can locate defects that are hard to find in dynamic testing. As with reviews, static analysis finds defects rather than failures. Static analysis tools analyze program code (e.g., control flow and data flow), as well as generated output such as HTML and XML.
The value of static analysis is:
o Early detection of defects prior to test execution
o Early warning about suspicious aspects of the code or design by the calculation of metrics, such as a high complexity measure
o Identification of defects not easily found by dynamic testing
o Detecting dependencies and inconsistencies in software models such as links
o Improved maintainability of code and design
o Prevention of defects, if lessons are learned in development
Typical defects discovered by static analysis tools include:
o Referencing a variable with an undefined value
o Inconsistent interfaces between modules and components
o Variables that are not used or are improperly declared
o Unreachable (dead) code
o Missing and erroneous logic (potentially infinite loops)
o Overly complicated constructs
o Programming standards violations
o Security vulnerabilities
o Syntax violations of code and software models
Static analysis tools are typically used by developers (checking against predefined rules or programming standards) before and during component and integration testing or when checking-in code to configuration management tools, and by designers during software modeling. Static analysis tools may produce a large number of warning messages, which need to be well-managed to allow the most effective use of the tool.
References
3.2 IEEE 1028
3.2.2 Gilb, 1993, van Veenendaal, 2004
3.2.4 Gilb, 1993, IEEE 1028
3.3 van Veenendaal, 2004
4. Test Design Techniques (K4)    285 minutes

Learning Objectives for Test Design Techniques
The objectives identify what you will be able to do following the completion of each module.
4.1 The Test Development Process (K3)
LO-4.1.1 Differentiate between a test design specification, test case specification and test procedure specification (K2)
LO-4.1.2 Compare the terms test condition, test case and test procedure (K2)
LO-4.1.3 Evaluate the quality of test cases in terms of clear traceability to the requirements and expected results (K2)
LO-4.1.4 Translate test cases into a well-structured test procedure specification at a level of detail relevant to the knowledge of the testers (K3)
4.2 Categories of Test Design Techniques (K2)
LO-4.2.1 Recall reasons that both specification-based (black-box) and structure-based (white-box) test design techniques are useful and list the common techniques for each (K1)
LO-4.2.2 Explain the characteristics, commonalities, and differences between specification-based testing, structure-based testing and experience-based testing (K2)
4.3 Specification-based or Black-box Techniques (K3)
LO-4.3.1 Write test cases from given software models using equivalence partitioning, boundary value analysis, decision tables and state transition diagrams/tables (K3)
LO-4.3.2 Explain the main purpose of each of the four testing techniques, what level and type of testing could use the technique, and how coverage may be measured (K2)
LO-4.3.3 Explain the concept of use case testing and its benefits (K2)
4.4 Structure-based or White-box Techniques (K4)
LO-4.4.1 Describe the concept and value of code coverage (K2)
LO-4.4.2 Explain the concepts of statement and decision coverage, and give reasons why these concepts can also be used at test levels other than component testing (e.g., on business procedures at system level) (K2)
LO-4.4.3 Write test cases from given control flows using statement and decision test design techniques (K3)
LO-4.4.4 Assess statement and decision coverage for completeness with respect to defined exit criteria (K4)
4.5 Experience-based Techniques (K2)
LO-4.5.1 Recall reasons for writing test cases based on intuition, experience and knowledge about common defects (K1)
LO-4.5.2 Compare experience-based techniques with specification-based testing techniques (K2)
4.6 Choosing Test Techniques (K2)
LO-4.6.1 Classify test design techniques according to their fitness to a given context, for the test basis, respective models and software characteristics (K2)
4.1 The Test Development Process (K3)    15 minutes

Terms
Test case specification, test design, test execution schedule, test procedure specification, test script, traceability

Background
The test development process described in this section can be done in different ways, from very informal with little or no documentation, to very formal (as it is described below). The level of formality depends on the context of the testing, including the maturity of testing and development processes, time constraints, safety or regulatory requirements, and the people involved.
During test analysis, the test basis documentation is analyzed in order to determine what to test, i.e., to identify the test conditions. A test condition is defined as an item or event that could be verified by one or more test cases (e.g., a function, transaction, quality characteristic or structural element).
The various test procedures and automated test scripts are subsequently formed into a test execution schedule that defines the order in which the various test procedures, and possibly automated test scripts, are executed. The test execution schedule will take into account such factors as regression tests, prioritization, and technical and logical dependencies.
4.2 Categories of Test Design Techniques (K2)    15 minutes

Terms
Black-box test design technique, experience-based test design technique, test design technique, white-box test design technique
Background
The purpose of a test design technique is to identify test conditions, test cases, and test data.

This syllabus refers to specification-based test design techniques as black-box techniques and structure-based test design techniques as white-box techniques. In addition, experience-based test design techniques are covered.
4.3 Specification-based or Black-box Techniques (K3)    150 minutes

Terms
Boundary value analysis, decision table testing, equivalence partitioning, state transition testing, use case testing
4.3.1 Equivalence Partitioning (K3)
In equivalence partitioning, inputs to the software or system are divided into groups that are expected to exhibit similar behavior, so they are likely to be processed in the same way. Equivalence partitions (or classes) can be found for both valid data, i.e., values that should be accepted, and invalid data, i.e., values that should be rejected. Partitions can also be identified for outputs, internal values, time-related values (e.g., before or after an event) and for interface parameters (e.g., integrated components being tested during integration testing). Tests can be designed to cover all valid and invalid partitions. Equivalence partitioning is applicable at all levels of testing.
Equivalence partitioning can be used to achieve input and output coverage goals. It can be applied to human input, input via interfaces to a system, or interface parameters in integration testing.
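As an illustration only (not part of the syllabus text), the partitioning idea can be sketched in Python. The input field accepting integers from 1 to 100 is a hypothetical example chosen for this sketch:

```python
# Hypothetical input field accepting integers 1..100 (an assumption
# made for illustration). Three equivalence partitions result:
# values below the range, values in the range, values above it.

def classify(value: int) -> str:
    """Return the equivalence partition a value falls into."""
    if value < 1:
        return "invalid: below range"
    if value > 100:
        return "invalid: above range"
    return "valid"

# One representative test value per partition is usually enough.
representatives = {
    -5: "invalid: below range",
    50: "valid",
    200: "invalid: above range",
}
for value, expected in representatives.items():
    assert classify(value) == expected
```

Choosing one representative per partition keeps the test set small while still covering every valid and invalid partition.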
4.3.2 Boundary Value Analysis (K3)
Boundary value analysis can be applied at all test levels. It is relatively easy to apply and its defect-finding capability is high. Detailed specifications are helpful in determining the interesting boundaries.
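A minimal sketch of the boundary selection, again assuming a hypothetical 1 to 100 integer range (the range and the two-value style of analysis are assumptions for illustration):

```python
# Two-value boundary analysis: for each boundary, test the boundary
# itself and the nearest value just outside the valid range.

def boundary_values(low: int, high: int) -> list[int]:
    """Return the interesting boundary test values for an inclusive range."""
    return [low - 1, low, high, high + 1]

# For a field accepting 1..100, the boundary tests are 0, 1, 100, 101.
assert boundary_values(1, 100) == [0, 1, 100, 101]
```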
4.3.3 Decision Table Testing (K3)
Decision tables are a good way to capture system requirements that contain logical conditions, and to document internal system design. They may be used to record complex business rules that a system is to implement. When creating decision tables, the specification is analyzed, and conditions and actions of the system are identified. The input conditions and actions are most often stated in such a way that they must be true or false (Boolean). The decision table contains the triggering conditions, often combinations of true and false for all input conditions, and the resulting actions for each combination of conditions. Each column of the table corresponds to a business rule that defines a unique combination of conditions and which results in the execution of the actions associated with that rule. The coverage standard commonly used with decision table testing is to have at least one test per column in the table, which typically involves covering all combinations of triggering conditions.
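The one-test-per-column coverage standard can be sketched as follows. The free-shipping rule is a hypothetical business rule invented for this example:

```python
# Hypothetical rule (an assumption for illustration): an order gets
# free shipping when the customer is a member AND the total is >= 50.

def free_shipping(is_member: bool, total: float) -> bool:
    return is_member and total >= 50

# Decision table: one tuple per column, i.e., per unique combination
# of the Boolean input conditions, with the expected resulting action.
decision_table = [
    # (is_member, total >= 50, expected free_shipping)
    (True,  True,  True),
    (True,  False, False),
    (False, True,  False),
    (False, False, False),
]

# One test per column covers all combinations of triggering conditions.
for is_member, big_order, expected in decision_table:
    total = 60 if big_order else 10
    assert free_shipping(is_member, total) == expected
```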
4.3.4 State Transition Testing (K3)
Tests can be designed to cover a typical sequence of states, to cover every state, to exercise every transition, to exercise specific sequences of transitions or to test invalid transitions.
State transition testing is much used within the embedded software industry and technical automation in general. However, the technique is also suitable for modeling a business object having specific states or testing screen-dialogue flows (e.g., for Internet applications or business scenarios).
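A transition table makes both kinds of test concrete: valid sequences and invalid transitions. The ATM-style card states below are an illustrative assumption, not taken from the syllabus:

```python
# Hypothetical state model: (current state, event) -> next state.
TRANSITIONS = {
    ("idle", "insert_card"): "waiting_for_pin",
    ("waiting_for_pin", "valid_pin"): "menu",
    ("waiting_for_pin", "invalid_pin"): "waiting_for_pin",
    ("menu", "eject_card"): "idle",
}

def next_state(state: str, event: str) -> str:
    """Return the next state, or raise for an invalid transition."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"invalid transition: {event!r} in state {state!r}")

# A test exercising every transition in the model:
state = "idle"
for event in ["insert_card", "invalid_pin", "valid_pin", "eject_card"]:
    state = next_state(state, event)
assert state == "idle"

# A test of an invalid transition (expected to be rejected):
try:
    next_state("idle", "valid_pin")
    raise AssertionError("expected an invalid-transition error")
except ValueError:
    pass
```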
4.4 Structure-based or White-box Techniques (K4)  60 minutes
Terms
Code coverage, decision coverage, statement coverage, structure-based testing
Background
Structure-based or white-box testing is based on an identified structure of the software or the system, as seen in the following examples:
o Component level: the structure of a software component, i.e., statements, decisions, branches or even distinct paths
o Integration level: the structure may be a call tree (a diagram in which modules call other modules)
o System level: the structure may be a menu structure, business process or web page structure
In this section, three code-related structural test design techniques for code coverage, based on statements, branches and decisions, are discussed. For decision testing, a control flow diagram may be used to visualize the alternatives for each decision.
4.4.1 Statement Testing and Coverage (K4)
In component testing, statement coverage is the assessment of the percentage of executable statements that have been exercised by a test case suite. The statement testing technique derives test cases to execute specific statements, normally to increase statement coverage.
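The percentage described above can be sketched directly. The statement line numbers below are hypothetical, standing in for what a coverage tool would record:

```python
# Statement coverage = executed executable statements / all executable
# statements, expressed as a percentage.

def statement_coverage(executed: set[int], executable: set[int]) -> float:
    """Percentage of executable statements exercised by a test suite."""
    return 100.0 * len(executed & executable) / len(executable)

executable = {1, 2, 3, 4, 5}   # all executable statement lines (assumed)
executed = {1, 2, 3, 5}        # lines hit by the current test suite
assert statement_coverage(executed, executable) == 80.0
```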
4.4.2 Decision Testing and Coverage (K4)
Decision coverage, related to branch testing, is the assessment of the percentage of decision outcomes (e.g., the True and False options of an IF statement) that have been exercised by a test case suite. The decision testing technique derives test cases to execute specific decision outcomes. Branches originate from decision points in the code and show the transfer of control to different locations in the code.
Decision coverage is determined by the number of all decision outcomes covered by (designed or executed) test cases divided by the number of all possible decision outcomes in the code under test.
Decision testing is a form of control flow testing as it follows a specific flow of control through the decision points. Decision coverage is stronger than statement coverage; 100% decision coverage guarantees 100% statement coverage, but not vice versa.
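A small sketch (with an invented function) of why the guarantee does not hold in reverse: an `if` without an `else` has two decision outcomes, but its False outcome executes no extra statement:

```python
# Hypothetical function used only to illustrate coverage strength.

def clamp_negative(x: int) -> int:
    if x < 0:        # one decision, two outcomes: True and False
        x = 0        # the only statement guarded by the decision
    return x

# A single test with x = -1 executes every statement (100% statement
# coverage) yet covers only the True outcome: 1 of 2 outcomes = 50%
# decision coverage. A second test, e.g. x = 3, covers the False
# outcome and raises decision coverage to 100%.
assert clamp_negative(-1) == 0
assert clamp_negative(3) == 3
```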
Tool support is useful for the structural testing of code.
4.5 Experience-based Techniques (K2)  30 minutes

Terms
Exploratory testing, (fault) attack
Background
Experience-based testing is where tests are derived from the tester's skill and intuition and their experience with similar applications and technologies. When used to augment systematic techniques, these techniques can be useful in identifying special tests not easily captured by formal techniques, especially when applied after more formal approaches. However, this technique may yield widely varying degrees of effectiveness, depending on the testers' experience.
Exploratory testing is concurrent test design, test execution, test logging and learning, based on a test charter containing test objectives, and carried out within time-boxes. It is an approach that is most useful where there are few or inadequate specifications and severe time pressure, or in order to augment or complement other, more formal testing. It can serve as a check on the test process, to help ensure that the most serious defects are found.
4.6 Choosing Test Techniques (K2)

Terms
No specific terms.

Background
The choice of which test techniques to use depends on a number of factors, including the type of system, regulatory standards, customer or contractual requirements, level of risk, type of risk, test objective, documentation available, knowledge of the testers, time and budget, development life cycle, use case models and previous experience with the types of defects found.
References
4.1 Craig, 2002; Hetzel, 1988; IEEE Std 829-1998
4.2 Beizer, 1990; Copeland, 2004
4.3.1 Copeland, 2004; Myers, 1979
4.3.2 Copeland, 2004; Myers, 1979
4.3.3 Beizer, 1990; Copeland, 2004
4.3.4 Beizer, 1990; Copeland, 2004
4.3.5 Copeland, 2004
4.4.3 Beizer, 1990; Copeland, 2004
4.5 Kaner, 2002
4.6 Beizer, 1990; Copeland, 2004
5. Test Management (K3)  170 minutes

Learning Objectives for Test Management
The objectives identify what you will be able to do following the completion of each module.
5.1 Test Organization (K2)
LO-5.1.1 Recognize the importance of independent testing (K1)
LO-5.1.2 Explain the benefits and drawbacks of independent testing within an organization (K2)
LO-5.1.3 Recognize the different team members to be considered for the creation of a test team (K1)
LO-5.1.4 Recall the tasks of a typical test leader and tester (K1)
5.2 Test Planning and Estimation (K3)
LO-5.2.1 Recognize the different levels and objectives of test planning (K1)
LO-5.2.2 Summarize the purpose and content of the test plan, test design specification and test procedure documents according to the 'Standard for Software Test Documentation' (IEEE Std 829-1998) (K2)
LO-5.2.3 Differentiate between conceptually different test approaches, such as analytical, model-based, methodical, process/standard compliant, dynamic/heuristic, consultative and regression-averse (K2)
LO-5.2.4 Differentiate between the subject of test planning for a system and scheduling test execution (K2)
LO-5.2.5 Write a test execution schedule for a given set of test cases, considering prioritization, and technical and logical dependencies (K3)
LO-5.2.6 List test preparation and execution activities that should be considered during test planning (K1)
LO-5.2.7 Recall typical factors that influence the effort related to testing (K1)
LO-5.2.8 Differentiate between two conceptually different estimation approaches: the metrics-based approach and the expert-based approach (K2)
LO-5.2.9 Recognize/justify adequate entry and exit criteria for specific test levels and groups of test cases (e.g., for integration testing, acceptance testing or test cases for usability testing) (K2)
5.3 Test Progress Monitoring and Control (K2)
LO-5.3.1 Recall common metrics used for monitoring test preparation and execution (K1)
LO-5.3.2 Explain and compare test metrics for test reporting and test control (e.g., defects found and fixed, and tests passed and failed) related to purpose and use (K2)
LO-5.3.3 Summarize the purpose and content of the test summary report document according to the 'Standard for Software Test Documentation' (IEEE Std 829-1998) (K2)
5.4 Configuration Management (K2)
LO-5.4.1 Summarize how configuration management supports testing (K2)
5.5 Risk and Testing (K2)
LO-5.5.1 Describe a risk as a possible problem that would threaten the achievement of one or more stakeholders' project objectives (K2)
LO-5.5.2 Remember that the level of risk is determined by likelihood (of happening) and impact (harm resulting if it does happen) (K1)
LO-5.5.3 Distinguish between the project and product risks (K2)
LO-5.5.4 Recognize typical product and project risks (K1)
LO-5.5.5 Describe, using examples, how risk analysis and risk management may be used for test planning (K2)
5.6 Incident Management (K3)
LO-5.6.1 Recognize the content of an incident report according to the 'Standard for Software Test Documentation' (IEEE Std 829-1998) (K1)
LO-5.6.2 Write an incident report covering the observation of a failure during testing (K3)
5.1 Test Organization (K2)

Terms
Tester, test leader, test manager

For large, complex or safety critical projects, it is usually best to have multiple levels of testing, with some or all of the levels done by independent testers. Development staff may participate in testing, especially at the lower levels, but their lack of objectivity often limits their effectiveness. The independent testers may have the authority to require and define test processes and rules, but testers should take on such process-related roles only in the presence of a clear management mandate to do so.
Drawbacks include:
o Isolation from the development team (if treated as totally independent)
o Developers may lose a sense of responsibility for quality
o Independent testers may be seen as a bottleneck or blamed for delays in release
Testing tasks may be done by people in a specific testing role, or may be done by someone in another role, such as a project manager, quality manager, developer, business and domain expert, infrastructure or IT operations.
Sometimes the test leader is called a test manager or test coordinator. The role of the test leader may be performed by a project manager, a development manager, a quality assurance manager or the manager of a test group. In larger projects two positions may exist: test leader and test manager. Typically the test leader plans, monitors and controls the testing activities and tasks as defined in Section 1.4.
Typical test leader tasks may include:
o Coordinate the test strategy and plan with project managers and others
o Write or review a test strategy for the project, and test policy for the organization
Typical tester tasks may include:
o Review and contribute to test plans
o Analyze, review and assess user requirements, specifications and models for testability
o Create test specifications
o Set up the test environment (often coordinating with system administration and network management)
o Prepare and acquire test data
o Implement tests on all test levels, execute and log the tests, evaluate the results and document the deviations from expected results
o Use test administration or management tools and test monitoring tools as required
o Automate tests (may be supported by a developer or a test automation expert)
o Measure performance of components and systems (if applicable)
o Review tests developed by others
People who work on test analysis, test design, specific test types or test automation may be specialists in these roles. Depending on the test level and the risks related to the product and the project, different people may take over the role of tester, keeping some degree of independence. Typically testers at the component and integration level would be developers, testers at the acceptance test level would be business experts and users, and testers for operational acceptance testing would be operators.
5.2 Test Planning and Estimation (K3)

Terms
Test approach, test strategy

Planning is influenced by the test policy of the organization, the scope of testing, objectives, risks, constraints, criticality, testability and the availability of resources. As the project and test planning progress, more information becomes available and more detail can be included in the plan.
Test planning is a continuous activity and is performed in all life cycle processes and activities. Feedback from test activities is used to recognize changing risks so that planning can be adjusted.
The selected approach depends on the context and may consider risks, hazards and safety, available resources and skills, the technology, the nature of the system (e.g., custom built vs. COTS), test objectives, and regulations.
Typical approaches include:
o Analytical approaches, such as risk-based testing where testing is directed to areas of greatest risk
o Model-based approaches, such as stochastic testing using statistical information about failure rates (such as reliability growth models) or usage (such as operational profiles)
o Methodical approaches, such as failure-based (including error guessing and fault attacks), experience-based, checklist-based, and quality characteristic-based
o Process- or standard-compliant approaches, such as those specified by industry-specific standards or the various agile methodologies
o Dynamic and heuristic approaches, such as exploratory testing where testing is more reactive to events than pre-planned, and where execution and evaluation are concurrent tasks
o Consultative approaches, such as those in which test coverage is driven primarily by the advice and guidance of technology and/or business domain experts outside the test team
o Regression-averse approaches, such as those that include reuse of existing test material, extensive automation of functional regression tests, and standard test suites
5.3 Test Progress Monitoring and Control (K2)

Metrics should be collected during and at the end of a test level in order to assess:
o The adequacy of the test objectives for that test level
o The adequacy of the test approaches taken
o The effectiveness of the testing with respect to the objectives

Examples of test control actions include:
o Making decisions based on information from test monitoring
o Re-prioritizing tests when an identified risk occurs (e.g., software delivered late)
o Changing the test schedule due to availability or unavailability of a test environment
o Setting an entry criterion requiring fixes to have been re-tested (confirmation tested) by a developer before accepting them into a build
5.4 Configuration Management (K2)  10 minutes

Terms
Configuration management, version control

Background
The purpose of configuration management is to establish and maintain the integrity of the products (components, data and documentation) of the software or system through the project and product life cycle.
During test planning, the configuration management procedures and infrastructure (tools) should be chosen, documented and implemented.
5.5 Risk and Testing (K2)  30 minutes

Terms
Product risk, project risk, risk, risk-based testing

Background
Risk can be defined as the chance of an event, hazard, threat or situation occurring and resulting in undesirable consequences or a potential problem. The level of risk will be determined by the likelihood of an adverse event happening and the impact (the harm resulting from that event).
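One common way (an assumption for illustration; the syllabus does not prescribe a scale) to operationalize "likelihood times impact" is to rate each on a small ordinal scale and multiply:

```python
# Level of risk = likelihood x impact, each rated 1 (low) to 5 (high).
# The 1-5 scales and the sample risks are illustrative assumptions.

def risk_level(likelihood: int, impact: int) -> int:
    assert 1 <= likelihood <= 5 and 1 <= impact <= 5
    return likelihood * impact

risks = {
    "data loss": risk_level(2, 5),   # unlikely but severe
    "ui glitch": risk_level(4, 1),   # likely but minor
}
# In risk-based testing, higher-level risks are tested earlier and
# more thoroughly.
assert risks["data loss"] == 10
assert risks["ui glitch"] == 4
```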
In addition, testing may support the identification of new risks, may help to determine what risks should be reduced, and may lower uncertainty about risks.
5.6 Incident Management (K3)  40 minutes

Terms
Incident logging, incident management, incident report

Background
Since one of the objectives of testing is to find defects, the discrepancies between actual and expected outcomes need to be logged as incidents. An incident must be investigated and may turn out to be a defect. Appropriate actions to dispose of incidents and defects should be defined. Incidents and defects should be tracked from discovery and classification to correction and confirmation of the solution. In order to manage all incidents to completion, an organization should establish an incident management process and rules for classification.
Incidents may be raised during development, review, testing or use of a software product. They may be raised for issues in code or the working system, or in any type of documentation including requirements, development documents, test documents, and user information such as "Help" or installation guides.
Details of the incident report may include:
o Date of issue, issuing organization, and author
o Expected and actual results
o Identification of the test item (configuration item) and environment
o Software or system life cycle process in which the incident was observed
o Description of the incident to enable reproduction and resolution, including logs, database dumps or screenshots
o Scope or degree of impact on stakeholder(s) interests
o Severity of the impact on the system
o Urgency/priority to fix
o Status of the incident (e.g., open, deferred, duplicate, waiting to be fixed, fixed awaiting re-test, closed)
o Conclusions, recommendations and approvals
o Global issues, such as other areas that may be affected by a change resulting from the incident
o Change history, such as the sequence of actions taken by project team members with respect to the incident to isolate, repair, and confirm it as fixed
o References, including the identity of the test case specification that revealed the problem
The structure of an incident report is also covered in the 'Standard for Software Test Documentation' (IEEE Std 829-1998).
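A subset of these fields can be sketched as a simple record. The field names and sample values below are illustrative assumptions, not the field names defined by IEEE Std 829-1998:

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical minimal incident-report record (illustrative only).
@dataclass
class IncidentReport:
    author: str
    issue_date: date
    test_item: str
    expected_result: str
    actual_result: str
    severity: str                 # impact on the system
    priority: str                 # urgency to fix
    status: str = "open"          # e.g., open, fixed awaiting re-test, closed
    references: list = field(default_factory=list)

report = IncidentReport(
    author="J. Tester",
    issue_date=date(2011, 3, 31),
    test_item="login screen v1.2",
    expected_result="error message for empty password",
    actual_result="application crash",
    severity="high",
    priority="urgent",
    references=["TC-LOGIN-007"],
)
assert report.status == "open"
```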
References
5.1.1 Black, 2001; Hetzel, 1988
5.1.2 Black, 2001; Hetzel, 1988
5.2.5 Black, 2001; Craig, 2002; IEEE Std 829-1998; Kaner, 2002
5.3.3 Black, 2001; Craig, 2002; Hetzel, 1988; IEEE Std 829-1998
5.4 Craig, 2002
5.5.2 Black, 2001; IEEE Std 829-1998
5.6 Black, 2001; IEEE Std 829-1998
6. Tool Support for Testing (K2)  80 minutes

Learning Objectives for Tool Support for Testing
The objectives identify what you will be able to do following the completion of each module.
6.1 Types of Test Tools (K2)
LO-6.1.1 Classify different types of test tools according to their purpose and to the activities of the fundamental test process and the software life cycle (K2)
LO-6.1.3 Explain the term test tool and the purpose of tool support for testing (K2)
6.2 Effective Use of Tools: Potential Benefits and Risks (K2)
LO-6.2.1 Summarize the potential benefits and risks of test automation and tool support for testing (K2)
LO-6.2.2 Remember special considerations for test execution tools, static analysis, and test management tools (K1)

Note: LO-6.1.2 intentionally skipped.
6.1 Types of Test Tools (K2)

Terms
Configuration management tool, coverage tool, debugging tool, dynamic analysis tool, incident management tool, load testing tool, modeling tool, monitoring tool, performance testing tool, probe effect, requirements management tool, review tool, security tool, static analysis tool, stress testing tool, test comparator, test data preparation tool, test design tool, test harness, test execution tool, test management tool, unit test framework tool
Tool support for testing can have one or more of the following purposes depending on the context:
o Improve the efficiency of test activities by automating repetitive tasks or supporting manual test activities like test planning, test design, test reporting and monitoring
o Automate activities that require significant resources when done manually (e.g., static testing)
o Automate activities that cannot be executed manually (e.g., large scale performance testing of client-server applications)
o Increase reliability of testing (e.g., by automating large data comparisons or simulating behavior)
Some types of test tools can be intrusive, which means that they can affect the actual outcome of the test. For example, the actual timing may be different due to the extra instructions that are executed by the tool, or you may get a different measure of code coverage. The consequence of intrusive tools is called the probe effect.
Some tools offer support more appropriate for developers (e.g., tools that are used during component and component integration testing). Such tools are marked with "(D)" in the list below.
Review Tools
These tools assist with review processes, checklists, review guidelines and are used to store and communicate review comments and report on defects and effort. They can be of further help by providing aid for online reviews for large or geographically dispersed teams.
Static Analysis Tools (D)
These tools help developers and testers find defects prior to dynamic testing by providing support for enforcing coding standards (including secure coding), and analysis of structures and dependencies. They can also help in planning or risk analysis by providing metrics for the code (e.g., complexity).
Modeling Tools (D)
These tools are used to validate software models (e.g., a physical data model (PDM) for a relational database), by enumerating inconsistencies and finding defects. These tools can often aid in generating some test cases based on the model.
Test Comparators
Test comparators determine differences between files, databases or test results. Test execution tools typically include dynamic comparators, but post-execution comparison may be done by a separate comparison tool. A test comparator may use a test oracle, especially if it is automated.
Monitoring Tools
Monitoring tools continuously analyze, verify and report on usage of specific system resources, and give warnings of possible service problems.
Data Quality Assessment
Tools are employed for data quality assessment to review and verify the data conversion and migration rules, to ensure that the processed data is correct, complete and complies with a pre-defined context-specific standard.
Capturing tests by recording the actions of a manual tester seems attractive, but this approach does not scale to large numbers of automated test scripts. A captured script is a linear representation with specific data and actions as part of each script. This type of script may be unstable when unexpected events occur.
A data-driven testing approach separates out the test inputs (the data), usually into a spreadsheet, and uses a more generic test script that can read the input data and execute the same test script with different data. Testers who are not familiar with the scripting language can then create the test data for these predefined scripts.
There are other techniques employed in data-driven approaches where, instead of hard-coded data combinations placed in a spreadsheet, data is generated using algorithms based on configurable parameters at run time and supplied to the application. For example, a tool may use an algorithm which generates a random user ID, and for repeatability in pattern, a seed is employed for controlling randomness.
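The data-driven idea can be sketched as follows. An in-memory CSV stands in for the spreadsheet, and the `add()` function and its data rows are illustrative assumptions:

```python
import csv
import io

# Hypothetical function under test.
def add(a: int, b: int) -> int:
    return a + b

# The "spreadsheet": rows of input data plus the expected result.
sheet = io.StringIO("a,b,expected\n1,2,3\n10,-4,6\n0,0,0\n")

# One generic script reads every row and runs the same test logic,
# so new test cases are added by editing data, not code.
for row in csv.DictReader(sheet):
    assert add(int(row["a"]), int(row["b"])) == int(row["expected"])
```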
In a keyword-driven testing approach, the spreadsheet contains keywords describing the actions to be taken (also called action words), and test data. Testers (even if they are not familiar with the scripting language) can then define tests using the keywords, which can be tailored to the application being tested.
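A minimal sketch of the keyword-driven idea: a table of (keyword, data) rows and a small interpreter that maps each keyword to an implementation. The keywords and the fake application state are illustrative assumptions:

```python
# Fake application state, standing in for the system under test.
app_state = {"logged_in": False, "cart": []}

def do_login(user):
    app_state["logged_in"] = True

def do_add_item(item):
    app_state["cart"].append(item)

# The interpreter: action words mapped to their implementations.
KEYWORDS = {"login": do_login, "add_item": do_add_item}

# The "spreadsheet": one (keyword, data) row per test step, which a
# tester can write without knowing the scripting language.
test_table = [
    ("login", "alice"),
    ("add_item", "book"),
    ("add_item", "pen"),
]

for keyword, data in test_table:
    KEYWORDS[keyword](data)   # dispatch each row on its keyword

assert app_state["logged_in"] and app_state["cart"] == ["book", "pen"]
```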
Static Analysis Tools
Static analysis tools applied to source code can enforce coding standards, but if applied to existing code they may generate a large quantity of messages. Warning messages do not stop the code from being translated into an executable program, but ideally should be addressed so that maintenance of the code is easier in the future. A gradual implementation of the analysis tool with initial filters to exclude some messages is an effective approach.
6.3 Introducing a Tool into an Organization (K1)  15 minutes

Terms
No specific terms.
Background
The main considerations in selecting a tool for an organization include:
o Assessment of organizational maturity, strengths and weaknesses, and identification of opportunities for an improved test process supported by tools
o Evaluation against clear requirements and objective criteria
o A proof-of-concept, by using a test tool during the evaluation phase to establish whether it performs effectively with the software under test and within the current infrastructure, or to identify changes needed to that infrastructure to effectively use the tool
o Evaluation of the vendor (including training, support and commercial aspects) or of service support suppliers in the case of non-commercial tools
o Identification of internal requirements for coaching and mentoring in the use of the tool
o Evaluation of training needs considering the current test team's test automation skills
o Estimation of a cost-benefit ratio based on a concrete business case
References
6.2.2 Buwalda, 2001; Fewster, 1999
6.3 Fewster, 1999
7. References
Standards
ISTQB Glossary of Terms used in Software Testing, Version 2.1
[CMMI] Chrissis, M.B., Konrad, M. and Shrum, S. (2004) CMMI, Guidelines for Process Integration and Product Improvement, Addison Wesley: Reading, MA
See Section 2.1
[IEEE Std 829-1998] IEEE Std 829™ (1998) IEEE Standard for Software Test Documentation,
See Sections 2.3, 2.4, 4.1, 5.2, 5.3, 5.5, 5.6
[IEEE 1028] IEEE Std 1028™ (2008) IEEE Standard for Software Reviews and Audits,
See Section 3.2
[IEEE 12207] IEEE 12207/ISO/IEC 12207-2008, Software life cycle processes,
See Section 2.1
[ISO 9126] ISO/IEC 9126-1:2001, Software Engineering – Software Product Quality,
See Section 2.3
Books
[Beizer, 1990] Beizer, B. (1990) Software Testing Techniques (2nd edition), Van Nostrand Reinhold: Boston
See Sections 1.2, 1.3, 2.3, 4.2, 4.3, 4.4, 4.6
[Black, 2001] Black, R. (2001) Managing the Testing Process (3rd edition), John Wiley & Sons: New York
See Sections 1.1, 1.2, 1.4, 1.5, 2.3, 2.4, 5.1, 5.2, 5.3, 5.5, 5.6
[Buwalda, 2001] Buwalda, H. et al. (2001) Integrated Test Design and Automation, Addison Wesley: Reading, MA
See Section 6.2
[Copeland, 2004] Copeland, L. (2004) A Practitioner's Guide to Software Test Design, Artech House: Norwood, MA
See Sections 2.2, 2.3, 4.2, 4.3, 4.4, 4.6
[Craig, 2002] Craig, Rick D. and Jaskiel, Stefan P. (2002) Systematic Software Testing, Artech House: Norwood, MA
See Sections 1.4.5, 2.1.3, 2.4, 4.1, 5.2.5, 5.3, 5.4
[Fewster, 1999] Fewster, M. and Graham, D. (1999) Software Test Automation, Addison Wesley: Reading, MA
See Sections 6.2, 6.3
[Gilb, 1993] Gilb, Tom and Graham, Dorothy (1993) Software Inspection, Addison Wesley: Reading, MA
See Sections 3.2.2, 3.2.4
[Hetzel, 1988] Hetzel, W. (1988) Complete Guide to Software Testing, QED: Wellesley, MA
See Sections 1.3, 1.4, 1.5, 2.1, 2.2, 2.3, 2.4, 4.1, 5.1, 5.3
[Kaner, 2002] Kaner, C., Bach, J. and Pettichord, B. (2002) Lessons Learned in Software Testing, John Wiley & Sons: New York
See Sections 1.1, 4.5, 5.2
8. Appendix A – Syllabus Background
History of this Document
This document was prepared between 2004 and 2011 by a Working Group comprised of members appointed by the International Software Testing Qualifications Board (ISTQB). It was initially reviewed by a selected review panel, and then by representatives drawn from the international software testing community. The rules used in the production of this document are shown in Appendix C.
This document is the syllabus for the International Foundation Certificate in Software Testing, the first-level international qualification approved by the ISTQB (www.istqb.org).
Objectives of the Foundation Certificate Qualification
o To gain recognition for testing as an essential and professional software engineering specialization
o To provide a standard framework for the development of testers' careers
o To enable professionally qualified testers to be recognized by employers, customers and peers, and to raise the profile of testers
o To promote consistent and good testing practices within all software engineering disciplines
o To identify testing topics that are relevant and of value to industry
o To enable software suppliers to hire certified testers and thereby gain commercial advantage over their competitors by advertising their tester recruitment policy
o To provide an opportunity for testers and those with an interest in testing to acquire an internationally recognized qualification in the subject
Objectives of the International Qualification (adapted from the ISTQB meeting at Sollentuna, November 2001)
o To be able to compare testing skills across different countries
o To enable testers to move across country borders more easily
o To enable multinational/international projects to have a common understanding of testing issues
o To increase the number of qualified testers worldwide
o To have more impact/value as an internationally-based initiative than from any country-specific approach
o To develop a common international body of understanding and knowledge about testing through the syllabus and terminology, and to increase the level of knowledge about testing for all participants
o To promote testing as a profession in more countries
o To enable testers to gain a recognized qualification in their native language
o To enable sharing of knowledge and resources across countries
o To provide international recognition of testers and this qualification due to participation from many countries
Entry Requirem
ments forr this Qua
alification
The entrry criterion fo
or taking the ISTQB Foun ndation Certifficate in Softw
ware Testingg examinatioon is
that canddidates have e an interest in software testing.
t Howe ever, it is stro
ongly recommended thatt
candidattes also:
o Have e at least a minimal
m backkground in either software e developme ent or software testing, su
uch as
six months
m experience as a system
s or usser acceptanc ce tester or as
a a softwaree developer
o Take a course that has been accredited to ISTQB standards (by one of the ISTQB-recognized National Boards).
Background and History of the Foundation Certificate in Software Testing
The independent certification of software testers began in the UK with the British Computer Society's Information Systems Examination Board (ISEB), when a Software Testing Board was set up in 1998 (www.bcs.org.uk/iseb). In 2002, ASQF in Germany began to support a German tester qualification scheme (www.asqf.de). This syllabus is based on the ISEB and ASQF syllabi; it includes reorganized, updated and additional content, and the emphasis is directed at topics that will provide the most practical help to testers.
An existing Foundation Certificate in Software Testing (e.g., from ISEB, ASQF or an ISTQB-recognized National Board) awarded before this International Certificate was released will be deemed to be equivalent to the International Certificate. The Foundation Certificate does not expire and does not need to be renewed. The date it was awarded is shown on the Certificate.
Within each participating country, local aspects are controlled by a national ISTQB-recognized Software Testing Board. Duties of National Boards are specified by the ISTQB, but are implemented within each country. The duties of the country boards are expected to include accreditation of training providers and the setting of exams.
9. Appendix B – Learning Objectives/Cognitive Level of Knowledge
The following learning objectives are defined as applying to this syllabus. Each topic in the syllabus will be examined according to the learning objective for it.
Example
Can recognize the definition of "failure" as:
o "Non-delivery of service to an end user or any other stakeholder" or
o "Actual deviation of the component or system from its expected delivery, service or result"
Examples
Can explain the reason why tests should be designed as early as possible:
o To find defects when they are cheaper to remove
o To find the most important defects first
Can explain the similarities and differences between integration and system testing:
o Similarities: testing more than one component, and can test non-functional aspects
o Differences: integration testing concentrates on interfaces and interactions, and system testing concentrates on whole-system aspects, such as end-to-end processing
Example
o Analyze product risks and propose preventive and corrective mitigation activities
o Describe which portions of an incident report are factual and which are inferred from results
Reference
(For the cognitive levels of learning objectives)
Anderson, L. W. and Krathwohl, D. R. (eds) (2001) A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom's Taxonomy of Educational Objectives, Allyn & Bacon
10. Appendix C – Rules Applied to the ISTQB Foundation Syllabus
The rules listed here were used in the development and review of this syllabus. (A "TAG" is shown after each rule as a shorthand abbreviation of the rule.)
Learning Objectives
LO1. Learning objectives should distinguish between items to be recognized/remembered (cognitive level K1), items the candidate should understand conceptually (K2), items the candidate should be able to practice/use (K3), and items the candidate should be able to use to analyze a document, software or project situation in context (K4). (KNOWLEDGE-LEVEL)
LO2. The description of the content should be consistent with the learning objectives. (LO-CONSISTENT)
LO3. To illustrate the learning objectives, sample exam questions for each major section should be issued along with the syllabus. (LO-EXAM)
References
SR1. Sources and references will be given for concepts in the syllabus to help training providers find out more information about the topic. (REFS)
SR2. Where there are not readily identified and clear sources, more detail should be provided in the syllabus. For example, definitions are in the Glossary, so only the terms are listed in the syllabus. (NON-REF DETAIL)
Sources of Information
Terms used in the syllabus are defined in the ISTQB Glossary of Terms used in Software Testing. A version of the Glossary is available from ISTQB.
A list of recommended books on software testing is also issued in parallel with this syllabus. The main book list is part of the References section.
11. Appendix D – Notice to Training Providers
Each major subject heading in the syllabus is assigned an allocated time in minutes. The purpose of this is both to give guidance on the relative proportion of time to be allocated to each section of an accredited course, and to give an approximate minimum time for the teaching of each section. Training providers may spend more time than is indicated, and candidates may spend more time again in reading and research. A course curriculum does not have to follow the same order as the syllabus.
The syllabus contains references to established standards, which must be used in the preparation of training material. Each standard used must be the version quoted in the current version of this syllabus. Other publications, templates or standards not referenced in this syllabus may also be used and referenced, but will not be examined.
All K3 and K4 Learning Objectives require a practical exercise to be included in the training materials.
12. Appendix E – Release Notes
Release 2010
1. Changes to Learning Objectives (LO) include some clarification
a. Wording changed for the following LOs (content and level of LO remains unchanged): LO-1.2.2, LO-1.3.1, LO-1.4.1, LO-1.5.1, LO-2.1.1, LO-2.1.3, LO-2.4.2, LO-4.1.3, LO-4.2.1, LO-4.2.2, LO-4.3.1, LO-4.3.2, LO-4.3.3, LO-4.4.1, LO-4.4.2, LO-4.4.3, LO-4.6.1, LO-5.1.2, LO-5.2.2, LO-5.3.2, LO-5.3.3, LO-5.5.2, LO-5.6.1, LO-6.1.1, LO-6.2.2, LO-6.3.2.
b. LO-1.1.5 has been reworded and upgraded to K2, because a comparison of defect-related terms can be expected.
c. LO-1.2.3 (K2) has been added. The content was already covered in the 2007 syllabus.
d. LO-3.1.3 (K2) now combines the content of LO-3.1.3 and LO-3.1.4.
e. LO-3.1.4 has been removed from the 2010 syllabus, as it is partially redundant with LO-3.1.3.
f. LO-3.2.1 has been reworded for consistency with the 2010 syllabus content.
g. LO-3.3.2 has been modified, and its level has been changed from K1 to K2, for consistency with LO-3.1.2.
h. LO-4.4.4 has been modified for clarity, and has been changed from a K3 to a K4. Reason: LO-4.4.4 had already been written in a K4 manner.
i. LO-6.1.2 (K1) was dropped from the 2010 syllabus and was replaced with LO-6.1.3 (K2). There is no LO-6.1.2 in the 2010 syllabus.
2. Consistent use of the term test approach according to the definition in the glossary. The term test strategy will not be required as a term to recall.
3. Chapter 1.4 now contains the concept of traceability between test basis and test cases.
4. Chapter 2.x now contains test objects and test basis.
5. Re-testing is now the main term in the glossary instead of confirmation testing.
6. The aspect of data quality and testing has been added at several locations in the syllabus: data quality and risk in Chapters 2.2, 5.5 and 6.1.8.
7. Chapter 5.2.3 Entry Criteria has been added as a new subchapter. Reason: consistency with Exit Criteria (entry criteria added to LO-5.2.9).
8. Consistent use of the terms test strategy and test approach with their definition in the glossary.
9. Chapter 6.1 shortened, because the tool descriptions were too large for a 45-minute lesson.
10. IEEE Std 829:2008 has been released. This version of the syllabus does not yet consider this new edition. Section 5.2 refers to the document Master Test Plan. The content of the Master Test Plan is covered by the concept that the document "Test Plan" covers different levels of planning: test plans for the test levels can be created, as well as a test plan on the project level covering multiple test levels. The latter is named Master Test Plan in this syllabus and in the ISTQB Glossary.
11. Code of Ethics has been moved from the CTAL to the CTFL.
Release 2011
Changes made with the "maintenance release" 2011:
1. General: Working Party replaced by Working Group
2. Replaced post-conditions by postconditions in order to be consistent with the ISTQB Glossary 2.1.
3. First occurrence: ISTQB replaced by ISTQB®
4. Introduction to this Syllabus: Descriptions of Cognitive Levels of Knowledge removed, because this was redundant to Appendix B.
5. Section 1.6: Because the intent was not to define a Learning Objective for the "Code of Ethics", the cognitive level for the section has been removed.
6. Sections 2.2.1, 2.2.2, 2.2.3, 2.2.4 and 3.2.3: Fixed formatting issues in lists.
7. Section 2.2.2: The word "failure" was not correct in "…isolate failures to a specific component…". It has therefore been replaced with "defect" in that sentence.
8. Section 2.3: Corrected formatting of the bullet list of test objectives related to test terms in section Test Types (K2).
9. Section 2.3.4: Updated description of debugging to be consistent with Version 2.1 of the ISTQB Glossary.
10. Section 2.4: Removed the word "extensive" from "includes extensive regression testing", because the extent depends on the change (size, risks, value, etc.), as written in the next sentence.
11. Section 3.2: The word "including" has been removed to clarify the sentence.
12. Section 3.2.1: Because the activities of a formal review had been incorrectly formatted, the review process had 12 main activities instead of six, as intended. It has been changed back to six, which makes this section compliant with the Syllabus 2007 and the ISTQB Advanced Level Syllabus 2007.
13. Section 4: Word "developed" replaced by "defined", because test cases get defined and not developed.
14. Section 4.2: Text changed to clarify how black-box and white-box testing could be used in conjunction with experience-based techniques.
15. Section 4.3.5: Text changed from "…between actors, including users and the system…" to "…between actors (users or systems)…".
16. Section 4.3.5: "Alternative path" replaced by "alternative scenario".
17. Section 4.4.2: In order to clarify the term branch testing in the text of Section 4.4, a sentence clarifying the focus of branch testing has been changed.
18. Section 4.5, Section 5.2.6: The term "experienced-based" testing has been replaced by the correct term "experience-based".
19. Section 6.1: Heading "6.1.1 Understanding the Meaning and Purpose of Tool Support for Testing (K2)" replaced by "6.1.1 Tool Support for Testing (K2)".
20. Section 7 / Books: The 3rd edition of [Black, 2001] is listed, replacing the 2nd edition.
21. Appendix D: Chapters requiring exercises have been replaced by the generic requirement that all Learning Objectives K3 and higher require exercises. This is a requirement specified in the ISTQB Accreditation Process (Version 1.26).
22. Appendix E: The changed learning objectives between Version 2007 and 2010 are now correctly listed.
13. Index
action word .............................................. 63
alpha testing ...................................... 24, 27
architecture ................ 15, 21, 22, 25, 28, 29
archiving ............................................ 17, 30
automation ............................................... 29
benefits of independence ....................... 47
benefits of using tool ............................... 62
beta testing ........................................ 24, 27
black-box technique .................... 37, 39, 40
black-box test design technique ............. 39
black-box testing ...................................... 28
bottom-up ................................................. 25
boundary value analysis ......................... 40
bug ........................................................... 11
captured script ......................................... 62
checklists ........................................... 34, 35
choosing test technique .......................... 44
code coverage ................. 28, 29, 37, 42, 58
commercial off the shelf (COTS) ............ 22
compiler ................................................... 36
complexity .............................. 11, 36, 50, 59
component integration testing 22, 25, 29, 59, 60
component testing 22, 24, 25, 27, 29, 37, 41, 42
configuration management ......... 45, 48, 52
configuration management tool ............. 58
confirmation testing .... 13, 15, 16, 21, 28, 29
contract acceptance testing .................... 27
control flow ............................. 28, 36, 37, 42
coverage 15, 24, 28, 29, 37, 38, 39, 40, 42, 50, 51, 58, 60, 62
coverage tool ........................................... 58
custom-developed software .................... 27
data flow .................................................. 36
data-driven approach .............................. 63
data-driven testing ................................... 62
debugging .............................. 13, 24, 29, 58
debugging tool ................................... 24, 58
decision coverage .............................. 37, 42
decision table testing ........................ 40, 41
decision testing ........................................ 42
defect 10, 11, 13, 14, 16, 18, 21, 24, 26, 28, 29, 31, 32, 33, 34, 35, 36, 37, 39, 40, 41, 43, 44, 45, 47, 49, 50, 51, 53, 54, 55, 59, 60, 69
defect density ..................................... 50, 51
defect tracking tool ................................... 59
development .. 8, 11, 12, 13, 14, 18, 21, 22, 24, 29, 32, 33, 36, 38, 44, 47, 49, 50, 52, 53, 55, 59, 67
development model ........................... 21, 22
drawbacks of independence ................... 47
driver ........................................................ 24
dynamic analysis tool ....................... 58, 60
dynamic testing ..................... 13, 31, 32, 36
emergency change ................................. 30
enhancement .................................... 27, 30
entry criteria ............................................. 33
equivalence partitioning .......................... 40
error .................................. 10, 11, 18, 43, 50
error guessing ............................. 18, 43, 50
exhaustive testing ................................... 14
exit criteria 13, 15, 16, 33, 35, 45, 48, 49, 50, 51
expected result ...................... 16, 38, 48, 63
experience-based technique ....... 37, 39, 43
experience-based test design technique 39
exploratory testing ............................. 43, 50
factory acceptance testing ...................... 27
failure 10, 11, 13, 14, 18, 21, 24, 26, 32, 36, 43, 46, 50, 51, 53, 54, 69
failure rate .......................................... 50, 51
fault .............................................. 10, 11, 43
fault attack ................................................ 43
field testing ......................................... 24, 27
follow-up ....................................... 33, 34, 35
formal review ..................................... 31, 33
functional requirement ...................... 24, 26
functional specification ............................ 28
functional task ......................................... 25
functional test .......................................... 28
functional testing ..................................... 28
functionality ................ 24, 25, 28, 50, 53, 62
impact analysis ........................... 21, 30, 38
incident ... 15, 16, 17, 19, 24, 46, 48, 55, 58, 59, 62
incident logging ....................................... 55
incident management .................. 48, 55, 58
incident management tool ................. 58, 59
incident report ................................... 46, 55
independence ............................. 18, 47, 48
informal review ............................ 31, 33, 34
inspection ............................... 31, 33, 34, 35
inspection leader ..................................... 33
integration 13, 22, 24, 25, 27, 29, 36, 40, 41, 42, 45, 48, 59, 60, 69
integration testing 22, 24, 25, 29, 36, 40, 45, 59, 60, 69
interoperability testing ............................. 28
introducing a tool into an organization 57, 64
ISO 9126 ................................ 11, 29, 30, 65
iterative-incremental development model 22
keyword-driven approach ........................ 63
keyword-driven testing ............................ 62
kick-off ...................................................... 33
learning objective ... 8, 9, 10, 21, 31, 37, 45, 57, 69, 70, 71
load testing ................................... 28, 58, 60