
SYLLABUS MODULE 4

Concepts of Agile Development methodology; Scrum Framework.


Software testing principles, Program inspections, Program
walkthroughs, Program reviews;
Blackbox testing: Equivalence class testing, Boundary value testing,
Decision table testing, Pairwise testing, State transition testing, Use-
case testing; White box testing: control flow testing, Data flow
testing.
Testing automation: Defect life cycle; Regression testing, Testing
automation; Testing nonfunctional requirements.
What is Agile Software Engineering?
➢ Combines a philosophy and a set of development guidelines.
➢ Philosophy encourages
o customer satisfaction and early incremental delivery of software.
o Small and highly motivated project teams.
o Informal methods
o Overall development simplicity.
Manifesto for Agile Software Development

We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value:
➢ Individuals and interactions over processes and tools
➢ Working software over comprehensive documentation
➢ Customer collaboration over contract negotiation
➢ Responding to change over following a plan
➢ That is, while there is value in the items on the right, we value the
items on the left more

12 Agile Principles
➢ Principles behind the Agile Manifesto. We follow these principles:
1. Our highest priority is to satisfy the customer through early and continuous
delivery of valuable software.
2. Welcome changing requirements, even late in development. Agile processes
harness change for the customer's competitive advantage.
3. Deliver working software frequently, from a couple of weeks to a couple of
months, with a preference to the shorter timescale.
4. Business people and developers must work together daily throughout the project.
5. Build projects around motivated individuals. Give them the environment and
support they need, and trust them to get the job done.
6. The most efficient and effective method of conveying information to and within
a development team is face-to-face conversation.
7. Working software is the primary measure of progress.
8. Agile processes promote sustainable development. The sponsors, developers,
and users should be able to maintain a constant pace indefinitely.
9. Continuous attention to technical excellence and good design enhances agility.
10. Simplicity--the art of maximizing the amount of work not done--is essential.
11. The best architectures, requirements, and designs emerge from self-organizing
teams.
12. At regular intervals, the team reflects on how to become more effective, then
tunes and adjusts its behavior accordingly.
Who does Agile Development?
➢ Software engineers, managers, customers, and end users work together on an agile team.

Why is Agile Development important?


➢ In the modern business environment, the market for computer-based systems and software products is fast-paced and ever-changing.
➢ Agile software engineering represents a reasonable alternative to conventional
software engineering for certain classes of software and certain types of projects.
➢ Deliver successful systems quickly.

Agile Software Development


➢ Agile software development is a conceptual framework for software engineering
that promotes development iterations throughout the life-cycle of the project.
➢ Software developed during one unit of time is referred to as an iteration, which may
last from one to four weeks.
➢ Agile methods also emphasize working software as the primary measure of progress

Agile Development-steps
➢ Communication
➢ Planning
➢ Modelling
➢ Construction
➢ Deployment
o Software increments are delivered to the customer on the appropriate commitment
date.

What is Agility?
❖ The ability to both create and respond to change in order to profit in a turbulent
business environment
❖ Agile: very active (ready to accept changes)
o Changes occur because of new technology.
o Support for change should be built into everything we do in software.
❖ Encourages more effective communication.
❖ Emphasizes rapid delivery of operational software

Agility and the cost of change


❖ The cost of change is lower than in traditional software development
➢ Conventional
– cost of change increases as a project progresses.
– it is easy to accommodate a change when a team is gathering requirements early in a project; the costs of doing this work are minimal.
– But if, in the middle of validation testing, a stakeholder requests a major functional change, then the change requires a modification to the architectural design, construction of new components, changes to other existing components, new testing, and so on. Costs escalate quickly.
➢ Agile process
– may “flatten” the cost of change curve by coupling incremental delivery with agile practices such as continuous unit testing. Thus the team can accommodate changes late in the software project without dramatic cost and time impact

Characteristics
➢ Modularity
➢ Iterative
➢ Time-bound
➢ Incremental
➢ Convergent
➢ People-oriented
➢ Collaborative

What is an Agile Process?


➢ An agile software process is characterised by a number of key assumptions:
o It is difficult to predict in advance which software requirement will persist
and which will change.
o It is difficult to predict how much design is necessary before construction is
used to prove the design.
o Analysis, design, construction, and testing are not as predictable as we might
think.
➢ Agile process is adaptable (incrementally)
➢ Agile uses an incremental development strategy.
➢ Software increments must be delivered in short time periods so that adaptation keeps
pace with change.
➢ It enables the customer to evaluate the software increment regularly, provide necessary feedback to the software team, and influence the process adaptations that are made to accommodate the feedback.

Human Factors
➢ Process moulds to the needs of the people and team, not the other way around
➢ key traits must exist among the people on an agile team and the team itself:
o – Competence. ( talent, skills, knowledge)
o – Common focus. ( deliver a working software increment )
o – Collaboration. ( peers and stakeholders)
o – Decision-making ability. ( freedom to control its own destiny)
o – Fuzzy problem-solving ability.(ambiguity and constant changes, today
problem may not be tomorrow’s problem)
o – Mutual trust and respect
o – Self-organization. ( themselves for the work done, process for its local
environment, the work schedule)

Agile Methodologies
➢ eXtreme Programming (XP)
➢ Scrum
➢ Crystal family of methodologies
➢ Feature-Driven Development (FDD)
➢ Adaptive Software Development (ASD)
➢ Dynamic System Development Model (DSDM)
➢ Agile Unified Process (AUP)

The 12 Practices
1. The Planning Game
2. Small Releases
3. Metaphor
4. Simple Design
5. Testing
6. Refactoring
7. Pair Programming
8. Collective Ownership
9. Continuous Integration
10. Forty Hour work week
11. On-Site Customer
12. Coding Standards

User stories
❖ Requirements via User Stories
❖ Short cards with natural language description of what a customer wants
❖ Prioritized by customer
❖ Resource and risk estimated by developers
❖ Via “The Planning Game”
❖ Play the Planning Game after each increment
1 - The Planning Game
➢ Planning for the upcoming iteration
➢ Uses stories provided by the customer
➢ Technical persons determine schedules, estimates, costs, etc
➢ A result of collaboration between the customer and the developers.
➢ Sometimes it is hard to estimate the required resources
➢ Small releases have less risk
➢ Requirements specification (SRS) is better than user stories (written documentation works well for large projects)
Advantages
➢ Reduction in time wasted on useless features
➢ Greater customer appreciation of the cost of a feature
➢ Less guesswork in planning
Disadvantages
➢ Customer availability
➢ Is planning this often necessary?
2- Small Releases
➢ Small in terms of functionality
➢ Small releases are really valuable
➢ Manage the risk of delivering something wrong
➢ Helps the customer to define better requirements
➢ Less functionality means releases happen more frequently
➢ Release every few weeks
➢ Large projects are not so flexible
➢ Try to release something, even if you know that it will be changed
➢ Supports the planning game
Advantages
➢ Frequent feedback
➢ Tracking
➢ Reduced chance of overall project slippage
Disadvantages
➢ Not easy for all projects
➢ Not needed for all projects
➢ Versioning issues
3 – Metaphor
➢ The oral architecture of the system
➢ A common set of terminology
Advantages
➢ Encourages a common set of terms for the system
➢ Reduction of buzz words and jargon
➢ A quick and easy way to explain the system
Disadvantages
➢ Often the metaphor is the system
➢ Another opportunity for miscommunication
➢ The system is often not well understood as a metaphor
4 – Simple Design
➢ K.I.S.S. (keep it simple stupid)
➢ Do as little as needed, nothing more
➢ No Big Design Up Front (BDUF)
➢ Do The Simplest Thing That Could Possibly Work
➢ Not too much formal documentation.
➢ Simple design does not mean "no design"
➢ It is about establishing priorities
➢ If something is important for this release and for the whole system, it should be
designed well
➢ Don't lose time designing something you will not use soon!
Advantages
➢ Time is not wasted adding superfluous functionality
➢ Easier to understand what is going on
➢ Refactoring and collective ownership is made possible
➢ Helps keep programmers on track
Disadvantages
➢ What is “simple?”
➢ Simple isn’t always best
5 – Testing
➢ Unit testing (use for complex logic only)
➢ Test-first design (developers write unit tests before coding)
➢ All automated (regression test)
➢ Acceptance Tests (act as ‘contract’, measure of progress)
Advantages
➢ Unit testing promotes testing completeness
➢ Test-first gives developers a goal
➢ Automation gives a suite of regression tests
Disadvantages
➢ Automated unit testing isn’t for everything
➢ Reliance on unit testing isn’t a good idea
➢ A test result is only as good as the test itself
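The test-first idea above can be sketched in a few lines of Python (the function and test names are illustrative, not from any real project): the tests are written first and would fail until just enough code is written to make them pass.

```python
# Test-first sketch (hypothetical example): the tests below are written
# before the production code exists, and define the goal for the developer.

def test_ten_percent_off():
    assert apply_discount(200.0, 10) == 180.0

def test_invalid_percent_rejected():
    try:
        apply_discount(50.0, 150)
        assert False, "expected ValueError"
    except ValueError:
        pass

# Just enough code to make the tests pass:
def apply_discount(price, percent):
    """Return price reduced by percent, rejecting out-of-range values."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

test_ten_percent_off()
test_invalid_percent_rejected()
print("all tests pass")
```

Because the tests are automated, they can be re-run after every change, which is what gives the team its regression suite.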
6 – Refactoring
➢ Changing how the system does something but not what is done
➢ Improve the design of existing code without changing its functionality
➢ Relies on unit testing to ensure the code is not broken
➢ Improves the quality of the system in some way.
➢ Delivering working software faster is important!
➢ You can write the code to run somehow
➢ With simple design
➢ With less effort
➢ Later you can refactor the code if necessary
➢ Bad smells in code:
1. Long method / class
2. Duplicate code
3. Method does several different things (bad cohesion)
4. Too many dependencies (bad coupling)
5. Complex / unreadable code
➢ Refactoring is not a reason to intentionally write bad code!
➢ Good coding style is always important!
Advantages
➢ Prompts developers to proactively improve the product as a whole
➢ Increases developer knowledge of the system
Disadvantages
➢ Not everyone is capable of refactoring
➢ Refactoring may not always be appropriate
➢ Would upfront design eliminate refactoring?
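A minimal Python sketch of the idea (hypothetical invoice code, not from the text): an assertion pins down the behaviour, then the duplicated tax rule, one of the “bad smells” listed above, is extracted into a helper without changing what the function returns.

```python
# Before: duplicate code ("bad smell") -- the same tax rule appears twice.
def invoice_total_before(items):
    total = 0.0
    for price in items.get("books", []):
        total += price + price * 0.05
    for price in items.get("toys", []):
        total += price + price * 0.05
    return round(total, 2)

# After: the duplicated rule is extracted into a helper.
def _with_tax(price, rate=0.05):
    return price + price * rate

def invoice_total_after(items):
    prices = items.get("books", []) + items.get("toys", [])
    return round(sum(_with_tax(p) for p in prices), 2)

# The unit test confirms the refactoring did not change the functionality.
order = {"books": [10.0, 20.0], "toys": [5.0]}
assert invoice_total_before(order) == invoice_total_after(order) == 36.75
```

This is why refactoring relies on unit testing: the assertion is what proves the restructured code still does the same thing.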
7 – Pair Programming
➢ Two Developers, One monitor, One Keyboard
➢ One “drives” and the other thinks
➢ Switch roles as needed.
➢ Pairs produce higher quality code
➢ Pairs complete their tasks faster
➢ Pairs enjoy their work more
➢ Pairs feel more confident in their work
➢ Pair programming is great for complex and critical logic
➢ When developers need good concentration
➢ Where quality is really important
➢ Especially during design
➢ Reduces time wasting
➢ Trivial tasks can be done alone
➢ Peer reviews instead of pair programming are often an alternative
Advantages
➢ Two heads are better than one
➢ Focus
➢ Two people are more likely to answer the following questions:
➢ Is this whole approach going to work?
➢ What are some test cases that may not work yet?
➢ Is there a way to simplify this?
Disadvantages
➢ Many tasks really don’t require two programmers
➢ A hard sell to the customers
➢ Not for everyone
8 – Collective Ownership
➢ Code belongs to the project, not to an individual engineer!
➢ The idea that all developers own all of the code
➢ Any engineer can modify any code
➢ Enables refactoring
➢ Better quality of the code
➢ No need to wait for someone else to fix something.
➢ Collective code ownership is absolutely essential
➢ You need to fight the people who don't agree with this!
➢ Fire people writing unreadable and unmaintainable code
➢ Don't allow somebody to own some module and be irreplaceable
Advantages
➢ Helps mitigate the loss of a team member leaving
➢ Encourages developers to take responsibility for the system as a whole rather than parts of the system
Disadvantages
➢ Loss of accountability
➢ Limitation to how much of a large system that an individual can practically “own”
9 – Continuous Integration
➢ New features and changes are worked into the system immediately
➢ Code is not worked on without being integrated for more than a day.
➢ Integrating often is really valuable
➢ Sometimes you cannot finish a task for one day and integrate it
➢ For small projects with small teams integration is not an issue
➢ For large and complex projects it's crucial
Advantages
➢ Reduces the lengthy integration process
➢ Enables the Small Releases practice
Disadvantages
➢ The one day limit is not always practical
➢ Reduces the importance of a well-thought-out architecture
10 – Forty Hour work week
➢ The work week should be limited to 40 hours
➢ Regular overtime is a symptom of a problem and not a long term solution
Advantages
➢ Most developers lose effectiveness past 40 hours
➢ Value is placed on the developers' well-being
➢ Management is forced to find real solutions
Disadvantages
➢ 40 hours is a magic number
➢ Some may like to work more than 40 hours
11 – On-Site Customer
➢ Just like the title says!
➢ Customer available on site
➢ Clarify user stories
➢ Make critical business decisions
➢ Acts to “steer” the project
➢ Developers don’t make assumptions
➢ Developers don’t have to wait for decisions
➢ Gives quick and continuous feedback to the development team
➢ Face to face communication minimizes the chances of misunderstanding
Advantages
➢ Can give quick and knowledgeable answers to real development questions
➢ Makes sure that what is developed is what is needed
➢ Functionality is prioritized correctly.
Disadvantages
➢ On-site customer often does not work in practice!
➢ Customers are busy
➢ Daily meetings often work better
➢ Customers are not always competent!
➢ Customers always say “Yes, this is what I want” and later say the opposite
➢ You need to think on their behalf (use prototyping)
12 – Coding Standards
➢ All code should look the same
➢ It should not be possible to determine who coded what based on the code itself.
Advantages
➢ Reduces the amount of time developers spend reformatting others' code
➢ Reduces the need for internal commenting
➢ Calls for clear, unambiguous code
Disadvantages
➢ Degrading the quality of inline documentation

SCRUM
Scrum Project Management
❖ Scrum project management is a methodology for managing software delivery that comes under the broader umbrella of agile project management.
❖ It provides a lightweight process framework that embraces iterative and incremental practices, helping organizations deliver working software more frequently.
Overview of Scrum
❖ Scrum (term borrowed from Rugby) is one of the agile frameworks for software
development
❖ Scrum is an iterative incremental framework for managing complex work (such as
new product development).
❖ It allows us to rapidly and repeatedly inspect actual working software (every two
weeks to one month).
❖ Project is divided into multiple short time boxes of 2-4 weeks duration called “sprints”.
❖ Every two weeks to a month anyone can see real working software.
❖ Scrum is an agile process that allows us to focus on delivering the highest business
value in the shortest time.
❖ The business sets the priorities.
❖ Teams self-organize to determine the best way to deliver the highest priority
features.
❖ Focus is on delivering working software fast and on team collaboration
❖ Developed by Jeff Sutherland, Ken Schwaber and Mike Beedle.
Scrum Characteristics
❖ Self-organizing teams
❖ Product progresses in a series of month-long “sprints”
❖ Requirements are captured as items in a list of “product backlog”
❖ No specific engineering practices prescribed.
❖ One of the “agile processes”.
❖ No changes during a sprint
❖ Plan sprint durations around how long you can commit to keeping change out of the sprint
Sprint
❖ Scrum projects make progress in a series of “sprints”
❖ Analogous to Extreme Programming iterations
❖ Typical duration is 2–4 weeks or a calendar month at most
❖ A constant duration leads to a better rhythm
❖ Product is designed, coded, and tested during the sprint

Scrum View
ROLES
1.Product Owner
2.Scrum Master
3.Team
Product Owner
• Define the features of the product
• Decide on release date and content
• Be responsible for the profitability of the product
• Prioritize features according to market value
• Adjust features and priority every iteration, as needed
• Accept or reject work result
The Scrum Master
• Represents management to the project
• Responsible for enacting Scrum values and practices
• Removes impediments
• Ensure that the team is fully functional and productive
• Enable close cooperation across all roles and functions
• Shield the team from external interferences
• Scrum Master is a trained, responsible person who renders services as described below:
Scrum Master Services to the Product Owner
• Finding techniques for effective Product Backlog management.
• Helping the Scrum Team understand the need for clear and concise Product Backlog
items.
• Understanding product planning in an empirical environment.
• Ensuring that the Product Owner knows how to arrange the Product Backlog to
maximize value.
• Understanding and practicing agility.
• Facilitating Scrum events as needed.
Scrum Master Services to the Scrum Team
• Coaching the Scrum Team in self-organization and cross-functionality.
• Helping the Scrum Team to create high-value products.
• Removing impediments to the Scrum Team’s progress.
• Facilitating Scrum events as requested or needed.
• Coaching the Scrum Team in organizational environments in which Scrum is not yet fully adopted and understood.
Scrum Master Services to the Organization
• Leading and coaching the organization in its Scrum adoption.
• Planning Scrum implementations within the organization.
• Helping employees and stakeholders understand and enact Scrum and empirical
product development.
• Causing change that increases the productivity of the Scrum Team.
• Working with other Scrum Masters to increase the effectiveness of the application of
Scrum in the organization
Scrum Team
• Typically 5-9 people
• Cross-functional:
• Programmers, testers, user experience designers, etc.
• Members should be full-time
• May be exceptions (e.g., database administrator)
• Teams are self-organizing
• Ideally no titles, but that is rarely possible
• Membership should change only between sprints

CEREMONIES
• Sprint planning
• Sprint review
• Sprint retrospective

Sprint planning
• Team selects items from the product backlog they can commit to completing
• Sprint backlog is created
• Tasks are identified and each is estimated (1-16 hours)
• Collaboratively, not done alone by the Scrum Master
• High-level design is considered
The Sprint Review
• Team presents what it accomplished during the sprint
• Typically takes the form of a demo of new features or underlying architecture
• Informal
• 2-hour prep time rule
• No slides
• Whole team participates
• Invite the world
Sprint Retrospective
• Periodically take a look at what is and is not working
• Typically 15–30 minutes
• Done after every sprint (increment)
• Whole team participates
• Scrum Master
• Product owner
• Team
• Possibly customers and others
The Daily Scrum
• Parameters
• Daily
• 15-minutes
• Stand-up
• Not for problem solving
• Whole world is invited
• Only team members, Scrum Master, product owner, can talk
• Helps avoid other unnecessary meetings
• Is NOT a problem solving session
• Is NOT a way to collect information about WHO is behind the schedule
• Is a meeting in which team members make commitments to each other and to the
Scrum Master
• Is a good way for a Scrum Master to track the progress of the Team
• Everyone answers 3 questions:
1. What did you do yesterday?
2. What will you do today?
3. Is anything in your way?

ARTIFACTS
➢ Product backlog
➢ Sprint backlog
➢ Burndown charts
Scrum Product Backlog
• The Scrum Product Backlog is simply a list of all things that need to be done within the project.
• It replaces the traditional requirements specification artifacts.
• These items can have a technical nature or can be user-centric, e.g. in the form of user stories.
• It is a prioritized features list, containing short descriptions of all functionality desired in the product.
• When applying Scrum, it's not necessary to start a project with a lengthy, upfront
effort to document all requirements.
• A Product Backlog is never complete.
• The earliest development of it only lays out the initially known and best-understood
requirements.
• The Product Backlog evolves as the product and the environment in which it will be
used evolves.
• The Product Backlog is dynamic; it constantly changes to identify what the product
needs to be appropriate, competitive, and useful. As long as a product exists, its Product
Backlog also exists.
• The requirements
• A list of all desired work on the project
• Prioritized by the product owner
• Reprioritized at the start of each sprint
Scrum Sprint Backlog
• The sprint backlog is a list of tasks identified by the Scrum team to be completed
during the Scrum sprint (increment).
• During the sprint planning meeting, the team selects some number of product backlog
items, usually in the form of user stories, and identifies the tasks necessary to complete
each user story.
• A short statement of what the work will be focused on during the sprint
• Individuals sign up for work of their own choosing
• Work is never assigned
• Estimated work remaining is updated daily
• Any team member can add, delete or change the sprint backlog
• Work for the sprint emerges
• If work is unclear, define a sprint backlog item with a larger amount of time and break
it down later
• Update work remaining as more becomes known
Burn-down chart
• A burn down chart is a graphical representation of work left to do versus time.
• The outstanding work (or backlog) is often on the vertical axis, with time along the
horizontal.
• That is, it is a run chart of outstanding work.
• It is useful for predicting when all of the work will be completed.
• Are used to represent “work done”.
• Are wonderful Information Radiators
3 Types:
1. Sprint Burn down Chart (progress of the Sprint)
2. Release Burn down Chart (progress of release)
3. Product Burn down chart (progress of the Product)
• X-Axis: time (usually in days)
• Y-Axis: remaining effort
Sprint burn down chart
• Depicts the total Sprint Backlog hours remaining per day
• Shows the estimated amount of time to release
• Ideally should burn down to zero by the end of the Sprint
• Actually is not a straight line
• Can bump UP
Release burn down chart
• Will the release be done on time?
• X-axis: sprints
• Y-axis: number of hours remaining
• The estimated work remaining can also burn up
Product burn down chart
• Is a “big picture” view of the project’s progress (all the releases)
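The numbers behind a sprint burn-down chart can be sketched in Python (the hour figures below are made up for illustration): the remaining Sprint Backlog hours per day are compared with an ideal straight line, and a day where remaining work bumps up is flagged.

```python
# Hypothetical sprint data: remaining backlog hours at the end of each day.
# Note day 3 bumps UP when newly discovered tasks are added to the backlog.
remaining_hours = [40, 34, 27, 30, 22, 14, 6, 0]  # day 0 .. day 7

def ideal_line(total, days):
    """Straight-line reference from the total hours down to zero."""
    return [round(total - total * d / days, 1) for d in range(days + 1)]

ideal = ideal_line(40, 7)
for day, (actual, target) in enumerate(zip(remaining_hours, ideal)):
    bump = " <- scope bump" if day and actual > remaining_hours[day - 1] else ""
    print(f"day {day}: actual {actual:>4} h, ideal {target:>5} h{bump}")
```

Plotting `remaining_hours` against `ideal` gives the familiar chart: X-axis time in days, Y-axis remaining effort.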

Scalability
• Typical individual team is 7 ± 2 people
• Scalability comes from teams of teams
• A technique to scale Scrum up to large groups (over a dozen people), consisting of
dividing the groups into Agile teams of 5-10.
• Each daily scrum within a sub-team ends by designating one member as “ambassador”
to participate in a daily meeting with ambassadors from other teams, called the Scrum
of Scrums.
• Scrum of scrums can be used to scale the daily stand-up meeting when multiple teams
are involved.
• Its purpose is to support agile teams in collaborating and coordinating their work with
other teams.
• Factors in scaling
• Type of application
• Team size
• Team dispersion
• Project duration
• The Scrum of Scrums is a regularly scheduled meeting among team representatives
whose purpose is to coordinate dependencies among teams, synchronize delivery
efforts, and provide a heads up for any changes that might be introduced that would
impede others' success.

Software Testing Principles


• Software testing is a process, or a series of processes, designed to make sure computer
code does what it was designed to do and, conversely, that it does not do anything
unintended.
• Software should be predictable and consistent, presenting no surprises to users.
• Even a seemingly simple program can have hundreds or thousands of possible input
and output combinations.
• Creating test cases for all of these possibilities is impractical.
• Complete testing of a complex application would take too long and require too many
human resources to be economically feasible.
• In addition, the software tester needs the proper attitude (perhaps ‘‘vision’’ is a better
word) to successfully test a software application.
• In some cases, the tester’s attitude may be more important than the actual process
itself.
• When you test a program, you want to add some value to it.
• Adding value through testing means raising the quality or reliability of the program.
• Raising the reliability of the program means finding and removing errors.
• Therefore, don’t test a program to show that it works; rather, start with the assumption
that the program contains errors (a valid assumption for almost any program) and then
test the program to find as many of the errors as possible.
• Thus, a more appropriate definition is this: Testing is the process of executing a
program with the intent of finding errors.
• Given our definition of program testing, an appropriate next step is to determine
whether it is possible to test a program to find all of its errors.
• It is impractical, often impossible, to find all the errors in a program.
• To combat the challenges associated with testing economics, you should establish
some strategies before beginning.
• Two of the most prevalent strategies include black-box testing and white-box testing
Black-Box Testing
• One important testing strategy is black-box testing (also known as data driven or
input/output-driven testing).
• Your goal is to be completely unconcerned about the internal behavior and structure of
the program. Instead, concentrate on finding circumstances in which the program does
not behave according to its specifications.
• In this approach, test data are derived solely from the specifications (i.e., without
taking advantage of knowledge of the internal structure of the program).
• If you want to use this approach to find all errors in the program, the criterion is
exhaustive input testing, making use of every possible input condition as a test case.
• Exhaustive input testing is impossible.
• Two important implications of this:
(1) You cannot test a program to guarantee that it is error free; and
(2) a fundamental consideration in program testing is one of economics.
White-Box Testing
• Another testing strategy, white-box (or logic-driven) testing, permits you to examine
the internal structure of the program.
• This strategy derives test data from an examination of the program’s logic (and often,
unfortunately, at the neglect of the specification).
• The analog is usually considered to be exhaustive path testing.
• Exhaustive path testing-. That is, if you execute, via test cases, all possible paths of
control flow through the program, then possibly the program has been completely
tested.
• Exhaustive path testing, like exhaustive input testing, appears to be impractical, if not impossible.
Principle 1: A necessary part of a test case is a definition of the expected output or result.
• In spite of the proper destructive definition of testing, there is still a subconscious
desire to see the correct result.
• One way of combating this is to encourage a detailed examination of all output by
precisely spelling out, in advance, the expected output of the program.
• Therefore, a test case must consist of two components:
1. A description of the input data to the program.
2. A precise description of the correct output of the program for that set of
input data.
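As a sketch, the two components can be recorded together as data. The triangle-classification program below is a classic example from the testing literature; this small Python version is illustrative only.

```python
# Program under test: classify a triangle from its three side lengths.
def triangle_type(a, b, c):
    if a + b <= c or b + c <= a or a + c <= b:
        return "not a triangle"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# Each test case pairs (1) the input data with (2) a precise description
# of the correct output, spelled out in advance of running the program.
test_cases = [
    ((3, 3, 3), "equilateral"),
    ((3, 3, 4), "isosceles"),
    ((3, 4, 5), "scalene"),
    ((1, 2, 3), "not a triangle"),
]

for args, expected in test_cases:
    actual = triangle_type(*args)
    assert actual == expected, f"{args}: expected {expected}, got {actual}"
```

Writing the expected output first removes the temptation to accept whatever the program happens to print as "correct".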
Principle 2: A programmer should avoid attempting to test his or her own
program.
• It’s a bad idea for an author to attempt to edit or proofread his or her own work, and the same applies to programmers.
• They really don’t want to find errors in their own work.
• After a programmer has constructively designed and coded a program, it is extremely
difficult to suddenly change perspective to look at the program with a destructive eye.
• The program may contain errors due to the programmer’s misunderstanding of the
problem statement or specification.
• If this is the case, it is likely that the programmer will carry the same
misunderstanding into tests of his or her own program.
• This does not mean that it is impossible for a programmer to test his or her own
program. Rather, it implies that testing is more effective and successful if someone else
does it.
• this argument does not apply to debugging (correcting known errors); debugging is
more efficiently performed by the original programmer.
Principle 3: A programming organization should not test its own programs.
• The argument here is similar to that made in the previous principle.
• One reason for this is that it is easy to measure time and cost objectives, whereas it is
extremely difficult to quantify the reliability of a program.
• Therefore, it is difficult for a programming organization to be objective in testing its
own programs, because the testing process, if approached with the proper definition,
may be viewed as decreasing the probability of meeting the schedule and the cost
objectives.
• this does not say that it is impossible for a programming organization to find some of
its errors, because organizations do accomplish this with some degree of success.
• Rather, it implies that it is more economical for testing to be performed by an
objective, independent party.
Principle 4: Any testing process should include a thorough inspection of the results
of each test.
• This is probably the most obvious principle, but again it is something that is often overlooked.
• There are numerous experiments that show many subjects failed to detect certain
errors, even when symptoms of those errors were clearly observable on the output
listings.
• Put another way, errors that are found in later tests were often missed in the results
from earlier tests.
Principle 5: Test cases must be written for input conditions that are invalid and
unexpected, as well as for those that are valid and expected.
• There is a natural tendency when testing a program to concentrate on the valid and
expected input conditions, to the neglect of the invalid and unexpected conditions.
• Test cases representing unexpected and invalid input conditions seem to have a higher
error detection yield than do test cases for valid input conditions.
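A short Python sketch of the principle (the age-parsing helper is hypothetical): one valid case alongside a batch of invalid and unexpected inputs, each of which the program must reject cleanly rather than misbehave.

```python
def parse_age(text):
    """Parse an age field, rejecting non-numeric or out-of-range input."""
    if not isinstance(text, str) or not text.strip().isdigit():
        raise ValueError(f"not a whole number: {text!r}")
    age = int(text)
    if age > 130:
        raise ValueError(f"age out of range: {age}")
    return age

# Valid and expected input:
assert parse_age("42") == 42

# Invalid and unexpected inputs -- these often have a higher
# error-detection yield than the valid cases:
for bad in ["", "abc", "-5", "3.5", "999", None]:
    try:
        parse_age(bad)
        assert False, f"accepted invalid input {bad!r}"
    except ValueError:
        pass
```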
Principle 6: Examining a program to see if it does not do what it is supposed to do
is only half the battle; the other half is seeing whether the program does what it is
not supposed to do.
• This is a consequence of the previous principle.
• Programs must be examined for unwanted side effects.
• For instance, a payroll program that produces the correct pay checks is still an erroneous program if it also produces extra checks for non-existent employees.
Principle 7: Avoid throwaway test cases unless the program is truly a throwaway
program.
• Whenever the program has to be tested again (e.g., after correcting an error or making
an improvement), the test cases must be reinvented.
• Since this reinvention requires a considerable amount of work, people tend to avoid it.
• The retest of the program is rarely as rigorous as the original test, meaning that if the
modification causes a previously functional part of the program to fail, this error often
goes undetected.
• Saving test cases and running them again after changes to other components of the
program is known as regression testing.
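The principle of saving test cases so they can be re-run after every change can be sketched as follows; the `discount()` function and its 10%-off-at-100 rule are hypothetical, invented purely for illustration:

```python
def discount(amount):
    """Hypothetical SUT: 10% off orders of 100 or more."""
    return amount * 0.9 if amount >= 100 else amount

# Saved test cases: (input, expected output). Kept under version control
# rather than thrown away, so every modification can be re-checked.
REGRESSION_CASES = [
    (50, 50),       # below threshold: no discount
    (100, 90.0),    # at threshold: discount applies
    (200, 180.0),   # above threshold
]

def run_regression():
    """Re-run all saved cases; return the ones that now fail."""
    return [(a, e, discount(a)) for a, e in REGRESSION_CASES
            if discount(a) != e]

print(run_regression())  # → [] when all saved cases still pass
```

Because the cases are stored as data, the retest after a fix is exactly as rigorous as the original test run.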
Principle 8: Do not plan a testing effort under the tacit assumption that no errors
will be found.
• This is a mistake project managers often make and is a sign of the use of the incorrect
definition of testing—that is, the assumption that testing is the process of showing that
the program functions correctly.
• Once again, the definition of testing is the process of executing a program with the
intent of finding errors.
• It is impossible to develop a program that is completely error free.
• Even after extensive testing and error correction, it is safe to assume that errors still
exist; they simply have not yet been found.
Principle 9: The probability of the existence of more errors in a section of a
program is proportional to the number of errors already found in that section.
• Errors tend to come in clusters, and some sections seem to be much more prone to
errors than other sections.
• Nobody has supplied a good explanation of why this occurs.
• The phenomenon is useful in that it gives us insight or feedback in the testing process.
[Figure: The Surprising Relationship between Errors Remaining and Errors Found]
Principle 10: Testing is an extremely creative and intellectually challenging task.
• The creativity required in testing a large program exceeds the creativity required in
designing that program.

Inspections and Walkthroughs


• The three primary human testing methods are code inspections, walkthroughs, and
user (or usability) testing.
• Code inspections and walkthroughs are code-oriented methods.
• These methods can be used at virtually any stage of software development, after an
application is deemed to be complete or as each module or unit is complete.
• The two code inspection methods have a lot in common (similarities). Similarities are
given below…
• Inspections and walkthroughs involve a team of people reading or visually inspecting
a program.
• With either method, participants must conduct some preparatory work.
• The climax is a ‘‘meeting of the minds,’’ at a participant conference.
• The objective of the meeting is to find errors but not to find solutions to the errors—
that is, to test, not debug.
• In a walkthrough, a group of developers—with three or four being an optimal
number—performs the review.
• Only one of the participants is the author of the program.
• Therefore, the majority of program testing is conducted by people other than the
author, which follows testing principle 2, which states that an individual is usually
ineffective in testing his or her own program.
• Inspections and walkthroughs are more effective, because people other than the
program’s author are involved in the process.
• Another advantage of walkthroughs, resulting in lower debugging (error-correction)
costs, is the fact that when an error is found it usually is located precisely in the code.
• Modifying an existing program is a process that is more error prone (in terms of errors
per statement written) than writing a new program.
• Therefore, program modifications also should be subjected to these testing processes
as well as regression testing techniques.

Code Inspections (Program Inspections)


• A code inspection is a set of procedures and error-detection techniques for group code
reading.
• An inspection team usually consists of four people.
• The first of the four plays the role of moderator (quality-control engineer ).
• The moderator is expected to be a competent programmer, but he or she is not the
author of the program and need not be familiar with the details of the program.
Moderator duties include:
• Distributing materials for, and scheduling, the inspection session.
• Leading the session.
• Recording all errors found.
• Ensuring that the errors are subsequently corrected.
• The second team member is the programmer.
• The remaining team members usually are the program’s designer and a test specialist.
• The specialist should be well versed in software testing and familiar with the most
common programming errors.
Inspection Agenda:
• Several days in advance of the inspection session, the moderator distributes the
program’s listing and design specification to the other participants.
• The participants are expected to familiarize themselves with the material prior to the
session.
• During the session, two activities occur:
1. The programmer narrates, statement by statement, the logic of the program. During
the discourse, other participants should raise questions, which should be pursued to
determine whether errors exist. In other words, the simple act of reading aloud a
program to an audience seems to be a remarkably effective error-detection technique.
2. The program is analysed with respect to checklists of historically common
programming errors.
• The moderator is responsible for ensuring that the discussions proceed along
productive lines and that the participants focus their attention on finding errors, not
correcting them.
• The programmer corrects errors after the inspection session.
• Upon the conclusion of the inspection session, the programmer is given a list of the
errors uncovered.
• The list of errors is also analysed, categorized, and used to refine the error checklist
to improve the effectiveness of future inspections.
• The time and location of the inspection should be planned to prevent all outside
interruptions.
• The optimal amount of time for the inspection session appears to be from 90 to 120
minutes.
• The session is a mentally taxing experience, thus longer sessions tend to be less
productive.
• Most inspections proceed at a rate of approximately 150 program statements per hour.
• For that reason, large programs should be examined over multiple inspections.
Human Agenda
• For the inspection process to be effective, the testing group must adopt an appropriate
attitude.
• The programmer must leave his or her ego at the door and place the process in a
positive and constructive light, keeping in mind that the objective of the inspection is to
find errors in the program and, thus, improve the quality of the work.
• For this reason, most people recommend that the results of an inspection be a
confidential matter, shared only among the participants.
• The inspection process has several beneficial side effects.
1. The programmer usually receives valuable feedback concerning programming
style, choice of algorithms, and programming techniques.
2. The inspection process is a way of identifying early the most error-prone sections
of the program, helping to focus attention more directly on these sections during the
computer-based testing processes.

Error Checklist for Inspections


• An important part of the inspection process is the use of a checklist to examine the
program for common errors.
• The checklist is divided into various categories: Data Reference Errors, Data
Declaration Errors, Computation Errors, Comparison Errors, Control-Flow Errors,
Interface Errors, and Input/Output Errors.
Data Reference Errors
• Does a referenced variable have a value that is unset or uninitialized?
• This probably is the most frequent programming error, occurring in a wide variety of
circumstances.
• For each reference to a data item (variable, array element, field in a structure), attempt
to ‘‘prove’’ informally that the item has a value at that point.
• For all references through pointer or reference variables, is the referenced memory
currently allocated? This is known as the ‘‘dangling reference’’ problem.
Data Declaration Errors
• Have all variables been explicitly declared? A failure to do so is not necessarily an
error, but it is a common source of trouble.
• Where a variable is initialized in a declarative statement, is it properly initialized?
• Are there any variables with similar names (e.g., VOLT and VOLTS)?
• Is each variable assigned the correct length and data type?
Computation Errors
• Is it possible for the divisor in a division operation to be zero?
• Are there any mixed-mode computations? (An example is when working with
floating-point and integer variables.)
• Are there any computations using variables having inconsistent data types?
• Are there any computations using variables having the same data type but of different
lengths?
• Is an overflow or underflow possible during the computation of an expression? (That
is, the end result may appear to have a valid value, but an intermediate result might be
too big or too small for the programming language’s data types.)
Comparison Errors
• Are there any mixed-mode comparisons or comparisons between variables of different
lengths? If so, ensure that the conversion rules are well understood.
• Are the comparison operators correct? Programmers frequently confuse relations such
as at most, at least, greater than, not less than, and less than or equal.
• Are the operands of a Boolean operator Boolean? Have comparison and Boolean
operators been erroneously mixed together? (Example: if you want to determine
whether i is greater than x or y, i > x || y is incorrect. Instead, it should be (i > x) || (i > y).)
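This mixed comparison/Boolean error can be reproduced in Python, where `or` behaves analogously to `||`; the variable values below are arbitrary, chosen only to expose the bug:

```python
# Intent: is i greater than x or greater than y?
i, x, y = 5, 10, 20

wrong = i > x or y        # parsed as (i > x) or (y): y is truthy, so this is 20!
right = i > x or i > y    # the intended comparison

print(wrong)   # → 20  (truthy, so a buggy if-branch would be taken)
print(right)   # → False
```

The buggy form is not a syntax error, which is exactly why an inspection checklist item is needed to catch it.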
Control-Flow Errors
• Will every loop eventually terminate? Devise an informal proof or argument showing
that each loop will terminate.
• Will the program, module, or subroutine eventually terminate?
• Is it possible that, because of the conditions upon entry, a loop will never execute?
Interface Errors
• Does the number of arguments passed by this module to another module equal the
number of parameters expected by that module?
• Does the units system of each argument passed to another module match the units
system of the corresponding parameter in that module?
• If built-in functions are invoked, are the number, attributes, and order of the arguments
correct?
Input/Output Errors
• Does the program properly handle ‘‘File not Found’’ errors?
• Are I/O error conditions handled correctly?
• Are there spelling or grammatical errors in any text that is printed or displayed by
the program?
• Have all files been opened before use?
• Have all files been closed after use?
• Is sufficient memory available to hold the file your program will read?
Walkthroughs
The code walkthrough, like the inspection, is a set of procedures and error-detection
techniques for group code reading.
• It shares much in common with the inspection process, but the procedures are slightly
different, and a different error-detection technique is employed.
• Like the inspection, the walkthrough is an uninterrupted meeting of one to two hours
in duration.
• The walkthrough team consists of three to five people.
• One of these people plays a role similar to that of the moderator in the inspection
process; another person plays the role of a secretary (a person who records all errors
found); and a third person plays the role of a tester. Of course, the programmer is one of
those people. Suggestions as to who the three to five people should be vary.
• Suggestions for the other participants include: A highly experienced programmer, A
programming-language expert, A new programmer (to give a fresh, unbiased outlook),
The person who will eventually maintain the program, Someone from a different
project, Someone from the same programming team as the programmer.
• The initial procedure is identical to that of the inspection process.
• The participants are given the materials several days in advance, to allow them time to
bone up on the program.
• However, the procedure in the meeting is different.
• Rather than simply reading the program or using error checklists, the participants
‘‘play computer.’’
• The person designated as the tester comes to the meeting armed with a small set of
paper test cases—representative sets of inputs (and expected outputs) for the program or
module.
• During the meeting, each test case is mentally executed; that is, the test data are
‘‘walked through’’ the logic of the program.
• The state of the program (i.e., the values of the variables) is monitored on paper or a
whiteboard.
• The test cases must be simple in nature and few in number, because people execute
programs at a rate that is many orders of magnitude slower than a machine.
• Hence, the test cases themselves do not play a critical role; rather, they serve as a
vehicle for getting started and for questioning the programmer about his or her logic and
assumptions.
• In most walkthroughs, more errors are found during the process of questioning the
programmer than are found directly by the test cases themselves.
• As in the inspection, the attitude of the participants is critical.
• Comments should be directed toward the program rather than the programmer.
• In other words, errors are not regarded as weaknesses in the person who committed
them. Rather, they are viewed as inherent to the difficulty of the program development.
• The walkthrough should have a follow-up process similar to that described for the
inspection process.
• Also, the side effects observed from inspections (identification of error-prone sections
and education in errors, style, and techniques) also apply to the walkthrough process.

Desk Checking
• A third human error-detection process is the older practice of desk checking.
• A desk check can be viewed as a one-person inspection or walkthrough:
• A person reads a program, checks it with respect to an error list, and/or walks test data
through it.
• For most people, desk checking is relatively unproductive.
• One reason is that it is a completely undisciplined process.
• A second, and more important reason, is that it runs counter to testing principle 2 (A
programmer should avoid attempting to test his or her own program).
• For this reason, desk checking is best performed by a person other than the author of
the program.

Peer Ratings (Peer Reviews or Program Reviews)


• The last human review process is not associated with program testing (i.e., its
objective is not to find errors).
• Peer rating is a technique of evaluating anonymous programs in terms of their overall
quality, maintainability, extensibility, usability, and clarity.
• The purpose of the technique is to provide programmer self evaluation.
• A programmer is selected to serve as an administrator of the process.
• The administrator, in turn, selects approximately 6 to 20 participants (6 is the
minimum to preserve anonymity).
• The participants are expected to have similar backgrounds.
• Each participant is asked to select two of his or her own programs to be reviewed.
• One program should be representative of what the participant considers to be his or her
finest work; the other should be a program that the programmer considers to be poorer
in quality.
• Once the programs have been collected, they are randomly distributed to the
participants.
• Each participant is given four programs to review. Two of the programs are the
‘‘finest’’ programs and two are ‘‘poorer’’ programs, but the reviewer is not told which
is which.
• Each participant spends 30 minutes reviewing each program and then completes an
evaluation form.
• After reviewing all four programs, each participant rates the relative quality of the four
programs.
• The evaluation form asks the reviewer to answer, on a scale from 1 to 10 (1 meaning
definitely yes and 10 meaning definitely no), such questions as:
• Was the program easy to understand?
• Was the high-level design visible and reasonable?
• Was the low-level design visible and reasonable?
• Would it be easy for you to modify this program?
• Would you be proud to have written this program?
• The reviewer also is asked for general comments and suggested improvements.
• After the review, the participants are given the anonymous evaluation forms for their
two contributed programs.
• They also are given a statistical summary showing the overall and detailed ranking of
their original programs across the entire set of programs, as well as an analysis of how
their ratings of other programs compared with those ratings of other reviewers of the
same program.
• The purpose of the process is to allow programmers to self-assess their programming
skills.

• Blackbox testing: Equivalence class testing, Boundary value testing, Decision
table testing, Pairwise testing, State transition testing, Use-case testing;
• White box testing: Control flow testing, Data flow testing
• Black box testing (Functional testing, Behavioral Testing, Data-driven testing,
and Closed box testing) is a strategy in which testing is based solely on the
requirements and specifications.
• Black box testing requires no knowledge of the internal paths, structure, or
implementation of the software under test.
• White box testing (Structural testing, Clear box testing, Code-based testing, and
Transparent testing and Glass box testing) is a strategy in which testing is based
on the internal paths, structure, and implementation of the software under test.
• White box testing generally requires detailed programming skills.
• One of the basic goals of white-box testing is to verify a working flow for an
application
• An additional type of testing is called Gray box testing.
• Grey Box Testing is also known as translucent testing as the tester has
limited knowledge of coding.
• Gray Box Testing is a software testing method, which is a combination of
both White Box Testing and Black Box Testing method.
• Grey Box Testing or Gray box testing is a software testing technique to test a
software product or application with partial knowledge of internal structure of
the application.
• The purpose of grey box testing is to search and identify the defects due to
improper code structure or improper use of applications.

• In White Box testing internal structure (code) is known


• In Black Box testing internal structure (code) is unknown
• In Grey Box Testing internal structure (code) is partially known

Here are the generic steps followed to carry out any type of Black Box Testing.
• Initially, the requirements and specifications of the system are examined
• Tester chooses valid inputs (positive test scenario) to check whether SUT (System
Under Test) processes them correctly. Also, some invalid inputs (negative test
scenario) are chosen to verify that the SUT is able to detect them
• Tester determines expected outputs for all those inputs
• Software tester constructs test cases with the selected inputs
• The test cases are executed
• Software tester compares the actual outputs with the expected outputs
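The generic steps above can be walked through on a minimal example; the SUT here is a hypothetical leap-year checker specified only by its requirements (its internals are irrelevant to the tester):

```python
def is_leap(year):
    """Hypothetical SUT: specified to accept positive integer years only."""
    if not isinstance(year, int) or year < 1:
        raise ValueError("year must be a positive integer")
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Steps 2-3: choose valid/invalid inputs and determine expected outcomes.
valid_cases = {2000: True, 1900: False, 2024: True, 2023: False}
invalid_inputs = [0, -4, "2000"]

# Steps 4-6: execute the test cases and compare actual vs. expected.
for year, expected in valid_cases.items():
    assert is_leap(year) == expected

for bad in invalid_inputs:
    try:
        is_leap(bad)
        print("SUT failed to reject:", bad)
    except ValueError:
        pass  # SUT detected the invalid input, as the spec requires

print("all black-box checks passed")
```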
Black-Box Testing Techniques
1) Equivalence class testing
2) Boundary value testing
3) Decision table testing
4) Pairwise testing
5) State transition testing
6) Use-case testing
Equivalence Class Testing
• Equivalence Partitioning or Equivalence Class Partitioning is a type of black box
testing technique which can be applied to all levels of software testing, such as unit,
integration, and system testing.
• Equivalence class testing is a technique used to reduce the number of test cases (Test
cases consist of inputs, outputs, and order of execution) to a manageable level while
still maintaining reasonable test coverage.
• It is a software testing technique that divides the input data of a software unit into
partitions of equivalent data from which test cases can be derived. In principle, test
cases are designed to cover each partition at least once.

• In this technique, input data units are divided into equivalent partitions that can be
used to derive test cases which reduces time required for testing because of small
number of test cases
• It divides the input data of software into different equivalence data classes. All it
requires are inputs or outputs that can be partitioned based on the system's
requirements.
• An equivalence class consists of a set of data that is treated the same by the module or
that should produce the same result.
• Any data value within a class is equivalent, in terms of testing, to any other value.
• Specifically, we would expect that:
1. If one test case in an equivalence class detects a defect, all other test cases in the
same equivalence class are likely to detect the same defect.
2. If one test case in an equivalence class does not detect a defect, no other test case
in the same equivalence class is likely to detect the defect.
• Testing-by-Contract is based on the design-by-contract philosophy. Its approach is to
create test cases only for the situations in which the pre-conditions are met.
• Defensive Testing: an approach that tests under both normal and abnormal pre-
conditions.
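A minimal sketch of equivalence class testing, using a hypothetical age field whose valid range is assumed to be 18-60; one representative value is tested per class:

```python
def accept_age(age):
    """Hypothetical SUT: accept applicants aged 18 to 60 inclusive."""
    return isinstance(age, int) and 18 <= age <= 60

# One representative value per equivalence class.
classes = {
    "valid: 18-60":         (35, True),
    "invalid: below 18":    (10, False),
    "invalid: above 60":    (75, False),
    "invalid: non-integer": ("x", False),
}

for name, (value, expected) in classes.items():
    assert accept_age(value) == expected, name

print(len(classes), "classes covered with", len(classes), "test cases")
```

Four test cases stand in for the whole input domain, on the assumption that any other value in a class would behave the same as its representative.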

Boundary Value Testing


• Boundary testing is the process of testing between extreme ends or boundaries
between partitions of the input values.
• Boundary value testing is a technique used to reduce the number of test cases to a
manageable size while still maintaining reasonable coverage.
• So these extreme ends like Start - End, Lower - Upper, Maximum -Minimum, Just
Inside - Just Outside values are called boundary values and the testing is called
‘boundary testing’.
• Boundary value testing focuses on the boundaries because that is where so many
defects hide.
• Boundary Testing comes after the Equivalence Class Partitioning
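Continuing the same hypothetical 18-60 age range, a boundary value sketch tests the values exactly on, just inside, and just outside each boundary:

```python
def accept_age(age):
    """Hypothetical SUT: accept applicants aged 18 to 60 inclusive."""
    return isinstance(age, int) and 18 <= age <= 60

boundary_cases = [
    (17, False),  # just below the lower boundary
    (18, True),   # on the lower boundary
    (19, True),   # just above the lower boundary
    (59, True),   # just below the upper boundary
    (60, True),   # on the upper boundary
    (61, False),  # just above the upper boundary
]

for value, expected in boundary_cases:
    assert accept_age(value) == expected, value
print("all boundary cases pass")
```

An off-by-one mistake such as writing `18 < age` instead of `18 <= age` would be caught only by the boundary case (18, True), which is why these values deserve dedicated tests.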
Decision Table Testing
• Decision table testing is a software testing technique used to test system behavior
for different input combinations. This is a systematic approach where the different
input combinations and their corresponding system behavior (Output) are captured in
a tabular form.
• A Decision Table is a tabular representation of inputs versus rules / cases / test
conditions.
• It is a very effective tool used for both complex software testing and requirements
management.
• Decision table helps to check all possible combinations of conditions for testing and
testers can also identify missed conditions easily.
• Decision Table testing can be used whenever the system must implement complex
business rules when these rules can be represented as a combination of conditions and
when these conditions have discrete actions associated with them.
• That is why it is also called a Cause-Effect table, where causes and effects are
captured for better test coverage.
• Decision tables are an excellent tool to capture certain kinds of system requirements
and to document internal system design
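A minimal sketch of decision table testing; the membership/order-size discount rules here are invented for illustration. Each column (rule) of the table becomes one test case, and the completeness check confirms no condition combination is missed:

```python
from itertools import product

def discount_rate(is_member, big_order):
    """Hypothetical SUT implementing the business rules."""
    if is_member and big_order:
        return 20
    if is_member or big_order:
        return 10
    return 0

# The decision table: (conditions) -> expected action, one entry per rule.
decision_table = {
    # (member, big_order): expected discount %
    (True,  True):  20,
    (True,  False): 10,
    (False, True):  10,
    (False, False): 0,
}

# All 2^2 condition combinations are present, so no rule is missed.
assert set(decision_table) == set(product([True, False], repeat=2))

for conditions, expected in decision_table.items():
    assert discount_rate(*conditions) == expected
print("all", len(decision_table), "rules verified")
```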
Pairwise Testing
• Pairwise Testing also known as All-pairs testing is a testing approach taken for
testing the software using combinatorial method.
• When the number of combinations to test is very large, do not attempt to test all
combinations of all the values for all the variables; instead, test all pairs of values.
This significantly reduces the number of tests that must be created and run.
• It is a method to test all possible discrete combinations of each pair of parameters
involved.
• Assume we have a piece of software to be tested which has 10 input fields and 10
possible settings for each input field; then there are 10^10 possible input combinations.
• In this case, exhaustive testing is impossible, even if we wished to test all combinations.
• Example: If a system had four different input parameters and each one could take on
one of three different values, the number of combinations is 3^4, which is 81. It is
possible to cover all the pairwise input combinations in only nine tests.
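The 81-versus-9 claim can be checked mechanically. The nine tests below form a standard L9 orthogonal array (values coded 0, 1, 2), and the helper verifies that every pair of values for every pair of parameters appears in at least one test:

```python
from itertools import combinations, product

# Each row is one test; each column is one parameter (values coded 0, 1, 2).
L9 = [
    (0, 0, 0, 0), (0, 1, 1, 1), (0, 2, 2, 2),
    (1, 0, 1, 2), (1, 1, 2, 0), (1, 2, 0, 1),
    (2, 0, 2, 1), (2, 1, 0, 2), (2, 2, 1, 0),
]

def covers_all_pairs(tests, n_params=4, n_values=3):
    """Check that every value pair appears for every pair of parameter columns."""
    for c1, c2 in combinations(range(n_params), 2):
        seen = {(t[c1], t[c2]) for t in tests}
        if seen != set(product(range(n_values), repeat=2)):
            return False
    return True

print(3 ** 4, "exhaustive tests")   # → 81
print(len(L9), "pairwise tests")    # → 9
print(covers_all_pairs(L9))         # → True
```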
Pairwise Testing Example
• Car Ordering Application:
• The car ordering application allows for Buying and Selling cars. It should support
trading in Delhi and Mumbai.
• The application should accept registration numbers, which may be valid or invalid. It
should allow the trade of the following cars: BMW, Audi, and Mercedes.
• Two types of booking can be done: E-booking and In Store.
• Orders can be placed only during trading hours

Step #1: Let’s list down the variables involved.


• 1) Order category
a. Buy
b. Sell
• 2) Location
a. Delhi
b. Mumbai
• 3) Car brand
a. BMW
b. Audi
c. Mercedes
• 4) Registration numbers
a. Valid (5000)
b. Invalid
• 5) Order type
a. E-Booking
b. In-store
• 6) Order time
a. Working hours
b. Non-working hours
• If we want to test all possible valid combinations:
= 2 × 2 × 3 × 5000 × 2 × 2
= 240,000 valid test case combinations
• There is also an infinite number of invalid combinations.
Step #2: Let’s simplify
– Use a smart representative sample.
– Use groups and boundaries, even when data is non-discrete.
– Reduce registration numbers to two:
• Valid registration number
• Invalid registration number
• Now let’s calculate the number of possible combinations:
= 2 × 2 × 3 × 2 × 2 × 2
= 96
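Both counts can be verified directly:

```python
from math import prod

# With all 5000 registration numbers treated as distinct values:
print(prod([2, 2, 3, 5000, 2, 2]))   # → 240000 combinations

# After reducing registration numbers to just {valid, invalid}:
print(prod([2, 2, 3, 2, 2, 2]))      # → 96 combinations
```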
Step #3: Arranging variables and values involved.

Step #4: Arrange variables to create a test suite


• Let’s start filling in the table column by column. Initially, the table should look
something like this: the three values of Car brand (the variable having the highest
number of values) should be written two times each (two being the number of values
of the next-highest variable, i.e., Order category).
• The pairwise testing technique has some limitations as well.
• It fails when the values selected for testing are incorrect.
• It fails when highly probable combinations get too little attention.
• It fails when interactions between the variables are not understood well.

State Transition Testing


• State Transition Testing is a black box testing technique in which changes made in
input conditions cause state changes or output changes in the Application under Test
(AUT).
• State-transition diagrams are an excellent tool to capture certain types of system
requirements and to document internal system design.
• These diagrams document the events that come into and are processed by a system as
well as the system's responses.
• When a system must remember something about what has happened before or when
valid and invalid orders of operations exist, state-transition diagrams are excellent
tools to record this information.
• State transition testing helps to analyze behaviour of an application for different input
conditions.
• Testers can provide positive and negative input test values and record the system
behaviour.
• State Transition Testing Technique is helpful where you need to test different system
transitions.
• State (represented by a circle)-A state is a condition in which a system is waiting for
one or more events. States "remember" inputs the system has received in the past and
define how the system should respond to subsequent events when they occur. These
events may cause state-transitions and/or initiate actions.
• Transition (represented by an arrow)-A transition represents a change from one state
to another caused by an event.
• Event (represented by a label on a transition)-An event is something that causes the
system to change state. Generally, it is an event in the outside world that enters the
system through its interface.
• Action (represented by a command following a "/")-An action is an operation
initiated because of a state change.
• The entry point on the diagram is shown by a black dot while the exit point is shown
by a bulls-eye symbol

State Transition Diagram
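A state machine of the kind such a diagram describes can be tested directly from its transition table; the simplified ATM-style states and events below are hypothetical, invented for illustration. One test drives a valid order of operations, another probes an invalid transition:

```python
TRANSITIONS = {
    # (current state, event) -> next state
    ("idle",       "insert_card"): "card_in",
    ("card_in",    "enter_pin"):   "authorized",
    ("card_in",    "eject"):       "idle",
    ("authorized", "withdraw"):    "dispensing",
    ("dispensing", "done"):        "idle",
}

def step(state, event):
    """Advance the machine; reject any event not allowed in this state."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"invalid transition: {event} in state {state}")

# Positive test: a valid order of operations returns the machine to idle.
state = "idle"
for event in ["insert_card", "enter_pin", "withdraw", "done"]:
    state = step(state, event)
assert state == "idle"

# Negative test: an invalid order (withdraw before entering a PIN).
try:
    step("card_in", "withdraw")
    print("defect: invalid transition accepted")
except ValueError:
    print("invalid transition correctly rejected")
```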

Use-Case Testing
• A use case is a scenario that describes the use of a system by an actor to accomplish a
specific goal.
• A scenario is a sequence of steps that describe the interactions between the actor and
the system.
• Use cases are made on the basis of user actions and the response of the software
application to those user actions.
• The set of use cases makes up the functional requirements of a system.
• An actor is a user, playing a role with respect to the system, seeking to use the system
to accomplish something worthwhile within a particular context.
• Actors are generally people, although other systems may also be actors.
• The use case is defined from the perspective of the user, not the system.
• It is widely used in developing test cases at the system or acceptance level.
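A sketch of use-case testing, deriving one test for the main success scenario and one for an extension scenario of a hypothetical "log in" use case (the `AuthSystem` class and its credentials are invented for illustration):

```python
class AuthSystem:
    """Hypothetical system under test for the 'log in' use case."""
    def __init__(self):
        self.users = {"alice": "s3cret"}

    def login(self, name, password):
        if self.users.get(name) == password:
            return "welcome"
        return "error: invalid credentials"

sut = AuthSystem()

# Main success scenario: actor enters valid credentials -> access granted.
assert sut.login("alice", "s3cret") == "welcome"

# Extension scenario: wrong password -> error shown, access denied.
assert sut.login("alice", "wrong") == "error: invalid credentials"
print("both use-case scenarios pass")
```

Typically each scenario of the use case (the main flow plus each extension) yields at least one test case.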

• White box testing is a strategy in which testing is based on the internal paths,
structure, and implementation of the software under test (SUT).
• White box testing is more than code testing—it is path testing.
• Generally, the paths that are tested are within a module (unit testing).
• We can apply the same techniques to test paths between modules within subsystems,
between subsystems within systems, and even between entire systems.
How do you perform White Box Testing?

White box testing has four distinct disadvantages.


• First, the number of execution paths may be so large that they cannot all be
tested. Attempting to test all execution paths through white box testing is
generally infeasible.
• Second, the test cases chosen may not detect data sensitivity errors.
• Third, white box testing assumes the control flow is correct (or very close to
correct).
• Fourth, the tester must have the programming skills to understand and evaluate
the software under test.
Advantages:
• When using white box testing, the tester can be sure that every path through the
software under test has been identified and tested

WhiteBox Testing Techniques


❖ Control flow testing
❖ Data flow testing

Control Flow Testing


• Control flow testing identifies the execution paths through a module of program code
and then creates and executes test cases to cover those paths.
• Path: A sequence of statement execution that begins at an entry and ends at an exit.
• The aim of Control flow testing technique is to determine the execution order of
statements or instructions of the program through a control structure.
• The control structure of a program is used to develop a test case for the program.
• In this technique, a particular part of a large program is selected by the tester to set the
testing path
• Test cases represented by the control graph of the program.
• Control flow graphs are the foundation of control flow testing. These graphs
document the module's control structure. Modules of code are converted to graphs,
the paths through the graphs are analyzed, and test cases are created from that
analysis.
• A control flow graph is formed from nodes, edges, decision nodes, and junction nodes
to specify all possible execution paths.

Notations used for Control Flow Graph


1) Node : Used to create a path of procedures. Basically, it represents the
sequence of procedures
2) Edge : Used to link the direction of nodes
3) Decision Node : Used to decide next node of procedure as per the value
4) Junction node : Point where at least three links meet
• Control flow testing is the cornerstone of unit testing.
• It should be used for all modules of code that cannot be tested sufficiently through
reviews and inspections.
• Its limitations are that the tester must have sufficient programming skill to understand
the code and its control flow.
• In addition, control flow testing can be very time consuming because of all the
modules and basis paths that comprise a system.
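A minimal control flow testing sketch: a hypothetical module with two decision nodes, and one test case per path from entry to exit through its control flow graph:

```python
def classify(n):
    """Hypothetical module: three execution paths via two decisions."""
    if n < 0:        # decision node 1
        return "negative"
    if n == 0:       # decision node 2
        return "zero"
    return "positive"

# One test case per entry-to-exit path.
path_cases = [
    (-5, "negative"),   # path: decision 1 true
    (0,  "zero"),       # path: decision 1 false, decision 2 true
    (7,  "positive"),   # path: both decisions false
]

for n, expected in path_cases:
    assert classify(n) == expected
print("all", len(path_cases), "paths covered")
```

For a module this small, path coverage needs only three tests; real modules with loops and nested decisions are exactly where the path-explosion limitation noted above appears.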

Data Flow Testing


• Data flow testing is a powerful tool to detect improper use of data values due to
coding errors.
• Data Flow Testing is a specific strategy of software testing that focuses on data
variables and their values
• Variables that contain data values have a defined life cycle. They are created, they are
used, and they are killed (destroyed).
• It is concerned with:
• Statements where variables receive values,
• Statements where these values are used or referenced.

Data flow testing pinpoints the following issues


• A variable is defined but not used or referenced,
• A variable is used but never defined,
• A variable is defined twice before it is used
• Disadvantages of Data Flow Testing
• Time consuming and costly process
• Requires knowledge of programming languages
• A data flow graph is similar to a control flow graph in that it shows the processing
flow through a module.
• We will construct these diagrams and verify that the define-use-kill patterns are
appropriate.
• First, we will perform a static test of the diagram. By "static" we mean we
examine the diagram (formally through inspections or informally through
look-sees).
Second, we perform dynamic tests on the module. By "dynamic" we mean we construct
and execute test cases

• It checks the points where variables receive data and the points where those values are used.
• The process is conducted to detect the bugs because of the incorrect usage of data
variables or data values.
• A common programming mistake is referencing the value of a variable without first
assigning a value to it.
• Data flow testing builds on and expands control flow testing techniques.
• As with control flow testing, it should be used for all modules of code that cannot be
tested sufficiently through reviews and inspections.
• Its limitations are that the tester must have sufficient programming skill to understand
the code, its control flow, and its variables.
• Like control flow testing, data flow testing can be very time consuming because of all
the modules, paths, and variables that comprise a system.
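The define-use anomalies listed above can be sketched in code. The functions below are deliberately flawed, hypothetical examples; the comments mark the define (d), use (u), and kill (k) events on each variable:

```python
# Sketches of the anomalies data flow testing targets.

def use_before_define(flag):
    if flag:
        total = 0      # d: defined only on this branch
    return total       # u: used; fails when flag is False

def define_without_use():
    temp = 99          # d: defined ...
    return 0           # k: killed at function exit, never used

def redefine_before_use():
    x = 1              # d: dead definition
    x = 2              # d: redefined before any use of the first value
    return x           # u

# A dynamic data flow test exposes the first anomaly:
try:
    use_before_define(False)
    anomaly_detected = False
except UnboundLocalError:
    anomaly_detected = True

assert anomaly_detected
assert redefine_before_use() == 2
```

A static inspection of the define-use patterns would flag all three functions; the dynamic test confirms the first one at run time.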
Comparison of Black Box and White Box Testing
What is Functional Testing?
➢ Functional testing is a type of testing which verifies that each function of the software
application operates in conformance with the requirement specification. This testing
mainly involves black box testing, and it is not concerned about the source code of the
application.
➢ Every functionality of the system is tested by providing appropriate input, verifying the
output and comparing the actual results with the expected results. This testing involves
checking of User Interface, APIs, Database, security, client/ server applications and
functionality of the Application Under Test. The testing can be done either manually or
using automation
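As a sketch, suppose a hypothetical requirement states that a username must be 3 to 12 alphanumeric characters. A black-box functional test exercises only inputs and expected outputs derived from that requirement, never the source code:

```python
import re

# Implementation under test (illustrative stand-in).
def is_valid_username(name):
    return re.fullmatch(r"[A-Za-z0-9]{3,12}", name) is not None

# Test cases derived purely from the requirement specification.
requirement_cases = [
    ("bob", True),        # minimum length
    ("ab", False),        # too short
    ("a" * 12, True),     # maximum length
    ("a" * 13, False),    # too long
    ("user_1", False),    # non-alphanumeric character
]

for name, expected in requirement_cases:
    assert is_valid_username(name) == expected
```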

What is Non-Functional Testing?


➢ Non-functional testing is a type of testing to check non-functional aspects (performance,
usability, reliability, etc.) of a software application. It is explicitly designed to test the
readiness of a system as per nonfunctional parameters which are never addressed by
functional testing.
➢ A good example of non-functional test would be to check how many people can
simultaneously login into a software.
➢ Non-functional testing is equally important as functional testing and affects client
satisfaction.

Testing Automation
➢ Automation Testing or Test Automation is a software testing technique that uses special
automated testing tools to execute a test case suite.
➢ On the contrary, Manual Testing is performed by a human sitting in front of a computer
carefully executing the test steps.
➢ Test automation is the process of performing software testing activities with little or no
human interaction, in order to achieve greater speed and efficiency.

Types of Automation Tests


❖ Functional Testing
❖ Unit Testing
❖ Integration Testing
❖ Regression Testing
❖ Smoke Testing - Smoke testing is performed to examine whether the deployed build
is stable or not.
➢ UNIT TESTING is a type of software testing where individual units or components
of a software are tested. The purpose is to validate that each unit of the software code
performs as expected. Unit Testing is done during the development (coding phase) of
an application by the developers.
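A minimal sketch of an automated unit test, using Python's built-in unittest framework (the add function is an illustrative unit under test):

```python
import unittest

# Unit under test (illustrative).
def add(a, b):
    return a + b

class TestAdd(unittest.TestCase):
    def test_positive(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative(self):
        self.assertEqual(add(-2, -3), -5)

# A test runner executes the whole suite with no human interaction.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestAdd)
result = unittest.TextTestRunner().run(suite)
assert result.wasSuccessful()
```

In practice such suites run automatically on every build, which is what distinguishes automated from manual testing.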
➢ Smoke Testing is a software testing process that determines whether the deployed
software build is stable or not. It consists of a minimal set of tests run on each build
to verify core software functionality. Smoke testing is also known as "Build Verification
Testing" or "Confidence Testing."

➢ INTEGRATION TESTING is a level of software testing where individual units / components


are combined and tested as a group. The purpose of this level of testing is to expose faults in the
interaction between integrated units. Test drivers and test stubs are used to assist in Integration
Testing.
➢ Regression testing is re-running functional and non-functional tests to ensure that previously
developed and tested software still performs after a change. If not, that would be called a
regression.

Which Test Cases to Automate?


➢ Test cases to be automated can be selected using the following criteria
o 1. High Risk - Business Critical test cases
o 2. Test cases that are repeatedly executed
o 3. Test Cases that are very tedious or difficult to perform manually
o 4. Test Cases which are time-consuming
➢ The following category of test cases are not suitable for automation:
o 1. Test Cases that are newly designed and not executed manually at least once
o 2. Test Cases for which the requirements are frequently changing
o 3. Test cases which are executed on an ad-hoc basis.

Automated Testing Process

Automated Functional Tools


• Cucumber • JBehave • Concordion • Twist
Defect life cycle
➢ The defect life cycle (bug life cycle) is the cycle of stages that a defect goes through
during its lifetime in the testing process.
➢ It starts when defect is found (when reported by the tester ) and ends when a tester
ensures that a defect is closed, after ensuring that the issue is fixed and won't occur again.
➢ Defect status (bug status) in the defect life cycle is the present state that a defect or
bug is currently in.
➢ The goal of defect status is to precisely convey the current state or progress of a defect or
bug in order to better track and understand the actual progress of the defect life cycle.
➢ The number of states that a defect goes through varies from project to project.
➢ Lifecycle diagram given here covers all possible states.
➢ New: When a new defect is logged and posted for the first time. It is assigned a status as
NEW.
➢ Assigned: Once the bug is posted by the tester, the tester's lead approves the bug and
assigns it to the development team.
➢ Open: The developer starts analyzing and works on the defect fix
➢ Fixed: When a developer makes a necessary code change and verifies the change, he or
she can make bug status as "Fixed."
➢ Pending retest: Once the defect is fixed, the developer hands the code back to the tester
for retesting. Since the retesting remains pending on the tester's end, the status
assigned is "pending retest."
➢ Retest: Tester does the retesting of the code at this stage to check whether the defect is
fixed by the developer or not and changes the status to "Re-test."
➢ Reopen: If the bug persists even after the developer has fixed the bug, the tester changes
the status to "reopened". Once again the bug goes through the life cycle.
➢ Closed: If the bug no longer exists, then the tester assigns the status "Closed."
➢ Duplicate: If the defect is reported twice or corresponds to an already-reported bug,
the status is changed to "Duplicate."
➢ Verified: The tester re-tests the bug after it got fixed by the developer. If there is no bug
detected in the software, then the bug is fixed and the status assigned is "verified."
➢ Rejected: If the developer feels the defect is not a genuine defect, then he or she
changes the status to "Rejected."
➢ Deferred: If the present bug is not of a prime priority and if it is expected to get fixed in
the next release, then status "Deferred" is assigned to such bugs
➢ Not a bug: If it does not affect the functionality of the application then the status assigned
to a bug is "Not a bug".
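One way to visualize the life cycle is as a state machine. The sketch below encodes the states described above with an illustrative (not exhaustive) set of allowed transitions; real trackers vary from project to project:

```python
# Hypothetical defect life cycle transitions, keyed by current state.
TRANSITIONS = {
    "New":            {"Assigned", "Rejected", "Duplicate", "Deferred", "Not a bug"},
    "Assigned":       {"Open"},
    "Open":           {"Fixed"},
    "Fixed":          {"Pending retest"},
    "Pending retest": {"Retest"},
    "Retest":         {"Verified", "Reopened"},
    "Reopened":       {"Assigned"},        # the bug goes through the cycle again
    "Verified":       {"Closed"},
}

def is_valid_path(path):
    """Check that every consecutive pair of states is an allowed transition."""
    return all(b in TRANSITIONS.get(a, set()) for a, b in zip(path, path[1:]))

assert is_valid_path(["New", "Assigned", "Open", "Fixed",
                      "Pending retest", "Retest", "Verified", "Closed"])
assert not is_valid_path(["New", "Closed"])   # cannot close without fixing and verifying
```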
➢ In Agile development, testing needs to happen early and often
➢ So, instead of waiting for development to be finished before testing begins, testing
happens continuously as features are added.
➢ Tests are prioritized just like user stories.
➢ Testers aim to get through as many tests as they can in an iteration.

Regression Testing
➢ The purpose of regression testing is to verify whether a code change introduces issues/defects
into the existing functionality.
➢ There are so many kinds of possible changes that can impact the existing functionality in an
application system.
➢ Even the simplest change to the code could impact previously tested functionality.
➢ Regression testing is performed when there is a code change in a software application.
➢ Regression testing: Test the program's entire functionality to see if anything changed when
you added new code to the project.
➢ Regression testing (which can also be performed at the unit level) is used to validate the
updated software against the old set of test cases that have already been passed.
➢ Regression testing helps to ensure that changes (due to testing or for other reasons) do not
introduce unintended behavior or additional errors
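A minimal sketch: after a code change, the previously passing test cases are re-run as a regression suite (the discount function and its cases are illustrative):

```python
# Function under test. Suppose a change was just made elsewhere in
# the module; the original rule (members get 10% off) must still hold.
def discount(price, member):
    return price * 0.9 if member else price

# Regression suite: cases that passed before the change.
regression_suite = [
    ((100.0, True), 90.0),
    ((100.0, False), 100.0),
    ((0.0, True), 0.0),
]

# Re-running the old suite after the change; any failure here
# would be a regression.
for (price, member), expected in regression_suite:
    assert discount(price, member) == expected
```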

Testing Non-Functional Requirements.


• NON-FUNCTIONAL TESTING is defined as a type of software testing to check non-functional
aspects (performance, usability, reliability, etc.) of a software application.
• It is designed to test the readiness of a system as per non-functional parameters which are
never addressed by functional testing.
1) Security: The parameter defines how a system is safeguarded against deliberate
and sudden attacks from internal and external sources. This is tested via Security
Testing.
2) Reliability: The extent to which any software system continuously performs the
specified functions without failure. This is tested by Reliability Testing
3) Survivability: The parameter checks that the software system continues to
function and recovers itself in case of system failure. This is checked by Recovery
Testing
4) Availability: The parameter determines the degree to which user can depend on
the system during its operation. This is checked by Stability Testing.
5) Usability: The ease with which the user can learn, operate, prepare inputs and
outputs through interaction with a system. This is checked by Usability Testing
6) Scalability: The term refers to the degree in which any software application can
expand its processing capacity to meet an increase in demand. This is tested by
Scalability Testing
7) Interoperability: This non-functional parameter checks how a software system
interfaces with other software systems. This is checked by Interoperability Testing
8) Efficiency: The extent to which any software system can handle capacity,
quantity, and response time requirements.
9) Flexibility: The term refers to the ease with which the application can work in
different hardware and software configurations. Like minimum RAM, CPU
requirements.
10) Portability: The flexibility of software to transfer from its current hardware or
software environment
11) Reusability: It refers to a portion of the software system that can be converted for
use in another application.
Non-functional testing includes:
• Baseline testing
• Compliance testing
• Documentation testing
• Endurance testing or reliability testing
• Load testing
• Localization testing and Internationalization testing
• Performance testing
•Definition: Baseline testing refers to the validation of the documents and specifications
on which test cases are designed. Baseline, in general, refers to a benchmark that forms
the base of any new creation. In software testing, this refers to benchmarking the
performance of the application.
• A compliance test is an audit that determines whether an organization is following its
own policies and procedures in a particular area. The activities commonly used in a
compliance test are asking employees about their duties and observing employees in the
conduct of their duties.
•Documentation testing is part of non-functional testing of a product. It may be a type of
black box testing that ensures that documentation about how to use the system matches
with what the system does, providing proof that system changes and improvement have
been documented
•Endurance testing or reliability testing
• The main purpose of endurance testing is to ensure that the application is capable
enough to handle extended load without any deterioration of response time. This type of
testing is performed at the last stage of the performance run cycle. Endurance testing is a
long process and sometimes lasts for even up to a year
• Load testing generally refers to the practice of modeling the expected usage of a
software program by simulating multiple users accessing the program concurrently. As
such, this testing is most relevant for multi-user systems; often one built using a
client/server model, such as web servers.
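A toy sketch of load testing: several concurrent "users" are simulated with a thread pool against a stand-in service function. Real load tests target actual endpoints, typically with dedicated tools such as JMeter:

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in for a real service call (illustrative only).
def handle_request(user_id):
    return {"user": user_id, "status": 200}

# Model the expected usage: 50 requests issued by up to
# 10 concurrent simulated users.
with ThreadPoolExecutor(max_workers=10) as pool:
    responses = list(pool.map(handle_request, range(50)))

# Under load, every request should still succeed.
assert len(responses) == 50
assert all(r["status"] == 200 for r in responses)
```

A real load test would also record response times per request to detect degradation as concurrency grows.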
• In Internationalization Testing, the tester checks whether the application successfully
adapts to other languages and regions without code changes, whereas localization is the
process of adapting internationalized software for a specific region or language by adding
locale-specific components and translating text.
• Performance testing is a testing measure that evaluates the speed, responsiveness and
stability of a computer, network, software program or device under a workload.
Organizations will run performance tests in order to identify performance-related
bottlenecks.
•In software testing, recovery testing is the activity of testing how well an application is
able to recover from crashes, hardware failures and other similar problems. Recovery
testing is the forced failure of the software in a variety of ways to verify that recovery is
properly performed.
• Resilience testing is a type of software testing that observes how applications act under
stress. It's meant to ensure the product's ability to perform in chaotic conditions without a
loss of core functions or data; it ensures a quick recovery after unforeseen, uncontrollable
events.
• Security Testing is a type of Software Testing that uncovers vulnerabilities of the
system and determines that the data and resources of the system are protected from
possible intruders. It ensures that the software system and application are free from any
threats or risks that can cause a loss.
• Scalability Testing is a non-functional test methodology in which an application's
performance is measured in terms of its ability to scale up or scale down the number of
user requests or other such performance measure attributes. Scalability testing can be
performed at a hardware, software or database level.
• Stress Testing is a type of software testing that verifies stability & reliability of
software application. The goal of Stress testing is measuring software on its robustness
and error handling capabilities under extremely heavy load conditions and ensuring that
software doesn't crash under crunch situations.
•Usability Testing also known as User Experience(UX) Testing, is a testing method for
measuring how easy and user-friendly a software application is
•VOLUME TESTING is a type of Software Testing, where the software is subjected to
a huge volume of data. It is also referred to as flood testing. Volume testing is done to
analyze the system performance by increasing the volume of data in the database.
• Compatibility Testing is a type of Software testing to check whether your software is
capable of running on different hardware, operating systems, applications, network
environments or Mobile devices. Compatibility Testing is a type of Non-functional
testing
