Basics of Software Testing-II

UNIT II

Software Testing

The topics that are covered are:
- Software testing vs. hardware testing
- Testing vs. verification
- Defect management
- Execution history or execution trace
- Test generation strategies
- Static techniques: static testing, static analysis
- Types of testing
- The saturation effect

1. Software Testing vs. Hardware Testing

The following differences exist between software testing and hardware testing:

Software Testing
- A software application does not degrade over time.
- Model-based testing uses mutation and state transition diagrams.
- The test domain uses an exhaustive set of test cases formed from the data types.
- Test coverage: complete testing is impossible.

Hardware Testing
- Hardware may fail over time.
- Built-In Self-Test (BIST) internal monitoring mechanisms are installed to test the correct functioning of a circuit.

PROF G C SATHISH, RevaITM, Bangalore


- Fault-model-based test case generation is done at different levels: transistor level, gate level, circuit level, and function level.
- The test domain and test suite consist of a bit pattern and a set/sequence of bit patterns.
- Test coverage: complete testing is impossible.

2. Testing vs. Verification

Software quality is ensured by verification and validation.

Verification: "Is the system correct to specification?" Confirmation by examination, and through the provision of objective evidence, that specified requirements have been fulfilled.

Validation: "Is this the right specification?" Confirmation by examination, and through the provision of objective evidence, that the requirements for a specific intended use or application have been fulfilled.

Program verification

Program verification aims at proving the correctness of programs by showing that they contain no errors. This is very different from testing, which aims at uncovering errors in a program. Program verification and testing are best considered as complementary techniques. In practice, one can skip program verification, but not testing. Testing is not a perfect technique, in that a program might contain errors despite the success of a set of tests. Verification might appear to be a perfect technique, as it promises to verify that a program is free from errors. However, the person who verified a program might have made a mistake in the verification process; there might be an incorrect assumption on the input conditions; incorrect assumptions might be made regarding the components that interface with the program; and so on.

Testing vs. verification:
- Testing aims at uncovering errors in the program.
- Testing focuses on reliability and on building confidence to use the software.
- Testing cannot be ignored.


- Testing is not a perfect process.
- Verification focuses on the correctness of the program.
- Verification is optional.
- Verification might appear to be a perfect process, but it too is not perfect.

3. Defect Management

Defect management is an integral part of the development and testing process. Software defects are expensive. Moreover, the cost of finding and correcting defects represents one of the most expensive software development activities. For the foreseeable future, it will not be possible to eliminate defects. While defects may be inevitable, we can minimize their number and impact on our projects. To do this, development teams need to implement a defect management process that focuses on preventing defects, catching defects as early in the process as possible, and minimizing the impact of defects. A little investment in this process can yield significant returns.

This defect management model is not intended to be a standard, but rather a starting point for the development of a customized defect management process within an organization. Companies using the model can reduce defects and their impacts during their software development projects. The defect management process is based on the following general principles:

- The primary goal is to prevent defects. Where this is not possible or practical, the goals are to both find the defect as quickly as possible and minimize the impact of the defect.
- The defect management process should be risk driven, i.e., strategies, priorities, and resources should be based on the extent to which risk can be reduced.
- Defect measurement should be integrated into the software development process and be used by the project team to improve the process. In other words, the project staff, by doing their job, should capture information on defects at the source. It should not be done after the fact by people unrelated to the project or system.
- As much as possible, the capture and analysis of the information should be automated.
- Defect information should be used to improve the process. This, in fact, is the primary reason for gathering defect information.


Most defects are caused by imperfect or flawed processes. Thus, to prevent defects, the process must be altered.

The figure summarizes the defect management process:

- Defect Prevention: use of processes and tools during development and testing.
- Defect Discovery: identification of defects corresponding to failures found in static and dynamic testing.
- Recording and Reporting: defects are recorded and reported using proper forms or tools.
- Defect Classification: by source of origin, severity, type, and cause, using defect classification schemes such as Beizer's or ODC.
- Defect Resolution
- Defect Prediction


Defect Management Process

The major steps involved in the process are:

- Defect Prevention: implementation of techniques, methodology, and standard processes to reduce the risk of defects.
- Deliverable Baseline: establishment of milestones where deliverables will be considered complete and ready for further development work. When a deliverable is baselined, any further changes are controlled. Errors in a deliverable are not considered defects until after the deliverable is baselined.
- Defect Discovery: identification and reporting of defects for development team acknowledgment. A defect is only termed discovered when it has been documented and acknowledged as a valid defect by the development team member(s) responsible for the component(s) in error.
- Defect Resolution: work by the development team to prioritize, schedule, and fix a defect, and document the resolution. This also includes notification back to the tester to ensure that the resolution is verified.
- Process Improvement: identification and analysis of the process in which a defect originated, to identify ways to improve the process and prevent future occurrences of similar defects. The validation process that should have identified the defect earlier is also analyzed to determine ways to strengthen that process.
- Management Reporting: analysis and reporting of defect information to assist management with risk management, process improvement, and project management.
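The discovery-and-resolution flow above can be sketched as a small state machine. This is a minimal sketch with hypothetical status names; the model in the text does not prescribe exact states:

```python
from dataclasses import dataclass, field

# Hypothetical statuses distilled from the steps above (not prescribed by the model).
VALID_TRANSITIONS = {
    "reported": {"acknowledged", "rejected"},  # discovery: a defect counts as
    "acknowledged": {"scheduled"},             # "discovered" once acknowledged
    "scheduled": {"fixed"},                    # resolution: prioritize, schedule, fix
    "fixed": {"verified"},                     # tester verifies the resolution
    "verified": set(),
    "rejected": set(),
}

@dataclass
class Defect:
    summary: str
    status: str = "reported"
    history: list = field(default_factory=list)

    def move_to(self, new_status: str) -> None:
        if new_status not in VALID_TRANSITIONS[self.status]:
            raise ValueError(f"cannot go from {self.status!r} to {new_status!r}")
        self.history.append(self.status)
        self.status = new_status

d = Defect("login fails for empty password")
for step in ("acknowledged", "scheduled", "fixed", "verified"):
    d.move_to(step)
print(d.status)   # verified
```

Keeping the allowed transitions explicit is what makes the recorded history usable later for defect prediction and process improvement.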


Defect Prevention Process

Without a doubt, the best approach to defects is to eliminate them altogether. While that would be ideal, it is virtually impossible given current technology. In the meantime, developers need strategies to find defects quickly and minimize their impact. Identifying and implementing the best defect prevention techniques (which is a large part of identifying the best software development processes) should be a high-priority activity in any defect management program.

Defect prevention should begin with an assessment of the critical risks associated with the system. Getting the critical risks defined allows people to know the types of defects that are most likely to occur and the ones that can have the greatest system impact. Strategies can then be developed to prevent them. The major steps for defect prevention are as follows:

- Identify Critical Risks: identify the critical risks facing the project or system. These are the types of defects that could jeopardize the successful construction, delivery, and/or operation of the system.
- Estimate Expected Impact: for each critical risk, make an assessment of the financial impact if the risk becomes a problem.
- Minimize Expected Impact: once the most important risks are identified, try to eliminate each risk. For risks that cannot be eliminated, reduce the probability that the risk will become a problem and the financial impact should that happen.

Identify Critical Risks

The first step in preventing defects is to understand the critical risks facing the project or system. The best way to do this is to identify the types of defects that pose the largest threat. In short, they are the defects that could jeopardize the successful construction, delivery, and/or operation of the system. These risks can vary widely from project to project depending on the type of system, the technology, the users of the software, etc. These risks might include:

- Missing a key requirement
- Critical application software that does not function properly
- Vendor-supplied software that does not function properly
- Unacceptably poor performance
- Hardware malfunction


- Hardware and/or software that does not integrate properly
- Hardware new to the installation site
- Hardware not delivered on time
- Users unable or unwilling to embrace the new system
- Users' inability to actively participate in the project, etc.

It should be emphasized that the purpose of this step is not to identify every conceivable risk, but to identify those critical risks that merit special attention because they could jeopardize the success of the project.

Estimate Expected Impact

Once the critical risks are identified, the financial impact of each risk should be estimated. This can be done by assessing the impact, in dollars, if the risk does become a problem, combined with the probability that the risk will become a problem. The product of these two numbers is the expected impact of the risk. The expected impact of a risk (E) is calculated as E = P * I, where:

- P = probability of the risk becoming a problem, and
- I = impact in dollars if the risk becomes a problem.
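A minimal sketch of the E = P * I calculation and the prioritization that follows; the risks, probabilities, and dollar figures below are hypothetical:

```python
# Hypothetical risks with estimated probability P and dollar impact I.
risks = [
    {"name": "missing key requirement", "p": 0.30, "impact": 500_000},
    {"name": "vendor software fails",   "p": 0.10, "impact": 200_000},
    {"name": "hardware delivered late", "p": 0.50, "impact": 50_000},
]

for r in risks:
    r["expected"] = r["p"] * r["impact"]   # E = P * I

# Prioritize by expected impact, largest first.
for r in sorted(risks, key=lambda r: r["expected"], reverse=True):
    print(f"{r['name']}: expected impact ${r['expected']:,.0f}")
```

Note how the ranking differs from ranking by raw impact alone: the late-hardware risk outranks the vendor risk because its probability is five times higher.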

Once the expected impact of each risk is identified, the risks should be prioritized by the expected impact and the degree to which the expected impact can be reduced. While guesswork will play a major role in producing these numbers, precision is not important. What is important is to identify the risk and determine the risk's order of magnitude. Large, complex systems will have many critical risks. Whatever can be done to reduce the probability of each individual critical risk becoming a problem to a very small number should be done. Doing this increases the probability of a successful project by increasing the probability that none of the critical risks will become a problem. One should assume that an individual critical risk has a low probability of becoming a problem only when there is specific knowledge justifying why it is low. For example, the likelihood that an important requirement was missed may be high if developers have not involved users in the project. If users have actively participated in the requirements definition, and the new system is not a radical departure from an existing system or process, the likelihood may be low.


One of the more effective methods for estimating the expected impact of a risk is the annual loss expectation (ALE) formula. The occurrence of a risk can be called an "event," and the loss per event can be defined as the average loss for a sample of events. The formula states that the ALE equals the loss per event multiplied by the number of events. For example, if the risk is that the software system will abnormally terminate, then the average cost of correcting an abnormal termination is calculated and multiplied by the expected number of abnormal terminations associated with this risk. For the annual calculation, the number of events should be the number of events per year.

Minimize Expected Impact

The expected impact may be strongly affected not only by whether or not a risk becomes a problem, but also by how long it takes for a problem to be recognized and how long it takes to be fixed once recognized. In one reported example, a telephone company had an error in its billing system that caused it to under-bill its customers by about $30 million. By law, the telephone company had to issue corrected bills within thirty days or write off the under-billing. By the time the telephone company recognized it had a problem, it was too late to collect much of the revenue.

Expected impact is also affected by the action that is taken once a problem is recognized. Once Johnson & Johnson realized it had a problem with Tylenol tampering, it greatly reduced the impact of the problem by quickly notifying doctors, hospitals, distributors, retail outlets, and the public. While the tampering itself was not related to a software defect, software systems had been developed by Johnson & Johnson to respond quickly to drug-related problems. In this case, the key to Johnson & Johnson's successful management of the problem was how it minimized the impact of the problem once the problem was discovered.
Minimizing expected impact involves a combination of the following three strategies:

- Eliminate the Risk: while this is not always possible or desirable, there are situations where the best strategy will be simply to eliminate the risk altogether. For example, reducing the scope of a system, or deciding not to use the latest unproven technology, are ways to eliminate certain risks altogether.
- Reduce the Probability of a Risk Becoming a Problem: most strategies fall into this category. Inspections and testing are examples of approaches that reduce, but do not eliminate, the probability of problems.


- Reduce the Impact if There Is a Problem: in some situations, the risk cannot be eliminated, and even when the probability of a problem is low, the expected impact is high. In these cases, the best strategy may be to explore ways to reduce the impact if there is a problem. Contingency plans and disaster recovery plans are examples of this strategy.

From a conceptual viewpoint, there are two ways to minimize the risk; these are deduced from the annual loss expectation formula. The two ways are to reduce the expected loss per event, or to reduce the frequency of an event. If both of these can be reduced to zero, the risk is eliminated. If the frequency is reduced, the probability of a risk becoming a problem is reduced. If the loss per event is reduced, the impact is reduced when the problem occurs.

There is a well-known engineering principle that says that if you have a machine with a large number of components, even if the probability that any given component will fail is small, the probability that one or more components will fail may be unacceptably high. Due to this phenomenon, engineers are careful to estimate the mean time between failures of the machine. If the machine cannot be designed with a sufficiently large mean time between failures, the machine cannot be made. When applied to software development, this principle says that unless the overall expected impact of the system can be made sufficiently low, the system should not be developed. Appropriate techniques to reduce expected impact are a function of the particular risk.

4. Execution History or Execution Trace

An execution history, or execution trace, is an organized collection of information about various elements of a program during a given execution, saved in memory. An execution slice is an executable subsequence of the execution history. It is useful for:
- Debugging functions
- Performance analysis

Representing Execution History or Execution Trace

Many representations are available for execution history:
- The sequence in which the functions in a program are executed against a given test input
- The sequence in which program blocks are executed
- The values of program variables
- In object-oriented programs, the sequence of objects and the corresponding methods accessed
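For instance, the first representation (the sequence of functions executed for a given input) can be captured with Python's tracing hook. The traced program here is a made-up example:

```python
import sys

trace = []   # execution history: the sequence of functions entered

def tracer(frame, event, arg):
    if event == "call":                       # fires on each function entry
        trace.append(frame.f_code.co_name)
    return None                               # no line-level tracing needed

def helper(n):
    return n * 2

def main(x):
    return helper(x) + helper(x + 1)

sys.settrace(tracer)
main(3)
sys.settrace(None)

print(trace)   # ['main', 'helper', 'helper']
```

Recording line or block events instead of call events would yield the second representation (the sequence of program blocks executed).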


5. Test Generation Strategies

Test generation strategies are crucial for the success of the test effort and for the accuracy of the test plan and estimates. Test generation strategies have the following major tasks:
- Designing the tests
- Evaluating the testability of the requirements and system
- Designing the test environment set-up and identifying any required infrastructure and tools

Designing the Tests
- Transforms a source document into test designs.
- Involves a set of input values, execution preconditions, expected results, and execution postconditions, developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement.

Types of Test Generation Strategies

- Analytical test generation
  - Requirement-based test generation
  - Risk-based test generation
- Code-based test generation
  - Program mutation
  - Control-flow-based test generation
- Model-based test generation
  - A mathematical model is built for critical system behavior; requirements are modeled using formal notations.
  - Examples: finite state machines, statecharts, Petri nets, timed I/O automata, algebraic and predicate logic, sequence and activity diagrams in UML.
- Quality-profiling-based test generation
- Methodical test generation


Checklists evolved in the organization over years, which follow industry standards for software quality.

- Process- or standard-compliant test generation
  - Agile methodologies such as Extreme Programming
  - Industry processes or standards such as IEEE 829
- Dynamic test generation
  - Exploratory testing
- Consultative or directed test generation
  - Involving users or developers
- Regression-averse test generation
  - Trying to automate all tests of system functionality prior to release of the function

The selection of any of these strategies depends on risks, skills, objectives, regulations, the product, and the business.

Test generation

Any form of test generation uses a source document. In the most informal of test methods, the source document resides in the mind of the tester, who generates tests based on knowledge of the requirements. In most commercial environments, the process is a bit more formal: the tests are generated using a mix of formal and informal methods, with the requirements document serving directly as the source. In more advanced test processes, requirements serve as a source for the development of formal models.

Test generation strategies can be summarized as follows:
- Model based: require that a subset of the requirements be modeled using a formal notation (usually graphical). Models include finite state machines, timed automata, Petri nets, etc.
- Specification based: require that a subset of the requirements be modeled using a formal mathematical notation.
- Code based: generate tests directly from the code.
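As a sketch of model-based generation, the following derives one test (an input sequence) per transition of a hypothetical finite state machine, using breadth-first search to reach each transition's source state:

```python
from collections import deque

# Hypothetical FSM of a door: (state, input) -> next state.
transitions = {
    ("closed", "open"):   "opened",
    ("opened", "close"):  "closed",
    ("closed", "lock"):   "locked",
    ("locked", "unlock"): "closed",
}
start = "closed"

def shortest_inputs_to(target):
    """Breadth-first search for the shortest input sequence reaching `target`."""
    seen, queue = {start}, deque([(start, [])])
    while queue:
        state, inputs = queue.popleft()
        if state == target:
            return inputs
        for (src, inp), dst in transitions.items():
            if src == state and dst not in seen:
                seen.add(dst)
                queue.append((dst, inputs + [inp]))
    raise ValueError(f"state {target!r} is unreachable")

# One test per transition: drive the FSM to the source state, then apply the input.
tests = [shortest_inputs_to(src) + [inp] for (src, inp) in transitions]
for t in tests:
    print(t)
```

Executing every generated sequence against the implementation achieves transition coverage of the model; richer methods (e.g. the W-method) go further and check the resulting states.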


Test generation strategies (Summary)

6. Static Testing Techniques

Static testing is a form of software testing where the software isn't actually executed. This is in contrast to dynamic testing. It is generally not detailed testing, but checks mainly for the sanity of the code, algorithm, or document. It is primarily syntax checking of the code and/or manually reviewing the code or document to find errors. This type of testing can be used by the developer who wrote the code, in isolation. Code reviews, inspections, and walkthroughs are also used.

From the black-box testing point of view, static testing involves reviewing requirements and specifications. This is done with an eye toward completeness or appropriateness for the task at hand. This is the verification portion of verification and validation.

Even static testing can be automated. A static testing test suite consists of programs to be analyzed by an interpreter or a compiler that asserts each program's syntactic validity. Bugs discovered at this stage of development are less expensive to fix than those found later in the development cycle. The people involved in static testing are application developers, testers, and business analysts.
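As an illustration of automating the syntactic check, a compiler front end can assert a program's syntactic validity without ever executing it; here Python's built-in compile() plays that role on two made-up snippets:

```python
def syntactically_valid(source: str) -> bool:
    """Parse and compile the source without executing it."""
    try:
        compile(source, "<candidate>", "exec")
        return True
    except SyntaxError:
        return False

print(syntactically_valid("x = 1 + 2"))   # True
print(syntactically_valid("x = 1 +"))     # False
```

Because nothing is executed, this check is safe to run on untrusted or incomplete code, which is exactly what makes it a static technique.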


6.1 Static Techniques

Objectives: to improve the quality of software work products by assisting engineers to recognize and fix their own defects early in the software development process.

Static techniques:
- A preventive quality measure taken to filter out errors in work products at the very point of their injection, so as to prevent them from entering the following phases.
- Involve a visual static analysis of a work product in a meeting, by technical people for technical people.
- Symbolize the maturity of an organization: the formality of the review process is related to the maturity of the development process.

Static testing and static analysis

Both have the same objective: identifying defects in order to prevent them from becoming more expensive. They are two approaches to evaluation, revealing defects and quality:
- Static testing: software work products are examined, but not executed, either manually or using tools.
- Static analysis: used to find defects in software source code and software models.


Overview of static testing:

- Static testing
  - Reviews: informal reviews, walkthroughs, technical reviews, inspections
  - Static analysis: data flow, control flow


Objectives of Static Testing
- To carry out testing as early as possible
- To find and fix defects more cheaply
- To prevent defects from appearing at later stages of the project

Examples of static testing: reviews, inspections, walkthroughs, audits.

- Review: an evaluation of a product or project status to ascertain discrepancies from planned results and to recommend improvements. Examples include management review, informal review, technical review, inspection, and walkthrough.
- Informal review: a review not based on a formal (documented) procedure.
- Formal review: a review characterized by documented procedures and requirements, e.g., inspection.
- Peer review: a review of a software work product by colleagues of the producer of the product, for the purpose of identifying defects and improvements. Examples are inspection, technical review, and walkthrough.
- Technical review: a peer group discussion activity that focuses on achieving consensus on the technical approach to be taken.
- Walkthrough: a step-by-step presentation by the author of a document in order to gather information and to establish a common understanding of its content.
- Inspection: a type of peer review that relies on visual examination of documents to detect defects, e.g., violation of development standards and non-conformance to higher-level documentation. It is the most formal review technique and is therefore always based on a documented procedure.


6.2 Static Analysis

Static code analysis is the analysis of computer software that is performed without actually executing programs built from that software (analysis performed on executing programs is known as dynamic analysis). In most cases the analysis is performed on some version of the source code, and in other cases on some form of the object code. The term is usually applied to the analysis performed by an automated tool, with human analysis being called program understanding, program comprehension, or code review.

The sophistication of the analysis performed by tools varies from those that only consider the behavior of individual statements and declarations to those that include the complete source code of a program in their analysis. Uses of the information obtained from the analysis vary from highlighting possible coding errors (e.g., the lint tool) to formal methods that mathematically prove properties about a given program (e.g., that its behavior matches that of its specification). It can be argued that software metrics and reverse engineering are forms of static analysis. A growing commercial use of static analysis is in the verification of properties of software used in safety-critical computer systems and in locating potentially vulnerable code.

The goal of static analysis is to find defects, whether or not they may cause failures. Static analysis is ideally performed before formal reviews.

Objectives of static analysis:
- To find defects in software source code and software models.
- Static analysis is performed without actually executing the software being examined by the tool.
- Static analysis can locate defects that are hard to find in testing.

Static Analysis: Examples

Adherence to coding standards:
- Programming style
- Naming conventions
- Layout specifications

Understanding code metrics: helps to understand the complexity and size of the code, and to infer from this whether it will become difficult to maintain; also used to check for re-designing or to look for design alternatives.


Code metrics include:
- Comment frequency
- Depth of nesting
- Cyclomatic number
- Lines of code

Use of Static Code Analysis Tools in Static Testing

A static code analysis tool could give a complete list of modules and the line numbers where each variable is defined and used. A static code analysis tool can also provide control flow.

Control Flow Structure
- Helps to understand the sequence in which instructions (events or paths) are executed in a component or system
- Helps to understand loops and iterations, nesting, and complexity
- Helps to identify unreachable (dead) code
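The cyclomatic number among the metrics above is derived from the control flow graph as V(G) = E - N + 2P (edges, nodes, connected components). A minimal sketch over a hypothetical CFG for an if/else routine:

```python
def cyclomatic(edges, nodes, components=1):
    # V(G) = E - N + 2P; P is 1 for a single routine.
    return len(edges) - len(nodes) + 2 * components

# Hypothetical CFG: the entry branches to then/else, both rejoin at the exit.
nodes = {"entry", "then", "else", "exit"}
edges = [("entry", "then"), ("entry", "else"), ("then", "exit"), ("else", "exit")]

print(cyclomatic(edges, nodes))   # 4 - 4 + 2 = 2
```

V(G) also equals the number of linearly independent paths through the routine, which is why modules with a high cyclomatic number warrant more inspection effort.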

Data Flow Structure Follows the trail of a data item as it is accessed and modified by the code.
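A def/use trail of this kind can be extracted statically; a minimal sketch using Python's ast module on a made-up three-line program:

```python
import ast

source = """\
x = 1
y = x + 2
print(y)
"""

defs, uses = {}, {}   # variable name -> line numbers where defined / used
for node in ast.walk(ast.parse(source)):
    if isinstance(node, ast.Name):
        table = defs if isinstance(node.ctx, ast.Store) else uses
        table.setdefault(node.id, []).append(node.lineno)

print(sorted(defs.items()))   # [('x', [1]), ('y', [2])]
print(sorted(uses.items()))   # [('print', [3]), ('x', [2]), ('y', [3])]
```

The same tables support the def/use report mentioned earlier (where each variable is defined and used) and simple data-flow checks such as "defined but never used."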

Data Structure
- Refers to the organization of the data.

Software Complexity and Static Testing

A more complex module is more likely to have errors and must be accorded higher priority during inspection than a less complex module.

Model-Based Testing and Model Checking

Model-based testing refers to the acts of modeling and the generation of tests from a formal model of application behavior. Model checking refers to a class of techniques that allow the validation of one or more properties from a given model of an application.


Model-Based Testing and Model Checking (overview): starting from a source (requirements, experience, or the program itself), a model and a property are derived. A model checker then determines whether the property is satisfied; if it is not, the model or the source is updated and the check is repeated.


7. Types of testing One possible classification is based on the following five classifiers: C1: Source of test generation. C2: Lifecycle phase in which testing takes place C3: Goal of a specific testing activity C4: Characteristics of the artifact under test C5: Test Process Models


C1: Source of test generation

C2: Lifecycle phase in which testing takes place


C3: Goal of specific testing activity

C4: Artifact under test


C5: Test Process Models

- Testing in waterfall model: usually done toward the end of the development cycle.
- Testing in V-model: explicitly specifies testing activities in each phase of the development cycle.
- Spiral testing: applied to software increments, each of which might be a prototype that eventually leads to the application delivered to the customer; proposed for evolutionary software development.
- Agile testing: used in agile development methodologies such as eXtreme Programming (XP).
- Test-driven development (TDD): requirements are specified as tests.


8. The Saturation Effect

The saturation effect is an abstraction of a phenomenon observed during the testing of complex software systems.

[Figure: true reliability (solid lines) and confidence (dotted lines) plotted against test effort for successive test generation and assessment techniques (TGAT1, TGAT2, TGAT3), each exhibiting a saturation region (SR1, SR2, SR3).]

The horizontal axis refers to the test effort, which increases over time. The test effort can be measured as, for example, the number of test cases executed or the total person-days spent during the test-and-debug phase. The vertical axis refers to the true reliability (solid lines) and the confidence in the correct behavior (dotted lines) of the application under test. Note that the application under test evolves with an increase in test effort, due to error correction. The vertical axis can also be labeled as the cumulative count of failures observed over time as the test effort increases. The error correction process removes the cause of one or more failures. However, as the test effort increases, additional failures may be found, causing the cumulative failure count to increase, though it eventually saturates.


Regardless of the number of tests generated, and given the set of tests generated using TGAT1, the true reliability stops increasing after a certain amount of test effort has been spent. This saturation in true reliability is shown as the shaded region labeled SR1. Inside SR1, true reliability remains constant while the test effort increases: no new faults are found and fixed in the saturation region. Thus, the saturation region is indicative of wasted test effort, under the assumption that the program contains faults not detected while the test phase is in the saturation region.

Program Representation: Basic Blocks

A basic block in a program P is a sequence of consecutive statements with a single entry and a single exit point. Thus a block has unique entry and exit points. Control always enters a basic block at its entry point and leaves from its exit point. There is no possibility of an exit or a halt at any point inside the basic block except at its exit point. The entry and exit points of a basic block coincide when the block contains only one statement.

Example: computing x raised to y.
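The example itself is not reproduced in these notes; the following is a hypothetical reconstruction for a non-negative integer exponent, with the basic blocks marked in comments:

```python
def power(x, y):
    # Block 1 (entry): initialization; ends where the loop condition begins.
    result = 1
    count = y
    # The loop condition starts a new block, because it is a jump target.
    while count > 0:          # Block 2: loop condition
        result = result * x   # Block 3: loop body; control returns to Block 2
        count = count - 1
    return result             # Block 4 (exit)

print(power(2, 10))   # 1024
```

Note how each block has a single entry and a single exit: control cannot jump into the middle of the loop body, only to the condition that heads it.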


Control Flow Graph (CFG)

A control flow graph (CFG) is a representation, using graph notation, of all paths that might be traversed through a program during its execution.

Overview

In a control flow graph, each node represents a basic block, i.e., a straight-line piece of code without any jumps or jump targets; jump targets start a block, and jumps end a block. Directed edges are used to represent jumps in the control flow. There are, in most presentations, two specially designated blocks: the entry block, through which control enters the flow graph, and the exit block, through which all control flow leaves. The CFG is essential to many compiler optimizations and static analysis tools.

Reachability is another graph property useful in optimization. If a block or subgraph is not connected to the subgraph containing the entry block, that block is unreachable during any execution, and so is unreachable code; it can be safely removed. If the exit block is unreachable from the entry block, it indicates an infinite loop (not all infinite loops are detectable, of course; see the halting problem). Dead code and some infinite loops are possible even if the programmer didn't explicitly code that way: optimizations like constant propagation and constant folding followed by jump threading could collapse multiple basic blocks into one, cause edges to be removed from a CFG, etc., thus possibly disconnecting parts of the graph.
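The reachability check described above amounts to a traversal from the entry block; a minimal sketch over a hypothetical CFG in which block E has no path from the entry and is therefore dead code:

```python
# Hypothetical CFG as an adjacency list; "E" is not reachable from the entry.
cfg = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": [], "E": ["D"]}
entry = "A"

# Depth-first traversal from the entry block.
reachable, stack = set(), [entry]
while stack:
    block = stack.pop()
    if block not in reachable:
        reachable.add(block)
        stack.extend(cfg[block])

dead = set(cfg) - reachable   # unreachable blocks: safe to remove
print(sorted(dead))   # ['E']
```

Running the same traversal backwards from the exit block would flag blocks from which the exit is unreachable, the infinite-loop symptom mentioned above.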

Terms concerned with CFGs

These terms are commonly used when discussing control flow graphs.

entry block: the block through which all control flow enters the graph.

exit block: the block through which all control flow leaves the graph.

back edge: an edge that points to an ancestor in a depth-first (DFS) traversal of the graph.

critical edge: an edge which is neither the only edge leaving its source block nor the only edge entering its destination block. These edges must be split (a new block created in the middle of the edge) in order to insert computations on the edge without affecting any other edges.

abnormal edge: an edge whose destination is unknown. Exception-handling constructs can produce them. These edges tend to inhibit optimization.

impossible edge (also known as a fake edge): an edge added to the graph solely to preserve the property that the exit block postdominates all blocks. It can never be traversed.

dominator: block M dominates block N if every path from the entry that reaches block N has to pass through block M. The entry block dominates all blocks.

postdominator: block M postdominates block N if every path from N to the exit has to pass through block M. The exit block postdominates all blocks.

immediate dominator: block M immediately dominates block N if M dominates N and there is no intervening block P such that M dominates P and P dominates N. In other words, M is the last dominator on all paths from entry to N. Each block has a unique immediate dominator.

immediate postdominator: analogous to the immediate dominator.

dominator tree: an ancillary data structure depicting the dominator relationships. There is an arc from block M to block N if M is an immediate dominator of N. This graph is a tree, since each block has a unique immediate dominator, and it is rooted at the entry block. It can be calculated efficiently using the Lengauer-Tarjan algorithm.

postdominator tree: analogous to the dominator tree; this tree is rooted at the exit block.

loop header: sometimes called the entry point of the loop, a dominator that is the target of a loop-forming back edge. It dominates all blocks in the loop body.

loop pre-header: suppose block M is a dominator with several incoming edges, some of them being back edges (so M is a loop header). It is advantageous to several optimization passes to break M up into two blocks, Mpre and Mloop. The contents of M and the back edges are moved to Mloop, the remaining edges are moved to point into Mpre, and a new edge from Mpre to Mloop is inserted (so that Mpre is the immediate dominator of Mloop). Initially Mpre would be empty, but passes such as loop-invariant code motion could populate it. Mpre is called the loop pre-header, and Mloop the loop header.

Examples

Consider the following fragment of code:

0: (A) t0 = read_num
1: (A) if t0 mod 2 == 0 goto 4
2: (B) print t0 + " is odd."
3: (B) goto 5
4: (C) print t0 + " is even."
5: (D) end program

In the above, we have four basic blocks: A from 0 to 1, B from 2 to 3, C at 4, and D at 5. Here A is the entry block, D is the exit block, and lines 4 and 5 are jump targets. The graph for this fragment has edges from A to B, A to C, B to D, and C to D.
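The dominator relation defined above can be computed with the simple iterative data-flow algorithm (a sketch, not from the notes; the Lengauer-Tarjan algorithm mentioned above is faster but longer). Applied to the four-block example, every block is dominated by A, and D is dominated only by A and itself, since control may reach D via either B or C.

```python
def dominators(cfg, entry):
    """Return a dict mapping each node to its set of dominators."""
    nodes = set(cfg)
    # Initially assume every non-entry node is dominated by all nodes.
    dom = {n: set(nodes) for n in nodes}
    dom[entry] = {entry}
    changed = True
    while changed:
        changed = False
        for n in nodes - {entry}:
            preds = [p for p in nodes if n in cfg[p]]
            # dom(n) = {n} union intersection of dom(p) over predecessors p
            new = {n} | set.intersection(*(dom[p] for p in preds))
            if new != dom[n]:
                dom[n] = new
                changed = True
    return dom

# CFG from the example: A -> B, A -> C, B -> D, C -> D
cfg = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(sorted(dominators(cfg, "A")["D"]))  # ['A', 'D']
```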

Control Flow Graph: Formal Definition

A control flow graph (or flow graph) G is defined as a finite set N of nodes and a finite set E of edges. An edge (i, j) in E connects two nodes ni and nj in N. We often write G = (N, E) to denote a flow graph G with nodes given by N and edges given by E.

In a flow graph of a program, each basic block becomes a node, and edges indicate the flow of control between blocks. Blocks and nodes are labeled such that block bi corresponds to node ni. An edge (i, j) connecting basic blocks bi and bj implies that control can go from block bi to block bj. We also assume that there is a node labeled Start in N that has no incoming edge, and another node labeled End, also in N, that has no outgoing edge.

CFG Example

N = {Start, 1, 2, 3, 4, 5, 6, 7, 8, 9, End}
E = {(Start, 1), (1, 2), (1, 3), (2, 4), (3, 4), (4, 5), (5, 6), (6, 5), (5, 7), (7, 8), (7, 9), (8, 9), (9, End)}

(The same CFG can be drawn with the statements removed, leaving only the node labels.)
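As a quick check, the node and edge sets of this example can be written down directly; the sketch below verifies the stated properties that Start has no incoming edge and End has no outgoing edge.

```python
# Node and edge sets of the example CFG, transcribed directly.
N = {"Start", 1, 2, 3, 4, 5, 6, 7, 8, 9, "End"}
E = {("Start", 1), (1, 2), (1, 3), (2, 4), (3, 4), (4, 5), (5, 6),
     (6, 5), (5, 7), (7, 8), (7, 9), (8, 9), (9, "End")}

assert all(src in N and dst in N for src, dst in E)   # edges connect nodes in N
assert not any(dst == "Start" for _, dst in E)        # no incoming edge at Start
assert not any(src == "End" for src, _ in E)          # no outgoing edge at End
print(len(N), len(E))  # 11 13
```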

Paths

Consider a flow graph G = (N, E). A sequence of k edges, k > 0, (e_1, e_2, ..., e_k), denotes a path of length k through the flow graph if the following sequence condition holds: given that np, nq, nr, and ns are nodes belonging to N, and 0 < i < k, if e_i = (np, nq) and e_{i+1} = (nr, ns), then nq = nr.

Sample paths through the exponentiation flow graph:

Two feasible and complete paths:
p1 = (Start, 1, 2, 4, 5, 6, 5, 7, 9, End)
p2 = (Start, 1, 3, 4, 5, 6, 5, 7, 9, End)
In the flow graph figure, bold edges mark a complete path and dashed edges mark a subpath.
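The sequence condition amounts to checking that every consecutive pair of nodes is an edge in E. A minimal sketch against the example CFG (function names are assumptions):

```python
E = {("Start", 1), (1, 2), (1, 3), (2, 4), (3, 4), (4, 5), (5, 6),
     (6, 5), (5, 7), (7, 8), (7, 9), (8, 9), (9, "End")}

def is_path(nodes, edges):
    """A node sequence is a path if every consecutive pair is an edge."""
    return all((a, b) in edges for a, b in zip(nodes, nodes[1:]))

def is_complete(nodes):
    """A complete path begins at Start and terminates at End."""
    return nodes[0] == "Start" and nodes[-1] == "End"

p1 = ["Start", 1, 2, 4, 5, 6, 5, 7, 9, "End"]   # valid and complete
bad = ["Start", 1, 2, 4, 8, 9, "End"]           # invalid: (4, 8) is not an edge

print(is_path(p1, E), is_complete(p1))   # True True
print(is_path(bad, E))                   # False
```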

Complete paths

A path through a flow graph is considered complete if the first node along the path is Start and the terminating node is End.

Examples of incomplete paths:
P3 = (5, 7, 8, 9)
P4 = (6, 5, 7, 9, End)

Feasible paths

A path p through a flow graph for program P is considered feasible if there exists at least one test case which, when input to P, causes p to be traversed. If no such test case exists, then p is considered infeasible.

Examples of complete but infeasible paths:
P5 = (Start, 1, 3, 4, 5, 6, 5, 7, 8, 9, End)
P6 = (Start, 1, 2, 4, 5, 7, 9, End)

Examples of invalid paths:
P7 = (Start, 1, 2, 4, 8, 9, End)
P8 = (Start, 1, 2, 4, 7, 9, End)

Number of paths

There can be many distinct paths through a program. A program with no condition contains exactly one path that begins at node Start and terminates at node End. Each additional condition in the program can increase the number of distinct paths by at least one. Depending on their location, conditions can have a multiplicative effect on the number of paths.

The following program has two distinct paths: one is traversed when C1 is true and the other when C1 is false.

Begin
  S1;
  S2;
  :
  If (C1) { .. }
  :
  Sn-1;
End

The following program has four distinct paths, corresponding to the combinations of conditions C1 and C2.

Begin
  S1;
  S2;
  :
  If (C1) { .. }
  :
  if (C2) { .. }
  Sn-1;
End

If a new condition is added within the scope of an if statement, then the number of distinct paths increases only by one.

Begin
  S1;
  S2;
  :
  If (C1) {
    :
    if (C2) { .. }
    :
  }
  Sn-1;
End
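The effect of condition placement can be checked by counting Start-to-End paths in an acyclic CFG. The two small graphs below (node names are assumptions) model the sequential and nested arrangements of C1 and C2: sequential conditions yield four paths, while nesting C2 inside C1 yields only three.

```python
def count_paths(cfg, node="Start"):
    """Count distinct paths from node to End in an acyclic CFG."""
    if node == "End":
        return 1
    return sum(count_paths(cfg, s) for s in cfg[node])

# Two sequential conditions: each has a "taken" and a "skipped" edge.
sequential = {"Start": ["C1"], "C1": ["S1", "J1"], "S1": ["J1"],
              "J1": ["C2"], "C2": ["S2", "J2"], "S2": ["J2"], "J2": ["End"]}

# C2 nested inside C1: the C2 choice exists only on the C1-true path.
nested = {"Start": ["C1"], "C1": ["C2", "J1"], "C2": ["S2", "J2"],
          "S2": ["J2"], "J2": ["J1"], "J1": ["End"]}

print(count_paths(sequential))  # 4
print(count_paths(nested))      # 3
```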

The presence of loops can enormously increase the number of paths. Each traversal of the loop body adds a condition to the program, thereby increasing the number of paths by at least one.

Sometimes the number of times a loop is to be executed depends on the input data and cannot be determined prior to program execution. This becomes another cause of difficulty in determining the number of paths in a program. One can compute an upper limit on the number of paths based on some assumption about the input data.

Program with a loop:

Begin
  int num, product, power;
  bool done;
  product = 1;
  input(done);
  while (!done) {
    input(num);
    product = product * num;
    input(done);
  }
  Output(product);
End
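For reference, a runnable Python translation of the program above, with the interactive reads modeled as a list of inputs (this modeling, and the function name, are assumptions):

```python
def product_of(nums):
    """Multiply a sequence of integers; an empty sequence yields 1."""
    product = 1
    i = 0
    done = (i >= len(nums))        # corresponds to the first input(done)
    while not done:                # loop body runs once per input value
        product = product * nums[i]
        i += 1
        done = (i >= len(nums))    # corresponds to input(done) in the loop
    return product

print(product_of([]))        # 1  (loop body never entered)
print(product_of([3]))       # 3
print(product_of([3, 4]))    # 12
```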

The CFG for this program has the following nodes (reconstructed from the paths discussed below):

Start
Node 1: int num, product, power; bool done; product = 1; input(done);
Node 2: while (!done)
Node 3: input(num); product = product * num; input(done);
Node 4: Output(product);
End

The program inputs a sequence of integers and computes their product. A Boolean variable done controls the number of integers to be multiplied. (Start, 1, 2, 4, End) is the path traversed when done is true the first time the loop condition is checked. If there is only one value of num to be processed, then the path followed is (Start, 1, 2, 3, 2, 4, End).

When there are two input integers to be multiplied, the path traversed is (Start, 1, 2, 3, 2, 3, 2, 4, End). Notice that the length of the path traversed increases with the number of times the loop body is traversed. When the input sequence is empty (length 0), the length of the path traversed is 4. For an input sequence of length 1 it is 6, for length 2 it is 8, and so on; in general, for an input sequence of length n, the path length is 4 + 2n.
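The pattern above can be checked programmatically; this sketch (names assumed) generates the traversed path for an input sequence of length n and confirms the 4 + 2n edge count:

```python
def traversed_path(n):
    """Path through the product program's CFG for n input integers."""
    path = ["Start", 1, 2]
    for _ in range(n):           # each iteration visits node 3, then node 2
        path += [3, 2]
    path += [4, "End"]
    return path

for n in range(4):
    p = traversed_path(n)
    assert len(p) - 1 == 4 + 2 * n   # path length counts edges, not nodes

print(traversed_path(2))  # ['Start', 1, 2, 3, 2, 3, 2, 4, 'End']
```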
