
Software Testing: ISTQB Foundation Level Certification

What is Software Testing?
Software testing is the process of finding defects in software and reporting them to the developers. Testing only identifies the bugs in software; it does not fix them.

What are the principles of software testing?
1. Testing shows the presence of bugs
2. Early testing
3. Exhaustive testing is impossible
4. Testing is context dependent
5. Defect clustering
6. Pesticide paradox
7. Absence of errors is a fallacy
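The principle that exhaustive testing is impossible can be illustrated with a quick calculation; the field length and alphabet below are arbitrary assumptions, chosen only to show how fast the input space grows.

```python
# Even a single 8-character alphanumeric input field has far too many
# possible values to test them all, before considering combinations
# with other fields or with the software's internal state.

alphabet = 26 + 26 + 10          # lower case, upper case, digits
field_length = 8
combinations = alphabet ** field_length
print(combinations)              # 218340105584896 possible inputs
```

At millions of tests per second this single field would still take years to cover, which is why testing relies on risk and priorities instead of exhaustive coverage.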

What is the fundamental test process?
1. Test planning and control
2. Test analysis and design
3. Test implementation and execution
4. Evaluating exit criteria and reporting
5. Test closure activities

What are an error, a defect and a failure?
An error (a human mistake) leads to a defect in the code, which can cause an observed failure when the code is executed. The resources triangle consists of time, money and quality.

What is defect clustering?
A small module or area of functionality may contain a disproportionate number of defects, so testing effort should be concentrated on that functionality.

What is the pesticide paradox?
If the prepared test cases are no longer finding defects, add or revise test cases to find more defects.

What is debugging?
The process undertaken by developers to identify the cause of bugs in the code and take corrective action.
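The error → defect → failure chain can be shown with a small illustrative function (the function and values are invented for this sketch):

```python
# A programmer's error (misremembering the formula for a mean) introduces
# a defect in the code; executing the defective code produces an observed
# failure when the actual result differs from the expected result.

def average(values):
    # Defect: divides by (len(values) - 1) instead of len(values)
    return sum(values) / (len(values) - 1)

expected = 2.0
actual = average([1, 2, 3])   # returns 3.0, not 2.0
print(actual == expected)     # False -> the observed failure
```

Debugging would then trace this failure back to the defective division, while testing's job ends at finding and reporting the failure.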

What are static testing and dynamic testing?
Static testing deals with testing where the code is not exercised. Dynamic testing exercises the program under test with some kind of test data, so we speak of test execution in this context.

What is exhaustive testing?
The test approach in which all possible data combinations are used, including the implicit data combinations present in the state of the software/data at the start of testing.

Note: Reasons for defect clustering include volatile code, system complexity, development staff experience or inexperience, and the effects of change upon change.

Test Planning and Control
Planning determines what is going to be tested and is where we define the test completion criteria. Control, on the other hand, is what to do when the results do not match the plan, measured as progress against the plan.

Test Analysis and Design
Analysis and design is concerned with the fine detail of what is going to be tested:
- Reviewing the requirements, architecture, design, interface specifications and other parts, which collectively comprise the test basis.
- Analysing the test items, their specification, behaviour and structure to identify the test conditions and test data required.
- Combining the test conditions into test cases, so that a small number of cases can cover a large number of conditions.
- Designing the tests, including assigning priority.
- Determining whether the requirements and the system are testable.
- Detailing what the test environment should look like, and whether any infrastructure and tools are required.
- Highlighting the test data required for the test conditions and test cases that have been drawn up.
- Creating bi-directional traceability between the test basis and the test cases.

Test Implementation and Execution
The test environment is checked before testing, and the most important tests are run first.

- Developing and prioritizing test cases, creating test data, writing test procedures and, optionally, preparing test harnesses and writing automated test scripts.
- Collecting the test cases into test suites, where tests can be run one after another for efficiency.
- Checking that the test environment is set up correctly.
- Running the test cases in the determined order, either manually or using test execution tools.
- Keeping a log of testing activities, including the outcome (pass/fail) and the versions of software, data, tools and testware (scripts, etc.).
- Comparing actual results with expected results.
- Reporting discrepancies as incidents with as much information as possible, including, if possible, causal analysis (code defect, incorrect test specification, test data error or test execution error).
- Where necessary, repeating test activities when changes have been made following incidents raised. This includes re-execution of a test that previously failed in order to confirm a fix (retesting), execution of a corrected test, and execution of previously passed tests to check that defects have not been introduced (regression testing).
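The "comparing actual results with expected results" step above can be sketched with Python's built-in unittest framework; the function under test and its values are invented for illustration.

```python
import unittest

def apply_discount(price, percent):
    # Hypothetical function under test
    return round(price * (1 - percent / 100), 2)

class DiscountTests(unittest.TestCase):
    def test_ten_percent_discount(self):
        expected = 90.0
        actual = apply_discount(100.0, 10)
        # A mismatch here would be reported as a failure, which the tester
        # would raise as an incident with supporting information.
        self.assertEqual(expected, actual)

# Run the suite and report pass/fail without exiting the interpreter.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(DiscountTests)
result = unittest.TextTestRunner().run(suite)
print("PASS" if result.wasSuccessful() else "FAIL")
```

A test execution tool records exactly this kind of expected-versus-actual comparison in its log, together with the software and testware versions used.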

Evaluating Exit Criteria and Reporting
- Determining if more tests are needed
- Writing up the test results for the clients

Test Closure Activities
- Ensuring that the documentation is in order
- Closing down and archiving the test environment and test infrastructure
- Handing over the testware to the maintenance team
- Noting down methods to improve the testing process (also called testing maturity)

Note: Levels of test independence, from low to high: tests designed by the developer, by another member of the same team, by a member of a different team, and by a third party.

What is bidirectional traceability?
Bidirectional traceability needs to be implemented both forward and backward (i.e. from requirements to end products and from end products back to requirements). When the requirements are managed well, traceability can be established from a source requirement to its lower-level requirements and from the lower-level requirements back to their source. Such bidirectional traceability helps determine that all source requirements have been completely addressed and that all lower-level requirements can be traced to a valid source.
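Bidirectional traceability can be held as two simple mappings; the requirement and test case IDs below are invented for illustration.

```python
# Forward view: requirement -> test cases that cover it.
forward = {
    "REQ-1": ["TC-1", "TC-2"],
    "REQ-2": ["TC-3"],
}

# Derive the backward view (test case -> source requirements).
backward = {}
for req, cases in forward.items():
    for case in cases:
        backward.setdefault(case, []).append(req)

# Forward check: every source requirement is covered by at least one test case.
uncovered = [req for req, cases in forward.items() if not cases]

# Backward check: every test case traces to a valid source requirement.
print(backward["TC-3"])   # ['REQ-2']
print(uncovered)          # [] -> all requirements addressed
```

In practice a test management tool maintains this matrix, but the two checks are the same: nothing in the test basis left uncovered, and no test case without a valid source.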

What is a test harness?
A test harness is a configured set of tools and test data used to test an application under various conditions; it involves comparing the output with the expected output for correctness. The benefits of a test harness are increased productivity, due to process automation, and an increase in product quality.

What are retesting and regression testing?
Retesting is testing a module once the identified bug has been fixed by the development team. Regression testing is testing the system to check whether a change to the software has caused defects in other modules.

Note: System testing should investigate both functional and non-functional requirements of the system.

Chapter Two: SDLC Life Cycles
Development life cycle models include the phases:
1. Requirements Specification
2. Functional Specification
3. Technical Specification
4. Program Specification
5. Coding
6. Testing
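A minimal test harness can be sketched as a driver that runs a set of test cases (input plus expected output) against a function and checks each result; the function and data here are illustrative assumptions.

```python
def to_upper(text):
    # Hypothetical component under test
    return text.upper()

# Test data: pairs of (input, expected output).
test_cases = [
    ("abc", "ABC"),
    ("a1b2", "A1B2"),
    ("", ""),
]

def run_harness(func, cases):
    # Execute every case and record input, expected, actual and verdict.
    results = []
    for given, expected in cases:
        actual = func(given)
        results.append((given, expected, actual, actual == expected))
    return results

for given, expected, actual, passed in run_harness(to_upper, test_cases):
    print(repr(given), "PASS" if passed else f"FAIL (got {actual!r})")
```

Rerunning the same harness after every code change is exactly what makes regression testing cheap to automate.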

Waterfall Model
Also called the linear sequential model. Testing is carried out once the code has been fully developed. Checks throughout the life cycle include:
- Verification: checks that a work product meets the requirements set out for it.
- Validation: changes the focus to evaluating the work product against user needs.

V Model
Also called the sequential development model. Each development phase is paired with a test level:
1. Requirements Specification - Acceptance Testing
2. Functional Specification - System Testing
3. Technical Specification - Integration Testing
4. Program Specification - Unit Testing
5. Coding

This ensures that defects can be found as early as possible in the product life cycle. However, if the user requirements change, the resulting issues may not be uncovered until user testing is carried out.

Iterative-Incremental Development Models

In the spiral model, requirements, design, code and test are arranged in a repeating cycle. Users are involved in this type of development during testing, which minimizes the risk of developing an unsatisfactory product. A lack of proper documentation makes testing difficult; test-driven development is used to counter this, where the functional tests are written first and the code is then created and tested. Traceability is nevertheless reduced as the project progresses. Types of iterative development model include RAD, Agile software development and the Rational Unified Process.

Unit Testing
The units under test are also called programs, modules or components.

Integration Testing
The test bases for integration testing can include: the software and system design; a diagram of the system architecture; workflows and use cases. The test objects are essentially the interface code, which can include subsystems and database implementations.
- Big bang integration: all the components are linked at once.
- Top-down integration: components call other components. Stubs are built where components are not yet built; a stub is a passive component, called by other components.
- Bottom-up integration: special components call the components under test. These are called drivers because, in the functioning program, they are active, controlling other components.

System testing checks the end-to-end functionality of the system.

Functional and Non-Functional Requirements
A functional requirement specifies a function that a system or system component must perform; functional requirements can be specific to a system. Non-functional system testing looks at those aspects that are important but not directly related to what functions the system performs.
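The stub and driver roles described above can be sketched in Python; all component names below are invented for illustration.

```python
# Stub: a passive stand-in for a lower-level component that is not yet
# built, returning a canned response when called (top-down integration).
def tax_service_stub(amount):
    return 0.2 * amount          # canned 20% rate instead of a real lookup

# Higher-level component under test, wired to call the stub in place of
# the real tax service.
def invoice_total(amount, tax_lookup=tax_service_stub):
    return amount + tax_lookup(amount)

# Driver: a small active component that calls the unit under test before
# its real callers exist (bottom-up integration).
def driver():
    return invoice_total(100.0)

print(driver())   # 120.0
```

The direction of the call is the key distinction: a stub is called by the code under test, while a driver does the calling.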
Non-functional requirements include installability, maintainability, performance, load handling, stress handling, portability, recovery, reliability and usability.

System Testing
Test bases for system testing can include: system and software requirement specifications; use cases; functional specifications; risk analysis reports; and system, user and operation manuals. The test object will generally be the system under test.

Acceptance Testing
The test bases can include: user requirements; system requirements; use cases; business processes; and risk analysis reports. The test objects can include the fully integrated system and the forms and reports produced by the system. Acceptance testing types include user acceptance testing (UAT); operational acceptance testing; contract and regulation acceptance testing; and alpha and beta testing.

Operational acceptance testing involves checking:
- Back-up facilities
- Procedures for disaster recovery
- Training for end users
- Maintenance procedures
- Data load and migration tasks
- Security procedures

TEST TYPES
Functional testing; non-functional testing; structural testing; and testing related to changes (testing after the code has been changed).

To facilitate the different types of testing, models may be used as follows:
- Functional testing: process flows; state transition models; security threat models; plain-language specifications.
- Non-functional testing: performance model; usability model.
- Structural testing: control flow model; menu structure model.

Security testing and interoperability testing are functional testing types. Functional testing is also called specification-based testing. Behavioural aspects of the system are tested in non-functional testing, for which only black-box testing techniques are used. ISO 9126 covers software engineering product quality.

Structural testing is used to measure how much testing has been carried out, i.e. to check the thoroughness of the testing of the system that has actually been built. A common measure is to look at how much of the code that has been written has been tested.

Testing Related to Changes
Retesting is also called confirmation testing. Regression testing should be carried out if the test environment has been changed.

Regression testing is well suited to test automation because the tests are repeated whenever the software code changes.

MAINTENANCE TESTING
Once the system is released, it may be necessary to make changes. Changes may be due to:
- Additional features being required.
- The system being migrated to a new operating platform.
- The system being retired, so that data may need to be migrated or archived.
- Planned upgrades to COTS-based systems.
- New faults being found that require fixing (these can be hot fixes).

Testing that takes place on a system that is live or in production is called maintenance testing. Impact analysis is analysing the impact of the changes on the system. This can be difficult on a released system, as the specifications may be out of date or the development team may have moved on.

Sample paper notes:
- The V model includes the verification of designs.
- The oracle assumption is that the tester can routinely identify the correct outcome of a test.
- The amount of retesting required is estimated from metrics from previous similar projects and discussions with the development team.
- Test execution provides the biggest potential cost saving from the use of CAST.
- Data flow analysis studies the use of data on paths through the code.

Static Testing
Static testing is the process of testing the software without executing the code. It includes testing of specifications and requirements, and examining the code without actually executing it.

Reviews and Static Analysis
A review is a systematic examination of a document by one or more people with the aim of finding and removing errors. Benefits of finding defects early in the life cycle include:
- Development productivity can be improved and timescales reduced
- Testing costs and time can be reduced by removing the main delays in test execution
- Reduction in lifetime costs
- Improved communication

Types of defects found through reviews include:
- Deviations from standards
- Requirements defects
- Design defects
- Insufficient maintainability
- Incorrect interface specifications

Review objectives include:
- Finding defects
- Gaining understanding
- Generating discussion
- Decision making by consensus

The activities of a formal review begin with planning: selecting the personnel, allocating roles, defining entry and exit criteria, and selecting the parts of the documents to be reviewed.

The phases of a formal review are:
1. Planning
2. Kick-off: checking entry criteria, distributing the documents, and explaining the objectives, process and documents to the participants
3. Individual preparation: noting incidents
4. Review meeting: examination and discussion; the meeting approach is based on the time available, the requirements of the author and the type of review
5. Rework: fixing defects
6. Follow-up: checking exit criteria

Roles and Responsibilities


Manager: decides what is to be reviewed; ensures sufficient time is allocated in the project plan for all the required review activities; and determines whether the review objectives have been met. Managers normally do not get involved in the review process itself.
Moderator: also known as the review leader; leads the review of the documents, planning the review, running the meeting and following up after the meeting; mediates between various points of view; and makes the final decision on whether to release an updated document.
Author: the writer, or the person with chief responsibility for the development of the documents to be reviewed; in most cases takes responsibility for fixing any agreed defects.

Reviewers: individuals with specific technical or business knowledge who, after the necessary preparation, identify and describe findings in the product under review.
Scribe: documents all the issues, defects, problems and open points that were identified during the meeting.

Types of Review

Reviews vary in formality from low to high:

Informal review: may be documented, but this is not required, and many informal reviews are not documented. The purpose is to find defects, and it is an inexpensive way to achieve some limited benefit. An informal review may be implemented by pair programming.

Walkthrough: the meeting is led by the author of the document under review, with preparation by the reviewers before the walkthrough meeting and a list of findings produced. The main purposes are to enable learning about the content of the document under review and to explore scenarios to find defects.

Technical review: documented, and uses a well-defined defect detection process that includes peers and technical experts. It is performed as a peer review without management participation and is ideally led by a trained moderator who is not the author. Reviewers prepare for the meeting, and a review report with a list of findings is produced. It involves discussion, decision making, evaluating alternatives, finding defects, solving technical problems and checking conformance to specifications and standards.

Inspection: led by a trained moderator who is not the author, and usually involves peer examination of a document. It is formal, based on rules and checklists, and uses entry and exit criteria. Pre-meeting preparation is necessary to ensure consistency. An inspection report with a list of findings is prepared, including metrics that can be used to aid improvement of the process as well as to correct defects in the document under review. A formal follow-up process is used to ensure that corrective action is completed in a timely manner. The main purpose is to find defects; process improvement is a secondary purpose.

Success Factors for reviews


- Reviews should have pre-defined, clear and agreed objectives
- Any defects found are welcomed and expressed objectively
- Checklists should be used
- Management support is essential for a good review process
- The emphasis should be on learning and process improvement

Static Analysis by tools


Static analysis looks for defects without executing the code and can be used as soon as the code has been written. The value of static analysis lies in: early detection of defects prior to test execution; early warning about suspicious aspects of the code or design through the calculation of metrics, such as a high complexity measure; identification of defects not easily found by dynamic testing; improved maintainability of the code and design; and prevention of defects.

Typical defects found by static analysis tools include:
- Referencing a variable with an undefined value
- Inconsistent interfaces between modules and components
- Variables that are never used
- Unreachable code
- Programming standard violations
- Security vulnerabilities
- Syntax violations of code and software models
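Some of the defect types listed above can be shown in a small illustrative function; the code runs correctly, which is precisely the point: a static analysis tool (such as a linter) would flag these issues without ever executing it.

```python
def classify(score):
    # Defect a static analysis tool would flag: variable never used.
    unused_threshold = 50
    if score >= 70:
        return "pass"
    else:
        return "fail"
    # Defect a static analysis tool would flag: unreachable code,
    # since both branches above return.
    print("checked")

# Dynamic testing alone may never notice either defect, because the
# function still produces the right answers.
print(classify(80))   # pass
print(classify(60))   # fail
```

This is why static analysis complements, rather than replaces, dynamic testing: it catches latent problems that passing tests cannot reveal.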

Static analysis tools add the greatest value when used during component and integration testing, and are typically used by developers.

Note:
- The integration process starting with terminal modules is called bottom-up integration.
- The inputs for developing a test plan are taken from the project plan.
- Tests are prioritized so that you do the best testing in the time available.
- Error guessing is not a static testing technique.
- Data flow analysis, walkthroughs and inspections are static testing techniques.

Test Design Techniques

Test Development Process
The design of tests comprises three main steps:
1. Identify test conditions
2. Specify test cases
3. Specify test procedures

A test condition is a characteristic of the software which we can check with a test or set of tests. A test case is a set of input values, execution preconditions, expected results and execution postconditions, developed for a particular objective or test condition. A test procedure specification is a sequence of actions for the execution of a test. A good test case should be traceable back to the test condition and the element of the specification that it is testing.

Test coverage is important because it provides a quantitative assessment of the extent and quality of testing by measuring what has been achieved, and it provides a way of estimating how much more testing needs to be done. Using quantitative measures, we can set targets for test coverage and measure progress against them.
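The three concepts can be mapped onto a concrete sketch; the function, requirement ID and test case ID below are all invented for illustration.

```python
def free_shipping(order_value):
    # Hypothetical element under test: orders of 50 or more ship free.
    return order_value >= 50

# Test condition: "orders of 50 or more ship free" - a checkable
# characteristic of the software, traceable to the specification.
# Test case: input value, preconditions and expected result for that
# condition, chosen here at the boundary of the condition.
test_case = {
    "id": "TC-7",
    "traces_to": "REQ-SHIP-1",   # link back to the element of the spec
    "input": 50,
    "expected": True,
}

# Test procedure: the sequence of actions that executes the test case.
actual = free_shipping(test_case["input"])
print(actual == test_case["expected"])   # True
```

Keeping the `traces_to` link in each test case is what makes the case traceable back to its test condition and the specification element it covers.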

Categories of Test case Design Techniques


- Black-box test design techniques
- White-box test design techniques
- Experience-based techniques, also called ad-hoc testing
