
XIV. Summary

SOFTWARE TESTING- A software defect has many interchangeable names: software bug, error, fault, failure, crash and anomaly. The Institute of Electrical and Electronics Engineers (IEEE), a standards body that also regulates the software profession, defines a software bug as a programming error that causes software to malfunction. The IEEE views a defect as a product anomaly in which the software does not conform to customer expectations and specifications. A crash is an extreme case of a software defect that stops the software from working any further.

Defect Life Cycle- Software is rarely free from errors or defects. Though defects in software can sometimes be disastrous, they can be prevented or minimized through good quality assurance practices, configuration management and defect-tracking technologies. The software defect life cycle, which overlaps the software development life cycle (SDLC), enables us to track defects and eliminate them as early as possible in the SDLC.

Defect- A variance from a desired product attribute. Defects fall into two categories: Variance from Product Specifications, where the product built varies from the product specified, and Variance from Customer/User Expectations, where something the user wanted is missing from the built product, or something not specified has been included. Defects can also belong to one of the following three categories: Wrong- the specifications have been implemented incorrectly. Missing- a specified requirement is not in the built product. Extra- a requirement incorporated into the product that was not specified.

These are some objectives of software testing: Execute a program with the intent of finding an error. Check that the system meets the requirements and can be executed successfully in the intended environment. Check that the system is fit for purpose. Check that the system does what it is expected to do. A good test case is one that has a high probability of finding an as yet undiscovered error. A successful test is one that uncovers a yet undiscovered error. A good test is not redundant. A good test should be best of breed, and a good test should be neither too simple nor too complex.

Common Software Problems: Incorrect calculations, incorrect or ineffective data edits, incorrect matching and merging of data, data searches that yield incorrect results, incorrect processing of data relationships, incorrect coding/implementation of business rules, inadequate software performance, confusing or misleading data, poor software usability for end users, obsolete software, inconsistent processing, unreliable results or performance, inadequate support of business needs, incorrect or inadequate interfaces with other systems, inadequate performance and security controls, and incorrect file handling.
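Many of these problems, an incorrect calculation for instance, are exactly what "executing a program with the intent of finding an error" is meant to expose. A minimal sketch in Python, where the function, its name and the values are invented purely for illustration:

    import unittest

    def apply_discount(price, percent):
        # Defect: the discount is added instead of subtracted,
        # an instance of the "incorrect calculation" problem above.
        return price + price * (percent / 100)

    class DiscountTest(unittest.TestCase):
        def test_ten_percent_discount(self):
            # Expected result per the specification: 100 less 10% = 90.
            self.assertEqual(apply_discount(100, 10), 90)

    if __name__ == "__main__":
        unittest.main()

Run against the buggy implementation, the test fails with 110 != 90, which is the test doing its job: uncovering a previously undiscovered error.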

LIFECYCLE TESTING PLAN (P)- Devise a plan. Define your objective and determine the strategy and supporting methods required to achieve that objective. DO (D)- Execute the plan. Create the conditions and perform the necessary training to execute the plan.

CHECK (C)- Check the results. Check to determine whether work is progressing according to the plan and whether the expected results are being obtained. ACTION (A)- Take the necessary and appropriate action if checking reveals that the work is not being performed according to plan or that results are not as anticipated.

SOFTWARE TESTING LIFECYCLE- PHASES Requirements Study- The testing cycle starts with the study of the client's requirements; understanding the requirements is essential for testing the product. Test Case Design and Development- Component identification, test specification design and test specification review. Test Execution- Code review, test execution and evaluation, and performance and simulation. Test Closure- Test summary report, project de-brief and project documentation. Test Process Analysis- Analysis done on the reports, and improving the application's performance by implementing new technology and additional features.

AXIOMS OF TESTING Myers (1976) gives the following axioms that are generally true for testing: A good test is one that has a high probability of detecting a previously undiscovered defect, not one that shows that the program works correctly. It is impossible to test your own program. As the number of detected defects in a piece of software increases, the probability of the existence of more undetected defects also increases. Assign your best programmers to testing. Never alter the program to make testing easier (unless it is a permanent change).

LEVELS OF TESTING Unit Testing- The most "micro" scale of testing; tests done on particular functions or code modules. Requires knowledge of the internal program design and code. Done by programmers, not by testers (a small sketch follows this list). Incremental Integration Testing- Continuous testing of an application as new functionality is added. The application's functionality aspects are required to be independent enough to work separately before completion of development. Done by programmers or testers. Integration Testing- Testing of combined parts of an application to determine their functional correctness. Types of Integration Testing: Big Bang Testing- A type of integration testing in which software elements, hardware elements, or both are combined all at once into an overall system, rather than in stages. Top Down Integration Testing- Pertaining to an activity that starts with the highest-level component of a hierarchy and proceeds through progressively lower levels; for example, top-down design or top-down testing. Bottom Up Integration Testing- Pertaining to an activity that starts with the lowest-level software components of a hierarchy and proceeds through progressively higher levels.
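The sketch below contrasts the unit level and the bottom-up integration idea in Python. Both functions and all values are hypothetical, invented for illustration; the point is only the order in which components are exercised:

    import unittest

    # Hypothetical low-level component.
    def parse_amount(text):
        return round(float(text), 2)

    # Higher-level component that integrates parse_amount.
    def total(lines):
        return sum(parse_amount(line) for line in lines)

    class BottomUpIntegration(unittest.TestCase):
        # Unit level / bottom-up step 1: the lowest-level unit alone.
        def test_parse_amount(self):
            self.assertEqual(parse_amount("19.999"), 20.0)

        # Bottom-up step 2: the next level up, driving the combined pair.
        def test_total_integrates_parser(self):
            self.assertEqual(total(["1.50", "2.50"]), 4.0)

    if __name__ == "__main__":
        unittest.main()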

TESTING METHODOLOGIES AND TYPES Black Box Testing- No knowledge of internal design or code is required; tests are based on requirements and functionality. Black-box techniques aim to find: incorrect or missing functions; interface errors; errors in data structures or external database access; performance errors; and initialization and termination errors. Black Box / Functional Testing- Based on requirements and functionality, not on any knowledge of internal design or code. Covers all combined parts of a system; tests are data driven. White Box Testing- Knowledge of the internal program design and code is required. Tests are based on coverage of code statements, branches, paths and conditions. White Box Testing / Structural Testing- Based on knowledge of the internal logic of an application's code and on coverage of code statements, branches, paths and conditions; tests are logic driven.
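To make the distinction concrete, here is a hedged sketch of the same hypothetical function tested both ways. The black-box test is derived only from a stated requirement; the white-box test picks inputs so that every branch of the code is executed (the function, the requirement and the values are all invented for illustration):

    import unittest

    # Hypothetical function under test.
    def shipping_fee(order_total):
        if order_total >= 50:   # branch 1: free-shipping threshold
            return 0
        return 5                # branch 2: flat fee

    class BlackBoxTest(unittest.TestCase):
        # Black box: from the requirement "orders of 50 or more
        # ship free", with no reference to the code's structure.
        def test_large_order_ships_free(self):
            self.assertEqual(shipping_fee(75), 0)

    class WhiteBoxTest(unittest.TestCase):
        # White box: inputs chosen so that both branches, including
        # the boundary of the condition, are executed.
        def test_covers_both_branches(self):
            self.assertEqual(shipping_fee(50), 0)  # branch 1, boundary
            self.assertEqual(shipping_fee(49), 5)  # branch 2

    if __name__ == "__main__":
        unittest.main()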

Incremental Testing- A form of integration testing in which you first test each module of the software individually, then continue testing by adding another module, then another. Thread Testing- A variation of top-down testing in which the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by successively lower levels.

MAJOR TESTING TYPES Stress / Load Test- Evaluates a system or component at or beyond the limits of its specified requirements, and determines the load under which it fails and how (a small sketch follows this list). Performance Test- Evaluates the compliance of a system or component with specified performance requirements. Often performed with an automated test tool that simulates a large number of users. Recovery Test- Confirms that the system recovers from expected or unexpected events without loss of data or functionality. Conversion Test- Testing of code that is used to convert data from existing systems for use in the newly replaced systems. Usability Test- Testing how easily users can learn and use the product. Configuration Test- Examines an application's requirements for pre-existing software, initial states and configuration in order to maintain proper functionality.
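Dedicated tools are normally used for load testing, but the underlying idea, driving a component with many concurrent simulated users and watching when and how it degrades, can be sketched in a few lines of Python. The handle_request function and the numbers are assumptions made up for the example:

    import time
    from concurrent.futures import ThreadPoolExecutor

    # Hypothetical component under load.
    def handle_request(user_id):
        time.sleep(0.01)  # stand-in for real work
        return user_id

    def load_test(num_users=100):
        start = time.perf_counter()
        # Simulate many concurrent users issuing one request each.
        with ThreadPoolExecutor(max_workers=num_users) as pool:
            results = list(pool.map(handle_request, range(num_users)))
        elapsed = time.perf_counter() - start
        print(f"{len(results)} requests in {elapsed:.2f}s")

    if __name__ == "__main__":
        load_test()

A real stress test would then raise num_users until the component fails and record the failure mode, which is precisely what the definition above asks for.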

MISCELLANEOUS TESTS End-to-End Testing- Similar to system testing, it involves testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems where appropriate. Sanity Testing- An initial testing effort to determine whether a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing the system every 5 minutes or destroying databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state. Usability Testing- Tests the user-friendliness of the software. User interviews, surveys and video recordings of user sessions are used for this type of testing. Compatibility Testing- Tests how well the software performs in a particular hardware, software, operating system or network environment. Comparison Testing- Useful for comparing the software's weaknesses and strengths with those of available competing products. Mutation Testing- Determines whether a set of test data or test cases is useful by deliberately introducing bugs into the code and retesting with the original test data/cases to see whether the bugs are detected.
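Mutation tools automate this, but the idea can be hand-rolled in a few lines. In the sketch below (function, mutant and test data all invented for illustration), a useful test suite passes on the original code and fails, "kills the mutant", on the deliberately broken variant:

    def is_even(n):
        return n % 2 == 0

    def is_even_mutant(n):
        # Deliberately introduced bug (the mutation): operator flipped.
        return n % 2 != 0

    def suite_passes(func):
        # The original test data, applied to either version.
        cases = [(2, True), (3, False), (0, True)]
        return all(func(n) == expected for n, expected in cases)

    if __name__ == "__main__":
        print("original passes:", suite_passes(is_even))            # True
        print("mutant detected:", not suite_passes(is_even_mutant)) # True

If the suite also passed on the mutant, that would signal that the test data is too weak to notice the injected defect.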

TESTING STANDARDS

External Standards- Familiarity with and adoption of industry test standards from outside organizations. Internal Standards- Development and enforcement of the test standards that testers must meet.

IEEE STANDARDS The Institute of Electrical and Electronics Engineers has designed an entire set of standards for software, to be followed by testers. IEEE Recommended Practice for Software Requirements Specifications- The content and qualities of a good software requirements specification (SRS) are described, and several sample SRS outlines are presented. This recommended practice is aimed at specifying the requirements of software to be developed, but can also be applied to assist in the selection of in-house and commercial software products. Guidelines for compliance with IEEE/EIA Std 12207.1-1997 are also provided. IEEE Standard for Software Unit Testing- The primary objective is to specify a standard approach to software unit testing that can be used as a basis for sound software engineering practice. A second objective is to describe the software engineering concepts and testing assumptions on which the standard approach is based. A third objective is to provide guidance and resource information to assist with the implementation and usage of the standard unit testing approach. IEEE Standard for Software Verification and Validation- This standard provides uniform and minimum requirements for the format and content of Software Verification and Validation Plans (SVVPs). Performing software verification and validation (V and V) as defined in this standard provides a comprehensive evaluation throughout each phase of the software project to help ensure that: errors are detected and corrected as early as possible in the software life cycle; project risk, cost, and schedule effects are lessened; software quality and reliability are enhanced; management visibility into the software process is improved; and proposed changes and their consequences can be quickly assessed. This standard applies to both critical and noncritical software.
