MB0033 Set I



Name: GURAV ARUNKUMAR
Roll No: 511110681
Learning Centre: Vashi
Subject: MB0033 (Software Engineering)
Assignment No: Set 1
Date of Submission: 02nd May 2012

Submitted By: Arunkumar Gurav

Question 1: Quality and reliability are related concepts but are fundamentally different in a number of ways. Discuss them.

Answer: The quality movement started in the 1940s with a major contribution on quality aspects from W. Edwards Deming. One of the major benefits of quality has been the saving in the overall cost of production, and a system of continuous improvement helps in achieving good quality. Kaizen refers to a system of continuous process improvement; its purpose is to develop a process that is visible, repeatable, and measurable. After kaizen comes atarimae hinshitsu, which refers to the examination of intangibles that affect the process and works to optimize their impact. Both kaizen and atarimae hinshitsu focus on processes. The next stage, kansei, leads to improvement in the product itself and, potentially, in the process that created it. The final stage, miryokuteki hinshitsu, broadens the management concern beyond the immediate product.

Quality Concepts: It is a well-known fact that all engineered and manufactured parts exhibit some variation or the other. The variations may not always be clearly visible; they are sometimes microscopic and can be identified only with equipment that measures geometrical attributes, electrical characteristics, and so on.

Quality of Design: Designers specify the characteristics of the quality of a product. The grade of materials used in product development, the product characteristics, permissible tolerances, and performance specifications all contribute to the quality of design. For higher grades of material the tolerances are very small, and when the tolerance is set to a very low level the expected design characteristics are of high quality. When greater levels of performance are specified, the design quality of the product increases, and the manufacturing processes and product specification are set according to the specified quality norms. Quality of conformance is expressed as the degree to which the design specifications are followed during manufacturing; if the degree of conformance is high, the quality of conformance is also deemed high. For software, quality of conformance is mainly focused on the implementation.

Quality Control: Quality is the buzzword of every organization today, but how does one work towards achieving quality within the organization at its various process levels? There are a number of ways of achieving quality. The fundamental step is to measure the variations in any process or product characteristic with respect to the expected values, and the first step towards quality is to see that these variations are minimized. Controlling quality can be done by measuring various characteristics of the product and understanding the behavior of the product towards changes in those characteristics.
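
To make the idea of measuring variation against expected values concrete, here is a minimal sketch; the sample values, nominal size, and tolerance are invented purely for illustration and are not taken from the text above:

    # Illustrative sketch: measuring variation against a design specification.
    # The nominal value, tolerance, and sample measurements are hypothetical.

    def conformance_rate(measurements, nominal, tolerance):
        """Return the fraction of units whose deviation from the nominal
        value stays within the permissible tolerance."""
        within_spec = [m for m in measurements if abs(m - nominal) <= tolerance]
        return len(within_spec) / len(measurements)

    # Hypothetical shaft diameters in millimetres.
    samples = [10.02, 9.98, 10.05, 9.97, 10.11, 10.01, 9.94, 10.03]
    rate = conformance_rate(samples, nominal=10.0, tolerance=0.05)
    print(f"Quality of conformance: {rate:.0%} of units within tolerance")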

Quality control involves a series of inspections, reviews, and tests on the software processes. A feedback mechanism in the process list helps in constantly reviewing the performance and enhancing it. A combination of measurement and feedback allows the software developer to refine the software process and approach perfection, and it is possible to automate these steps in the quality control process of the software system. One of the concepts of quality control is that every process can be measured; the measurement tells whether there has been any improvement in the process or not.

Quality Assurance: Quality assurance is a process of auditing various areas and identifying the non-conformances in those areas. A non-conformance is reported if a deviation is observed in the actual performance when compared with the planned performance against certain expectations. The expectations are listed out based on the requirements of certain standards and norms. The non-conformances are reported area-wise or process-wise, and the report based on the audit provides the management with the information necessary for it to take suitable actions.

Cost of Quality: There are many activities involved in a software project leading to the completion of the intended service or product, and every such activity is associated with some cost. Associated with every process is quality, which again comes at a certain cost. The total cost of quality is the sum of all the costs involved in setting up a quality process or quality activity, plus the additional resources procured towards maintaining and running that process. The main categories under which quality costs may be listed are prevention, appraisal, and maintenance, and the main components contributing towards the cost are quality planning, formal technical reviews, and test equipment.

Software Reliability: The need for quality is in the minds of everybody associated with the software project, and one of the key issues pertaining to quality is the reliability of the software product. There are a number of methods to ensure reliability of the product, depending upon the characteristics of the product, its features, and the expectations from the product and its services. One of the tasks before the software engineer or the software manager is to establish the relevant reliability measures well in advance of implementation so that quality is assured; a series of audits may be conducted to keep a tab on deviations if they tend to occur. Statistically, software reliability may be defined as the probability of failure-free operation of a computer program in a specified environment for a specified time. Failure refers to non-conformance to the stated requirements of the software. One simple measure of reliability expresses it as the mean time between failures (MTBF), which is the sum of the mean time to failure (MTTF) and the mean time to repair (MTTR).
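
As a quick illustration of this measure (a sketch only; the hour values are hypothetical, and the availability figure is standard reliability arithmetic rather than something stated in the text above):

    # Illustrative reliability arithmetic. The failure and repair times below
    # are hypothetical numbers chosen only to show how the measures combine.

    mttf_hours = 480.0   # mean time to failure: average uptime between failures
    mttr_hours = 4.0     # mean time to repair: average downtime per failure

    # Mean time between failures is the sum of the two.
    mtbf_hours = mttf_hours + mttr_hours

    # A closely related figure: steady-state availability.
    availability = mttf_hours / mtbf_hours

    print(f"MTBF = {mtbf_hours} hours")
    print(f"Availability = {availability:.2%}")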

It is necessary to identify and assess the hazards in software projects that affect software performance. If the hazards can be identified in the early stages of the software project, then a module to counteract them can be developed or built into the software to rectify the errors leading to those hazards. Suitable models can be used to achieve this safety.

Background Issues: The quality assurance processes are vital in establishing quality features in the product. Various standard mechanisms are developed in companies to focus on the quality of the product, and these mechanisms have to undergo improvements from time to time in order to stay competitive in the market. The product has to be viewed from the user's point of view, and a satisfaction note on the various features of the product needs to be reviewed to change and enhance the product and make it a quality product.

Q2. Discuss the Objective & Principles Behind Software Testing.

Answer: TESTING OBJECTIVES:
1. Testing is a process of executing a program with the intent of finding an error.
2. A good test case is one that has a high probability of finding an as-yet-undiscovered error.
3. A successful test is one that uncovers an as-yet-undiscovered error.

Testing should systematically uncover different classes of errors in a minimum amount of time and with a minimum amount of effort. A secondary benefit of testing is that it demonstrates that the software appears to be working as stated in the specifications. The data collected through testing can also provide an indication of the software's reliability and quality. But testing cannot show the absence of defects; it can only show that software defects are present.

Glen Myers states a number of rules that can serve well as testing objectives:
1. Testing is a process of executing a program with the intent of finding an error.
2. A good test case is one that has a high probability of finding an as-yet-undiscovered error.
3. A successful test is one that uncovers an as-yet-undiscovered error.

TESTING PRINCIPLES: Before applying methods to design effective test cases, a software engineer must understand the basic principles that guide software testing.

1. All tests should be traceable to customer requirements. As we have seen, the objective of software testing is to uncover errors. It follows that the most severe defects (from the customer's point of view) are those that cause the program to fail to meet its requirements.
2. Tests should be planned long before testing begins. Test planning can begin as soon as the requirements model is complete. Detailed definition of test cases can begin as soon as the design model has been solidified. Therefore, all tests can be planned and designed before any code has been generated.
3. The Pareto principle applies to software testing. Stated simply, the Pareto principle implies that 80 percent of all errors uncovered during testing will likely be traceable to 20 percent of all program components. The problem, of course, is to isolate these suspect components and to test them thoroughly (a small sketch of this idea follows the list below).
4. Testing should begin in the small and progress toward testing in the large. The first tests planned and executed generally focus on individual components. As testing progresses, focus shifts in an attempt to find errors in integrated clusters of components and ultimately in the entire system.
5. Exhaustive testing is not possible. The number of path permutations for even a moderately sized program is exceptionally large. For this reason, it is impossible to execute every combination of paths during testing. It is possible, however, to adequately cover program logic and to ensure that all conditions in the component-level design have been exercised.
6. To be most effective, testing should be conducted by an independent third party. By most effective, we mean testing that has the highest probability of finding errors (the primary objective of testing).
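
To make principle 3 concrete, here is a minimal sketch; the module names and defect counts are invented for illustration. It ranks components by defect count and reports the smallest set of modules accounting for roughly 80 percent of the defects:

    # Illustrative Pareto analysis: which components account for ~80% of defects?
    # The module names and defect counts are hypothetical.

    defects_per_module = {
        "parser": 42, "ui": 5, "auth": 3, "report": 38,
        "db_layer": 7, "scheduler": 2, "exporter": 1, "config": 2,
    }

    total = sum(defects_per_module.values())
    running = 0
    suspects = []

    # Walk modules from most to least defect-prone until ~80% is covered.
    for module, count in sorted(defects_per_module.items(),
                                key=lambda item: item[1], reverse=True):
        suspects.append(module)
        running += count
        if running / total >= 0.8:
            break

    print(f"{len(suspects)}/{len(defects_per_module)} modules "
          f"account for {running / total:.0%} of defects: {suspects}")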

Davis suggests a set of testing principles that have been adapted here:
1. All tests should be traceable to customer requirements: As we have seen, the objective of software testing is to uncover errors. It follows that the most severe defects (from the customer's point of view) are those that cause the program to fail to meet its requirements.
2. Tests should be planned long before testing begins: Test planning can begin as soon as the requirements model is complete. Detailed definition of test cases can begin as soon as the design model has been solidified. Therefore, all tests can be planned and designed before any code has been generated.
3. The Pareto principle applies to software testing: Stated simply, the Pareto principle implies that 80 percent of all errors uncovered during testing will most likely be traceable to 20 percent of all program components. The problem, of course, is to isolate these suspect components and to test them thoroughly.
4. Testing should begin in the small and progress toward testing in the large: The first tests planned and executed generally focus on individual components. As testing progresses, focus shifts in an attempt to find errors in integrated clusters of components and ultimately in the entire system.
5. Exhaustive testing is not possible: The number of path permutations for even a moderately sized program is exceptionally large. For this reason, it is impossible to execute every combination of paths during testing. It is possible, however, to adequately cover program logic and to ensure that all conditions in the component-level design have been exercised.

6. To be most effective, testing should be conducted by an independent third party: By most effective, we mean testing that has the highest probability of finding errors (the primary objective of testing).
7. For reasons introduced earlier in this unit, the software engineer who created the system is not the best person to conduct all tests of the software.

Q3. Discuss the CMM 5 Levels for Software Process.

Answer: Levels of the CMM:

Level 1 - Initial: Processes are usually ad hoc, and the organization usually does not provide a stable environment. Success in these organizations depends on the competence and heroics of the people in the organization and not on the use of proven processes. In spite of this ad hoc, chaotic environment, maturity level 1 organizations often produce products and services that work; however, they frequently exceed the budget and schedule of their projects. Such organizations are characterized by a tendency to over-commit, to abandon processes in a time of crisis, and to be unable to repeat their past successes. Software project success depends on having quality people.

Level 2 - Repeatable: Software development successes are repeatable, although the processes may not repeat for all the projects in the organization. The organization may use some basic project management to track cost and schedule. Process discipline helps ensure that existing practices are retained during times of stress. When these practices are in place, projects are performed and managed according to their documented plans. Project status and the delivery of services are visible to management at defined points (for example, at major milestones and at the completion of major tasks). Basic project management processes are established to track cost, schedule, and functionality, and the minimum process discipline is in place to repeat earlier successes on projects with similar applications and scope. There is still a significant risk of exceeding cost and time estimates.

Level 3 - Defined: The organization's set of standard processes, which is the basis for level 3, is established and improved over time. These standard processes are used to establish consistency across the organization. Projects establish their defined processes by tailoring the organization's set of standard processes according to tailoring guidelines. The organization's management establishes process objectives based on the organization's set of standard processes and ensures that these objectives are appropriately addressed. A critical distinction between level 2 and level 3 is the scope of standards, process descriptions, and procedures. At level 2, the standards, process descriptions, and procedures may be quite different in each specific instance of the process (for example, on a particular project). At level 3, the standards, process descriptions, and procedures for a project are tailored from the organization's set of standard processes to suit a particular project or organizational unit.

Level 4 - Managed: Using precise measurements, management can effectively control the software development effort. In particular, management can identify ways to adjust and adapt the process to particular projects without measurable losses of quality or deviations from specifications. At this level the organization sets quantitative quality goals for both the software process and software maintenance. Sub-processes that contribute significantly to overall process performance are selected and controlled using statistical and other quantitative techniques (a small illustrative sketch follows at the end of this answer). A critical distinction between maturity level 3 and maturity level 4 is the predictability of process performance. At maturity level 4, the performance of processes is controlled using statistical and other quantitative techniques and is quantitatively predictable; at maturity level 3, processes are only qualitatively predictable.

Level 5 - Optimizing: The focus is on continually improving process performance through both incremental and innovative technological improvements. Quantitative process-improvement objectives for the organization are established, continually revised to reflect changing business objectives, and used as criteria in managing process improvement. The effects of deployed process improvements are measured and evaluated against the quantitative process-improvement objectives. Both the defined processes and the organization's set of standard processes are targets of measurable improvement activities. Process improvements that address common causes of process variation and measurably improve the organization's processes are identified, evaluated, and deployed. Optimizing processes that are nimble, adaptable, and innovative depends on the participation of an empowered workforce aligned with the business values and objectives of the organization, and the organization's ability to respond rapidly to changes and opportunities is enhanced by finding ways to accelerate and share learning. A critical distinction between maturity level 4 and maturity level 5 is the type of process variation addressed. At maturity level 4, processes are concerned with addressing special causes of process variation and providing statistical predictability of the results; though processes may produce predictable results, the results may be insufficient to achieve the established objectives. At maturity level 5, processes are concerned with addressing common causes of process variation and changing the process (that is, shifting the mean of process performance) to improve process performance, while maintaining statistical predictability, in order to achieve the established quantitative process-improvement objectives.
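
As a rough illustration of the kind of quantitative technique a level 4 organization might apply (a sketch only; the metric, the sample values, and the three-sigma control limits are illustrative assumptions, not something prescribed by the CMM):

    # Illustrative statistical control of a sub-process metric, in the spirit
    # of CMM level 4. The defect-density samples are hypothetical.
    import statistics

    # Hypothetical baseline: defect density (defects/KLOC) from past stable builds.
    baseline = [4.1, 3.8, 4.5, 4.0, 3.9, 4.2, 4.1, 3.7, 4.4, 4.3]

    mean = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    upper, lower = mean + 3 * sigma, mean - 3 * sigma
    print(f"baseline mean={mean:.2f}, control limits=({lower:.2f}, {upper:.2f})")

    # Check the latest build against the established control limits.
    latest = 6.3
    if not lower <= latest <= upper:
        # An out-of-control point signals a special cause worth investigating.
        print(f"latest build defect density {latest} is outside the control limits")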

Q4. Discuss the Waterfall Model for Software Development.

Answer: Waterfall model: The simplest software development life cycle model is the waterfall model, which organizes the phases in a linear order. A project begins with feasibility analysis. On successful demonstration of feasibility, requirements analysis and project planning begin. Design starts after the requirements analysis is done, and coding begins after the design is complete. Once the programming is completed, the code is integrated and testing is done. On successful completion of testing, the system is installed, after which regular operation and maintenance of the system take place. The following figure illustrates the steps involved in the waterfall life cycle model.

Figure: The Waterfall Software Life Cycle Model

With the waterfall model, the activities performed in a software development project are requirements analysis, project planning, system design, detailed design, coding and unit testing, and system integration and testing. The linear ordering of activities has some important consequences. First, to clearly identify the end of one phase and the beginning of the next, some certification mechanism has to be employed at the end of each phase. This is usually done through verification and validation, which ensure that the output of a phase is consistent with its input (which is the output of the previous phase) and with the overall requirements of the system. A consequence of this need for certification is that each phase must have some defined output that can be evaluated and certified; therefore, when the activities of a phase are completed, there should be an output product of that phase, and the goal of the phase is to produce that product. The outputs of the earlier phases are often called intermediate products or design documents; for the coding phase, the output is the code. From this point of view, the output of a software project is not just the final program along with the user documentation, but also the requirements document, design document, project plan, test plan, and test results.

Another implication of the linear ordering of phases is that after each phase is completed and its outputs are certified, these outputs become the inputs to the next phase and should not be changed or modified.
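
The idea that each phase's certified output becomes the fixed input of the next phase can be sketched as follows; this is a minimal illustration, and the phase names and the run_phase placeholder are assumptions made for the example, not part of any standard API:

    # Minimal sketch of the waterfall idea: phases run in a fixed linear order
    # and each phase's certified output becomes the input (baseline) of the
    # next phase. The phase names and run_phase placeholder are illustrative.

    PHASES = [
        "requirements analysis", "project planning", "system design",
        "detailed design", "coding and unit testing", "integration and testing",
    ]

    def run_phase(name, baseline):
        """Placeholder for the real work of a phase: it consumes the previous
        baseline and produces a certified output artifact."""
        return f"certified output of {name} (built on: {baseline})"

    def waterfall(initial_input):
        baseline = initial_input
        for phase in PHASES:
            # Output is certified at the end of the phase and then frozen.
            baseline = run_phase(phase, baseline)
            print(f"completed {phase}")
        return baseline

    final_product = waterfall("feasibility report")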

However, changing requirements cannot be avoided and must be faced. Since changes in the output of one phase affect the later phases that might already have been performed, these changes have to be made in a controlled manner after evaluating the effect of each change on the project. This brings us to the need for configuration control or configuration management. The certified output of a phase that is released for the next phase is called a baseline, and configuration management ensures that any change to a baseline is made only after careful review, keeping in mind the interests of all parties affected by it.

There are two basic assumptions justifying the linear ordering of phases in the manner proposed by the waterfall model:
1. For a successful project resulting in a successful product, all phases listed in the waterfall model must be performed anyway.
2. Any different ordering of the phases will result in a less successful software product.

Q5. Explain the Advantages of the Prototype Model & Spiral Model in Contrast to the Waterfall Model.

Answer: The spiral model, also known as the spiral life cycle model, is a systems development life cycle (SDLC) model used in information technology (IT). This model of development combines the features of the prototyping model and the waterfall model. The spiral model is favored for large, expensive, and complicated projects. The steps in the spiral model can be generalized as follows:

1. The new system requirements are defined in as much detail as possible. This usually involves interviewing a number of users representing all the external or internal users and other aspects of the existing system.
2. A preliminary design is created for the new system.
3. A first prototype of the new system is constructed from the preliminary design. This is usually a scaled-down system, and represents an approximation of the characteristics of the final product.
4. A second prototype is evolved by a fourfold procedure: (1) evaluating the first prototype in terms of its strengths, weaknesses, and risks; (2) defining the requirements of the second prototype; (3) planning and designing the second prototype; (4) constructing and testing the second prototype.
5. At the customer's option, the entire project can be aborted if the risk is deemed too great. Risk factors might involve development cost overruns, operating-cost miscalculation, or any other factor that could, in the customer's judgment, result in a less-than-satisfactory final product.
6. The existing prototype is evaluated in the same manner as the previous prototype, and, if necessary, another prototype is developed from it according to the fourfold procedure outlined above.
7. The preceding steps are iterated until the customer is satisfied that the refined prototype represents the final product desired.

8. The final system is constructed, based on the refined prototype.
9. The final system is thoroughly evaluated and tested.
10. Routine maintenance is carried out on a continuing basis to prevent large-scale failures and to minimize downtime.

Q6. Explain the COCOMO Model & Software Estimation Technique.

Answer: COCOMO Model: COCOMO stands for Constructive Cost Model. It is used for software cost estimation and uses a regression formula with parameters based on historical data (a small numerical sketch of the Basic formulas follows the lists below). COCOMO has a hierarchy of three increasingly accurate and detailed forms: Basic, Intermediate, and Detailed. The Basic level is good for a quick, early, overall cost estimate for the project but is not very accurate. The Intermediate level considers some of the other project factors that influence the project cost, and the Detailed level accounts for the various project phases that affect the cost of the project.

Advantages of the COCOMO estimating model:
1. COCOMO is factual and easy to interpret; one can clearly understand how it works.
2. It accounts for various factors that affect the cost of the project.
3. It works on historical data and hence is more predictable and accurate.

Disadvantages of the COCOMO estimating model:
1. The COCOMO model ignores requirements and all documentation.
2. It ignores customer skills, cooperation, knowledge, and other parameters.
3. It oversimplifies the impact of safety/security aspects.
4. It ignores hardware issues.
5. It ignores personnel turnover levels.
6. It is dependent on the amount of time spent in each phase.
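
As a hedged illustration of the kind of regression formula COCOMO uses, here is a sketch of the Basic COCOMO equations (effort = a * KLOC^b person-months, duration = c * effort^d months) with the commonly published coefficients for organic, semi-detached, and embedded projects; the 32 KLOC input is an invented example:

    # Sketch of Basic COCOMO. The coefficients are the commonly published
    # Basic COCOMO values; the 32 KLOC project size is a made-up example input.

    COEFFICIENTS = {
        # mode: (a, b, c, d)
        "organic":       (2.4, 1.05, 2.5, 0.38),
        "semi-detached": (3.0, 1.12, 2.5, 0.35),
        "embedded":      (3.6, 1.20, 2.5, 0.32),
    }

    def basic_cocomo(kloc, mode="organic"):
        """Return (effort in person-months, duration in months) for a project."""
        a, b, c, d = COEFFICIENTS[mode]
        effort = a * kloc ** b          # person-months
        duration = c * effort ** d      # calendar months
        return effort, duration

    effort, duration = basic_cocomo(32, mode="organic")
    print(f"Effort ~ {effort:.1f} person-months, duration ~ {duration:.1f} months, "
          f"average staff ~ {effort / duration:.1f}")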

Software Estimation Techniques: Accurately estimating software size, cost, effort, and schedule is probably the biggest challenge facing software developers today.

Estimating Software Size: An accurate estimate of software size is an essential element in the calculation of estimated project costs and schedules. The fact that these estimates are required very early in the project (often while a contract bid is being prepared) makes size estimation a formidable task. Initial size estimates are typically based on the known system requirements: you must hunt for every known detail of the proposed system and use these details to develop and validate the software size estimates. In general, size estimates are presented as lines of code (SLOC or KSLOC) or as function points. There are constants that can be applied to convert function points to lines of code for specific languages, but not vice versa. If possible, choose and adhere to one unit of measurement, since conversion simply introduces a new margin of error into the final estimate.
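
To show what such a conversion looks like in practice, here is a minimal sketch; the SLOC-per-function-point factors below are rough, commonly quoted figures and should be treated as illustrative assumptions rather than authoritative values:

    # Sketch of converting a function-point count into an approximate SLOC
    # figure. The SLOC-per-FP factors are rough "backfiring" figures used
    # here only as illustrative assumptions.

    SLOC_PER_FUNCTION_POINT = {
        "C": 128,
        "Java": 53,
        "C++": 55,
    }

    def estimate_sloc(function_points, language):
        """Approximate source lines of code implied by a function-point count."""
        return function_points * SLOC_PER_FUNCTION_POINT[language]

    fp_count = 250  # hypothetical size estimate from the requirements
    for lang in SLOC_PER_FUNCTION_POINT:
        print(f"{lang}: ~{estimate_sloc(fp_count, lang) / 1000:.1f} KSLOC")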

Regardless of the unit chosen, you should store the estimates in the metrics database. You will use these estimates to determine progress and to estimate future projects, and as the project progresses you should revise them so that cost and schedule estimates remain accurate.

Estimating Software Cost: The cost of medium and large software projects is determined by the cost of developing the software plus the cost of equipment and supplies. The latter is generally a constant for most projects. The cost of developing the software is simply the estimated effort multiplied by (presumably fixed) labor costs. For this reason, we concentrate on estimating the development effort and leave the task of converting effort to dollars to each company.

Estimating Effort: There are two basic models for estimating software development effort (or cost): holistic and activity-based. The single biggest cost driver in either model is the estimated project size. Holistic models are useful for organizations that are new to software development, or that do not have baseline data available from previous projects to determine labor rates for the various development activities. Estimates produced with activity-based models are more likely to be accurate, as they are based on the software development rates common to each organization; unfortunately, you require related data from previous projects to apply these techniques (a small sketch of the activity-based idea follows at the end of this answer).

Estimating Software Schedule: There are many tools on the market (such as Timeline, MacProject, On Target, etc.) that help develop Gantt and PERT charts to schedule and track projects. These programs are most effective when you break the project down into a Work Breakdown Structure (WBS) and assign estimates of effort and staff to each task in the WBS.
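
To illustrate the activity-based idea, here is a sketch under assumed numbers: the productivity rate and the percentage split across activities are invented for the example and would normally come from an organization's own historical data:

    # Sketch of an activity-based effort estimate. The productivity figure and
    # the activity percentages are invented; in practice they come from an
    # organization's historical project data.

    size_ksloc = 32                 # estimated size, thousand source lines of code
    ksloc_per_person_month = 0.35   # assumed historical productivity rate

    total_effort_pm = size_ksloc / ksloc_per_person_month

    # Assumed historical distribution of effort across development activities.
    activity_share = {
        "requirements": 0.10,
        "design": 0.20,
        "coding and unit test": 0.40,
        "integration and system test": 0.20,
        "documentation and management": 0.10,
    }

    print(f"Total effort: {total_effort_pm:.1f} person-months")
    for activity, share in activity_share.items():
        print(f"  {activity}: {total_effort_pm * share:.1f} person-months")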
