Assignment - Set-1 (60 Marks) : Master in Business Administration-MBA SEM - III MI0033 - Software Engineering


Q1. Quality and reliability are related concepts but are fundamentally different in a number of ways. Discuss them.

Reliability is an underlying part of quality: quality can be assured only after reliable operation of the software over a period of time. The two are therefore interrelated, though fundamentally different. Basically, quality of design refers to the characteristics that designers specify for an item. The grade of materials, tolerances, and performance specifications all contribute to the quality of design. As higher-grade materials are used and tighter tolerances and greater levels of performance are specified, the design quality of a product increases, if the product is manufactured according to specifications. Quality of conformance is the degree to which the design specifications are followed during manufacturing. Again, the greater the degree of conformance, the higher is the level of quality of conformance. In software development, quality of design encompasses requirements, specifications, and the design of the system. Quality of conformance is an issue focused primarily on implementation. If the implementation follows the design and the resulting system meets its requirements and performance goals, conformance quality is high.

Quality Control

Variation control may be equated to quality control. But how do we achieve quality control? Quality control involves the series of inspections, reviews, and tests used throughout the software process to ensure each work product meets the requirements placed upon it. Quality control includes a feedback loop to the process that created the work product. The combination of measurement and feedback allows us to tune the process when the work products created fail to meet their specifications. This approach views quality control as part of the manufacturing process. Quality control activities may be fully automated, entirely manual, or a combination of automated tools and human interaction. A key concept of quality control is that all work products have defined, measurable specifications to which we may compare the output of each process. The feedback loop is essential to minimize the defects produced.

Quality Assurance

Quality assurance consists of the auditing and reporting functions of management. The goal of quality assurance is to provide management with the data necessary to be informed about product quality, thereby gaining insight and confidence that product quality is meeting its goals. Of course, if the data provided through quality assurance identifies problems, it is management's responsibility to address the problems and apply the necessary resources to resolve quality issues.

Cost of Quality

The cost of quality includes all costs incurred in the pursuit of quality or in performing quality-related activities. Cost of quality studies are conducted to provide a baseline for the current cost of quality, identify opportunities for reducing the cost of quality, and provide a normalized basis of comparison. The basis of normalization is almost always dollars. Once we have normalized quality costs on a dollar basis, we have the necessary data to evaluate where the opportunities lie to improve our processes.
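The dollar-based normalization described above can be sketched as a short calculation. All category figures below are hypothetical, chosen only to illustrate how the three cost categories are summed and compared on a common dollar basis:

```python
# Cost-of-quality sketch: prevention, appraisal, and failure costs are
# summed and normalized on a dollar basis (all figures hypothetical).
costs = {
    "prevention": {"quality planning": 12000, "formal technical reviews": 8000,
                   "test equipment": 15000},
    "appraisal":  {"in-process inspections": 9000, "testing": 22000},
    "failure":    {"rework": 30000, "complaint resolution": 6000},
}

total = sum(sum(items.values()) for items in costs.values())
for category, items in costs.items():
    subtotal = sum(items.values())
    print(f"{category:10s} ${subtotal:>7,} ({100 * subtotal / total:.1f}% of total)")
print(f"{'total':10s} ${total:>7,}")
```

A breakdown like this makes it easy to see, for example, whether failure costs dominate, which would suggest investing more in prevention.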
Furthermore, we can evaluate the effect of changes in dollar-based terms. Quality costs may be divided into costs associated with prevention, appraisal, and failure. Prevention costs include quality planning, formal technical reviews, and test equipment. On the other hand, there is no doubt that the reliability of a computer program is an important element of its overall quality. If a program repeatedly and frequently fails to perform, it matters little whether other software quality factors are acceptable. Software reliability, unlike many other quality factors, can be measured directly and estimated using historical and developmental data. Software reliability is defined in statistical terms as "the probability of failure-free operation of a computer program in a specified environment for a specified time" [MUS87]. To illustrate, program X is estimated to have a reliability of 0.96 over eight elapsed processing hours. In other words, if program X were to be executed 100 times and

require eight hours of elapsed processing time (execution time), it is likely to operate correctly (without failure) 96 times out of 100. Whenever software reliability is discussed, a pivotal question arises: What is meant by the term failure? In the context of any discussion of software quality and reliability, failure is nonconformance to software requirements. Yet, even within this definition, there are gradations. Failures can be merely annoying or catastrophic. One failure can be corrected within seconds while another requires weeks or even months to correct. Complicating the issue even further, the correction of one failure may in fact result in the introduction of other errors that ultimately result in other failures.

Measures of Reliability and Availability

Early work in software reliability attempted to extrapolate the mathematics of hardware reliability theory (e.g., [ALV64]) to the prediction of software reliability. Most hardware-related reliability models are predicated on failure due to wear, rather than failure due to design defects. In hardware, failures due to physical wear (e.g., the effects of temperature, corrosion, shock) are more likely than a design-related failure. Unfortunately, the opposite is true for software. In fact, all software failures can be traced to design or implementation problems; wear (see Chapter 1) does not enter into the picture. There has been debate over the relationship between key concepts in hardware reliability and their applicability to software (e.g., [LIT89], [ROO90]). Although an irrefutable link has yet to be established, it is worthwhile to consider a few simple concepts that apply to both system elements. If we consider a computer-based system, a simple measure of reliability is mean time between failure (MTBF), where

MTBF = MTTF + MTTR

The acronyms MTTF and MTTR stand for mean time to failure and mean time to repair, respectively.
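The measures above can be computed directly. The sketch below uses hypothetical MTTF and MTTR figures, and additionally reports availability as the fraction of time the system is usable (MTTF divided by MTTF plus MTTR), a companion measure commonly derived from the same two quantities:

```python
# Reliability measures from the definitions above (hypothetical figures).
mttf = 68.0  # mean time to failure, in hours
mttr = 3.0   # mean time to repair, in hours

mtbf = mttf + mttr                   # MTBF = MTTF + MTTR
availability = mttf / (mttf + mttr)  # fraction of time the system is usable

print(f"MTBF = {mtbf} hours")
print(f"Availability = {availability:.1%}")
```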
(ii) Software Safety

When software is used as part of a control system, complexity can increase by an order of magnitude or more. Subtle design faults induced by human error (something that can be uncovered and eliminated in hardware-based conventional control) become much more difficult to uncover when software is used. Software safety is a software quality assurance activity that focuses on the identification and assessment of potential hazards that may affect software negatively and cause an entire system to fail. If hazards can be identified early in the software engineering process, software design features can be specified that will either eliminate or control potential hazards. A modeling and analysis process is conducted as part of software safety. Initially, hazards are identified and categorized by criticality and risk. For example, one of the hazards associated with a computer-based cruise control for an automobile might be uncontrolled acceleration that cannot be stopped.

Q ' "is uss t#e (b)e ti&e * +rin i!les be#ind Software ,esting The importance of software testing and its implications with respect to software quality cannot be overemphasi%ed .The development of software systems involves a series of production activities where opportunities for inEection of human fallibilities are enormous. >rrors may begin to occur at the very inception of the process where the obEectives may be erroneously or imperfectly specified, as well as -in2 later design and development stages. Because of human inability to perform and communicate with perfection, software development is accompanied by a quality assurance activity. +oftware testing is a critical element of software quality assurance and represents the ultimate review of specification, design, and code generation. The increasing visibility of software as a system element and the attendant ,costs, associated with a software failure are motivating forces for well planned, thorough testing. !t is not unusual for a software development organi%ation to e7pend between F4 and @4 percent of the total proEect effort on testing. !n the e7treme, testing of human rated software 9e.g., flight control, nuclear reactor monitoring: can cost three to five times as much as all other software engineering steps combinedG ,esting (b)e ti&es !n an e7cellent boo$ on software testing, Hlen .yers states a number of rules that can serve well as testing obEectives< 8. Testing is a process of e7ecuting a program with the intent of finding an error. I. A good test case is one that has a high probability of finding an as return discovered error. F. A successful test is one that uncovers an as yet undiscovered error. These obEectives imply a dramatic change in viewpoint. They move counter to the commonly held view that a successful test is one in which no errors are found. Our obEective is to design tests that systematically uncover different classes of errors, and to do so with a minimum amount of time and effort. 
If testing is conducted successfully (according to the objectives stated previously), it will uncover errors in the software. As a secondary benefit, testing demonstrates that software functions appear to be working according to specification and that behavioral and performance requirements appear to have been met. In addition, data collected as testing is conducted provide a good indication of software reliability and some indication of software quality as a whole. But testing cannot show the absence of errors and defects; it can show only that software errors and defects are present. It is important to keep this (rather gloomy) statement in mind as testing is being conducted.
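Myers' first objective, executing a program with the intent of finding an error, can be illustrated with a deliberately buggy hypothetical function and a boundary-value check that uncovers the defect (the pricing rule and all values are invented for illustration):

```python
def bulk_discount(quantity):
    """Hypothetical pricing rule: orders of 100 units or more get 10% off."""
    if quantity > 100:  # BUG: should be >= 100; the boundary case is mishandled
        return 0.10
    return 0.0

# A test designed to find errors targets the boundary, where defects cluster.
cases = {99: 0.0, 100: 0.10, 101: 0.10}
failures = [q for q, expected in cases.items() if bulk_discount(q) != expected]
print("failing inputs:", failures)  # the boundary input 100 exposes the defect
```

By Myers' definition this is a successful test: it uncovers a previously undiscovered error, whereas a test that checked only 50 and 150 would pass and reveal nothing.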

Testing Principles

Before applying methods to design effective test cases, a software engineer must understand the basic principles that guide software testing. Davis [DAV95] suggests a set of testing principles that have been adapted for use in this book:

All tests should be traceable to customer requirements. As we have seen, the objective of software testing is to uncover errors. It follows that the most severe defects (from the customer's point of view) are those that cause the program to fail to meet its requirements.

Tests should be planned long before testing begins. Test planning can begin as soon as the requirements model is complete. Detailed definition of test cases can begin as soon as the design model has been solidified. Therefore, all tests can be planned and designed before any code has been generated.

The Pareto principle applies to software testing. Stated simply, the Pareto principle implies that 80 percent of all errors uncovered during testing will most likely be traceable to 20 percent of all program components. The problem, of course, is to isolate these suspect components and to thoroughly test them.

Testing should begin "in the small" and progress toward testing "in the large." The first tests planned and executed generally focus on individual components. As testing progresses, focus shifts in an attempt to find errors in integrated clusters of components and ultimately in the entire system.

Exhaustive testing is not possible. The number of path permutations for even a moderately sized program is exceptionally large. For this reason, it is impossible to execute every combination of paths during testing. It is possible, however, to adequately cover program logic and to ensure that all conditions in the component-level design have been exercised.

To be most effective, testing should be conducted by an independent third party. By "most effective," we mean testing that has the highest probability of finding errors (the primary objective of testing).
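The Pareto principle stated above can be checked against a project's defect data. The sketch below uses hypothetical per-component defect counts and measures what share of all defects lives in the worst 20 percent of components:

```python
# Pareto check: what share of defects lives in the worst 20% of components?
# Component names and defect counts are hypothetical.
defects = {"parser": 41, "ui": 3, "auth": 2, "scheduler": 38,
           "report": 4, "export": 2, "config": 1, "search": 3,
           "logging": 1, "billing": 5}

ranked = sorted(defects.values(), reverse=True)
top_20pct = ranked[: max(1, len(ranked) // 5)]  # worst 20% of components
share = sum(top_20pct) / sum(ranked)
print(f"top 20% of components hold {share:.0%} of all defects")
```

When a ratio like this holds, the practical consequence is the one the text draws: isolate the suspect components and concentrate testing effort there.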
For reasons that have been introduced earlier in this unit, the software engineer who created the system is not the best person to conduct all tests for the software.

Q3. Discuss the CMM 5 Levels for Software Process.

2.2.1 The Software Process

In recent years, there has been a significant emphasis on process maturity. The Software Engineering Institute (SEI) has developed a comprehensive model predicated on a set of software engineering capabilities that should be present as organizations reach different levels of process maturity. To determine an organization's current state of process maturity, the SEI uses an assessment that results in a five-point grading scheme. The grading scheme determines compliance with a capability maturity model (CMM) [PAU93] that defines key activities required at different levels of process maturity. The SEI approach provides a measure of the global effectiveness of a company's software engineering practices and establishes five process maturity levels that are defined in the following manner:

Level 1: Initial. The software process is characterized as ad hoc and occasionally even chaotic. Few processes are defined, and success depends on individual effort.

Level 2: Repeatable. Basic project management processes are established to track cost, schedule, and functionality. The necessary process discipline is in place to repeat earlier successes on projects with similar applications.

Level 3: Defined. The software process for both management and engineering activities is documented, standardized, and integrated into an organization-wide software process. All projects use a documented and approved version of the organization's process for developing and supporting software. This level includes all characteristics defined for level 2.

Level 4: Managed. Detailed measures of the software process and product quality are collected. Both the software process and products are quantitatively understood and controlled using detailed measures. This level includes all characteristics defined for level 3.

Level 5: Optimizing. Continuous process improvement is enabled by quantitative feedback from the process and from testing innovative ideas and technologies.
This level includes all characteristics defined for level 4. The five levels defined by the SEI were derived as a consequence of evaluating responses to the SEI assessment questionnaire that is based on the CMM. The results of the questionnaire are distilled to a single numerical grade that provides an indication of an organization's process maturity. The SEI has associated key process areas (KPAs) with each of the maturity levels. The KPAs describe those software engineering functions (e.g., software project planning, requirements management) that

must be present to satisfy good practice at a particular level. Each KPA is described by identifying the following characteristics:
Goals: the overall objectives that the KPA must achieve.
Commitments: requirements (imposed on the organization) that must be met to achieve the goals, or provide proof of intent to comply with the goals.
Abilities: those things that must be in place (organizationally and technically) to enable the organization to meet the commitments.
Activities: the specific tasks required to achieve the KPA function.
Methods for monitoring implementation: the manner in which the activities are monitored as they are put into place.
Methods for verifying implementation: the manner in which proper practice for the KPA can be verified.

Q 3. "is uss t#e 5ater 6all model for Software "e&elo!ment.

The Waterfall software development model has been in use for a number of decades, and it is still commonly used in software development projects today. It is a sequential model where the development process goes through a number of phases in a certain order. While it has been replaced to a large degree by the iterative models of software development, Waterfall still has its place in today's IT world. Basically, it requires that any project go through the stages of requirements analysis, design, implementation (coding), verification, and maintenance. In comparison to iterative models, the Waterfall model is seen as inflexible and linear, though it is preferred by many who feel iterative software development methodologies lack discipline. Although there are variations, in the true Waterfall model the project only moves from one phase to the next when a phase is completed in its entirety. Therefore, no work will begin

on the design phase until requirements analysis is complete. Also, there is no room for backtracking, so when a phase is complete it has to be right. The Waterfall model is often used for very large software development projects and may involve development teams working in different locations. Once implementation, or coding, is complete, the various components will be integrated into a working piece of software. The verification phase will involve testing and debugging of the software before it is released.

Advantages

Fans of the Waterfall software development model will argue that the amount of pre-planning that goes into the requirements and design phases makes it the most economical and risk-free way to develop software, as it identifies and weeds out potential problems at the outset. If these problems arose later in a project, they could be very costly. The Waterfall model also puts an emphasis on documentation and structure. This is an advantage when someone leaves the development team, as the necessary documentation is there to help a new person take over.

Disadvantages

The Waterfall model certainly isn't to everyone's taste. Those who argue against it are usually opposed to its rigid structure and the inability to backtrack. It also isn't very client-focused, as it makes any request to change the software during the development process almost impossible to accommodate. And while each phase of development should be 100% complete before the next begins, things can become very complicated when it is not. For these reasons, many modified Waterfall models have been developed over the years that allow for increased flexibility.

Q5. Explain the Advantages of the Prototype Model & Spiral Model in Contrast to the Waterfall Model.

The Prototyping Model

Often, a customer defines a set of general objectives for software but does not identify detailed input, processing, or output requirements. In other cases, the developer may be unsure of the efficiency of an algorithm, the adaptability of an operating system, or the form that human/machine interaction should take. In these, and many other situations, a prototyping paradigm may offer the best approach. The prototyping paradigm begins with requirements gathering. Developer and customer meet and define the overall objectives for the software, identify whatever requirements are known, and outline areas where further definition is mandatory. A "quick design" then occurs. The quick design focuses on a representation of those aspects of the software that will be visible to the customer/user (e.g., input approaches and output formats).

The quic$ design leads to the construction of a prototype. The prototype is evaluated by the customerOuser and used to refine requirements for the software to be developed. !teration occurs as the prototype is tuned to satisfy the needs of the customer, while at the same time enabling the developer to better understand what needs to be done. The prototype can serve as ,the first system., The one that Broo$s recommends we throw away. But this may be an ideali%ed view. !t is true that both customers and developers li$e the prototyping paradigm. /sers get a feel for the actual system, and developers get to build something immediately.

The other advantages are as follows. One of the key advantages of prototype-modeled software is the time frame of development: instead of concentrating on documentation, more effort is placed in creating the actual software, so the actual software can be released earlier. The work on prototype models can also be spread among several people, since there are practically no sequential stages of work in this model; everyone works on the same thing at the same time, reducing the man-hours needed to create the software. The work will be even faster and more efficient if developers collaborate on the status of specific functions and make the necessary adjustments in time for integration. Another advantage of prototype-modeled software is that it is created using extensive user feedback. For every prototype created, users can give their honest opinion about the software; if something is unfavorable, it can be changed. Slowly, the program is created with the customer in mind. "Over-design" can also be avoided using this model. Over-design happens when software has so many things to offer that it sacrifices the original purpose of the software. Avoiding it goes back to giving only what the customer wants.

Generally, the prototype model has a great advantage over other SDLC models, since it does not rely on what is supposed to happen according to written documentation. Instead, it goes directly to the users and asks them what they really want from the software. Slowly, the product is developed by professionals, catering to the needs of the users.

The Spiral Model

The spiral model, originally proposed by Boehm [BOE88], is an evolutionary software process model that couples the iterative nature of prototyping with the controlled and systematic aspects of the linear sequential model. It provides the potential for rapid development of incremental versions of the software. Using the spiral model, software is developed in a series of incremental releases. During early iterations, the incremental release might be a paper model or prototype. During later iterations, increasingly more complete versions of the engineered system are produced. A spiral model is divided into a number of framework activities, also called task regions. Typically, there are between three and six task regions. A spiral model contains six task regions: customer communication, planning, risk analysis, engineering, construction and release, and customer evaluation. The Waterfall model, also called the linear sequential model or classic life cycle model, is purely pre-planned and strategic in nature. Requirements gathering is the crucial phase; once the requirements have been identified and agreed, the subsequent activities like analysis, design, and implementation follow the requirements specification exactly. If any flaw is uncovered in the specification, it leads to a serious or major defect in the product being built. No subsequent changes can be made once you have moved from one phase to another; just as water falling from a hill to the ground cannot flow back up, the process cannot be reversed. It is the best model if requirements gathering has been done perfectly and the requirements are clearly stated in the initial phase.
Coming to the spiral model: unlike the waterfall model, it is iterative in nature.

This model suits situations where you need to build a novel system or product for which adequate requirements gathering is not possible. The spiral model starts by developing a new or novel concept, and through subsequent refinements the concept can be turned into a product or a system. At each iteration, activities like analysis, design, coding, and testing are carried out. The errors and defects uncovered can be fixed in the next version or iteration. The spiral model also stresses risk analysis, which is given little consideration in the waterfall model. The major drawback of the waterfall model is that the customer must have patience, and both the customer and the software engineer must have a perfect picture of what they are going to build; in real situations this rarely happens, as most customers and engineers are not full experts in a particular domain. Continuously changing requirements, of course, also make you select the spiral model rather than the waterfall model.

Q6. Explain the COCOMO Model & Software Estimation Technique.

The COCOMO Model

In his classic book on software engineering economics, Barry Boehm [BOE81] introduced a hierarchy of software estimation models bearing the name COCOMO, for COnstructive COst MOdel. The original COCOMO model became one of the most widely used and discussed software cost estimation models in the industry. It has evolved into a more comprehensive estimation model, called COCOMO II [BOE96, BOE00]. Like its predecessor, COCOMO II is actually a hierarchy of estimation models that address the following areas:

Application composition model: used during the early stages of software engineering, when prototyping of user interfaces, consideration of software and system interaction, assessment of performance, and evaluation of technology maturity are paramount.

Early design stage model: used once requirements have been stabilized and basic software architecture has been established.

Post-architecture-stage model: used during the construction of the software.

Software Project Estimation

Software cost and effort estimation will never be an exact science. Too many variables (human, technical, environmental, political) can affect the ultimate cost of software and the effort applied to develop it. However,

software project estimation can be transformed from a black art into a series of systematic steps that provide estimates with acceptable risk.

To achieve reliable cost and effort estimates, a number of options arise:
1. Delay estimation until late in the project (obviously, we can achieve 100% accurate estimates after the project is complete!).
2. Base estimates on similar projects that have already been completed.
3. Use relatively simple decomposition techniques to generate project cost and effort estimates.
4. Use one or more empirical models for software cost and effort estimation.

Unfortunately, the first option, however attractive, is not practical. Cost estimates must be provided "up front." However, we should recognize that the longer we wait, the more we know, and the more we know, the less likely we are to make serious errors in our estimates. The second option can work reasonably well if the current project is quite similar to past efforts and other project influences (e.g., the customer, business conditions, the SEE, deadlines) are equivalent. Unfortunately, past experience has not always been a good indicator of future results.

Decomposition Techniques

Software project estimation is a form of problem solving, and in most cases the problem to be solved (i.e., developing a cost and effort estimate for a software project) is too complex to be considered in one piece. For this reason, we decompose the problem, recharacterizing it as a set of smaller (and hopefully more manageable) problems. The decomposition approach was discussed from two different points of view: decomposition of the problem and decomposition of the process. Estimation uses one or both forms of partitioning. But before an estimate can be made, the project planner must understand the scope of the software to be built and generate an estimate of its size.
(i) Software Sizing

The accuracy of a software project estimate is predicated on a number of things: (1) the degree to which the planner has properly estimated the size of the product to be built; (2) the ability to translate the size estimate into human effort, calendar time, and dollars (a function of the availability of reliable software metrics from past projects); (3) the degree to which the project plan reflects the abilities of the software

team; and (4) the stability of product requirements and the environment that supports the software engineering effort.

(ii) Problem-Based Estimation

Lines of code and function points were described as measures from which productivity metrics can be computed. LOC and FP data are used in two ways during software project estimation: (1) as an estimation variable to "size" each element of the software, and (2) as baseline metrics collected from past projects and used in conjunction with estimation variables to develop cost and effort projections. LOC and FP estimation are distinct estimation techniques, yet both have a number of characteristics in common. The project planner begins with a bounded statement of software scope and from this statement attempts to decompose the software into problem functions that can each be estimated individually. LOC or FP (the estimation variable) is then estimated for each function. Alternatively, the planner may choose another component for sizing, such as classes or objects, changes, or business processes affected. Baseline productivity metrics (e.g., LOC/pm or FP/pm) are then applied to the appropriate estimation variable, and cost or effort for the function is derived. Function estimates are combined to produce an overall estimate for the entire project.

Empirical Estimation Models

An estimation model for computer software uses empirically derived formulas to predict effort as a function of LOC or FP. Values for LOC or FP are estimated using the approach described in Sections 5.6.2 and 5.6.3. But instead of using the tables described in those sections, the resultant values for LOC or FP are plugged into the estimation model. The empirical data that support most estimation models are derived from a limited sample of projects. For this reason, no estimation model is appropriate for all classes of software and in all development environments. Therefore, the results obtained from such models must be used judiciously.
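The LOC-based procedure just described (decompose into functions, estimate LOC per function, apply a baseline productivity metric) can be sketched as follows. The function list, LOC figures, productivity baseline, and labor rate are all hypothetical:

```python
# Problem-based estimation sketch: estimated LOC per decomposed function,
# combined with a baseline productivity metric (LOC per person-month) and
# a labor rate. All figures are hypothetical.
estimated_loc = {
    "user interface": 2300,
    "database management": 3100,
    "report generation": 1800,
    "peripheral control": 1400,
}

productivity_loc_pm = 620  # baseline from past projects, LOC per person-month
cost_per_pm = 8000         # fully burdened labor rate, $ per person-month

total_loc = sum(estimated_loc.values())
effort_pm = total_loc / productivity_loc_pm
cost = effort_pm * cost_per_pm

print(f"size   = {total_loc} LOC")
print(f"effort = {effort_pm:.1f} person-months")
print(f"cost   = ${cost:,.0f}")
```

The same shape works for FP-based estimation: replace the per-function LOC estimates with function-point counts and use an FP/pm baseline.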
(i) The Structure of Estimation Models

A typical estimation model is derived using regression analysis on data collected from past software projects. The overall structure of such models takes the form [MAT94]

E = A + B x (ev)^C    (5-2)

where A, B, and C are empirically derived constants, E is effort in person-months, and ev is the estimation variable (either LOC or FP). In addition to the relationship noted in Equation (5-2), the majority of estimation models have some form of project adjustment component that

enables E to be adjusted by other project characteristics (e.g., problem complexity, staff experience, development environment). Among the many LOC-oriented estimation models proposed in the literature are:

E = 5.2 x (KLOC)^0.91    Walston-Felix model
E = 5.5 + 0.73 x (KLOC)^1.16    Bailey-Basili model
E = 3.2 x (KLOC)^1.05    Boehm simple model
E = 5.288 x (KLOC)^1.047    Doty model for KLOC > 9

FP-oriented models have also been proposed. These include:

E = -13.39 + 0.0545 FP    Albrecht and Gaffney model
E = 60.62 x 7.728 x 10^-8 FP^3    Kemerer model
E = 585.7 + 15.12 FP    Matson, Barnett, and Mellichamp model

A quick examination of these models indicates that each will yield a different result for the same values of LOC or FP. The implication is clear: estimation models must be calibrated for local needs!
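The point that each model yields a different result for the same size is easy to verify by evaluating the LOC-oriented models above at a single KLOC value (the 33.2 KLOC size below is just an example input):

```python
# Evaluate the LOC-oriented estimation models listed above at the same size.
# E is effort in person-months; kloc is thousands of lines of code.
models = {
    "Walston-Felix":   lambda kloc: 5.2 * kloc ** 0.91,
    "Bailey-Basili":   lambda kloc: 5.5 + 0.73 * kloc ** 1.16,
    "Boehm simple":    lambda kloc: 3.2 * kloc ** 1.05,
    "Doty (KLOC > 9)": lambda kloc: 5.288 * kloc ** 1.047,
}

kloc = 33.2  # example size; any single value shows the divergence
for name, model in models.items():
    print(f"{name:15s} E = {model(kloc):7.1f} person-months")
```

The spread in the printed efforts is exactly why the text insists that a model be calibrated against local historical data before its estimates are trusted.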

MI0033 Software Engineering - 4 Credits Assignment - Set- 2 (60 Marks)


Q1. Write a note on myths of Software.

Software Myths

Today, most knowledgeable professionals recognize myths for what they are: misleading attitudes that have caused serious problems for managers and technical people alike. However, old attitudes and habits are difficult to modify, and remnants of software myths are still believed.

Management myths: Managers with software responsibility, like managers in most disciplines, are often under pressure to maintain budgets, keep schedules from slipping, and improve quality. Like a drowning person who grasps at a straw, a software manager often grasps at belief in a software myth if that belief will lessen the pressure (even temporarily).

Myth: We already have a book that's full of standards and procedures for building software; won't that provide my people with everything they need to know?
Reality: The book of standards may very well exist, but is it used? Are software practitioners aware of its existence? Does it reflect modern software engineering practice? Is it complete? Is it streamlined to improve time to delivery while still maintaining a focus on quality? In many cases, the answer to all of these questions is "no."

Myth: My people have state-of-the-art software development tools; after all, we buy them the newest computers.
Reality: It takes much more than the latest model mainframe, workstation, or PC to do high-quality software development. Computer-aided software engineering (CASE) tools are more important than hardware for achieving good quality and productivity, yet the majority of software developers still do not use them effectively.

Myth: If we get behind schedule, we can add more programmers and catch up (sometimes called the Mongolian horde concept).
Reality: Software development is not a mechanistic process like manufacturing. In the words of Brooks [BRO75]: "adding people to a late software project makes it later." At first, this statement may seem counterintuitive.
However, as new people are added, people who were working must spend time educating the newcomers, thereby reducing the amount of time spent on productive development effort. People can be added, but only in a planned and well-coordinated manner.

Myth: If I decide to outsource the software project to a third party, I can just relax and let that firm build it.

Reality: If an organization does not understand how to manage and control software projects internally, it will invariably struggle when it outsources software projects.

Customer Myths: A customer who requests computer software may be a person at the next desk, a technical group down the hall, the marketing/sales department, or an outside company that has requested software under contract. In many cases, the customer believes myths about software because software managers and practitioners do little to correct misinformation. Myths lead to false expectations (by the customer) and, ultimately, dissatisfaction with the developer.

Myth: A general statement of objectives is sufficient to begin writing programs; we can fill in the details later.

Reality: A poor up-front definition is the major cause of failed software efforts. A formal and detailed description of the information domain, function, behavior, performance, interfaces, design constraints, and validation criteria is essential. These characteristics can be determined only after thorough communication between customer and developer.

Myth: Project requirements continually change, but change can be easily accommodated because software is flexible.

Reality: It is true that software requirements change, but the impact of change varies with the time at which it is introduced. Figure 1.3 illustrates the impact of change. If serious attention is given to up-front definition, early requests for change can be accommodated easily. The customer can review requirements and recommend modifications with relatively little impact on cost. When changes are requested during software design, the cost impact grows rapidly. Resources have been committed and a design framework has been established. Change can cause upheaval that requires additional resources and major design modification, that is, additional cost.
Changes in function, performance, interface, or other characteristics during implementation (code and test) have a severe impact on cost. Change, when requested after software is in production, can be over an order of magnitude more expensive than the same change requested earlier.

Practitioner's Myths: Myths that are still believed by software practitioners have been fostered by 50 years of programming culture. During the early days of software, programming was viewed as an art form. Old ways and attitudes die hard.

Myth: Once we write the program and get it to work, our job is done.

Reality: Someone once said that "the sooner you begin 'writing code', the longer it'll take you to get done." Industry data ([LIE80], [JON91], [PUT97]) indicates that between 60 and 80 percent of all effort expended on software will be expended after it is delivered to the customer for the first time.

Myth: Until I get the program "running," I have no way of assessing its quality.

Reality: One of the most effective software quality assurance mechanisms can be applied from the inception of a project: the formal technical review. Software reviews (described in Chapter 8) are a "quality filter" that have been found to be more effective than testing for finding certain classes of software defects.

Myth: The only deliverable work product for a successful project is the working program.

Reality: A working program is only one part of a software configuration that includes many elements. Documentation provides a foundation for successful engineering and, more importantly, guidance for software support.

Myth: Software engineering will make us create voluminous and unnecessary documentation and will invariably slow us down.

Reality: Software engineering is not about creating documents. It is about creating quality. Better quality leads to reduced rework, and reduced rework results in faster delivery times.

Many software professionals recognize the fallacy of the myths just described. Regrettably, habitual attitudes and methods foster poor management and technical practices, even when reality dictates a better approach. Recognition of software realities is the first step towards formulation of practical solutions for software engineering.

Q2. Explain Version Control & Change Control.

Any software development project that involves a team of people, as most of them do, requires some form of version control. Also known as revision control, version control allows the management of multiple revisions of the same project.
Taking even a small software development project as an example, a number of graphic designers and coders could be working on a project simultaneously. Every time they complete an aspect of the project, they may wish to update it. Version control ensures that each revision of the project is stored and that all changes are associated with the person who made them.

There are a number of software tools available for version control that allow software development teams to manage their projects. Basic version control elements can also be found in applications like Microsoft Word, and are also common in web applications like wikis. However, they are at their most useful in software development environments, whether the project team is based in one location or spread all over the globe.

The real beauty of version control is that developers can return to any earlier state of the project that was 'checked in'. So, for example, if some code is 'checked in' that corrupts the whole project, it is simple to return to the previous state of the software before the damage was done. Version control software also makes everyone accountable for their own work, as it tracks what was 'checked in' by whom. Version control is also invaluable when it comes to bug fixes and dealing with known issues. As software is developed, new bugs will emerge, and version control can be valuable in determining when and where bugs appeared and what caused them.

Version control uses a centralized model where different stages of a project are stored on a shared server. When a file, or an entire project, is checked in, it is updated on this server and given a version code or number. This is normally simple enough, but things can get tricky if two developers make changes to and check in the same file at once. Version control deals with this in one of two ways. It can use file locking, where only one developer can access certain files at any one time. Alternatively, it can use file merging, which produces a merged file with both developers' changes included; this may require further changes later to merge both sets of changes successfully. However, if possible, it is best if only one developer works on a certain file at a time.
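The check-in, accountability, and roll-back behaviour described above can be sketched as a toy in-memory version store. This is an illustration only: the class and method names are invented, and real tools such as VSS or PVCS keep their revision history on a shared server rather than in memory.

```python
class VersionStore:
    """Toy centralized version store: each check-in records who changed what."""

    def __init__(self):
        self.revisions = []  # list of (version_number, author, content)

    def check_in(self, author, content):
        # Every check-in gets the next version number and is tied to its author.
        version = len(self.revisions) + 1
        self.revisions.append((version, author, content))
        return version

    def head(self):
        # The latest checked-in revision.
        return self.revisions[-1]

    def revert_to(self, version):
        # Return to an earlier state: discard every revision after `version`.
        self.revisions = self.revisions[:version]
        return self.head()


store = VersionStore()
store.check_in("alice", "v1 of module")
store.check_in("bob", "v2 - corrupts the build")
# Bob's check-in broke the project, so roll back to Alice's revision.
version, author, content = store.revert_to(1)
print(version, author, content)
```

The `revisions` list also answers the accountability question ("what was checked in by whom") by simple inspection.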
Example software tools for version control are VSS (Visual SourceSafe) and PVCS.

Change Control

The reality of change control in a modern software engineering context has been summed up beautifully by James Bach [BAC98]: Change control is vital. But the forces that make it necessary also make it annoying. We worry about change because a tiny perturbation in the code can create a big failure in the product. But it can also fix a big failure or enable wonderful new capabilities. We worry about change because a single rogue developer could sink the project; yet brilliant ideas originate in the minds of those rogues, and a burdensome change control process could effectively discourage them from doing creative work.

In a software organization, the change control document will contain the following items:

Roles and Responsibilities

CCB Chair: Chairperson of the change control board; has final decision-making authority if the CCB does not reach agreement; asks someone to be the Evaluator for each change request and asks someone to be the Modifier for each approved change request.

Change Control Board (CCB): The group that decides to approve or reject proposed changes for a specific project.

Evaluator: The person whom the CCB Chair asks to analyze the impact of a proposed change.

Modifier: The person who is assigned responsibility for making changes in a work product in response to an approved change request; updates the status of the request over time.

Originator: The person who submits a new change request.

Project Manager: The person who is responsible for overall planning and tracking of the development project activities.

Verifier: The person who determines whether a change was made correctly.

Change Request Status

Status Changes: A requested change will pass through several possible statuses during its life. These statuses, and the criteria for moving from one status to another, are depicted in the state transition diagram in Figure 1 and described in the Possible Statuses table.

Notifications: Any time an issue status is changed, the change control tool will send an e-mail notification automatically to the issue Originator, the issue Modifier, and/or the CCB Chair, as specified below.

Possible Statuses:

Approved: The CCB decided to implement the request and allocated it to a specific future build or product release. The CCB Chair has assigned a Modifier.

Canceled: The Originator or someone else decided to cancel an approved change.

Change Made: The Modifier has completed implementing the requested change.

Closed: The change made has been verified (if required), the modified work products have been installed, and the request is now completed.

Evaluated: The Evaluator has performed an impact analysis of the request.

Rejected: The CCB decided not to implement the requested change.

Submitted: The Originator has submitted a new issue to the change control system.

Verified: The Verifier has confirmed that the modifications in affected work products were made correctly.
State transitions (Figure 1):

- The Originator submitted an issue: the request enters Submitted.
- Submitted to Evaluated: the Evaluator performed an impact analysis.
- Evaluated to Rejected: the CCB decided not to make the change.
- Evaluated to Approved: the CCB decided to make the change.
- Approved to Change Made: the Modifier has made the change and requested verification.
- Change Made to Verified: the Verifier has confirmed the change (if verification failed, the request returns to Approved).
- Change Made to Closed: no verification was required and the Modifier has installed the modified work products.
- Verified to Closed: the Modifier has installed the modified work products.
- Approved, Change Made, or Verified to Canceled: the change was canceled and the modifications were backed out.
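The allowed status transitions above can be encoded as a small state machine. The sketch below is illustrative only; the class and dictionary are invented for this example and are not part of any particular change-control tool.

```python
# Allowed transitions for a change request, following the state diagram above.
TRANSITIONS = {
    "Submitted":   {"Evaluated"},
    "Evaluated":   {"Approved", "Rejected"},
    "Approved":    {"Change Made", "Canceled"},
    "Change Made": {"Verified", "Closed", "Canceled", "Approved"},  # back to Approved = verification failed
    "Verified":    {"Closed", "Canceled"},
    "Rejected":    set(),   # terminal
    "Canceled":    set(),   # terminal
    "Closed":      set(),   # terminal
}

class ChangeRequest:
    def __init__(self):
        self.status = "Submitted"  # the Originator submitted an issue

    def move_to(self, new_status):
        # Reject any status change the diagram does not allow.
        if new_status not in TRANSITIONS[self.status]:
            raise ValueError(f"illegal transition {self.status} -> {new_status}")
        self.status = new_status


cr = ChangeRequest()
cr.move_to("Evaluated")    # Evaluator performed impact analysis
cr.move_to("Approved")     # CCB decided to make the change
cr.move_to("Change Made")  # Modifier made the change
cr.move_to("Verified")     # Verifier confirmed the change
cr.move_to("Closed")       # modified work products installed
print(cr.status)
```

A real tool would additionally record who triggered each transition and send the e-mail notifications described above.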

Q3. Discuss the SCM Process.

The SCM Process

Software configuration management is an important element of software quality assurance. Its primary responsibility is the control of change. However, SCM is also responsible for the identification of individual SCIs and various versions of the software, the auditing of the software configuration to ensure that it has been properly developed, and the reporting of all changes applied to the configuration.

Any discussion of SCM introduces a set of complex questions: How does an organization identify and manage the many existing versions of a program (and its documentation) in a manner that will enable change to be accommodated efficiently? How does an organization control changes before and after software is released to a customer? Who has responsibility for approving and ranking changes? How can we ensure that changes have been made properly? What mechanism is used to apprise others of changes that are made? These questions lead us to the definition of five SCM tasks: identification, version control, change control, configuration auditing, and reporting.

Identification of Objects in the Software Configuration

To control and manage software configuration items, each must be separately named and then organized using an object-oriented approach. Two types of objects can be identified [CHO89]: basic objects and aggregate objects. A basic object is a "unit of text" that has been created by a software engineer during analysis, design, code, or test. For example, a basic object might be a section of a requirements specification, a source listing for a component, or a suite of test cases that are used to exercise the code. An aggregate object is a collection of basic objects and other aggregate objects. Conceptually, it can be viewed as a named (identified) list of pointers that specify basic objects such as a data model and a component.

Version Control

Version control combines procedures and tools to manage different versions of configuration objects that are created during the software process. Clemm [CLE89] describes version control in the context of SCM: Configuration management allows a user to specify alternative configurations of the software system through the selection of appropriate versions. This is supported by associating attributes with each software version, and then allowing a configuration to be specified [and constructed] by describing the set of desired attributes.

Change Control

The reality of change control in a modern software engineering context has been summed up beautifully by James Bach [BAC98]: Change control is vital. But the forces that make it necessary also make it annoying. We worry about change because a tiny perturbation in the code can create a big failure in the product. But it can also fix a big failure or enable wonderful new capabilities. We worry about change because a single rogue developer could sink the project; yet brilliant ideas originate in the minds of those rogues, and a burdensome change control process could effectively discourage them from doing creative work.

Configuration Audit

Identification, version control, and change control help the software developer to maintain order in what would otherwise be a chaotic and fluid situation. However, even the most successful control mechanisms track a change only until an ECO (engineering change order) is generated. How can we ensure that the change has been properly implemented? The answer is twofold: (1) formal technical reviews and (2) the software configuration audit. The reviewers assess the SCI to determine consistency with other SCIs, omissions, or potential side effects. A formal technical review should be conducted for all but the most trivial changes.

SCM Standards

Over the past two decades a number of software configuration management standards have been proposed. Many early SCM standards, such as MIL-STD-483, DOD-STD-480A, and MIL-STD-1521A, focused on software developed for military applications. However, more recent ANSI/IEEE standards, such as ANSI/IEEE Stds. No. 828-1983, No. 1042-1987, and Std. No. 1028-1988 [IEE94], are applicable for nonmilitary software and are recommended for both large and small software engineering organizations.

Software configuration management is an umbrella activity that is applied throughout the software process. SCM identifies, controls, audits, and reports modifications that invariably occur while software is being developed and after it has been released to a customer. All information produced as part of software engineering becomes part of a software configuration. The configuration is organized in a manner that enables orderly control of change.

The software configuration is composed of a set of interrelated objects, also called software configuration items, which are produced as a result of some software engineering activity. In addition to documents, programs, and data, the development environment that is used to create software can also be placed under configuration control. Once a configuration object has been developed and reviewed, it becomes a baseline. Changes to a baselined object result in the creation of a new version of that object. The evolution of a program can be tracked by examining the revision history of all configuration objects. Basic and composite objects form an object pool from which variants and versions are created. Version control is the set of procedures and tools for managing the use of these objects.

Change control is a procedural activity that ensures quality and consistency as changes are made to a configuration object. The change control process begins with a change request, leads to a decision to make or reject the request for change, and culminates with a controlled update of the SCI that is to be changed. Develop a "need to know" list for every SCI and keep it up to date; when a change is made, be sure that everyone on the list is informed.

Q4. Explain: i. Software doesn't Wear Out. ii. Software is engineered & not manufactured.

Basically, "software doesn't wear out" and "software is engineered, not manufactured" are characteristics of software. Software is a logical rather than a physical system element. Therefore, software has characteristics that are considerably different from those of hardware.

The failure rate is a function of time for hardware. The relationship, often called the "bathtub curve", indicates that hardware exhibits relatively high failure rates early in its life (these failures are often attributable to design or manufacturing defects); defects are corrected and the failure rate drops to a steady-state level (ideally, quite low) for some period of time. As time passes, however, the failure rate rises again as hardware components suffer from the cumulative effects of dust, vibration, abuse,

temperature extremes, and many other environmental maladies. Stated simply, the hardware begins to wear out. Software, being a logical element, is not susceptible to these environmental maladies, so it does not wear out in the same sense.

Although the industry is moving toward component-based assembly, most software continues to be custom built. Consider the manner in which the control hardware for a computer-based product is designed and built. The design engineer draws a simple schematic of the digital circuitry, does some fundamental analysis to assure that proper function will be achieved, and then goes to the shelf where catalogs of digital components exist. Each integrated circuit (called an IC or a chip) has a part number, a defined and validated function, a well-defined interface, and a standard set of integration guidelines. After each component is selected, it can be ordered off the shelf. Because no comparable catalog of standard parts exists for most software, each product is engineered as a custom artifact rather than manufactured from existing components.

Q5. Explain the Different Types of Software Measurement Techniques.

Software Measurement

Measurements in the physical world can be categorized in two ways: direct measures (e.g., the length of a bolt) and indirect measures (e.g., the "quality" of bolts produced, measured by counting rejects). Software metrics can be categorized similarly. Direct measures of the software engineering process include cost and effort applied. Direct measures of the product include lines of code (LOC) produced, execution speed, memory size, and defects reported over some set period of time. Indirect measures of the product include functionality, quality, complexity, efficiency, reliability, maintainability, and many other "-abilities".

(1) Size-oriented metrics: Size-oriented software metrics are derived by normalizing quality and/or productivity measures by considering the size of the software that has been produced. If a software organization maintains simple records, a table of size-oriented measures, such as the one shown in Figure 4.4, can be created. The table lists each software development project that has been completed over the past few years and corresponding measures for that project.
For project alpha, 12,100 lines of code were developed with 24 person-months of effort at a cost of $168,000. It should be noted that the effort and cost recorded in the table represent all software engineering activities (analysis, design, code, and test), not just coding. Further information for project alpha indicates that 365 pages of documentation were developed, 134 errors were recorded before the software was released, and 29 defects were encountered after release to the customer within the first year of operation. Three people worked on the development of software for project alpha.
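From the project alpha figures above, the usual size-oriented metrics (errors per KLOC, cost per LOC, productivity, and so on) fall out of simple division; a quick sketch:

```python
# Project alpha figures quoted above.
loc = 12_100            # lines of code
effort_pm = 24          # person-months
cost = 168_000          # dollars
doc_pages = 365
errors_before_release = 134
defects_after_release = 29

kloc = loc / 1000       # size-oriented metrics are normalized per thousand LOC

metrics = {
    "errors per KLOC": errors_before_release / kloc,
    "defects per KLOC": defects_after_release / kloc,
    "cost per LOC": cost / loc,
    "pages of documentation per KLOC": doc_pages / kloc,
    "LOC per person-month": loc / effort_pm,
}

for name, value in metrics.items():
    print(f"{name}: {value:.2f}")
```

With these numbers, project alpha yields roughly 11.07 errors per KLOC, $13.88 per line of code, and about 504 LOC per person-month, which can then be compared across projects in the table.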

(2) Function-oriented metrics: Function-oriented software metrics use a measure of the functionality delivered by the application as a normalization value. Since functionality cannot be measured directly, it must be derived indirectly using other direct measures. Function-oriented metrics were first proposed by Albrecht [ALB79], who suggested a measure called the function point. Function points are derived using an empirical relationship based on countable (direct) measures of the software's information domain and assessments of software complexity.

(3) Extended Function Point Metrics: The function point measure was originally designed to be applied to business information systems applications. To accommodate these applications, the data dimension (the information domain values discussed previously) was emphasized to the exclusion of the functional and behavioral (control) dimensions. For this reason, the function point measure was inadequate for many engineering and embedded systems (which emphasize function and control). A number of extensions to the basic function point measure have been proposed to remedy this situation.

Q6. Write a Note on the Spiral Model.

The spiral model is a software development process combining elements of both design and prototyping in stages, in an effort to combine the advantages of top-down and bottom-up concepts. Also known as the spiral lifecycle model (or spiral development), it is a systems development method (SDM) used in information technology (IT). This model of development combines the features of the prototyping model and the waterfall model. The spiral model is intended for large, expensive, and complicated projects.

This should not be confused with the Helical model of modern systems architecture, which uses a dynamic programming approach (mathematical programming, not software programming) to optimize the system's architecture before coders make design decisions that would cause problems.

History

The spiral model was defined by Barry Boehm in his 1986 article "A Spiral Model of Software Development and Enhancement". This model was not the first to discuss iterative development. As originally envisioned, the iterations were typically 6 months to 2 years long. Each phase starts with a design goal and ends with the client (who may be internal) reviewing the progress thus far. Analysis and engineering efforts are applied at each phase of the project, with an eye toward the end goal of the project.

The steps in the spiral model iteration can be generalized as follows:

1. The system requirements are defined in as much detail as possible. This usually involves interviewing a number of users representing all the external or internal users and other aspects of the existing system.

2. A preliminary design is created for the new system. This phase is the most important part of the spiral model. In this phase all possible (and available) alternatives which can help in developing a cost-effective project are analyzed, and strategies to use them are decided. This phase has been added specially in order to identify and resolve all the possible risks in the project development. If risks indicate any kind of uncertainty in requirements, prototyping may be used to proceed with the available data and find a possible solution in order to deal with the potential changes in the requirements.

3. A first prototype of the new system is constructed from the preliminary design. This is usually a scaled-down system, and represents an approximation of the characteristics of the final product.

4. A second prototype is evolved by a fourfold procedure:
   1. evaluating the first prototype in terms of its strengths, weaknesses, and risks;
   2. defining the requirements of the second prototype;
   3. planning and designing the second prototype;

   4. constructing and testing the second prototype.

5. At the customer's option, the entire project can be aborted if the risk is deemed too great. Risk factors might involve development cost overruns, operating-cost miscalculation, or any other factor that could, in the customer's judgment, result in a less-than-satisfactory final product.

6. The existing prototype is evaluated in the same manner as was the previous prototype, and, if necessary, another prototype is developed from it according to the fourfold procedure outlined above.

7. The preceding steps are iterated until the customer is satisfied that the refined prototype represents the final product desired.

8. The final system is constructed, based on the refined prototype.

9. The final system is thoroughly evaluated and tested. Routine maintenance is carried out on a continuing basis to prevent large-scale failures and to minimize downtime.

The spiral model is mostly used in large projects. For smaller projects, the concept of agile software development is becoming a viable alternative. The US military adopted the spiral model for its Future Combat Systems (FCS) program. The FCS project was canceled after six years (2003 to 2009); it had a two-year iteration (spiral) and should have resulted in three consecutive prototypes (one prototype per spiral, every two years). It was canceled in May 2009. The spiral model thus may suit small (up to $3 million) software applications, but not a complicated ($3 billion) distributed, interoperable system of systems.

It is also reasonable to use the spiral model in projects where business goals are unstable but the architecture must be realized well enough to provide high loading and stress ability. For example, Spiral Architecture Driven Development is a spiral-based Software Development Life Cycle (SDLC) which shows one possible way to reduce the risk of an ineffective architecture with the help of a spiral model in conjunction with best practices from other models.
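The iterative shape of the steps above (define requirements, assess risk, build a prototype, let the customer review, repeat until satisfied or aborted) can be caricatured as a toy simulation. Every name and threshold in this sketch is invented purely for illustration; it models the control flow of the spiral, not any real planning method.

```python
def spiral_model(initial_requirements, risk_per_round, rounds_needed):
    """Toy simulation where each loop iteration is one spiral.

    `risk_per_round` is the assessed risk for each iteration (0..1);
    the customer aborts if risk exceeds 0.8 (step 5), and is satisfied
    after `rounds_needed` prototypes (step 7).
    """
    requirements = initial_requirements
    prototype = None
    for iteration, risk in enumerate(risk_per_round, start=1):
        # Steps 1-2: define requirements, design, and analyze risk.
        if risk > 0.8:
            return ("aborted", iteration)  # step 5: risk deemed too great
        # Steps 3-4, 6: build or evolve the next prototype.
        prototype = f"prototype {iteration} of {requirements}"
        # Step 7: customer review; stop when satisfied.
        if iteration >= rounds_needed:
            # Steps 8-9: construct and evaluate the final system.
            return ("final system built from " + prototype, iteration)
    return ("still iterating", len(risk_per_round))

print(spiral_model("billing system", [0.3, 0.4, 0.2], rounds_needed=2))
```

Running the example accepts the second prototype and builds the final system from it; raising any round's risk above 0.8 aborts the project at that spiral instead.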
