Software Engineering
Software:-
A set of programs used for a particular purpose in a computer system is known as software.
Computer software consists of sequences of instructions (called programs) and data that the
computer manipulates to perform various data processing tasks. It is therefore not touchable
and not tangible.
Software Engineering:-
Software engineering discusses systematic and cost-effective software
development approaches. Alternatively, software engineering can be viewed as the engineering
approach to developing software. A team of programmers or an individual programmer
develops software using a programming language for a specific task.
Requirements gathering:
This activity typically involves interviewing the end-users and customers to
collect all possible information regarding the system. If the project involves
automating some existing procedures, then the task of the system analyst
becomes a little easier, as they can immediately obtain the input and output
data formats and the details of the operational procedures.
Anomaly :
An anomaly is an ambiguity in a requirement. When a requirement is
anomalous, several interpretations of the requirement are possible.
Inconsistency :
The requirements become inconsistent if any one of the requirements contradicts another.
Incompleteness :
An incomplete requirement is one where some of the requirements have been overlooked.
When the analyst detects any inconsistencies, anomalies or incompleteness in the gathered
requirements, he resolves them by carrying out further discussions with the end-users and
the customers.
The SRS document usually contains all the user requirements in an informal form. Different
people need the SRS document for very different purposes.
Some of the important categories of users of the SRS document and their needs
are as follows:
@ Users, customers and marketing personnel :
The goal of this set of audience is to ensure that the system as described
in the SRS document will meet their needs.
@ Software developers :
The software developers refer to the SRS document to make sure that
they develop exactly what is required by the customer.
@ Test engineers :
Their goal is to ensure that the requirements are understandable from a
functionality point of view, so that they can test the software and validate its working.
@ Maintenance engineers :
The SRS document helps the maintenance engineers to understand the
functionality of the system, which helps them to understand the design and code. It would
also enable them to determine the modifications to the system needed for a specific
purpose.
The SRS document can even be used as a legal document to settle disputes between the
customers and the developers.
Different Software Life Cycle Models
(fig : Waterfall Model: Feasibility study -> Requirements analysis and specification -> Design -> Coding and module testing -> Delivery -> Maintenance)
Now, the different phases of this model are described below:
Feasibility study :
The main aim of the feasibility study activity is to determine whether it would be
financially and technically feasible to develop the product. It involves the analysis of the
problem and collection of all relevant information related to the product, such as the input
data to the system. Therefore, the feasibility study is considered to be a very important stage.
[* The feasibility study defines the precise costs and benefits of a software system.]
Design :
Once the requirements for a system have been documented, software design maps
them onto a software system that will meet them.
This phase is sometimes split into two sub-phases:
a) Architectural / High level design :
It deals with the overall module structure and organization.
(fig : Spiral Model quadrants: 1. Identify objectives and alternative solutions; 2. Evaluate alternatives and select the best solution; 3. Develop and verify the next level of the product; 4. Review and plan for the next phase)
The diagrammatic representation of this model appears like a spiral with many loops.
The exact number of loops in the spiral is not fixed.
Each loop of the spiral represents a phase of the software process. For example, the
innermost loop might be concerned with the feasibility study, the next loop with requirements
specification, and so on.
The first quadrant identifies the objectives of the phase and the alternative solutions
possible for the phase under consideration.
During the second quadrant, the alternative solutions are evaluated to select the
best solution possible for the phase under consideration.
During the third quadrant, the next level of the product is developed and verified.
With each iteration around the spiral, a progressively more complete version of the
software gets built. Usually, after several iterations along the spiral, all risks are resolved
and the software is ready for development.
(fig : Prototyping Model: Requirements gathering -> Quick design -> Build prototype -> Customer evaluation of prototype -> Refine requirements incorporating customer suggestions -> Design -> Implement -> Test -> Maintain)
The actual system is developed using the iterative waterfall approach. However, in
this model, the requirements analysis and specification phase becomes redundant, as the
working prototype approved by the customer serves as an animated requirements
specification.
The code for the prototype is usually thrown away. However, the experience
gathered from developing the prototype helps a great deal in developing the actual system.
(fig : Iterative Waterfall Model: Feasibility study -> Requirements analysis and specification -> Design -> Coding and unit testing -> Integration and system testing -> Maintenance, with feedback paths to earlier phases)
The feedback paths allow for the correction of errors committed during a phase that are
detected in later phases.
(5) Evolutionary Model :
This life cycle model is also referred to as the successive versions model and
sometimes as the incremental model. In this life cycle model, the software is first broken
down into several modules (or functional units) which can be incrementally constructed and
delivered. The development team first develops the core modules of the system. This initial
product skeleton is refined into increasing levels of capability by adding new functionalities
in successive versions.
(fig : Evolutionary Model: Rough requirements specification -> development of successive versions -> Maintenance)
In this model, the user gets a chance to experiment with a partially developed
software much before the complete version of the system is released. Therefore, the
evolutionary model helps to accurately elicit user requirements, and the chances of
requirement changes after the delivery of the complete software are minimized.
The core modules get tested thoroughly, thereby reducing the chances of errors in the
core modules of the final product.
The main disadvantage of the successive versions model is that for most practical
problems it is difficult to divide the problem into several functional units which can be
incrementally implemented and delivered.
Therefore, the evolutionary model is normally useful only for very large products,
where it is easier to find modules for incremental implementation.
In other words, a life cycle model maps the different activities performed on a
software product from inception to retirement. Different life cycle models may map the
basic development activities to phases in different ways.
Spiral Model :
Advantages :
1) It is a very flexible model.
2) Estimates (i.e. budget, schedule, etc.) get more realistic as work progresses, because
important issues are discovered earlier.
3) Good for large and mission-critical projects.
4) Software is produced early in the software life cycle.
Disadvantages :
1) Doesn't work well for smaller projects.
2) The cost of the risk analysis may be higher than the cost of building the system.
3) It needs to be highly customized for every project.
4) Risk analysis requires highly specific expertise.
Iterative Waterfall Model :-
Advantages :
1) Produces working software early during the life cycle.
2) More feasible, as scope and requirement changes can be implemented at low cost.
3) Testing and debugging are easier, as the iterations are small.
4) Low risk factors, as risks can be identified and resolved during each iteration.
Disadvantages :
1) This model has phases that are very rigid and do not overlap.
2) Not all the requirements are gathered before starting the development.
Prototype Model :
Advantages :
1) Benefits from user input.
2) Errors or risks can be detected at a much earlier stage .
Disadvantages :
1) Increases the complexity of the overall system.
2) Involves an exploratory methodology and therefore involves higher risk.
3) Involves implementing and then repairing the way a system is built, so errors are an
inherent part of the development process.
Evolutionary Model :-
Advantages :
1) Risk analysis is better.
2) It supports changing requirements
3) Initial operating time is less.
4) Better suited for large and mission critical projects.
5) During life cycle software is produced early which facilitates customer evaluation
and feedback.
Disadvantages :
1) Not suitable for smaller projects.
2) Management complexity is more.
3) The end of the project may not be known, which is a risk.
4) Can be costly to use.
5) Highly skilled resources are required for risk analysis.
6) The project's progress is highly dependent upon the risk analysis phase.
Comparison between different life cycle models :-
2. Interviewing :
This is the most widely used technique, and it requires the most skill and sensitivity.
It is a structured meeting between the analyst and the staff, discussing one or more areas of
the staff's work. It can use a fixed set of questions or extempore questions, with closed and
open probes.
Advantages :
1. Produces high quality information.
2. Provides greater depth of understanding of a person's work and expectations.
Disadvantages :
1. Time consuming process.
2. Interviews can provide conflicting information which becomes difficult to
resolve later.
3. Observation :
This involves watching people in their normal workflow carrying out their operations.
The analyst watches and notes the type of information the worker is using and processing
in the existing system. It can be open ended or close ended.
Advantages :
1. Provides first-hand experience.
2. Real time data collection
Disadvantages :
1. Most people don't like being observed and many behave differently when watched.
2. Requires training to carry out effectively.
3. Logistics can be difficult.
4. Document sampling :
This is done in two ways. First, collecting copies of completed documents during
the interviews and observations; second, statistical analysis of the documents to find out
patterns of data.
Advantages :
1. Used for quantitative data.
2. Used to find error rates in paper documents.
Disadvantages :
1. Existing documents don't show what changes will be made in the future.
5. Questionnaires :
An effective fact-finding instrument/technique. It has a series of questions to be
answered, which may be multiple choice or yes/no questions, covering questions ranging
from coding to feedback.
Advantages :
1. Economical way of gathering data.
2. If well defined, the results can be effectively analyzed.
Disadvantages :
Software Metrics
1. Lines of code (LOC) :
This metric is also called a size-oriented metric. LOC is the simplest among
all metrics available to estimate project size, so it is very popular. Using this
metric, the product size is estimated by counting the number of lines of source
instructions; the lines used only for commenting the code and the header lines
are ignored in the developed program.
In order to estimate the LOC count at the beginning of a project,
project managers usually divide the problem into modules, and
each module into sub-modules and so on, until the sizes of the different leaf-
level modules can be approximately predicted.
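The counting rule described above can be sketched in a few lines. This is only an illustration: the `#` comment marker and the sample program are assumptions, and a real LOC counter would also handle block comments and lines mixing code with comments.

```python
# Minimal LOC counter sketch: blank lines, comment-only lines and header
# comments are ignored, as the text describes. Python-style '#' comments
# are an assumption for illustration.

def count_loc(source_lines):
    """Count lines of code, skipping blanks and comment-only lines."""
    loc = 0
    for line in source_lines:
        stripped = line.strip()
        if not stripped:                  # blank line: ignored
            continue
        if stripped.startswith("#"):      # comment-only / header line: ignored
            continue
        loc += 1
    return loc

program = [
    "# module: payroll (header comment, ignored)",
    "",
    "def net_pay(gross, tax_rate):",
    "    # deduct tax (comment-only line, ignored)",
    "    return gross * (1 - tax_rate)",
]
print(count_loc(program))  # -> 2
```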
If a software organization maintains simple records, a table of size-
oriented measures, such as the one shown in the figure, can be created. The table
lists a set of simple size-oriented metrics that can be developed for each project:
Name of project
Errors per KLOC (1000 lines of code)
Defects per KLOC
$ per LOC
Effort per person-month
Pages of documentation per KLOC, etc.
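Given such project records, the size-oriented metrics in the table are simple ratios. A sketch with made-up figures (every number below is hypothetical):

```python
# Hypothetical project record; all numbers are made up for illustration.
loc = 12_000        # delivered lines of code
errors = 134        # errors found before delivery
defects = 29        # defects reported after delivery
cost = 168_000      # total cost, in dollars
doc_pages = 365     # pages of documentation

kloc = loc / 1000
print(round(errors / kloc, 2))     # errors per KLOC
print(round(defects / kloc, 2))    # defects per KLOC
print(round(cost / loc, 2))        # $ per LOC
print(round(doc_pages / kloc, 2))  # pages of documentation per KLOC
```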
Disadvantages :
However, LOC as a measure of problem size has several shortcomings:
1. LOC gives a numerical value of problem size that can vary widely with
individual coding style, as different programmers lay out their code in
different ways.
2. A good problem size measure should consider the overall complexity
of the problem and the effort needed to solve it; LOC, however,
focuses on the coding activity alone.
3. The LOC metric measures the lexical complexity of a program and does not
address the more important but subtle issues of logical or structural
complexity.
4. LOC correlates poorly with the quality and efficiency of the
code.
5. It is very difficult to accurately estimate the LOC of the final product from
the problem specification.
Advantages :
1. LOC is the simplest among all metrics available to estimate project size.
2. This metric is very popular.
3. Determining the LOC count at the end of a project is a very simple job.
(fig : function point example for a library system: inputs such as issue book, return book, query book, and output data)
No. of inputs :-
Each data item input by the user is counted.
No. of outputs :-
Each output that provides application-oriented data to the user is counted. The
outputs refer to reports printed, screens, error messages, etc.
No. of inquiries :-
An inquiry is defined as an online input that results in an online output.
No. of files :-
Each logical file (meaning a group of logically related data) is counted.
No. of interfaces :-
All machine-readable interfaces (for example, data files on storage media) that
are used to transmit information to another system are counted.
2. TCF :- Once the UFP is computed, the technical complexity factor (TCF) is computed next.
It refines the UFP measure by considering 14 other factors such as high transaction rates,
throughput, and response time requirements.
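Putting the two steps together, FP = UFP x TCF, where TCF = 0.65 + 0.01 x (sum of the 14 influence ratings, each 0..5). The sketch below assumes the commonly cited average-complexity weights for the five parameters; in practice the weights vary with the rated complexity of each item, and the counts here are made up:

```python
# Function point sketch. The weights are the commonly cited
# "average complexity" values; the counts and ratings are hypothetical.
WEIGHTS = {"inputs": 4, "outputs": 5, "inquiries": 4, "files": 10, "interfaces": 7}

def function_points(counts, influence_ratings):
    # UFP: weighted sum of the five measurable parameters
    ufp = sum(WEIGHTS[name] * counts[name] for name in WEIGHTS)
    # TCF: refines UFP using the 14 technical factors, each rated 0..5
    assert len(influence_ratings) == 14
    tcf = 0.65 + 0.01 * sum(influence_ratings)
    return ufp * tcf

counts = {"inputs": 10, "outputs": 8, "inquiries": 5, "files": 4, "interfaces": 2}
ratings = [3] * 14                 # all 14 factors rated "average"
print(round(function_points(counts, ratings), 2))  # UFP = 154, TCF = 1.07
```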
Advantages :
It can be used to easily estimate the size of a software product directly from the
problem specification.
Disadvantage :
It does not take into account the algorithmic complexity of the software.
Software quality
Portability :
A software product is said to be portable if it can easily be made to work in
different operating system environments, on different machines, and with other software
products. Portability is the ease of transporting a program from one hardware
configuration to another.
Usability :
Usability is the effort required to understand and operate a system. A
software product has good usability if different categories of users can easily invoke the
functions of the product.
Reusability :
A software product has good reusability if different modules of the
product can easily be reused to develop new products.
Correctness :
A software product is correct if the different requirements as specified in the
SRS document have been correctly implemented.
Maintainability :
Maintainability is the ease with which program errors can be detected
and corrected, new functions can be added to the product, and the
functionalities of the product can be modified.
Quality Control
Quality control involves the series of inspections, reviews and tests used throughout
the software process to ensure each work product meets the requirements placed upon it.
Quality control includes a feedback loop to the process that created the work
product. Quality control can be viewed as part of the manufacturing process.
A key concept of quality control is that it focuses not only on detecting defective
products and eliminating them, but also on determining the causes behind the defects. The
feedback loop is essential to minimize the defects produced.
Quality control is a set of methods used by organizations to achieve
quality parameters or quality goals and continually improve the organization's
ability to ensure that a software product will meet quality goals.
Quality Control Process:
Advantages
It can help to prevent faulty goods and services being sold.
It is not disruptive to production: workers continue producing while the
inspectors do the checking.
As with any quality system, the business may benefit from an improved
reputation for quality, and this may increase sales.
Disadvantages
Once errors are identified, it is necessary to first locate the precise program
statements responsible for the errors and then to fix them.
Several debugging approaches are available to identify the locations of
errors.
The following are some of the approaches popularly adopted by programmers for
debugging:
1. Brute force method
2. Backtracking
3. Cause elimination method
4. Program slicing, etc.
1. Brute force method :
This is the least efficient method. In this approach the program is loaded
with print statements to print the intermediate values, with the hope that the printed values
will help to identify the statement in error.
2. Backtracking :
This is also a fairly common approach. In this approach, beginning from
the statement at which an error symptom is observed, the source code is traced backwards
until the error is discovered.
Debugging guidelines :
Some guidelines for effective debugging are as follows:
1. Many times, debugging requires a thorough understanding of the
program design.
2. Debugging may sometimes even require a full redesign of the system.
3. Any error correction may introduce new errors.
Software Measure
A software measure provides a quantitative indication of the extent, amount, dimension,
capacity or size of some attribute of a product or process. Measurement is the act of
determining a measure.
Software metric :
A software metric is a quantitative measure of the degree to which a system,
component or process possesses a given attribute.
Software indicators :
A software indicator is a metric or combination of metrics that provides insight
into the software process, a software project or the product itself. An indicator provides
insight that enables the project manager or software engineer to adjust the process to make
things better.
A quality metric that provides benefit at both the project and process level is
defect removal efficiency (DRE). It is a measure of the filtering ability of quality assurance
and control activities as they are applied throughout all process framework activities.
For project as a whole , DRE is defined in the following manner :
DRE = E/(E+D)
Here, E = number of errors before delivery
D = number of defects found after delivery
The ideal value for DRE is 1, that is, no defects are found in the software after
delivery. DRE encourages a software project team to find as many errors as possible before
delivery.
DRE can also be used within the project to assess a team's ability to find errors
before they are passed to the next framework activity, for example, the requirements
analysis task. When used in this context, we redefine DRE as
DREi = Ei / (Ei + Ei+1)
where Ei = errors found during activity i, and Ei+1 = errors found during activity i+1 that
are traceable to errors not discovered in activity i.
A quality objective for a software team (or individual engineer) is to achieve a DREi that
approaches 1. That is, errors should be filtered out before they are passed on to the next
activity.
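The whole-project formula above is a one-line computation; the error counts used here are hypothetical:

```python
# DRE = E / (E + D): the fraction of all defects removed before delivery.
def dre(errors_before_delivery, defects_after_delivery):
    return errors_before_delivery / (errors_before_delivery + defects_after_delivery)

E, D = 90, 10        # hypothetical counts: 90 caught before delivery, 10 after
print(dre(E, D))     # -> 0.9 ; the ideal value is 1.0
```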
There are three types of software maintenance. They are:
A) Corrective Maintenance :
Corrective maintenance of a software product becomes necessary to rectify the
bugs observed while the system is in use. The term is universally used to refer to
maintenance for fault repair.
B) Adaptive Maintenance :
A software product might need maintenance when the customers need the
product to run on new platforms or on new operating systems, or when they need the product
to be interfaced with new hardware or software. This is called adaptive maintenance.
C) Perfective Maintenance :
A software product needs maintenance to support the new features that users
want, or to change different functionalities of the system according to customer
demands, or to enhance the performance of the system. This is called perfective
maintenance.
Reliability behaviour for hardware and software is very different. For example, hardware
failures are inherently different from software failures.
The changes in failure rate over the product lifetime for a typical hardware
product and a software product are shown in the figure below:
A) Hardware product :
The failure rate is high initially but decreases as the faulty components are
identified and removed. The system then enters its useful life. After some time (called the
product lifetime) the components wear out and the failure rate increases.
B) Software product :
For software, the failure rate is at its highest during the integration and testing
phases. As the system is tested, more errors are identified and removed, resulting in a
reduced failure rate. This error removal continues at a slower pace during the useful life of
the product. As the software becomes obsolete, no more error correction occurs and the
failure rate remains unchanged.
Boehm (1981) proposed a formula for estimating maintenance costs as part of his
COCOMO cost estimation model:
ACT = (KLOC added + KLOC deleted) / KLOC total
Where, ACT = annual change traffic. [ACT is the fraction of a software product which
undergoes change during a typical year, either through addition or deletion.]
KLOC (added) is the total kilo lines of source code added during maintenance.
KLOC (deleted) is the total kilo lines of source code deleted during maintenance.
The ACT is multiplied by the total development cost to arrive at the maintenance
cost.
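The estimate can be sketched as follows; the KLOC figures and the development cost are made-up numbers:

```python
# Boehm's annual change traffic and the resulting maintenance cost estimate.
def annual_change_traffic(kloc_added, kloc_deleted, kloc_total):
    # ACT: fraction of the product that changes during a typical year
    return (kloc_added + kloc_deleted) / kloc_total

act = annual_change_traffic(kloc_added=5.0, kloc_deleted=3.0, kloc_total=100.0)
development_cost = 400_000        # total development cost in dollars (assumed)
maintenance_cost = act * development_cost
print(act)                        # -> 0.08
print(round(maintenance_cost))    # -> 32000
```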
Software reverse engineering is the process of recovering the design and the
requirements specification of a product from an analysis of its code.
The purpose of reverse engineering is to facilitate maintenance work by
improving the understandability of a system and to produce the necessary documents for a
legacy system.
Reverse engineering is becoming important since legacy software products lack
proper documentation and are highly unstructured.
The first stage of reverse engineering usually focuses on carrying out cosmetic
changes to the code to improve its readability, structure and understandability, without
changing any of its functionalities.
The way to carry out these cosmetic changes is shown schematically in the figure.
(fig : Code -> Module specification -> Design -> Requirements specification)
After the cosmetic changes have been carried out on a legacy software, the process of
extracting the design and the requirements specification from the code can begin. These
activities are schematically shown in the figure:
(fig : cosmetic changes: input code -> simplify processing, remove errors -> output code)
Disadvantages :
1. Because a skilled tester is needed to perform white box testing, the
costs are increased.
2. Sometimes it is impossible to look into every nook and corner to find
hidden errors that may create problems, as many paths will go untested.
3. It is difficult to maintain white box testing, as it requires specialized
tools like code analyzers and debugging tools.
Black box testing :
2. Also known as closed-box testing, data-driven testing or functional testing.
4. It is the least exhaustive and time consuming.
6. This can only be done by the trial and error method.
White box testing :
2. Also known as clear-box testing, structural testing or code-based testing.
4. The most exhaustive and time consuming type of testing.
6. Data domains and internal boundaries can be better tested.
Software validation
Validation is the process of examining whether or not the software satisfies the
user requirements. It is carried out at the end of the SDLC. If the software matches the
requirements for which it was made, it is validated.
Validation ensures the product under development meets the user requirements.
Validation emphasizes user requirements.
Software verification
Unit testing :
This type of testing is performed by developers before the setup is handed over
to the testing team to formally execute the test cases. Unit testing is performed by the
respective developers on the individual units of source code in their assigned areas. The
developers use test data that is different from the test data of the quality assurance team.
The goal of unit testing is to isolate each part of the program and show that the
individual parts are correct in terms of requirements and functionality.
Integration testing :
Integration testing is defined as the testing of combined parts of an
application to determine if they function correctly. Integration testing can be done in two
ways: bottom-up integration testing and top-down integration testing.
System testing :
System testing tests the system as a whole. Once all the components are
integrated, the application as a whole is tested rigorously to see that it meets the specified
quality standards. This type of testing is performed by a specialized testing team.
System testing is important because of the following reasons:
System testing is the first step in the software development life cycle
where the application is tested as a whole.
The application is tested thoroughly to verify that it meets the functional
and technical specifications.
The application is tested in an environment that is very close to the
production environment where the application will be deployed.
System testing enables us to test, verify, and validate both the business
requirements as well as the application architecture.
Alpha testing :
This test is the first stage of testing and will be performed amongst the teams
(developer and QA teams). Unit testing, integration testing and system testing, when
combined together, are known as alpha testing. During this phase the following aspects will
be tested in the application:
Spelling mistakes
Broken links
Cloudy directions
Beta testing :
This test is performed after alpha testing has been successfully performed. In beta
testing, a sample of the intended audience tests the application. Beta testing is also known
as pre-release testing.
In this phase the audience will be testing the following:
Users will install and run the application and send their feedback to the
project team.
Typographical errors, confusing application flow, and even crashes.
After getting the feedback, the project team can fix the problems before
releasing the software to the actual users.
The more issues we fix that solve real user problems, the higher the
quality of our application will be.
Having a higher quality application when we release it to the general
public will increase customer satisfaction.
Verification vs validation :
Verification :
2. Ensures that the software system meets all the functionality.
3. Verification takes place first and includes checking for documentation, code, etc.
4. Done by developers.
Validation :
2. Ensures that the functionalities meet the intended behaviour.
3. Validation occurs after verification and mainly involves the checking of the overall
product.
4. Done by testers.
Structured design :
Structured design is a conceptualization of the problem into several well-organized
elements of the solution. It is basically concerned with the solution design.
A benefit of structured design is that it gives a better understanding of how the problem is
being solved. Structured design also makes it simpler for the designer to concentrate on the
problem more accurately.
Structured design is mostly based on the 'divide and conquer' strategy, where a problem
is divided into smaller sub-problems and each sub-problem is individually solved until the
whole problem is solved.
A good structured design always follows some rules for communication among
multiple modules, namely ----
Cohesion --- grouping of all functionally related elements.
Coupling --- communication between different modules.
A good structured design has high cohesion and low coupling arrangements.
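As a toy illustration of these two rules, the sketch below groups all book-stock operations into one cohesive module, and the caller couples to it only through its narrow public interface. All names here are hypothetical:

```python
# High cohesion: every method of BookInventory deals with a single
# concern, book stock. Low coupling: callers use only the public
# methods, never the internal _stock dictionary.
class BookInventory:
    def __init__(self):
        self._stock = {}

    def add(self, title, copies):
        self._stock[title] = self._stock.get(title, 0) + copies

    def issue(self, title):
        if self._stock.get(title, 0) == 0:
            raise ValueError("no copies available")
        self._stock[title] -= 1

    def copies_of(self, title):
        return self._stock.get(title, 0)

inv = BookInventory()
inv.add("Software Engineering", 2)
inv.issue("Software Engineering")
print(inv.copies_of("Software Engineering"))  # -> 1
```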
FOD :
1. The basic abstractions, which are given to the user, are real-world functions.
2. Functions are grouped together, by which a higher-level function is obtained. An example
of this technique is SA/SD.
3. In this approach the state information is often represented in a centralized shared
memory.
4. The approach is mainly used for computation-sensitive applications.
OOD :
1. The basic abstractions are not the real-world functions but data abstractions, where the
real-world entities are represented.
2. Functions are grouped together on the basis of the data they operate on, since the
classes are associated with their methods.
3. In this approach the state information is not represented in a centralized memory but is
implemented or distributed among the objects of the system.
4. The approach is mainly used for evolving systems which mimic a business process or
case.
B) Bottom-up design :
The bottom-up design model starts with the most specific and basic
components. It proceeds with composing higher levels of components by using basic or
lower-level components. It keeps creating higher-level components until the desired system
is evolved as one single component. With each higher level, the amount of abstraction is
increased.
The bottom-up strategy is more suitable when a system needs to be
created from some existing system, where the basic primitives can be used in the newer
system.
Both top-down and bottom-up approaches are not practical
individually; instead, a good combination of both is used.
Quality Assurance
Necessity :
Usually, for complex maintenance projects for legacy systems, the software process
can be represented by a reverse engineering cycle followed by a forward engineering cycle,
with an emphasis on as much reuse as possible of the existing design and other documents.
Since the scope (activities required) of different maintenance projects varies
widely, no single maintenance process model can be developed to suit every kind of
maintenance project.
Types :
Two broad categories of process models can be proposed.
Model 1 : This maintenance model or process is graphically presented below.
(fig : Model 1)
This model is preferred for projects involving small reworks, where the code is
changed directly and the changes are reflected in the relevant documents later.
(fig : Model 2: reverse engineering from the code up through module specifications and design to the change requirements, followed by forward engineering back down to the code)
The second model is preferred for projects where the amount of rework
required is significant. This approach can be represented by a reverse engineering cycle
followed by a forward engineering cycle; such an approach is also known as software
re-engineering.
Advantages : (model 2)
1. It produces a more structured design than what the original product had.
2. It produces good documentation.
3. It very often results in increased efficiency.
Disadvantages :
1. It is costlier than the first one.
Types of DFD :
Data flow diagrams are either logical or physical.
A) Logical :
This type of DFD concentrates on the system processes and the flow of data in
the system, i.e., how data is moved between different entities.
B) Physical :
This type of DFD shows how the data flow is actually implemented in
the system. It is more specific and closer to the implementation.
DFD components :
A DFD can represent the source, destination, storage and flow of data using the
following set of components:
(fig : DFD components: entity, process, data storage, data flow)
Entities :
Entities are the sources and destinations of information data. Entities are
represented by a rectangle with their respective names.
Process :
Activities and actions taken on the data are represented by circles or rounded
rectangles.
Data storage :
There are two variants of data storage: it can either be represented as a
rectangle with absence of both smaller sides, or as an open-sided rectangle with only one
side missing.
Data flow :
Movement of data is shown by pointed arrows. Data movement is shown
from the base of the arrow as its source towards the head of the arrow as its destination.
Levels of DFD :
Level 0 :
The highest abstraction level DFD is known as a level 0 DFD, which depicts the entire
information system as one diagram concealing all the underlying details. Level 0 DFDs are
also known as context level DFDs.
(fig : Level 0 DFD)
Level 1 :
The level 0 DFD is broken down into a more specific level 1 DFD. A level 1 DFD depicts the
basic modules in the system and the flow of data among the various modules. The level 1
DFD also mentions basic processes and sources of information.
(fig : Level 1 DFD of an order processing system: customers, order processing, accounts/finance verification, sales, delivery, and data stores)
Level 2 :
At this level, the DFD shows the data flows inside the modules mentioned in level 1.
Higher-level DFDs can be transformed into more specific lower-level DFDs with a
deeper level of understanding, until the desired level of specification is achieved.
Security :
Security is the ability of the software to remain protected from unauthorized access.
This includes both change access and view access.
Integrity :
Integrity comes with security . System integrity should be sufficient to prevent
unauthorized access to system functions , prevent information loss , ensure that the
software is protected from virus infection , and protect the privacy of data entered into
the system.
Maintainability :
Maintainability is the ease with which the software can be modified to correct faults or improve it.
Flexibility :
Flexibility is the ability of software to adapt when external changes occur.
Robustness :
Robustness is defined as the ability of a software product to cope with unusual
situations.
Efficiency :
Efficiency is the ability of the software to do the required processing with the
least amount of hardware resources.
Testability :
The system should be easy to test and find defects in . If required , it should be easy to
divide it into different modules for testing.
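As a small illustration of testability, a module whose functions are pure (outputs depend only on inputs, with no hidden state) can be tested in isolation; the function below is a made-up example, not from the text:

```python
def apply_discount(price: float, percent: float) -> float:
    """Pure function: easy to test because it has no hidden state."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be in [0, 100]")
    return round(price * (1 - percent / 100), 2)


# Each behaviour can be verified independently, without any setup:
assert apply_discount(100.0, 10) == 90.0
assert apply_discount(100.0, 0) == 100.0
```

A module built from such functions can be tested on its own, which is exactly the property the notes ask for.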
Organic :
A development project can be considered to be of the organic type , if the project deals
with developing a well-understood application program , the size of the development team is
reasonably small , and the team members are experienced in developing similar types of projects.
Semidetached :
A development project can be considered to be of semidetached type, if the
development team consists of a mixture of experienced and inexperienced staff.
Embedded :
A development project can be considered to be of the embedded type , if the software
being developed is strongly coupled to complex hardware or operational procedures.
According to Boehm, software cost estimation should be done through three stages -----
A. Basic COCOMO
B. Intermediate COCOMO
C. Complete COCOMO
A. Basic COCOMO :
Computes software development effort and cost as a function of program size
expressed in terms of LOC .
The basic COCOMO takes the following form ---
E = Ab (KLOC)^Bb person-months
D = Cb (E)^Db months
Here,
E = effort applied (in person-months)
D = development time (in months)
The coefficients Ab , Bb , Cb , Db for the three modes are :
Software project    Ab    Bb     Cb    Db
Organic             2.4   1.05   2.5   0.38
Semidetached        3.0   1.12   2.5   0.35
Embedded            3.6   1.20   2.5   0.32
Advantages :
1. This model is good for quick , early , rough order-of-magnitude estimates of
software projects.
Disadvantages :
1. The accuracy of this model is limited.
2. The estimates of this model are within a factor of 1.3 only 29% of the time.
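The basic COCOMO equations and coefficient table above can be turned into a short calculation. This is a sketch; the coefficients are the standard published values for the basic model, and the 32 KLOC figure in the example is an arbitrary choice:

```python
# Basic COCOMO coefficients (Ab, Bb, Cb, Db) for the three modes.
COEFFS = {
    "organic":      (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (3.6, 1.20, 2.5, 0.32),
}


def basic_cocomo(kloc: float, mode: str = "organic"):
    """Return (effort in person-months, development time in months)."""
    a, b, c, d = COEFFS[mode]
    effort = a * kloc ** b       # E = Ab * (KLOC)^Bb
    time = c * effort ** d       # D = Cb * (E)^Db
    return effort, time


e, t = basic_cocomo(32, "organic")
print(f"Effort ~ {e:.1f} person-months, schedule ~ {t:.1f} months")
```

For a 32 KLOC organic project this gives roughly 91 person-months of effort and about a 14-month schedule; the same size project in embedded mode yields a much larger effort, which matches the intent of the three modes.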
B. Intermediate COCOMO :
Computes effort as a function of program size and a set of cost drivers
that include subjective assessments of product attributes , hardware attributes , personnel
attributes and project attributes.
The basic model is extended to consider a set of cost drivers attributes
grouped into four categories -----
Product attributes :
1. Required software reliability.
2. Size of application database.
3. Complexity of the product.
Hardware attributes :
1. Run time performance constrains.
2. Memory constrains.
3. Volatility of the virtual machine environment.
Personnel attributes :
1. Analyst capability.
2. Software engineering capability.
3. Programming language experience.
Project attributes :
1. Use of software tools.
2. Required development schedule.
3. Application of software engineering methods.
The intermediate COCOMO takes the form -----
E = ai (KLOC)^bi (EAF)
Here, E = effort applied
EAF = effort adjustment factor.
The values of ai and bi for various class of software projects are------
Software projects ai bi
organic 3.2 1.05
Semi detached 3.0 1.12
Embedded 2.8 1.20
Advantages :
1. This model can be applied to almost the entire software product for an easy and
rough cost estimation during the early stages.
2. It can also be applied at the software product component level for
obtaining more accurate cost estimation.
Disadvantages :
1. The effort multipliers are not dependent on phases.
2. A product with many components is difficult to estimate.
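The intermediate formula multiplies the nominal effort by the EAF, the product of the chosen cost-driver multipliers. The sketch below assumes nominal drivers have multiplier 1.0; the 1.15 value for high required reliability is only an illustrative placeholder, not quoted from the text:

```python
from math import prod

# (ai, bi) coefficients for intermediate COCOMO, from the table above.
AI_BI = {
    "organic":      (3.2, 1.05),
    "semidetached": (3.0, 1.12),
    "embedded":     (2.8, 1.20),
}


def intermediate_cocomo(kloc: float, mode: str, multipliers) -> float:
    """E = ai * (KLOC)^bi * EAF, where EAF is the product of the
    cost-driver multipliers (nominal drivers contribute 1.0)."""
    a, b = AI_BI[mode]
    eaf = prod(multipliers)          # effort adjustment factor
    return a * kloc ** b * eaf


# Example: 32 KLOC organic project, all drivers nominal except
# required reliability rated high (assumed multiplier 1.15):
effort = intermediate_cocomo(32, "organic", [1.15, 1.0, 1.0])
print(f"Adjusted effort ~ {effort:.1f} person-months")
```

With all multipliers at 1.0 the formula reduces to the nominal estimate, so the EAF acts purely as a correction factor on top of the size-based effort.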
C. Complete COCOMO :
Complete COCOMO incorporates all characteristics of the intermediate
version , with an assessment of the cost drivers' impact on each step.
In complete COCOMO , the effort is calculated as a function of program
size and a set of cost drivers.
The 6 phases of complete COCOMO are -------
i. Plan and requirement
ii. System design
iii. Detailed design
iv. Module code and test
v. Integration and test
vi. Cost constructive model
Advantages :
1. Error isolation :
Functional independence reduces error propagation. If a module is
functionally independent , its degree of interaction with other modules is less. Therefore , any
error existing in a module would not directly affect the other modules.
2. Scope for reuse :
Reuse of a module becomes possible because each module performs
some well-defined and precise function and the interface of the module with other modules is
simple and minimal.
3. Understandability :
Complexity of the design is reduced because different modules are
more or less independent of each other and can be understood in isolation.
Classification of cohesiveness :
1. Coincidental cohesion :
A module is said to have coincidental cohesion , if it performs a
set of tasks that relate to each other very loosely , if at all . In this case , the module contains
a random collection of functions.
2. Logical cohesion :
A module is said to be logically cohesive , if all elements of the
module perform similar operations , e.g. error handling , data input , data output etc. An
example of logical cohesion is the case where a set of print functions generating different
output reports are arranged into a single module.
3. Temporal cohesion :
When a module contains functions that are related by the fact
that all the functions must be executed in the same time span , the module is said to exhibit
temporal cohesion. The set of functions responsible for initialization , start-up , shutdown of
some process etc. exhibit temporal cohesion.
4. Procedural cohesion :
A module is said to possess procedural cohesion if all the
functions of the module are part of a procedure.
5. Communicational cohesion :
A module is said to have communicational cohesion if all the
functions of the module refer to or update the same data structure , e.g. the set of
functions defined on an array or a stack.
6. Sequential cohesion :
A module is said to possess sequential cohesion , if the
elements of the module form parts of a sequence , where the output from one element of the sequence is input to the next.
7. Functional cohesion :
Functional cohesion is said to exist if the different elements of
a module cooperate to achieve a single function.
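The contrast between logical and functional cohesion can be shown in code; the functions below are made-up examples, not from the text:

```python
# Logical cohesion: unrelated report functions lumped together only
# because they all "print reports" - the module has no single purpose.
def print_report(kind, data):
    if kind == "sales":
        print("Sales report:", sum(data))
    elif kind == "inventory":
        print("Inventory report:", len(data))
    elif kind == "payroll":
        print("Payroll report:", max(data))


# Functional cohesion: every statement cooperates to achieve
# one well-defined function.
def compute_invoice_total(items):
    """items: list of (quantity, unit_price) pairs."""
    return sum(qty * price for qty, price in items)


assert compute_invoice_total([(2, 5.0), (1, 3.0)]) == 13.0
```

The second function is also easier to reuse and test, which is why functional cohesion sits at the desirable end of the scale.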
The primary characteristics of a neat module decomposition are high cohesion and
low coupling.
Definition :
The coupling between two modules is a measure of the degree of interdependence or
interaction between the two modules.
The degree of coupling between two modules depends on their interface complexity.
Classification of coupling :
Five types of coupling can occur between any two modules.
1. Data coupling :
Two modules are data coupled if they communicate using an
elementary data item that is passed as a parameter between the two , e.g. an integer , a
float , a character etc.
2. Stamp coupling :
Two modules are stamp coupled if they communicate using a
composite data item such as a record in PASCAL or a structure in C.
3. Control coupling :
Control coupling exists between two modules , if data from one
module is used to direct the order of instruction execution in another. An example of control
coupling is a flag set in one module and tested in another module .
4. Common coupling :
Two modules are common coupled if they share some global data
items.
5. Content coupling :
Content coupling exists between two modules if their code is
shared , e.g. a branch from one module into another module.
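The first three coupling types above can be contrasted with small Python fragments (the function names are illustrative, with a dict standing in for a C structure):

```python
# 1. Data coupling: the modules communicate via an elementary data
#    item (here a single float) passed as a parameter.
def area(radius: float) -> float:
    return 3.14159 * radius * radius


# 2. Stamp coupling: the modules communicate via a composite data
#    item - a record/structure, modelled here as a dict.
def format_address(person: dict) -> str:
    return f"{person['name']}, {person['city']}"


# 3. Control coupling: a flag set by the caller directs the order of
#    execution inside the called module.
def sort_items(items, ascending: bool):   # 'ascending' is a control flag
    return sorted(items, reverse=not ascending)


assert area(1.0) > 3.14
assert format_address({"name": "A", "city": "B"}) == "A, B"
assert sort_items([3, 1, 2], ascending=True) == [1, 2, 3]
```

Data coupling is the loosest of the three; the control flag in the last fragment is what makes control coupling tighter, since the caller must know about the callee's internal behaviour.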
Questionnaires :
Questionnaires are much more informal , and they are good tools to gather
requirements from stakeholders in remote locations. Questionnaires can also be used when
we have to gather input from dozens , hundreds or thousands of people.
Prototyping :
In this approach , we gather preliminary requirements that we use to build an
initial version of the solution - a prototype. We show this to the client , who then gives us
additional requirements . We change the application and cycle around with the client again.
This repetitive process continues until the product meets the critical mass of business needs.
Use cases :
Use cases are basically stories that describe how discrete processes work. ( The
stories include people and describe how the solution works from a user perspective. )
DFD
Importance of DFD :
1. To give a pictorial representation of the software.
2. It makes the software easy for the programmer to understand.
3. DFD models of a system graphically represent how each input data is
transformed into its corresponding output data.
Application :
DFDs are a common way of modeling data flow for software development. For
example , a DFD for a word processing program might show the way the software processes the
data that the user enters by pressing keys on the keyboard to produce the letters on the
screen.
[Figure : level 0 DFD of a Hospital Management System (process 0.0) . The Patient gets PNR , visit no. and reports ; the Administrator can add and modify the records ; the Doctor monitors the patient's data and descriptions.]
Level 1 DFD :
[Figure : level 1 DFD of a Student Information System . The Student supplies student information and attendance ; the Faculty supplies marks and course section data ; the Administration receives attendance info and degree details.]
Difference between multitasking and Multi threading :