
Software Engineering

Software:-
A set of programs used for a particular purpose in a computer system is known as software.
Computer software consists of sequences of instructions (called programs) and data
that the computer manipulates to perform various data-processing tasks. Software is
therefore intangible: it cannot be touched.

Software is of two types:

1. System software
2. Application software

Example :--
The different types of computer software are operating systems, compilers,
MS Word, etc.

Software Engineering:-
Software engineering discusses systematic and cost-effective software
development approaches. Alternatively, software engineering is the engineering
approach to developing software. A team of programmers or an individual programmer
develops software using a programming language for a specific task.

Requirements analysis and specification :


The requirements analysis and specification phase starts once the feasibility study
phase is complete and the project is found to be financially sound and technically feasible.
The goal of the requirements analysis and specification phase is to clearly understand the
customer requirements and to document them in a specification document.

The phase consists of the following two activities:


1. Requirements gathering and analysis.
2. Requirements specification

1. Requirements gathering and analysis :

Normally the requirements that the customers describe are very vague. It is up
to the analyst to extract the full information from the customer. We now
describe the two main activities in this phase.

Requirements gathering :
This activity typically involves interviewing the end-users and customers to
collect all possible information regarding the system. If the project involves
automating some existing procedures, then the task of the system analyst
becomes a little easier, as the analyst can immediately obtain the input and output
data formats and the details of the operational procedures.

Analysis of the gathered requirements :

The main purpose of this activity is to clearly understand the exact
requirements of the customer. The following basic questions may arise:

* What is the problem?
* Why is it important to solve the problem?
* What are the possible solutions to the problem?
* What exactly are the data inputs to the system, and what outputs are
required?
* What are the likely complexities that might arise while solving the problem?

The most important requirements problems are the problems of anomalies, inconsistencies
and incompleteness.

Anomaly :
An anomaly is an ambiguity in a requirement. When a requirement is
anomalous, several interpretations of the requirement are possible.
Inconsistency :
The requirements become inconsistent if any one of the requirements contradicts another.
Incompleteness :
An incomplete requirement is one where some of the requirements have been overlooked.
When the analyst detects any inconsistencies, anomalies or incompleteness in the gathered
requirements, they are resolved by carrying out further discussions with the end-users and
the customers.

2. Software requirements specification (SRS) :

After collecting all the required information regarding the software to be
developed and resolving all problems in the specifications, the analyst starts to
systematically organize the requirements in the form of an SRS document.

The SRS document usually contains all the user requirements in an informal form. Different
people need the SRS document for very different purposes.

some of the important categories of users of the SRS document and their needs
are as follows------
@ Users, customers and marketing personnel :
The goal of this set of audience is to ensure that the system as described
in the SRS document will meet their needs.

@ Software developers :
The software developers refer to the SRS document to make sure that
they develop exactly what is required by the customer.

@ Test engineers :
Their goal is to ensure that the requirements are understandable from a
functionality point of view, so that they can test the software and validate its working.

@ Maintenance engineers :
The SRS document helps the maintenance engineers to understand the
functionality of the system, which in turn helps them understand the design and code. It
also enables them to determine the modifications to the system needed for a specific
purpose.

The SRS document can even be used as a legal document to settle disputes between the
customers and the developers.
Different Software Life Cycle Models

1. Classical Waterfall Model :

It is a theoretical way of developing software. The waterfall model divides
the life cycle into the following phases, shown in the figure below:

Feasibility study → Requirements analysis and specification → Design → Coding and
module testing → Integration and system testing → Delivery and maintenance

(Fig : Waterfall Model)

Now, the different phases of this model are described below:

 Feasibility study :
The main aim of the feasibility study activity is to determine whether it would be
financially and technically feasible to develop the product. It involves analysis of the
problem and collection of all relevant information relating to the product, such as the input
data to the system. Therefore, the feasibility study is considered a very important stage.

[* The feasibility study defines the precise costs and benefits of a software system. ]

 Requirements analysis and specification :

Requirements analysis is usually the first phase of a large-scale software
development project. It is undertaken after a feasibility study has been performed.
The purpose of this phase is to identify and document the exact requirements
for the system. Such a study may be performed by the customer, a developer, a marketing
organization or any combination of the three.

 Design :
Once the requirements for a system have been documented, software designers
design a software system to meet them.
This phase is sometimes split into two sub-phases:
a) Architectural / high-level design :
It deals with the overall module structure and organization.

b) Detailed / low-level design :
It deals with the details of the modules.

The purpose of the design phase is to specify a particular software system
that will meet the stated requirements.

 Coding and module testing :


This is the phase that produces the actual code that will be delivered to the
customer as the running system. Individual modules developed in this phase are also tested
before being delivered to the next phase.

 Integration and system testing :

All the modules that have been developed and tested individually are
integrated in this phase and tested as a whole program.

 Delivery and maintenance :

Once the system passes all the tests, it is delivered to the customer and
enters the maintenance phase. Any modifications made to the system after the initial
delivery are usually attributed to this phase.

(2) Spiral Model :


The spiral model of software development is shown below -----

1. Determine objectives and identify alternative solutions
2. Identify and resolve risks
3. Develop and verify the next level of product
4. Review and plan for the next phase

(Fig : Spiral Model)

The diagrammatic representation of this model appears like a spiral with many loops.
The exact number of loops in the spiral is not fixed.
Each loop of the spiral represents a phase of the software process. For example, the
innermost loop might be concerned with the feasibility study, the next loop with
requirements specification, and so on.

Each phase in this model is split into four quadrants.

The first quadrant identifies the objectives of the phase and the alternative solutions
possible for the phase under consideration.

During the second quadrant, the alternative solutions are evaluated to select the
best solution possible for the phase under consideration.

During the third quadrant, the next level of the product is developed and verified.

During the fourth quadrant, the results so far are reviewed and the next phase is planned.

With each iteration around the spiral, a progressively more complete version of the
software gets built. Usually, after several iterations along the spiral, all risks are resolved
and the software is ready for deployment.

This model is much more flexible compared to the other models.

(3) Prototyping Model :


A prototype is a working system that is developed to test ideas and assumptions about
the new system. It is the first version, or original model, of an information system.

The prototyping model of software development is shown below -----

Requirements gathering → Quick design → Build prototype → Customer evaluation of
prototype → Refine requirements incorporating customer suggestions (this cycle repeats
until the customer approves the prototype) → Design → Implement → Test → Maintain

(fig : Prototype Model for software development)


In this model, product development starts with an initial requirements gathering
phase. A quick design is carried out and the prototype is built. The developed prototype is
submitted to the customer for evaluation. Based on the customer feedback, the
requirements are refined and the prototype is suitably modified. This cycle continues until
the customer approves the prototype.

The actual system is then developed using the iterative waterfall approach. However, in
this model, the requirements analysis and specification phase becomes redundant, as the
working prototype approved by the customer serves as an animated requirements
specification.

The code for the prototype is usually thrown away. However, the experience
gathered from developing the prototype helps a great deal in developing the actual system.

(4) Iterative Waterfall Model:

The classical waterfall model is an idealistic one, since it assumes that no
development error is ever committed by the engineers during any of the life cycle phases.
In the classical waterfall model, once a defect is detected, the engineers need to go back to
the phase where the defect had occurred and redo some of the work done during that
phase and the subsequent phases to correct the defect and its effect on the later phases.

Therefore, in any practical software development work, it is not possible to strictly
follow the classical waterfall model. Feedback paths are needed in the classical waterfall
model from every phase to its preceding phases, as shown in the figure below:

Feasibility study → Requirements analysis and specification → Design → Coding and
unit testing → Integration and system testing → Maintenance, with a feedback path from
every phase to its preceding phases

(fig : Iterative waterfall model)

These feedback paths allow for the correction of errors committed during a phase that are
detected in later phases.
(5) Evolutionary Model :
This life cycle model is also referred to as the successive versions model and
sometimes as the incremental model. In this life cycle model, the software is first broken
down into several modules (or functional units) which can be incrementally constructed and
delivered. The development team first develops the core modules of the system. This initial
product skeleton is refined into increasing levels of capability by adding new functionalities
in successive versions.

Each evolutionary version may be developed using an iterative waterfall model of
development. The evolutionary model is shown in the figure below:

Rough requirements specification → Identify the core and other parts to be developed
incrementally → Develop the core parts using an iterative waterfall model → Collect
customer feedback and modify requirements → Develop the next identified features using
an iterative waterfall model (repeat until all features are complete) → Maintenance

(Fig : Evolutionary model)

In this model, the user gets a chance to experiment with a partially developed
system much before the complete version is released. Therefore, the evolutionary
model helps to accurately elicit user requirements, and the chances of the delivered
software failing to meet them are minimized.

The core modules get tested thoroughly, thereby reducing the chances of errors in the
core modules of the final product.
The main disadvantage of the successive versions model is that for most practical
problems it is difficult to divide the problem into several functional units which can be
incrementally implemented and delivered.

Therefore the evolutionary model is normally useful for only very large products,
where it is easier to find modules for incremental implementation.

 Define software life cycle :

A software life cycle is the series of identifiable stages that a software product
undergoes during its lifetime. The first stage in the life cycle of any software product is the
feasibility study stage. Commonly, the subsequent stages are: requirements analysis and
specification, design, coding, testing and maintenance. Each of these stages is called a life
cycle phase.

 Define software life cycle model :


A software life cycle model is a descriptive and diagrammatic representation of the
software life cycle. A life cycle model represents all the activities required to make a
software product. It also captures the order in which these activities are to be undertaken.

In other words, a life cycle model maps the different activities performed on a
software product from inception to retirement. Different life cycle models may map the
basic development activities to phases in different ways.

 Waterfall model :(classical)


 Advantages :
1) It is easy to use.
2) Works well for smaller projects where requirements are very well understood.
3) The time spent early in the software production cycle can lead to greater economy at
later stages.
 Disadvantages :
1) Testing is understood as a one-time action at the end of the project, just before
release to operation.
2) High amounts of risk and uncertainty.
3) Inflexible.
4) A poor model for complex and object-oriented projects.

 Spiral Model :
 Advantages :
1) It is a very flexible model.
2) Estimates (i.e. budget, schedule, etc.) get more realistic as work progresses, because
important issues are discovered earlier.
3) Good for large and mission-critical projects.
4) Software is produced early in the software life cycle.

 Disadvantages :
1) Doesn't work well for smaller projects.
2) The cost of risk analysis may be higher than the cost of building the system.
3) It must be highly customized for every project.
4) Risk analysis requires highly specific expertise.
 Iterative Waterfall Model :-
 Advantages :
1) Produces working software early during the life cycle.
2) More feasible, as scope and requirement changes can be implemented at low cost.
3) Testing and debugging are easier, as the iterations are small.
4) Low risk factor, as risks can be identified and resolved during each iteration.

 Disadvantages :
1) This model has phases that are very rigid and do not overlap.
2) Not all the requirements are gathered before starting the development.

 Prototype Model :
 Advantages :
1) Benefits from user input.
2) Errors or risks can be detected at a much earlier stage .

 Disadvantages :
1) Increases complexity of the overall system.
2) Involves an exploratory methodology and therefore involves higher risk.
3) Involves implementing and then repairing the way a system is built, so errors are
an inherent part of the development process.

 Evolutionary Model :-

 Advantages :
1) Risk analysis is better.
2) It supports changing requirements.
3) Initial operating time is less.
4) Better suited for large and mission-critical projects.
5) During the life cycle, software is produced early, which facilitates customer evaluation
and feedback.
 Disadvantages :
1) Not suitable for smaller projects.
2) Management complexity is higher.
3) The end of the project may not be known, which is a risk.
4) Can be costly to use.
5) Highly skilled resources are required for risk analysis.
6) Project progress is highly dependent upon the risk analysis phase.
 Comparison between different life cycle models :-

Feature | Classical waterfall | Iterative waterfall | Prototype model | Spiral model
1. Requirements specification | At beginning | At beginning | Frequently changed | At beginning
2. Understanding of requirements | Well understood | Not well understood | Not well understood | Well understood
3. Cost | Low | Low | High | Expensive
4. Availability of reusable components | No | Yes | Yes | Yes
5. Complexity of system | Simple | Simple | Complex | Complex
6. Risk analysis | Only at beginning | No risk analysis | No risk analysis | Yes
7. User involvement in all phases of SDLC | Only at beginning | Intermediate | High | High
8. Guarantee of success | Less | High | Good | High
9. Overlapping phases | No | No | Yes | Yes
10. Implementation time | Long | Less | Less | Depends on project
11. Flexibility | Rigid | Less flexible | Highly flexible | Flexible
12. Changes incorporated | Difficult | Easy | Easy | Easy
13. Expertise required | High | High | Medium | High
14. Cost control | Yes | No | No | Yes
15. Resource control | Yes | Yes | No | Yes

What is fact finding?

Fact finding is the identification of what the new system should be able to do.
It is the specification of what the system should do as per the users' requirements.
It includes what the existing system does and what the new one is expected to do.
It is done by system or business analysts.

Different Fact Finding techniques :


1. Background Reading
2. Interviewing
3. Observation
4. Document sampling
5. Questionnaires
1. Background Reading :
To have a good understanding of the organization's business objectives.
Kinds of documents to be looked for:
 Company reports
 Organization charts
 Policy manuals
 Reports
 Documentation of existing systems

Advantages and disadvantages :

Helps in understanding the organization before meeting its workforce.
Helps in understanding the requirements of the system in the light of business
objectives.
Documentation can provide information on the requirements of the current system.

2. Interviewing :
The most widely used technique. It requires the most skill and sensitivity. A structured
meeting between the analyst and staff, discussing one or more areas of the staff's work. It
can use a fixed set of questions or extempore questions, with closed and open probes.

Advantages :
1. Produces high quality information.
2. Provides greater depth of understanding of a person's work and expectations.

Disadvantages :
1. A time-consuming process.
2. Interviews can provide conflicting information, which becomes difficult to
resolve later.

3. Observation :
Watching people in their normal workflow as they carry out their operations. Analysts
watch and note the type of information the worker uses and processes in the existing
system. Can be open-ended or close-ended.

Advantages :
1. Provides first-hand experience.
2. Real-time data collection.
Disadvantages :
1. Most people don't like being observed, and many behave differently when watched.
2. Observers require training.
3. Logistics.

4. Document sampling :
Done in two ways. First, collect copies of completed documents during the interviews
and observations; second, carry out statistical analysis of the documents to find patterns
of data.
Advantages :
1. Used for quantitative data.
2. Used to find error rates in paper documents.
Disadvantages :
1. Existing documents don't show what changes will occur in the future.

5. Questionnaires :
An effective fact-finding technique. It has a series of questions to be
answered: multiple-choice or yes/no questions, covering questions ranging from coding to
feedback.

Advantages :
1. An economical way of gathering data.
2. If well designed, the results can be effectively analyzed.
Disadvantages :

Software Metric

The term project size is a measure of problem complexity in terms of the
effort, cost and time required to develop the product. A software metric is a unit in
terms of which we can express the project size for accurate estimation.
Currently, two metrics are widely used to estimate project size:
1. Lines of code (LOC) / size-oriented metric
2. Function point (FP) / function-oriented metric

1. Lines of code :
This metric is also called a size-oriented metric. LOC is the simplest among
all the metrics available to estimate project size, so it is very popular. Using this
metric, the product size is estimated by counting the number of lines of source
instructions in the developed program; the lines used only for commenting the code and
the header lines are ignored.
In order to estimate the LOC count at the beginning of a project,
project managers usually divide the problem into modules and
each module into sub-modules and so on, until the sizes of the different leaf-
level modules can be approximately predicted.
If a software organization maintains simple records, a table of size-
oriented measures, such as the one shown in the figure, can be created. The table
lists a set of simple size-oriented metrics that can be developed for each project:
 Name of project
 Errors per KLOC (1000 lines of code)
 Defects per KLOC
 $ per LOC
 Effort in person-months
 Pages of documentation per KLOC, etc.

Project | LOC | Effort (person-months) | $ (000) | Pages of doc. | Errors | Defects | People
alpha | 112100 | 24 | 168 | 365 | 134 | 29 | 3
beta | 27200 | 62 | 440 | 122 | 321 | 86 | 5
gamma | 20200 | 43 | 314 | 1050 | 256 | 64 | 6

Disadvantages :
However, LOC as a measure of problem size has several shortcomings:
1. LOC gives a numerical value of problem size that can vary widely with
individual coding style, as different programmers lay out their code in
different ways.
2. A good problem size measure should consider the overall complexity
of the problem and the effort needed to solve it; LOC, however,
focuses on the coding activity alone.
3. The LOC metric measures the lexical complexity of a program and does not
address the more important but subtle issues of logical or structural
complexity.
4. The LOC measure correlates poorly with the quality and efficiency of the
code.
5. It is very difficult to accurately estimate the LOC of the final product from
the problem specification.

Advantages :
1. LOC is the simplest among all metrics available to estimate project size.
2. This metric is very popular.
3. Determining the LOC count at the end of a project is a very simple job.
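To make the counting rule concrete, here is a minimal Python sketch of a LOC counter.
The file name and the "//" comment convention are assumptions for illustration; these
notes do not prescribe a particular tool.

```python
# Minimal LOC counter: counts non-blank lines that are not comment-only lines.
# Assumes C-style "//" line comments; adapt the marker for other languages.

def count_loc(path: str, comment_marker: str = "//") -> int:
    loc = 0
    with open(path, "r", encoding="utf-8") as f:
        for line in f:
            stripped = line.strip()
            # Skip blank lines and comment-only lines, as the LOC metric suggests.
            if stripped and not stripped.startswith(comment_marker):
                loc += 1
    return loc

# Example usage (file name is hypothetical):
# print(count_loc("module1.c"))
```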

2. Function point metric (FP) :

Function-oriented metrics were first proposed by Albrecht, who suggested
a measure called the function point.
The conceptual idea behind the function point metric is that the size of a software
product is directly dependent on the number of different functions or features it
supports. Each function, when invoked, reads some input data and transforms it
into the corresponding output data.
For example, consider the library automation software shown below:

(Fig : Library automation software supporting the functions Query book, Issue book and
Return book, each reading input data and producing output data.)

Function point is computed in two steps:

1. UFP : The first step is to compute the unadjusted function point:

UFP = (number of inputs) × 4 + (number of outputs) × 5 + (number of inquiries) × 4
+ (number of files) × 10 + (number of interfaces) × 10

The meanings of the parameters used are as follows:

Number of inputs :
Each data item input by the user is counted.
Number of outputs :
Each output that provides application-oriented data to the user is counted. The
outputs refer to reports printed, screens, error messages, etc.
Number of inquiries :
An inquiry is defined as an online input that results in an online output.
Number of files :
Each logical file (a group of logically related data) is counted.
Number of interfaces :
All machine-readable interfaces (for example, data files on storage media) that
are used to transmit information to another system are counted.
2. TCF : Once the UFP is computed, the technical complexity factor (TCF) is computed
next. It refines the UFP measure by considering 14 other factors such as high transaction
rates, throughput, response time requirements, etc.

TCF = 0.65 + 0.01 × DI, where DI is the total degree of influence.

The TCF can vary from 0.65 to 1.35.

Finally, we have
FP = UFP × TCF

Advantages :
It can be used to easily estimate the size of a software product directly from the
problem specification.

Disadvantage :
It does not take into account the algorithmic complexity of the software.
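As a worked illustration of the two-step computation above, here is a small Python sketch
using the weights given in these notes. The counts and the DI value in the example are
hypothetical.

```python
def unadjusted_fp(inputs, outputs, inquiries, files, interfaces):
    # Weights as given in these notes.
    return inputs * 4 + outputs * 5 + inquiries * 4 + files * 10 + interfaces * 10

def function_points(ufp, total_degree_of_influence):
    # TCF = 0.65 + 0.01 * DI, where DI is the sum of the 14 factor ratings,
    # so TCF varies from 0.65 to 1.35.
    tcf = 0.65 + 0.01 * total_degree_of_influence
    return ufp * tcf

# Hypothetical example: 30 inputs, 60 outputs, 20 inquiries, 5 files, 2 interfaces, DI = 50.
ufp = unadjusted_fp(30, 60, 20, 5, 2)  # 30*4 + 60*5 + 20*4 + 5*10 + 2*10 = 570
print(function_points(ufp, 50))        # 570 * 1.15 = 655.5
```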

Software quality

Traditionally, a quality product is defined in terms of its fitness of purpose. That is,
a quality product does exactly what the users want it to do. For software products, fitness
of purpose is usually interpreted in terms of satisfaction of the requirements laid down in
the SRS document. Although fitness of purpose is a satisfactory definition of quality for
many products, such as a car, a table fan, a grinding machine, and so on, it is not wholly
satisfactory for software products.
The modern view of quality associates a software product with several quality
factors, such as the following:

Portability :
A software product is said to be portable if it can easily be made to work in
different operating system environments, on different machines, and with other software
products. Portability is the ease of transporting a program from one hardware
configuration to another.
Usability :
Usability is the effort required to understand and operate a system. A
software product has good usability if different categories of users can easily invoke the
functions of the product.
Reusability :
A software product has good reusability if different modules of the
product can easily be reused to develop new products.
Correctness :
A software product is correct if the different requirements as specified in the
SRS document have been correctly implemented.
Maintainability :
Maintainability is the ease with which program errors can be
detected and corrected, new functions can be added to the product, and the
functionalities of the product can be modified.

Quality Control

Quality control involves the series of inspections, reviews and tests used throughout
the software process to ensure that each work product meets the requirements placed
upon it.
Quality control includes a feedback loop to the process that created the work
product; quality control is viewed as part of the manufacturing process.
A key concept of quality control is that it focuses not only on detecting defective
products and eliminating them but also on determining the causes behind the defects. The
feedback loop is essential to minimize the defects produced.
Quality control is a set of methods used by organizations to achieve
quality parameters or quality goals and continually improve the organization's
ability to ensure that a software product will meet quality goals.
Quality Control Process:

The three classes of parameters that control software quality are:


 Products
 Processes
 Resources
The total quality control process consists of:
 Plan - the stage where the quality control processes are planned.
 Do - use a defined parameter to develop the quality.
 Check - the stage to verify whether the quality parameters are met.
 Act - take corrective action if needed and repeat the work.

Quality Control characteristics:


 A process adopted to deliver a quality product to the clients at the best cost.
 The goal is to learn from other organizations so that quality becomes better
each time.
 To avoid making errors by proper planning and execution with a correct
review process.

Advantages and disadvantages of quality control

Advantages
 It can help to prevent faulty goods and services being sold.
 It is not disruptive to production: workers continue producing while
inspectors do the checking.
 As with any quality system, the business may benefit from an improved
reputation for quality, and this may increase sales.

Disadvantages

 It does not prevent waste of resources when products are faulty.


 The process of inspecting the goods or service costs money, e.g. the
wages paid to the inspectors, the cost of testing goods in the
laboratory.
 It does not encourage all workers to be responsible for quality.
Software Debugging

Once errors are identified, it is necessary to first locate the precise program
statements responsible for the errors and then to fix them.
Several approaches are available for identifying error locations during software
debugging.
The following are some of the approaches popularly adopted by programmers for
debugging:
1. Brute force method
2. Backtracking
3. Cause elimination method
4. Program slicing
1. Brute force method :
This is the least efficient method. In this approach the program is loaded
with print statements to print the intermediate values, with the hope that the printed
values will help to identify the statement in error.
2. Backtracking :
This is also a fairly common approach. In this approach, beginning from
the statement at which an error symptom is observed, the source code is traced
backwards until the error is discovered.

3. Cause elimination method :

In this approach, a list of causes which could possibly have
contributed to the error symptom is developed, and tests are conducted to eliminate
each cause.
4. Program slicing :
It is similar to backtracking; however, the search space is reduced by
defining slices.

Debugging guidelines :
Some guidelines for effective debugging are as follows:
1. Many times, debugging requires a thorough understanding of the
program design.
2. Debugging may sometimes even require a full redesign of the system.
3. Any error correction may itself introduce new errors.

Software Measure
A software measure provides a quantitative indication of the extent, amount, dimension,
capacity or size of some attribute of a product or process. Measurement is the act of
determining a measure.
Software metric :
A software metric is a quantitative measure of the degree to which a system,
component or process possesses a given attribute.
Software indicator :
A software indicator is a metric or combination of metrics that provides insight
into the software process, a software project or the product itself. An indicator provides
insight that enables the project manager or software engineer to adjust the process to
make things better.

Defect Removal Efficiency

A quality metric that provides benefit at both the project and process level is
defect removal efficiency (DRE). It is a measure of the filtering ability of quality assurance
and control activities as they are applied throughout all process framework activities.
For the project as a whole, DRE is defined in the following manner:

DRE = E / (E + D)

Here, E = number of errors found before delivery
D = number of defects found after delivery

The ideal value for DRE is 1, that is, no defects are found in the software after delivery.
DRE encourages a software project team to find as many errors as possible before
delivery.
DRE can also be used within the project to assess a team's ability to find errors
before they are passed to the next framework activity, for example from the requirements
analysis task to design. When used in this context, we redefine DRE as:

DREi = Ei / (Ei + Ei+1)

Here, Ei = number of errors found during software engineering activity i.
Ei+1 = number of errors found during software engineering activity i+1 that are traceable
to errors not discovered in activity i.

A quality objective for a software team (or individual engineer) is to achieve a DREi that
approaches 1. That is, errors should be filtered out before they are passed on to the next
activity.
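A tiny sketch with hypothetical numbers makes the formula concrete:

```python
def dre(errors_before_delivery: int, defects_after_delivery: int) -> float:
    # DRE = E / (E + D); the ideal value is 1 (no defects escape to the customer).
    return errors_before_delivery / (errors_before_delivery + defects_after_delivery)

# Hypothetical example: 90 errors found before delivery, 10 defects found after.
print(dre(90, 10))  # 0.9, i.e. 90% of the defects were filtered out before delivery
```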

Characteristics of software maintenance

Some general characteristics of maintenance projects are:

1. Software maintenance is becoming an important activity for a large number of
organizations.
2. When the hardware platform changes and a software product performs some low-level
functions, maintenance is necessary.
3. To support newer interfaces (during a change in the supported environment of a
software product), a software product may need to be maintained when the operating
system changes.
4. Every software product continues to evolve after its development through maintenance
efforts.

Type of Software Maintenance

There are three types of software maintenance:

A) Corrective Maintenance :
Corrective maintenance of a software product becomes necessary to rectify the
bugs observed while the system is in use. The term is universally used to refer to
maintenance for fault repair.

B) Adaptive Maintenance :
A software product might need maintenance when the customers need the
product to run on new platforms or new operating systems, or when they need the product
to be interfaced with new hardware or software. This is called adaptive maintenance.

C) Perfective Maintenance :
A software product needs maintenance to support the new features that users
want, to change different functionalities of the system according to customer
demands, or to enhance the performance of the system. This is called perfective
maintenance.

Actual Failure Curve

Reliability behaviour for hardware and software is very different. For example, hardware
failures are inherently different from software failures.
The changes in failure rate over the product lifetime for a typical hardware
product and a software product are shown in the figure below:
A) Hardware product :
The failure rate is high initially but decreases as the faulty components are
identified and removed. The system then enters its useful life. After some time (called the
product lifetime) the components wear out and the failure rate increases.

B) Software product :
For software, the failure rate is at its highest during the integration and testing
phases. As the system is tested, more errors are identified and removed, resulting in a
reduced failure rate. This error removal continues at a slower pace during the useful life
of the product. As the software becomes obsolete, no more error correction occurs and
the failure rate remains unchanged.

Estimate of Maintenance Cost

Boehm (1981) proposed a formula for estimating maintenance costs as part of his
COCOMO cost estimation model:

ACT = (KLOC added + KLOC deleted) / KLOC total

where ACT = annual change traffic: the fraction of a software product which undergoes
change during a typical year, either through addition or deletion;
KLOC added is the total kilo lines of source code added during maintenance;
KLOC deleted is the total kilo lines of source code deleted during maintenance.

The ACT is multiplied by the total development cost to arrive at the maintenance
cost:

Maintenance cost = ACT × development cost
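A small sketch of this estimate with hypothetical figures:

```python
def annual_change_traffic(kloc_added: float, kloc_deleted: float, kloc_total: float) -> float:
    # ACT = (KLOC added + KLOC deleted) / KLOC total
    return (kloc_added + kloc_deleted) / kloc_total

# Hypothetical example: 5 KLOC added and 3 KLOC deleted in a 100 KLOC product.
act = annual_change_traffic(5, 3, 100)  # 0.08
development_cost = 1_000_000            # hypothetical total development cost
print(act * development_cost)           # estimated annual maintenance cost: 80000.0
```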

Software Reverse Engineering

Software reverse engineering is the process of recovering the design and the
requirements specification of a product from an analysis of its code.
The purpose of reverse engineering is to facilitate maintenance work by
improving the understandability of a system and to produce the necessary documents for
a legacy system.
Reverse engineering is becoming important, since legacy software products often
lack proper documentation and are highly unstructured.
The first stage of reverse engineering usually focuses on carrying out cosmetic
changes to the code to improve its readability, structure and understandability, without
changing any of its functionalities.

The extraction of the higher-level documents proceeds as shown schematically below:

Code → Module specification → Design → Requirements specification

(Fig : a process model for reverse engineering)

After the cosmetic changes have been carried out on a legacy software product, the
process of extracting the design and the requirements specification from the code can
begin. The cosmetic changes themselves are shown schematically below:

Reformat program → Assign meaningful names → Simplify conditions → Simplify
processing → Remove errors

(Fig : cosmetic changes carried out before reverse engineering)


Software Testing Methods

 Black Box testing :

The technique of testing without having any knowledge of the interior
workings of the application is called black box testing. The tester is oblivious to the
system architecture and does not have access to the source code. Typically, while
performing a black box test, a tester will interact with the system's user interface by
providing inputs and examining outputs, without knowing how and where the inputs are
worked upon.
It is carried out to test the functionality of the program. It is also called
"behavioural" testing. The tester in this case has a set of input values and the respective
desired results. On providing an input, if the output matches the desired result, the
program is deemed "OK", and problematic otherwise.
In this testing method, the design and structure of the code are not known to
the tester; testing engineers and end users conduct this test on the software.
Advantages :
1. Well suited and efficient for large code segments.
2. Code access is not required.
3. Clearly separates the user's perspective from the developer's perspective
through visibly defined roles.
4. A large number of moderately skilled testers can test the application with
no knowledge of the implementation, programming language or operating
system.
Disadvantages :
1. Limited coverage, since only a selected number of test scenarios are
actually performed.
2. Inefficient testing, due to the fact that the tester has only limited
knowledge about the application.
3. Blind coverage, since the tester cannot target specific code segments
or error-prone areas.
4. The test cases are difficult to design.

Black Box testing techniques :

1. Equivalence class :
The input is divided into similar classes. If one element of a class
passes the test, it is assumed that the whole class passes.
2. Boundary values :
The input is divided into higher and lower end values. If these values
pass the test, it is assumed that all values in between may pass too.
3. Cause-effect graphing :
In both the previous methods, only one input value at a time is tested.
Cause-effect graphing (cause = input, effect = output) is a testing technique where
combinations of input values are tested in a systematic way.
4. Pairwise testing :
The behaviour of software depends on multiple parameters. In
pairwise testing, the multiple parameters are tested pairwise with their different values.
5. State based testing :
The system changes state on provision of input. These systems are
tested based on their states and inputs.
Input → [Black box] → Output

(Fig : Black Box Testing)
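The sketch below illustrates the equivalence-class and boundary-value techniques on a
hypothetical grade() function; the tester exercises it only through inputs and expected
outputs, never looking at its internals. The function, its specification (0-39 fails, 40-100
passes) and the chosen values are all assumptions for illustration.

```python
import unittest

# Hypothetical function under test. The black-box tester knows only its
# specification (0-39 -> "fail", 40-100 -> "pass"), not its implementation.
def grade(marks: int) -> str:
    return "pass" if marks >= 40 else "fail"

class BlackBoxGradeTest(unittest.TestCase):
    def test_equivalence_classes(self):
        # One representative value stands in for each whole input class.
        self.assertEqual(grade(20), "fail")  # representative of 0..39
        self.assertEqual(grade(70), "pass")  # representative of 40..100

    def test_boundary_values(self):
        # Values on either side of the 39/40 boundary.
        self.assertEqual(grade(39), "fail")
        self.assertEqual(grade(40), "pass")

if __name__ == "__main__":
    unittest.main()
```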

 White Box testing :

White box testing is the detailed investigation of the internal logic and structure
of the code. White box testing is also called glass testing or open box testing. In order to
perform white box testing on an application, a tester needs to know the internal workings
of the code.
The tester needs to look inside the source code and find out which
unit/chunk of the code is behaving inappropriately.

(Fig : white box testing)

It is conducted to test the program and its implementation, in order to improve
code efficiency or structure. It is also known as structural testing.
In this method, the design and structure of the code are known to the tester.
Programmers of the code conduct this test on the software.

White Box testing techniques :

1. Control flow testing :
The purpose of control flow testing is to set up test cases which
cover all statements and branch conditions. The branch conditions are tested for being
true and false, so that all statements can be covered.
2. Data flow testing :
This testing technique emphasizes covering all the data variables
included in the program. It tests where the variables were declared and defined and where
they were used or changed.
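As an illustration of control flow testing, the sketch below chooses test cases so that the
single branch condition in a hypothetical abs_value() function is exercised both as true
and as false. The function and test names are assumptions for illustration.

```python
import unittest

# Hypothetical function under test. The white-box tester can see this code,
# so test cases are designed to drive the "x < 0" branch both ways.
def abs_value(x: int) -> int:
    if x < 0:
        return -x
    return x

class ControlFlowTest(unittest.TestCase):
    def test_branch_taken(self):
        self.assertEqual(abs_value(-5), 5)  # condition x < 0 is true

    def test_branch_not_taken(self):
        self.assertEqual(abs_value(3), 3)   # condition x < 0 is false

if __name__ == "__main__":
    unittest.main()
```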
Advantages :
1. As the tester has knowledge of the source code, it becomes very easy to
find out which type of data can help in testing the application effectively.
2. It helps in optimizing the code.
3. Extra lines of code, which can bring in hidden defects, can be removed.
4. Due to the tester's knowledge of the code, maximum coverage is
attained during test scenario writing.

Disadvantages :
1. Due to the fact that a skilled tester is needed to perform white box
testing, the costs are increased.
2. Sometimes it is impossible to look into every nook and corner to find
hidden errors that may create problems, as many paths will go untested.
3. It is difficult to maintain white box testing, as it requires specialized
tools like code analyzers and debugging tools.

 Comparison of Testing Methods :


Black Box Testing | White Box Testing
1. The internal workings of an application need not be known. | 1. The tester has full knowledge of the internal workings of the application.
2. Also known as closed-box testing, data-driven testing or functional testing. | 2. Also known as clear-box testing, structural testing or code-based testing.
3. Performed by end-users and also by testers and developers. | 3. Normally done by testers and developers.
4. The least exhaustive and time-consuming type of testing. | 4. The most exhaustive and time-consuming type of testing.
5. Not suited for algorithm testing. | 5. Suited for algorithm testing.
6. Can only be done by the trial-and-error method. | 6. Data domains and internal boundaries can be better tested.

Software validation

Validation is the process of examining whether or not the software satisfies the
user requirements. It is carried out at the end of the SDLC. If the software matches the
requirements for which it was made, it is validated.
 Validation ensures that the product under development meets the user requirements.
 Validation emphasizes user requirements.
Software verification

Verification is the process of confirming that the software meets the business
requirements and is developed adhering to the proper specifications and methodologies.
 Verification ensures that the product being developed is according to the design
specifications.
 Verification concentrates on the design and system specifications.

Software Testing Levels

 Unit testing :
This type of testing is performed by developers before the setup is handed over
to the testing team to formally execute the test cases. Unit testing is performed by the
respective developers on the individual units of source code in their assigned areas. The
developers use test data that is different from the test data of the quality assurance team.
The goal of unit testing is to isolate each part of the program and show that the
individual parts are correct in terms of requirements and functionality.

 Integration testing :
Integration testing is defined as the testing of combined parts of an
application to determine if they function correctly. Integration testing can be done in two
ways: bottom-up integration testing and top-down integration testing.

 System testing :
System testing tests the system as a whole. Once all the components are
integrated, the application as a whole is tested rigorously to see that it meets the specified
quality standards. This type of testing is performed by a specialized testing team.
System testing is important because of the following reasons:
 System testing is the first step in the software development life cycle
where the application is tested as a whole.
 The application is tested thoroughly to verify that it meets the functional
and technical specifications.
 The application is tested in an environment that is very close to the
production environment where the application will be deployed.
 System testing enables us to test, verify, and validate both the business
requirements as well as the application architecture.

 Alpha testing :
This test is the first stage of testing and will be performed amongst the teams
(developer and QA teams). Unit testing, integration testing and system testing, when
combined together, are known as alpha testing. During this phase, the following aspects
will be tested in the application:
 Spelling mistakes
 Broken links
 Cloudy directions
 Beta testing :
This test is performed after alpha testing has been successfully performed. In
beta testing, a sample of the intended audience tests the application. Beta testing is also
known as pre-release testing.
In this phase the audience will be testing the following:
 Users will install, run the application and send their feedback to the
project team.
 Typographical errors, confusing application flow and even crashes.
 On getting the feedback, the project team can fix the problems before
releasing the software to the general public.
 The more issues we fix that solve real user problems, the higher the
quality of our application will be.
 Having a higher quality application when we release it to the general
public will increase customer satisfaction.

 Verification vs validation :

Software verification | Software validation
1. Verification addresses the concern: "Are you building it right?" | 1. Validation addresses the concern: "Are you building the right thing?"
2. Ensures that the software system meets all the functionality. | 2. Ensures that the functionalities meet the intended behaviour.
3. Verification takes place first and includes checking of documentation, code, etc. | 3. Validation occurs after verification and mainly involves checking of the overall product.
4. Done by developers. | 4. Done by testers.
5. It involves static activities. | 5. It involves dynamic activities.
6. It is an objective process. | 6. It is a subjective process.


Software Design Strategies

Software design is the process of conceptualizing the software requirements into a
software implementation. Software design takes the user requirements as challenges and
tries to find an optimum solution.
While the software is being conceptualized, a plan is chalked out to find the best
possible design for implementing the intended solution.

 Structured design :
Structured design is a conceptualization of the problem into several well-organized
elements of the solution. It is basically concerned with the solution design.
A benefit of structured design is that it gives a better understanding of how the
problem is being solved. Structured design also makes it simpler for the designer to
concentrate on the problem more accurately.
Structured design is mostly based on the 'divide and conquer' strategy, where a
problem is divided into several small problems and each small problem is individually
solved until the whole problem is solved.
A good structured design always follows some rules for communication among
multiple modules, namely:
Cohesion --- grouping of all functionally related elements.
Coupling --- communication between different modules.
A good structured design has high cohesion and low coupling.

 Function oriented design (FOD) :

In function-oriented design, the system is comprised of many smaller subsystems
known as functions. These functions are capable of performing significant tasks in the
system. The system is considered as the top view of all functions.
The design mechanism divides the whole system into smaller functions, which
provides means of abstraction by concealing the information and their operation. These
functional modules can share information among themselves by means of information
passing and by using information available globally.
Function-oriented design works well where the system state does not matter and
the program/functions work on input rather than on a state.

 Object oriented design :

Object-oriented design works around the entities and their characteristics instead
of the functions involved in the software system. This design strategy focuses on entities
and their characteristics.
Here, the objects of the problem domain and the solution domain are identified
along with the relationships that exist among them, and each object is further worked
upon. Because of this, the OOD approach takes less development time and effort and
improves the maintainability of the product.
 Difference between FOD and OOD:

FOD | OOD
1. The basic abstractions, which are given to the user, are real-world functions. | 1. The basic abstractions are not real-world functions but data abstractions, in which real-world entities are represented.
2. Functions are grouped together, by which a higher-level function is obtained. An example of this technique is SA/SD. | 2. Functions are grouped together on the basis of the data they operate on, since classes are associated with their methods.
3. In this approach, the state information is often represented in a centralized shared memory. | 3. In this approach, the state information is not represented in a centralized memory but is distributed among the objects of the system.
4. The approach is mainly used for computation-sensitive applications. | 4. The approach is mainly used for evolving systems that mimic a business process.
5. We decompose at the function/procedure level. | 5. We decompose at the class level.
6. Top-down approach. | 6. Bottom-up approach.

 Software design approaches :

There are two generic approaches to software design:
A) Top-down design :
A system is composed of more than one subsystem, and each subsystem contains
a number of components. Further, these subsystems and components may have their own
sets of subsystems and components, creating a hierarchical structure in the system.
Top-down design takes the whole software system as one entity and
then decomposes it into more than one subsystem or component, based on some
characteristics.
Each subsystem or component is then treated as a system and
decomposed further. This process keeps on running until the lowest level of the
top-down hierarchy is achieved.
Top-down design is more suitable when the software solution needs
to be designed from scratch and specific details are unknown.

B) Bottom-up design :
The bottom-up design model starts with the most specific and basic
components. It proceeds by composing higher-level components using basic or lower-
level components. It keeps creating higher-level components until the desired system
evolves as one single component. With each higher level, the amount of abstraction
increases.
The bottom-up strategy is more suitable when a system needs to be
created from some existing system, where the basic primitives can be used in the newer
system.
Neither the top-down nor the bottom-up approach is practical
on its own; instead, a good combination of both is used.

Quality Assurance

Quality assurance consists of the auditing and reporting functions of management.
A major aim of software quality assurance is to help an organization develop high-quality
software products in a repeatable manner. The quality of the developed software and the
ease of development are important considerations in quality assurance.
The basic premise of modern quality assurance is that if an organization's
processes are good, then its products are bound to be of good quality.

Software Maintenance process model

 Necessity :
Usually, for complex maintenance projects for legacy systems, the software process
can be represented by a reverse engineering cycle followed by a forward engineering
cycle, with an emphasis on as much reuse as possible of the existing code and documents.
Since the scope (activities required) of different maintenance projects varies
widely, no single maintenance process model can be developed to suit every kind of
maintenance project.

 Types :
Two broad categories of process models can be proposed.
Model 1 : This maintenance process is presented graphically below:

Gather change requirements → Analyze change requirements → Devise code change
strategies → Apply code change strategies to the old code → Update documents;
integrate and test

Model 1
This model is preferred for projects involving small reworks, where the code is
changed directly and the changes are reflected in the relevant documents later.

Model 2 : This maintenance model is presented graphically below:

Change requirements → Reverse engineering (code → module specifications → design →
requirements specification) → Forward engineering (new requirements specification →
design → module specifications → code)

Model 2

The second model is preferred for projects where the amount of rework
required is significant. This approach can be represented by a reverse engineering cycle
followed by a forward engineering cycle; such an approach is also known as software
re-engineering.

 Advantages : (model 2)
1. It produces a more structured design than the original product had.
2. It produces good documentation.
3. It very often results in increased efficiency.

 Disadvantages :
1. It is costlier than the first model.

Data flow diagram

A data flow diagram (DFD) is a graphical representation of the flow of data in an
information system. It is capable of depicting incoming data flow, outgoing data flow and
stored data. The DFD does not mention anything about how data flows through the
system.
There is a prominent difference between a DFD and a flowchart. A flowchart
depicts the flow of control in program modules, whereas a DFD depicts the flow of data in
the system at various levels. A DFD does not contain any control or branch elements.

 Types of DFD :
Data flow diagrams are either logical or physical.
A) Logical :
This type of DFD concentrates on the system processes and the flow of data in
the system, i.e. how data moves between different entities.
B) Physical :
This type of DFD shows how the data flow is actually implemented in
the system. It is more specific and closer to the implementation.

 DFD components :
A DFD can represent the source, destination, storage and flow of data using the
following set of components:

 Entities :
Entities are the source and destination of information data. Entities are
represented by rectangles with their respective names.
 Process :
Activities and actions taken on the data are represented by circles or rounded
rectangles.
 Data storage :
There are two variants of data storage: it can either be represented as a
rectangle with both smaller sides absent or as an open-sided rectangle with only one
side missing.
 Data flow :
Movement of data is shown by pointed arrows. Data movement is shown
from the base of the arrow as its source towards the head of the arrow as its destination.

 Levels of DFD :
 Level 0 :
The highest abstraction level DFD is known as the level 0 DFD, which depicts the
entire information system as one diagram, concealing all the underlying details. Level 0
DFDs are also known as context-level DFDs.

(Fig : Level 0 DFD of an online shopping system: customers place orders with the system
and receive deliveries.)
 Level 1 :
The level 0 DFD is broken down into a more specific level 1 DFD. The level 1 DFD
depicts basic modules in the system and the flow of data among the various modules. The
level 1 DFD also mentions basic processes and sources of information.

(Fig : Level 1 DFD of the online shopping system, showing modules such as customer
verification, order processing, accounts/finance, stores, sales and delivery, and the data
flows among them.)

 Level 2 :
At this level, the DFD shows the data flows inside the modules mentioned in level 1.
Higher-level DFDs can be transformed into more specific lower-level DFDs with a
deeper level of understanding, until the desired level of specification is achieved.

Software Quality Attributes(additional)

 Security :
Security is the ability of the software to remain protected from unauthorized access.
This includes both change access and view access.

 Integrity :
Integrity comes with security. System integrity should be sufficient to prevent
unauthorized access to system functions, prevent information loss, ensure that the
software is protected from virus infection, and protect the privacy of data entered into
the system.

 Maintainability :
Maintainability is the ease with which the software can be repaired, corrected and
enhanced.

 Flexibility :
Flexibility is the ability of the software to adapt when external changes occur.

 Robustness :
Robustness is defined as the ability of a software product to cope with unusual
situations.

 Efficiency :
Efficiency is the ability of the software to do the required processing with the
least amount of hardware.

 Testability :
The system should be easy to test and find defects in. If required, it should be easy
to divide it into different modules for testing.

COCOMO (A Heuristic Estimation Technique)

COCOMO (COnstructive COst MOdel) was proposed by Boehm
(1981). Any software development project can be classified into three categories based
on the development complexity:

 Organic :
A development project can be considered to be of the organic type if the project
deals with developing a well-understood application program, the size of the development
team is reasonably small, and the team members are experienced in developing similar
types of projects.

 Semidetached :
A development project can be considered to be of the semidetached type if the
development team consists of a mixture of experienced and inexperienced staff.

 Embedded :
A development project can be considered to be of the embedded type if the
software being developed is strongly coupled to complex hardware.
According to Boehm, software cost estimation should be done through three stages:
A. Basic COCOMO
B. Intermediate COCOMO
C. Complete COCOMO
A. Basic COCOMO :
Computes software development effort and cost as a function of program size
expressed in terms of LOC.
The basic COCOMO takes the following form:

E = Ab (KLOC)^Bb person-months
D = Cb (E)^Db months

Here,
E = estimated effort
D = development time
The coefficients Ab, Bb, Cb and Db for the three modes are:

Software project | Ab | Bb | Cb | Db
Organic | 2.4 | 1.05 | 2.5 | 0.38
Semidetached | 3.0 | 1.12 | 2.5 | 0.35
Embedded | 3.6 | 1.20 | 2.5 | 0.32
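The following Python sketch applies the basic COCOMO equations with the coefficients
from the table above; the 32 KLOC organic project is a hypothetical example.

```python
# Coefficients (Ab, Bb, Cb, Db) for the three project classes, from the table above.
COEFFS = {
    "organic":      (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc: float, mode: str):
    ab, bb, cb, db = COEFFS[mode]
    effort = ab * kloc ** bb       # E = Ab * (KLOC)^Bb, in person-months
    duration = cb * effort ** db   # D = Cb * (E)^Db, in months
    return effort, duration

# Hypothetical example: a 32 KLOC organic project.
e, d = basic_cocomo(32, "organic")
print(round(e, 1), round(d, 1))    # about 91 person-months over about 14 months
```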

Advantages :
1. This model is good for quick, early, rough order-of-magnitude estimates of
software projects.

Disadvantages :
1. The accuracy of this model is limited.
2. The estimates of this model are within a factor of 1.3 only 29% of the time.

B. Intermediate COCOMO :
Computes effort as a function of program size and a set of cost drivers
that include subjective assessments of product attributes, hardware attributes, personnel
attributes and project attributes.
The basic model is extended to consider a set of cost driver attributes
grouped into four categories -----
 Product attributes :
1. Required software reliability.
2. Size of the application database.
3. Complexity of the product.
 Hardware attributes :
1. Run-time performance constraints.
2. Memory constraints.
3. Volatility of the virtual machine environment.
 Personnel attributes :
1. Analyst capability.
2. Software engineering capability.
3. Programming language experience.
 Project attributes :
1. Use of software tools.
2. Required development schedule.
3. Application of software engineering methods.
The intermediate COCOMO takes the form -----

E = ai (KLOC)^bi × EAF

Here, E = estimated effort (in person-months)
EAF = effort adjustment factor, obtained as the product of the selected cost driver multipliers.
The values of ai and bi for the various classes of software projects are ------

Software project     ai    bi
Organic              3.2   1.05
Semidetached         3.0   1.12
Embedded             2.8   1.20

Advantages :
1. This model can be applied to almost the entire software product for easy and
rough cost estimation during the early stages.
2. It can also be applied at the software product component level for
obtaining a more accurate cost estimate.
Disadvantages :
1. The effort multipliers are not dependent on phases.
2. A product with many components is difficult to estimate.
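
As a sketch of how the EAF enters the computation ( the multiplier values 1.15 for high required reliability and 1.19 for low analyst capability are illustrative assumptions ; Boehm tabulates the full set of multipliers ) :

    import math

    # Nominal-effort coefficients for Intermediate COCOMO.
    NOMINAL = {
        "organic":      (3.2, 1.05),
        "semidetached": (3.0, 1.12),
        "embedded":     (2.8, 1.20),
    }

    def intermediate_cocomo(kloc, mode, multipliers):
        a, b = NOMINAL[mode]
        eaf = math.prod(multipliers)   # effort adjustment factor
        return a * (kloc ** b) * eaf   # effort in person-months

    # e.g. high required reliability (1.15), low analyst capability (1.19)
    print(intermediate_cocomo(32.0, "organic", [1.15, 1.19]))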

C. Complete COCOMO :
Complete COCOMO incorporates all characteristics of the intermediate
version, with an assessment of the impact of the cost drivers on each step.
In complete COCOMO the effort is calculated as a function of program
size and a set of cost drivers.
The 6 phases of complete COCOMO are -------
i. Plan and requirements
ii. System design
iii. Detailed design
iv. Module code and test
v. Integration and test
vi. Cost constructive model

 Computer program vs software

Computer program                                   Software
1. A program is an instance of an algorithm        1. Software is a collection of individual
   written in some programming language such          programs well packaged to run on a
   as C, C++, Java, Python etc.                       computer.
2. It can be files or even punch cards.            2. It typically consists of files.
3. Programs existed before software.               3. Software came after programs.
4. It is often written for oneself.                4. It is developed for a third party.
5. It consists of coding only.                     5. It consists of not only coding but also
                                                      documentation and modules.
Cohesion

Cohesion is a measure of the functional strength of a module.


A module having high cohesion and low coupling is said to be
functionally independent of other modules.

Advantages :
1. Error isolation :
Functional independence reduces error propagation. If a module is
functionally independent, its degree of interaction with other modules is less. Therefore, any
error existing in a module would not directly affect the other modules.
2. Scope for reuse :
Reuse of a module becomes possible because each module performs
some well defined and precise function and the interface of the module with other modules is
simple and minimal.
3. Understandability :
Complexity of the design is reduced because the different modules are
more or less independent of each other and can be understood in isolation.

Classification of cohesiveness :
1. Coincidental cohesion :
A module is said to have coincidental cohesion if it performs a
set of tasks that relate to each other very loosely, if at all. In this case, the module contains
a random collection of functions.
2. Logical cohesion :
A module is said to be logically cohesive if all elements of the
module perform similar operations, e.g. error handling, data input, data output etc. An
example of logical cohesion is the case where a set of print functions generating different
output reports are arranged into a single module.
3. Temporal cohesion :
When a module contains functions that are related by the fact
that all the functions must be executed in the same time span, the module is said to exhibit
temporal cohesion. The set of functions responsible for initialization, start-up, shut-down of
some process etc. exhibit temporal cohesion.
4. Procedural cohesion :
A module is said to possess procedural cohesion if all the
functions of the module are part of a procedure.
5. Communicational cohesion :
A module is said to have communicational cohesion if all the
functions of the module refer to or update the same data structure, e.g. the set of
functions defined on an array or a stack.
6. Sequential cohesion :
A module is said to possess sequential cohesion if the
elements of the module form parts of a sequence, where the output from one element forms
the input to the next.
7. Functional cohesion :
Functional cohesion is said to exist if the different elements of
a module cooperate to achieve a single function.

coincidental → logical → temporal → procedural → communicational → sequential → functional
( low to high cohesion )

Fig : classification of cohesion
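
To make the two ends of this spectrum concrete, here is a small illustrative sketch in Python ( all function names are invented for this example ) :

    # coincidental cohesion: a random collection of unrelated functions
    # thrown together into one "utilities" module.
    def print_banner(text):
        print("*" * 10, text, "*" * 10)

    def fahrenheit_to_celsius(f):
        return (f - 32) * 5 / 9

    # functional cohesion: every element cooperates to achieve a single
    # function -- here, computing an integer square root by bisection.
    def int_sqrt(n):
        lo, hi = 0, n
        while lo < hi:
            mid = (lo + hi + 1) // 2
            if mid * mid <= n:
                lo = mid
            else:
                hi = mid - 1
        return lo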


Coupling

The primary characteristics of a neat module decomposition are high cohesion and
low coupling.

 Definition :
The coupling between two modules is a measure of the degree of interdependence or
interaction between the two modules.
The degree of coupling between two modules depends on their interface complexity.
 Classification of coupling :
Five types of coupling can occur between any two modules.
1. Data coupling :
Two modules are data coupled if they communicate using an
elementary data item that is passed as a parameter between the two, e.g. an integer, a
float, a character etc.
2. Stamp coupling :
Two modules are stamp coupled if they communicate using a
composite data item such as a record in PASCAL or a structure in C.
3. Control coupling :
Control coupling exists between two modules if data from one
module is used to direct the order of instruction execution in another. An example of control
coupling is a flag set in one module and tested in another module.
4. Common coupling :
Two modules are common coupled if they share some global data
items.
5. Content coupling :
Content coupling exists between two modules if their code is
shared, e.g. a branch from one module into another module.

Data → Stamp → Control → Common → Content
( low to high coupling )

Fig : classification of coupling
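
The following illustrative Python sketch shows what the milder coupling types look like in code ( all names are invented for this example ) ; content coupling is omitted because sharing code across module boundaries is exactly what a well structured design avoids :

    from dataclasses import dataclass

    # data coupling: communication through an elementary data item.
    def circle_area(radius):           # a single float is passed
        return 3.14159 * radius * radius

    # stamp coupling: communication through a composite data item.
    @dataclass
    class Order:
        item: str
        qty: int
        price: float

    def order_total(order):            # a whole structure is passed
        return order.qty * order.price

    # control coupling: a flag set by the caller directs execution here.
    def report(data, summary_only):
        if summary_only:
            print(len(data), "records")
        else:
            print(data)

    # common coupling: several functions share a global data item.
    CONFIG = {"debug": False}

    def enable_debug():
        CONFIG["debug"] = True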

Characteristics of software evolution or Lehman’s laws

Lehman and Belady have studied the characteristics of evolution of several
software products (1980). They have expressed their observations in the form of laws. Their
important laws are given below --------

 Lehman’s 1st law :

A software product must change continually or become progressively less
useful.
This law clearly shows that every software product must undergo maintenance
irrespective of how well it might have been designed.
 Lehman’s 2nd law :
The structure of a program tends to degrade as more and more maintenance
is carried out on it.
The reason for the degraded structure is that when we add a function during
maintenance, we build on top of an existing program, often in a way that the existing program
was not intended to support. If we do not redesign the system, the additions will be more
complex than they should be.

 Lehman’s 3rd law :

Over a program’s lifetime, its rate of development is approximately constant.
This law states that the rate at which code is written or modified is approximately
the same during development and maintenance.

Different Techniques for Gathering Requirements

It is difficult to build a solution if we don’t know the requirements, so the
requirements are first gathered from the client. Many techniques are available for gathering
requirements.

 One to one interviews :

The most common technique for gathering requirements is to sit down with
the clients and ask them what they need. The discussion should be planned out ahead of time
based on the type of requirements. Generally, first ask open ended questions to get
the interviewee talking, and then ask probing questions to uncover requirements.
 Group interviews :
Group interviews are similar to the previous technique, except that more than one
person is being interviewed --- usually two to four. These interviews work well when everyone
is at the same level or has the same role. Group interviews can uncover a richer set of
requirements in a shorter period of time.
 Facilitated sessions :
In a facilitated session, we bring a larger group ( five or more ) together
for a common purpose. In this case, we are trying to gather a set of common requirements
from the group in a faster manner.

 Joint Application Development ( JAD ) :

JAD sessions are similar to general facilitated sessions; however, the group
typically stays in the session until the session objectives are completed.

 Questionnaires :
Questionnaires are much more informal, and they are good tools to gather
requirements from stakeholders in remote locations. Questionnaires can also be used when
we have to gather input from dozens, hundreds or thousands of people.

 Prototyping :
In this approach, we gather preliminary requirements that we use to build an
initial version of the solution - a prototype. We show this to the client, who then gives us
additional requirements. We change the application and cycle around with the client again.
This repetitive process continues until the product meets the critical mass of business needs.

 Use cases :
Use cases are basically stories that describe how discrete processes work. The
stories include people and describe how the solution works from a user perspective.

 Request for proposal ( RFP ) :

An RFP is a list of requirements which is compared against our own capabilities to
determine how close a match we are to the client’s needs.

DFD

Importance of DFD :
1. To give a pictorial representation of the software.
2. It makes it easy for the programmer to understand the software.
3. DFD models of a system graphically represent how each input data item is
transformed into its corresponding output data.

Application :
DFDs are a common way of modeling data flow for software development. For
example, a DFD for a word processing program might show the way the software processes the
data that the user enters by pressing keys on the keyboard to produce the letters on the
screen.

Q. Design the DFD of a typical Hospital Management System.

Fig : Level 0 DFD of a Hospital Management System. The central process ( 0.0 Hospital Management System ) is connected to three external entities : the Patient, who registers into the system and gets a PNR, visit number and reports ; the Administrator, who gets reports, monitors patient data and can add and modify the records ; and the Doctor, who monitors the description of the patient’s data and gets patient reports.

Q. Design the DFD of a typical student admission system :

Level 0 DFD :
Fig : the Administrative User sends student details to the Student Information System, which stores them in a MySQL database and returns the final report.
Level 1 DFD :
Fig : the Student Information System exchanges data with three external entities : the Student ( student information, attendance info, degree details ), the Faculty ( membership details, remuneration, marks ) and the Administration ( course section data, view reports ), using data stores such as Student Attendance and Faculty.
Difference between multitasking and multithreading :

Multitasking                                       Multithreading

1. In multitasking, several programs are           1. In multithreading, multiple threads execute
   executed concurrently, e.g. a Java compiler        either the same or different parts of a
   and a Java IDE.                                    program.
2. It is less granular than multithreading.        2. It is more granular than multitasking.
3. In multitasking, the CPU switches between       3. In multithreading, the CPU switches between
   multiple programs to complete their                multiple threads of the same program.
   execution in real time.
4. It is heavy compared to multithreading.         4. It is light compared to multitasking.
5. It is more costly.                              5. It is less costly.
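
As a minimal illustration of multithreading ( a sketch ; the function and thread names are invented ), the following Python program runs two threads that execute different parts of the same program concurrently :

    import threading

    def download(name):
        print(f"{name}: downloading ...")

    def render(name):
        print(f"{name}: rendering ...")

    # two threads of the same program, scheduled concurrently by the CPU
    t1 = threading.Thread(target=download, args=("worker-1",))
    t2 = threading.Thread(target=render, args=("worker-2",))
    t1.start(); t2.start()
    t1.join(); t2.join()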
