
Shri Vaishnav Vidyapeeth Vishwavidyalaya

Practical
File

SOFTWARE ENGINEERING

III SEM

Submitted to: Reena Gupta Mam


Submitted by: Mansi Singh Thakur
DEPARTMENT OF COMPUTER APPLICATION

2021-22

List of Experiments

Student Name: Mansi Singh Thakur    Enrollment No.: 2004MCA0008163

• To study the Software Development Life Cycle.
• To study Data Flow Diagrams (DFDs) and levels in DFDs.
• To understand and apply good Software Analysis and Design practices.
• To create a Data Dictionary for some applications.
• To use various information gathering tools (Questionnaire, Interview, On-Site Survey).
• To choose suitable software development process models for developing different applications.
• To perform a Feasibility Study and to create a Feasibility Report for applications.
• To make the decision whether to buy, lease, or develop the software.
• To understand and create Use Case Diagrams.
• To study Function Point Analysis.
• To devise Test Cases for software testing, black-box and white-box testing, and different types of testing.
• To study Risk Management during software development.
• To assure Quality of Software, Statistical Software Quality Assurance, and Reliability of Software.
• To understand and apply concepts of Project Management.
• Case study (MIS and DSS).
EXPERIMENT: 1

AIM: To Study the Software Development Life Cycle.

SOFTWARE DEVELOPMENT LIFE CYCLE:


Software Development Life Cycle (SDLC) is a process used by the software industry to
design, develop and test high-quality software. The SDLC aims to produce high-quality
software that meets or exceeds customer expectations and reaches completion within time
and cost estimates.
SDLC is the acronym of Software Development Life Cycle.
It is also called the Software Development Process.
SDLC is a framework defining tasks performed at each step in the software development
process.
ISO/IEC 12207 is an international standard for software life-cycle processes. It aims to be
the standard that defines all the tasks required for developing and maintaining software.
What is SDLC?
SDLC is a process followed for a software project, within a software organization. It
consists of a detailed plan describing how to develop, maintain, replace and alter or
enhance specific software. The life cycle defines a methodology for improving the quality
of software and the overall development process.
The following figure is a graphical representation of the various stages of a typical SDLC.
A typical Software Development Life Cycle consists of the following stages −
Stage 1: Planning and Requirement Analysis
Requirement analysis is the most important and fundamental stage in SDLC. It is
performed by the senior members of the team with inputs from the customer, the sales
department, market surveys and domain experts in the industry. This information is then
used to plan the basic project approach and to conduct product feasibility study in the
economical, operational and technical areas.
Planning for the quality assurance requirements and identification of the risks associated
with the project is also done in the planning stage. The outcome of the technical feasibility
study is to define the various technical approaches that can be followed to implement the
project successfully with minimum risks.
Stage 2: Defining Requirements
Once the requirement analysis is done the next step is to clearly define and document the
product requirements and get them approved from the customer or the market analysts.
This is done through an SRS (Software Requirement Specification) document which
consists of all the product requirements to be designed and developed during the project
life cycle.
Stage 3: Designing the Product Architecture
SRS is the reference for product architects to come out with the best architecture for the
product to be developed. Based on the requirements specified in SRS, usually more than
one design approach for the product architecture is proposed and documented in a DDS -
Design Document Specification.
This DDS is reviewed by all the important stakeholders and, based on various parameters
such as risk assessment, product robustness, design modularity, and budget and time
constraints, the best design approach is selected for the product.
A design approach clearly defines all the architectural modules of the product along with
their communication and data flow representation with the external and third-party modules
(if any). The internal design of all the modules of the proposed architecture should be
clearly defined down to the minutest details in the DDS.
Stage 4: Building or Developing the Product
In this stage of SDLC the actual development starts and the product is built. The
programming code is generated as per DDS during this stage. If the design is performed in
a detailed and organized manner, code generation can be accomplished without much
hassle.
Developers must follow the coding guidelines defined by their organization and
programming tools like compilers, interpreters, debuggers, etc. are used to generate the
code. Different high level programming languages such as C, C++, Pascal, Java and PHP
are used for coding. The programming language is chosen with respect to the type of
software being developed.
Stage 5: Testing the Product
This stage is usually a subset of all the stages, as in modern SDLC models testing
activities are involved in all stages of the SDLC. However, this stage refers to the
testing-only stage of the product, where product defects are reported, tracked, fixed and
retested until the product reaches the quality standards defined in the SRS.
Stage 6: Deployment in the Market and Maintenance
Once the product is tested and ready to be deployed it is released formally in the
appropriate market. Sometimes product deployment happens in stages as per the business
strategy of that organization. The product may first be released in a limited segment and
tested in the real business environment (UAT- User acceptance testing).
Then, based on the feedback, the product may be released as it is or with suggested
enhancements in the targeted market segment. After the product is released in the market,
its maintenance is done for the existing customer base.
The development models are the various processes or methodologies that are being
selected for the development of the project depending on the project’s aims and goals.
There are many development life cycle models that have been developed in order to
achieve different required objectives. The models specify the various stages of the process
and the order in which they are carried out.

The selection of a model has a very high impact on the testing that is carried out. It defines
the what, where and when of our planned testing, influences regression testing and largely
determines which test techniques to use.

There are various Software development models or methodologies. They are as follows:

Waterfall Model
Prototype Model
Incremental Model
Iterative Model
Spiral Model
WATERFALL MODEL:

The Waterfall Model was the first process model to be introduced. It is very simple to
understand and use. In a waterfall model, each phase must be completed fully before the
next phase can begin. At the end of each phase, a review takes place to determine if the
project is on the right path and whether to continue or discard the project. In the
waterfall model, phases do not overlap. The Waterfall Model contains the following stages:

Requirement gathering and analysis


System Design
Implementation
Testing
Deployment of system
Maintenance.
DIAGRAM OF WATERFALL MODEL:

ADVANTAGES OF WATERFALL MODEL:

Simple and easy to understand and use.


Easy to manage due to the rigidity of the model – each phase has specific deliverables
and a review process.
Phases are processed and completed one at a time.
Works well for smaller projects where requirements are very well understood.
DISADVANTAGES OF WATERFALL MODEL:

Once an application is in the testing stage, it is very difficult to go back and change
something that was not well-thought out in the concept stage.
No working software is produced until late during the life cycle.
High amounts of risk and uncertainty.
Not a good model for complex and object-oriented projects.
Poor model for long and ongoing projects.
Not suitable for the projects where requirements are at a moderate to high risk of
changing.

WHEN TO USE THE WATERFALL MODEL:

Requirements are very well known, clear and fixed.


Product definition is stable.
Technology is understood.
There are no ambiguous requirements
Ample resources with required expertise are available freely

The project is short.

PROTOTYPE MODEL:

The basic idea here is that instead of freezing the requirements before design or coding
can proceed, a throwaway prototype is built to understand the requirements. This
prototype is developed based on the currently known requirements. By using this
prototype, the client can get an “actual feel” of the system, since the interactions with the
prototype can enable the client to better understand the requirements of the desired
system. Prototyping is an attractive idea for complicated and large systems for which there
is no manual process or existing system to help determine the requirements. The
prototypes are usually not complete systems and many of the details are not built into the
prototype. The goal is to provide a system with overall functionality.

DIAGRAM OF PROTOTYPE MODEL:


ADVANTAGES OF PROTOTYPE MODEL:

Users are actively involved in the development.


Since in this methodology a working model of the system is provided, the users get a better
understanding of the system being developed.
Errors can be detected much earlier.
Quicker user feedback is available leading to better solutions.
Missing functionality can be identified easily.

DISADVANTAGES OF PROTOTYPE MODEL:

Leads to an implement-and-then-repair way of building systems.


Practically, this methodology may increase the complexity of the system as the scope of the
system may expand beyond original plans.
An incomplete application may cause the application not to be used as the full system was
designed.

Incomplete or inadequate problem analysis.

WHEN TO USE PROTOTYPE MODEL:

Prototype model should be used when the desired system needs to have a lot of
interaction with the end users.

Typically, online systems and web interfaces, which have a very high amount of interaction
with end users, are best suited for the prototype model.

Prototyping ensures that the end users constantly work with the system and provide
feedback that is incorporated in the prototype to result in a usable system.

They are excellent for designing good human computer interface systems.

INCREMENTAL MODEL:

In incremental model the whole requirement is divided into various builds. Multiple
development cycles take place. Cycles are divided up into smaller, more easily
managed modules. Each module passes through the requirements, design,
implementation and testing phases. A working version of software is produced during
the first module, so you have working software early on during the software life cycle.
Each subsequent release of the module adds function to the previous release. The
process continues till the complete system is achieved.
DIAGRAM OF INCREMENTAL MODEL:

ADVANTAGES OF INCREMENTAL MODEL:

Generates working software quickly and early during the software life cycle.
More flexible – less costly to change scope and requirements.
Easier to test and debug during a smaller iteration.
The customer can respond to each build.

Lowers initial delivery cost.

DISADVANTAGES OF INCREMENTAL MODEL:

Needs good planning and design.

Needs a clear and complete definition of the whole system before it can be broken down
and built incrementally.

Total cost is higher than waterfall.

WHEN TO USE THE INCREMENTAL MODEL:

Requirements of the complete system are clearly defined and understood.


Major requirements must be defined; however, some details can evolve with time.
There is a need to get a product to the market early.

A new technology is being used.

ITERATIVE MODEL:
An iterative life cycle model does not attempt to start with a full specification of
requirements. Instead, development begins by specifying and implementing just part of the
software, which can then be reviewed in order to identify further requirements. This
process is then repeated, producing a new version of the software for each cycle of the
model.

DIAGRAM OF ITERATIVE MODEL:

ADVANTAGES OF ITERATIVE MODEL:

In the iterative model we create only a high-level design of the application before we
actually begin to build the product, rather than defining the full design solution for the
entire product up front. Later we design and build a skeleton version of it, and then evolve
the design based on what has been built.
In the iterative model we build and improve the product step by step. Hence we can track
defects at early stages. This avoids the downward flow of defects.
Reliable user feedback: when presenting sketches and blueprints of the product to users
for their feedback, we are effectively asking them to imagine how the product will work,
whereas a working iteration lets them react to actual behaviour.

Less time is spent on documenting and more time is given for designing.

DISADVANTAGES OF ITERATIVE MODEL:

Each phase of an iteration is rigid with no overlaps.

Costly system architecture or design issues may arise because not all requirements are
gathered up front for the entire lifecycle.

WHEN TO USE ITERATIVE MODEL:

Requirements of the complete system are clearly defined and understood.


When the project is big.

Major requirements must be defined; however, some details can evolve with time.

SPIRAL MODEL:
The spiral model is similar to the incremental model, with more emphasis placed on
risk analysis. The spiral model has four phases: Planning, Risk Analysis, Engineering
and Evaluation. A software project repeatedly passes through these phases in iterations
(called spirals in this model). In the baseline spiral, starting in the planning phase,
requirements are gathered and risk is assessed. Each subsequent spiral builds on the
baseline spiral. Requirements are gathered during the planning phase. In the risk
analysis phase, a process is undertaken to identify risks and alternative solutions. A
prototype is produced at the end of the risk analysis phase. Software is produced in the
engineering phase, along with testing at the end of the phase. The evaluation phase
allows the customer to evaluate the output of the project to date before the project
continues to the next spiral.

ADVANTAGES OF SPIRAL MODEL:

High amount of risk analysis hence, avoidance of Risk is enhanced.


Good for large and mission-critical projects.
Strong approval and documentation control.
Additional Functionality can be added at a later date.

Software is produced early in the software life cycle.

DISADVANTAGES OF SPIRAL MODEL:

Can be a costly model to use.


Risk analysis requires highly specific expertise.
Project’s success is highly dependent on the risk analysis phase.

Doesn’t work well for smaller projects.


DIAGRAM OF SPIRAL MODEL:
WHEN TO USE SPIRAL MODEL:

When costs and risk evaluation is important.


For medium to high-risk projects.
Users are unsure of their needs.
Requirements are complex.
New product line.

Significant changes are expected (research and exploration).


Experiment:2
AIM: To Study Data Flow Diagrams (DFDs) and Levels in DFDs.

INTRODUCTION:
A data flow diagram (DFD) is a graphical representation of the "flow" of data
through an information system, modelling its process aspects. Often they are a
preliminary step used to create an overview of the system which can later be
elaborated. DFDs can also be used for the visualization of data processing
(structured design).

At its simplest, a data flow diagram looks at how data flows through a system. It
concerns things like where the data will come from and go to as well as where it
will be stored. But you won't find information about the processing timing (e.g.
whether the processes happen in sequence or in parallel).

We usually begin with drawing a context diagram, a simple representation of the


whole system. To elaborate further from that, we drill down to a level 1 diagram
with additional information about the major functions of the system. This could
continue to evolve to become a level 2 diagram when further analysis is required.

REPRESENTATION OF COMPONENTS:

DFDs involve only four symbols. They are:

 Process
 Data Flow
 Data Store
 External Entity

Process
Transform of incoming data flow(s) to outgoing flow(s).
Data Flow
Movement of data in the system.
Data Store
Data repositories for data that are not moving. It may be as simple as a buffer or a queue
or as sophisticated as a relational database.
External Entity
Sources or destinations of data outside the specified system boundary.


LEVELS IN DFDS:

A context diagram is a top level (also known as Level 0) data flow diagram. It only
contains one process node (process 0) that generalizes the function of the entire system in
relationship to external entities.

Context Diagram

LEVEL 0 DFD:

Next Level is Level 0 DFD. Some important points are:

 Level 0 DFD must balance with the context diagram it describes.


 Input going into a process are different from outputs leaving the process.
 Data stores are first shown at this level

LEVEL 1 DFD:

Next level is Level 1 DFD. Some important points are:

 Level 1 DFD must balance with the Level 0 it describes.


 Input going into a process are different from outputs leaving the process.
 Continue to show data stores.

LEVEL 2 DFD:

Next level is Level 2 DFD. Some important points are:

 Level 2 DFD must balance with the Level 1 it describes.

ADVANTAGES:

 DFDs are easy to understand, check and change.
 DFDs help tremendously in depicting information about how an organization operates.
 They give a very clear and simple look at the organization of the interfaces between
an application and the people or other applications that use it.

DISADVANTAGES:

 Modification to a data layout in DFDs may cause the entire layout to be changed.
This is because the specific changed data will bring different data to the units that it
accesses. Therefore, evaluation of the possible effect of the modification must be
considered first.
 The number of units in a DFD of a large application is high. Therefore, maintenance
is harder, more costly and error-prone. This is because the ability to access the data
is passed explicitly from one component to the other. This is why changes are
impractical to make in DFDs, especially in large systems.

Balancing a DFD
The data that flow into or out of a bubble must match the data flows at the next level of the
DFD. This is known as balancing a DFD. The concept of balancing a DFD has been
illustrated in fig. 10.3. In level 1 of the DFD, data items d1 and d3 flow out of
bubble 0.1 and the data item d2 flows into bubble 0.1. In the next level, bubble 0.1 is
decomposed. The decomposition is balanced, as d1 and d3 flow out of the level 2 diagram
and d2 flows in.
An example showing balanced decomposition
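To make the balancing rule concrete, the small Python sketch below (not part of the original manual; the bubble names and data items d1, d2, d3 follow the example above, and the helper function is hypothetical) checks whether a decomposition is balanced by comparing the sets of data items flowing in and out of the parent bubble with those of its next-level diagram.

# Hedged illustration: checking DFD balance by comparing external data flows.
def is_balanced(parent, decomposition):
    # A decomposition is balanced when its net external inputs and outputs
    # match those of the parent bubble exactly.
    return (set(parent["in"]) == set(decomposition["in"])
            and set(parent["out"]) == set(decomposition["out"]))

bubble_0_1 = {"in": {"d2"}, "out": {"d1", "d3"}}        # level 1 bubble 0.1
level_2_detail = {"in": {"d2"}, "out": {"d1", "d3"}}    # its level 2 decomposition

print(is_balanced(bubble_0_1, level_2_detail))          # True -> balanced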
EXPERIMENT: 3

AIM: To Study Risk Management During Software Development.


INTRODUCTION:
Risk management is an emerging area that aims to address the problem of identifying and
managing the risks associated with a software project. Risk in a project is the possibility that
the defined goals are not met. The basic motivation of having risk management is to avoid
disasters or heavy losses.
Risk management is the area that tries to ensure that the impact of risks on cost, quality, and
schedule is minimal. It can be considered as dealing with the possibility and actual occurrence
of those events that are not regular or commonly expected. So, in a sense, risk management
begins where normal project management ends. The risk management activities are shown
diagrammatically below, which clearly shows that risk management revolves around Risk
Assessment and Risk Control. It deals with events that are infrequent, somewhat out of the
control of the project management, and large enough to deserve special attention.

Risk Management
  Risk Assessment: Risk Identification, Risk Analysis, Risk Prioritization
  Risk Control: Risk Management Planning, Risk Resolution, Risk Monitoring

RISK ASSESSMENT
Risk assessment is an activity that must be undertaken during project planning. This involves
identifying the risks, analysing them, and prioritizing them on the basis of the analysis. The
goal of risk assessment is to prioritize the risks so that risk management can focus attention
and resources on the more risky items.
The risk assessment consists of three steps:
Risk identification
Risk analysis
Risk prioritization

RISK IDENTIFICATION is the first step in risk assessment, which identifies all the different
risks for a particular project. These risks are project dependent, and their identification is
clearly necessary before any risk management can be done for the project.
Based on surveys of experienced project managers, Boehm has produced a list of the top-10
risk items likely to compromise the success of a software project. These risks and the
techniques for managing them are described as follows:
The top ranked risk item is Personnel Shortfalls. This involves just having fewer people than
necessary or not having people with specific skills that a project may require. Some of the
ways to manage this risk are to get the top talent possible and to match the needs of the
project with the skills of the available personnel.
The second item, Unrealistic Schedules And Budgets, happens very frequently due to
business and other reasons. It is very common that high level management imposes a
schedule for a software project that is not based on the characteristics of the project and is
unrealistic.
A project runs the risk of Developing The Wrong Software if the requirements analysis is not
done properly and if development begins too early.
Similarly, an Improper User Interface may often be developed. This requires extensive rework of
the user interface later, or the software benefits are not obtained because users are reluctant
to use it.
Some Requirements Changes are to be expected in any project, but sometimes frequent
requirements changes are requested, which is often a reflection of the fact that the client has
not yet understood or settled on its own requirements.
Gold Plating refers to adding features to the software that are only marginally useful. This
adds unnecessary risk to the project because gold plating consumes resources and time with
little return.
Performance Shortfalls are critical in real-time systems, and poor performance can mean the
failure of the project.
The project might be delayed if the External Component Is Not Available on time. The
project would also suffer if the quality of the external component is poor or if the component
turns out to be incompatible with the other project components or with the environment in
which the software is developed or is to operate.
If a project relies on Technology That Is Not Well Developed, it may fail. This risk arises from
straining the available computer science capabilities.

RISK ANALYSIS includes studying the probability and the outcome of possible decisions,
understanding the task dependencies to decide critical activities and the probability and cost
of their not being completed on time, analysing the risks to the various quality factors like
reliability and usability, and evaluating the performance early through simulation, etc., if
there are strong performance constraints on the system.
One approach for RISK PRIORITIZATION is through the concept of risk exposure (RE), which is
sometimes called risk impact. RE is defined by the relationship

RE = Prob(UO) * Loss(UO),

where Prob(UO) is the probability of the risk materializing (the unsatisfactory outcome) and
Loss(UO) is the total loss incurred due to the unsatisfactory outcome.
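As a rough illustration of how risk exposure can drive prioritization, the Python sketch below computes RE = Prob(UO) * Loss(UO) for a few risk items and sorts them by exposure; the probabilities and loss figures are invented for the example and are not taken from the manual.

# Hedged example: prioritizing risks by exposure (made-up numbers).
risks = [
    # (risk item, probability of unsatisfactory outcome, loss if it occurs)
    ("Personnel shortfalls",            0.30, 10.0),
    ("Unrealistic schedule and budget", 0.50,  8.0),
    ("Developing the wrong software",   0.15, 20.0),
]

exposure = [(name, prob * loss) for name, prob, loss in risks]
for name, re_value in sorted(exposure, key=lambda item: item[1], reverse=True):
    print(f"{name:35s} RE = {re_value:5.2f}")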

RISK CONTROL
Risk control includes three tasks which are
Risk management planning
Risk resolution
Risk monitoring
Risk control starts with Risk Management Planning. Plans are developed for each identified
risk that needs to be controlled. This activity, like other planning activities, is done during
the initiation phase. A basic risk management plan has five components. These are:
Why the risk is important and why it should be managed.
What should be delivered regarding risk management, and when.
Who is responsible for performing the different risk management activities.
How the risk will be abated, or the approach to be taken.
How many resources are needed.
The main focus of risk management planning is to enumerate the risks to be controlled and
specify how to deal with a risk.

The actual elimination or reduction is done in the Risk Resolution step. Risk resolution is
essentially implementation of the risk management plan.
Risk Monitoring is the activity of monitoring the status of various risks and their control
activities. Like project monitoring, it is performed through the entire duration of the project.
EXPERIMENT: 4
AIM: To Study Function Point Analysis.
FUNCTION POINT ANALYSIS:

Function Point Analysis is a structured technique of problem solving. It is a method to break
systems into smaller components, so they can be better understood and analyzed.

Function points are a unit measure for software much like an hour is to measuring time,
miles are to measuring distance or Celsius is to measuring temperature. Function Points are
an ordinal measure much like other measures such as kilometers, Fahrenheit, hours, so on
and so forth.

The functional user requirements of the software are identified and each one is categorized
into one of five types: outputs, inquiries, inputs, internal files, and external interfaces. Once
the function is identified and categorized into a type, it is then assessed for complexity and
assigned a number of function points.

OBJECTIVES OF FUNCTION POINT ANALYSIS

Since Function Points measure systems from a functional perspective, they are independent
of technology. Regardless of language, development method, or hardware platform used, the
number of function points for a system will remain constant. The only variable is the amount
of effort needed to deliver a given set of function points; therefore, Function Point Analysis
can be used to determine whether a tool, an environment, a language is more productive
compared with others within an organization or among organizations. This is a critical point
and one of the greatest values of Function Point Analysis.

Function Point Analysis can provide a mechanism to track and monitor scope creep.
Function Point Counts at the end of requirements, analysis, design, code, testing and
implementation can be compared. The function point count at the end of requirements and/or
designs can be compared to function points actually delivered. If the project has grown, there
has been scope creep. The amount of growth is an indication of how well requirements were
gathered by and/or communicated to the project team. If the amount of growth of projects
declines over time it is a natural assumption that communication with the user has improved.

FUNCTION POINT CALCULATION

A function point is a rough estimate of a unit of delivered functionality of a software project.


Function points (FP) measure size in terms of the amount of functionality in a system.
Function points are computed by first calculating an unadjusted function point count (UFC).
Counts are made for the following categories:

Number of user inputs: Each user input that provides distinct application oriented data to the
software is counted.
Number of user outputs: Each user output that provides application oriented information
to the user is counted. In this context "output" refers to reports, screens, error messages,
etc. Individual data items within a report are not counted separately.

Number of user inquiries: An inquiry is defined as an on-line input that results in the
generation of some immediate software response in the form of an on-line output. Each
distinct inquiry is counted.

Number of files: Each logical master file is counted.

Number of external interfaces: All machine-readable interfaces that are used to transmit
information to another system are counted.

Once this data has been collected, a complexity rating is associated with each count
according to the following table:

Function point complexity weights:

Measurement parameter            Simple   Average   Complex
Number of user inputs               3         4        6
Number of user outputs              4         5        7
Number of user inquiries            3         4        6
Number of files                     7        10       15
Number of external interfaces       5         7       10

Each count is multiplied by its corresponding complexity weight and the results are
summed to provide the UFC. The adjusted function point count (FP) is calculated by
multiplying the UFC by a technical complexity factor (TCF), also referred to as the Value
Adjustment Factor (VAF). Components of the TCF are listed in the table below; a small
computational sketch is given after the questionnaire that follows.

Components of the technical complexity factor:

F1  Reliable back-up and recovery     F2  Data communications
F3  Distributed functions             F4  Performance
F5  Heavily used configuration        F6  Online data entry
F7  Operational ease                  F8  Online update
F9  Complex interface                 F10 Complex processing
F11 Reusability                       F12 Installation ease
F13 Multiple sites                    F14 Facilitate change
Alternatively, the following questionnaire could be utilized:

Does the system require reliable backup and recovery?

Are data communications required?

Are there distributed processing functions?

Is performance critical?

Will the system run in an existing, heavily utilized operational environment?


Does the system require on-line data entry?

Does the on-line data entry require the input transaction to be built over multiple screens or
operations?

Are the master files updated online?

Are the inputs, outputs, files or inquiries complex?

Is the internal processing complex?

Is the code designed to be reusable?

Are conversions and installation included in the design?

Is the system designed for multiple installations in different organizations?

Is the application designed to facilitate change and ease of use?
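The short Python sketch below ties the pieces together: it computes the UFC from the counts and the complexity weights tabulated earlier, then applies a value adjustment factor. The counts and the F1-F14 ratings are invented for illustration, and the VAF formula used (0.65 + 0.01 * sum of the fourteen ratings, each rated 0-5) is the commonly used IFPUG form, which is an assumption since the manual does not state the exact formula.

# Hedged sketch of the function point arithmetic described above.
WEIGHTS = {  # (simple, average, complex) weights from the table above
    "inputs":     (3, 4, 6),
    "outputs":    (4, 5, 7),
    "inquiries":  (3, 4, 6),
    "files":      (7, 10, 15),
    "interfaces": (5, 7, 10),
}

def unadjusted_fp(counts, complexity="average"):
    # Multiply each count by its complexity weight and sum the results (UFC).
    idx = {"simple": 0, "average": 1, "complex": 2}[complexity]
    return sum(WEIGHTS[kind][idx] * n for kind, n in counts.items())

counts = {"inputs": 24, "outputs": 16, "inquiries": 22, "files": 4, "interfaces": 2}
ufc = unadjusted_fp(counts)                           # 96 + 80 + 88 + 40 + 14 = 318
ratings = [3, 2, 0, 4, 3, 4, 5, 3, 2, 3, 1, 2, 0, 2]  # F1..F14, each rated 0-5
vaf = 0.65 + 0.01 * sum(ratings)                      # assumed IFPUG-style VAF
fp = ufc * vaf
print(f"UFC = {ufc}, VAF = {vaf:.2f}, FP = {fp:.1f}")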


ADVANTAGES OF FUNCTION POINT ANALYSIS:

Function Points can be used to size software applications accurately. Sizing is an important
component in determining productivity (outputs/inputs).
They can be counted by different people, at different times, to obtain the same measure
within a reasonable margin of error.
Function Points are easily understood by the non-technical user. This helps communicate
sizing information to a user or customer.
Function Points can be used to determine whether a tool, a language, an environment, is
more productive when compared with others.

EXPERIMENT: 5
AIM: To Study Software Testing, Black-Box and White-Box Testing, and Different Types of
Testing.
DEFINITION:
Software testing is performed to verify that the completed software package functions
according to the expectations defined by the requirements/specifications. The overall
objective is not to find every software bug that exists, but to uncover situations that could
negatively impact the customer, usability and/or maintainability. Software testing is the
process of evaluating a software item to detect differences between given input and expected
output. Testing assesses the quality of the product. Software testing is a process that should
be done during the development process. In other words, software testing is a verification
and validation process.

There are two basic approaches to software testing: black-box testing and white-box testing.

BLACKBOX TESTING:

Black box testing is a testing technique that ignores the internal mechanism of the system
and focuses on the output generated against any input and execution of the system. It is also
called functional testing.

The technique of testing without having any knowledge of the interior workings of the
application is Black Box testing. The tester is oblivious to the system architecture and does
not have access to the source code. Typically, when performing a black box test, a tester will
interact with the system's user interface by providing inputs and examining outputs without
knowing how and where the inputs are worked upon.

Advantages:
Well suited and efficient for large code segments.
Code Access not required.
Clearly separates user's perspective from the developer's perspective through visibly defined
roles.
Large numbers of moderately skilled testers can test the application with no knowledge of
implementation, programming language or operating systems.

Disadvantages:
Limited Coverage since only a selected number of test scenarios are actually performed.
Inefficient testing, due to the fact that the tester only has limited knowledge about an
application.
Blind Coverage, since the tester cannot target specific code segments or error prone areas.
The test cases are difficult to design.

WHITEBOX TESTING:

White box testing is a testing technique that takes into account the internal mechanism of a
system. It is also called structural testing and glass box testing.

White box testing is the detailed investigation of internal logic and structure of the code.
White box testing is also called glass testing or open box testing. In order to perform white
box testing on an application, the tester needs to possess knowledge of the internal
working of the code.

The tester needs to have a look inside the source code and find out which unit/chunk of the
code is behaving inappropriately.

Advantages:

As the tester has knowledge of the source code, it becomes very easy to find out which type
of data can help in testing the application effectively.
It helps in optimizing the code.
Extra lines of code, which can bring in hidden defects, can be removed.
Due to the tester's knowledge about the code, maximum coverage is attained during test
scenario writing.

Disadvantages:
Due to the fact that a skilled tester is needed to perform white box testing, the costs are
increased.
Sometimes it is impossible to look into every nook and corner to find out hidden errors that
may create problems as many paths will go untested.
It is difficult to maintain white box testing, as the use of specialized tools like code
analyzers and debugging tools is required.
Black box testing is often used for validation and white box testing is often used for
verification.
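The tiny Python example below (a hypothetical function, not from the manual) illustrates the difference in viewpoint: black-box cases are derived purely from the stated specification, while white-box cases are chosen so that every branch in the code is exercised.

# Hedged illustration of black-box versus white-box test-case selection.
def classify_triangle(a, b, c):
    # Specification: return 'equilateral', 'isosceles' or 'scalene' for valid sides.
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# Black-box style cases: chosen only from the specification.
assert classify_triangle(2, 2, 2) == "equilateral"
assert classify_triangle(3, 4, 5) == "scalene"

# White-box style cases: chosen so every branch of the code is executed,
# including the a == c comparison a spec-only tester might overlook.
assert classify_triangle(2, 2, 3) == "isosceles"   # a == b branch
assert classify_triangle(3, 2, 2) == "isosceles"   # b == c branch
assert classify_triangle(2, 3, 2) == "isosceles"   # a == c branch
print("all illustrative test cases passed")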

DIFFERENT TYPES OF SOFTWARE TESTING:

There are many types of software testing like:


Acceptance Testing: Formal testing conducted to determine whether or not a system
satisfies its acceptance criteria and to enable the customer to determine whether or not to
accept the system. It is usually performed by the customer.

Accessibility Testing: Type of testing which determines the usability of a product to the
people having disabilities (deaf, blind, mentally disabled etc). The evaluation process is
conducted by persons having disabilities.
Age Testing: Type of testing which evaluates a system's ability to perform in the future.
The evaluation process is conducted by testing teams.

Alpha Testing: Type of testing a software product or system conducted at the developer's
site. Usually it is performed by the end user.

Backward Compatibility Testing: Testing method which verifies the behavior of the
developed software with older versions of the test environment. It is performed by testing
teams.

Beta Testing: Final testing before releasing application for commercial purpose. It is
typically done by end-users or others.

Bottom Up Integration Testing: In bottom-up integration testing, the modules at the lowest
level are developed first and other modules which go towards the 'main' program are integrated
and tested one at a time. It is usually performed by the testing teams.

Branch Testing: Testing technique in which all branches in the program source code are
tested at least once. This is done by the developer.

Compatibility Testing: Testing technique that validates how well software performs in a
particular hardware/software/operating system/network environment. It is performed by
the testing teams.

Component Testing: Testing technique similar to unit testing but with a higher level of
integration - testing is done in the context of the application instead of just directly testing
a specific method. Can be performed by testing or development teams.
Compliance Testing: Type of testing which checks whether the system was developed in
accordance with standards, procedures and guidelines. It is usually performed by external
companies which offer "Certified OGC Compliant" brand.

Destructive Testing: Type of testing in which the tests are carried out to the specimen's
failure, in order to understand a specimen's structural performance or material behavior
under different loads. It is usually performed by QA teams.

Dynamic Testing: Term used in software engineering to describe the testing of the
dynamic behavior of code. It is typically performed by testing teams.

Error-Handling Testing: Software testing type which determines the ability of the system to
properly process erroneous transactions. It is usually performed by the testing teams.
Gray Box Testing: A combination of Black Box and White Box testing methodologies:
testing a piece of software against its specification but using some knowledge of its
internal workings. It can be performed by either development or testing teams.

Integration Testing: The phase in software testing in which individual software modules
are combined and tested as a group. It is usually conducted by testing teams.

Load Testing: Testing technique that puts demand on a system or device and measures its
response. It is usually conducted by the performance engineers.

Regression Testing: Type of software testing that seeks to uncover software errors after
changes to the program (e.g. bug fixes or new functionality) have been made, by retesting
the program. It is performed by the testing teams.

Recovery Testing: Testing technique which evaluates how well a system recovers from
crashes, hardware failures, or other catastrophic problems. It is performed by the testing
teams.

Unit Testing: Software verification and validation method in which a programmer tests if
individual units of source code are fit for use. It is usually conducted by the development
team.
EXPERIMENT: 6
AIM: To Use Information Gathering Tools (Questionnaire, Interview, On-Site Survey).
Information Gathering: A Problem-Solving Approach
Information gathering is an art and a science. The approach and manner in which
information is gathered require persons with sensitivity, common sense and knowledge
of what and when to gather, and of the channels used to secure information.
KINDS OF INFORMATION REQUIRED
Before one determines where to go for information or what tools to use, the first
requirement is to figure out what information to gather. Much of the information we need
to analyze relates to the organization in general, the user staff, and the workflow.

Information about the Organization


Information about the organization's policies, goals, objectives, and structure
explains the kind of environment that promotes the introduction of computer-based
systems. Company policies are guidelines that determine the conduct of business.
Policies are translated into rules and procedures for achieving goals. A statement of
goals describes management's commitment to objectives and the direction system
development will follow. Objectives are milestones of accomplishments toward achieving
goals. Information from manuals, pamphlets, annual reports, etc. helps the analyst to get
an idea of the goals of the organization.
After policies and goals are set, a firm is organized to meet these goals. The organization
structure indicates management directions and orientation. The organization chart
represents an achievement-oriented structure. It helps us understand the general climate in
which candidate systems will be considered. In gathering information about the firm, the
analyst should watch for the correspondence between what the organization claims as its
goals and its actual operations. Policies, goals, objectives and structure are
important elements for analysis.

Information about the User Staff

Another kind of information for analysis is knowledge about the people who run the
present system, their job functions and information requirements, the relationships of
their jobs to the existing system, and the interpersonal network that holds the user group
together. The main focus is on the roles of the people, authority relationships and
interpersonal relations. Information of this kind highlights the organization chart and
establishes a basis for determining the importance of the existing system for the
organization. Thus the major focus is to find out the expectations of the people before
going in for the design of the candidate system.
Information about the Work Flow

The workflow focuses on what happens to the data through various points in a system.
This can be shown by a data flow diagram or a system flow chart.
A data flow diagram represents the information generated at each processing point in the
system and the direction it takes from source to destination.
A system flowchart describes the physical system. The information available from such
charts explains the procedures used for performing tasks and work schedules.
Information Gathering Techniques
No two projects are ever the same. This means that the analyst must decide on the
information-gathering tool and how it must be used. Although there are no standard
rules for specifying their use, an important rule is that information must be acquired
accurately, methodically, under the right conditions, and with minimum interruption to
user personnel. There are various information-gathering tools. Each tool has a special
function depending on the information needed.
Review of Literature, Procedures and Forms

Review of existing records, procedures, and forms helps to seek insight into a system
which describes the current system capabilities, its operations, or activities.
Advantages
It helps the analyst to gain some knowledge about the organization or its operations
independently, before imposing upon others.
It helps in documenting current operations within short span of time as the procedure
manuals and forms describe the format and functions of present system.
It can provide a clear understanding about the transactions that are handled in the
organization, identifying input for processing, and evaluating performance.
It can help an analyst to understand the system in terms of the operations that must be
supported.
It describes the problem, its affected parts, and the proposed solution.

Disadvantages:
The primary drawback of this search is time.
Sometimes it will be difficult to get certain reports.
Publications may be expensive and the information may be outdated due to a time lag in
publication.
On-site Observation
A fact-finding method used by the systems analyst is on-site or direct observation. It is
the process of recognizing and noting people, objects and occurrences to obtain
information. The major objective of on-site observation is to get as close as possible to
the “real” system
being studied. For this reason it is important that the analyst is knowledgeable about the
general makeup and activities of the system. The analyst's role is that of an information
seeker. As an observer, the analyst follows a set of rules.
While making observations he/she is more likely to listen than talk and has to listen with
interest when information is passed on.
The analyst has to observe the physical layout of the current system, the location and
movement of the people and the workflow.
The analyst has to be alert to the behavior of the user staff and the people to whom they
come into contact. A change in behavior provides a clue to an experienced analyst. The
clue can be used to identify the problem.

The following questions can serve as a guide for on-site observations:


What kind of system is it? What does it do?
Who runs the system? Who are the important people in it?
What is the history of the system? How did it get to its present stage of development?
Apart from its formal function, what kind of system is it in comparison with other systems
in the organization?
Advantages
It is a direct method for gathering information.
It is useful in situation where authenticity of data collected is in question or when
complexity of certain aspects of system prevents clear explanation by end-users.
It produces more accurate and reliable data.
It reveals aspects of the documentation that are incomplete or outdated.

Difficulties in on-site observations:


On-site observation is the most difficult fact-finding technique. It requires intrusion into
the user's area and can cause adverse reaction from the user's staff if not handled
properly.
If on-site observation is to be done properly in a complex situation, it will be
time-consuming.
Proper sampling procedures must be used to identify the stability of the behavior being
observed. Otherwise inferences drawn from these samples will be inaccurate and
unreliable.
Attitudes and motivations of subjects cannot be easily observed.
As an observer, the analyst follows a set of rules. While making observations, he/she
should listen more than talk, and should listen with a sympathetic and genuine interest
when information is conveyed.

Four alternative observation methods are considered

Natural or contrived: A natural observation occurs in a setting such as the employee's
place of work, whereas a contrived observation is set up by the observer in a place such
as a laboratory.

Obtrusive or unobtrusive: An obtrusive observation takes place when the respondent


knows he/she is being observed; an unobtrusive observation takes place in a contrived
way such as behind a one-way mirror.

Direct or indirect: A direct observation takes place when the analyst actually observes
the subject or the system at work. In an indirect observation, the analyst uses mechanical
devices such as cameras and videotapes to capture information.

Structured or unstructured: In a structured observation, the observer looks for and


records a specific action. Unstructured methods place the observer in a situation to
observe whatever might be applicable at the time.
Any of these methods may be used in information gathering. Natural, direct, obtrusive
and unstructured observations are frequently used to get an overview of an operation.
Electronic observation and monitoring methods are becoming widely used tools for
information gathering.
Interviews
On-site observation is less effective for learning about people's perceptions, feelings and
motivations. The alternative is the personal interview and the questionnaire. In both
methods heavy reliance is placed on the interviewee's report for information about the
job, the present system, or experience. The quality of the response is judged in terms of
its reliability and validity.

Reliability means that the information gathered is trustworthy enough to be used for
making decisions about the system being studied. Validity means that the questions to be
asked are worded in such a way as to elicit (obtain) the intended information. So the
reliability and validity of the data collected depends on the design of the interview or
questionnaire and the manner in which each instrument is administered.
The interview is a face-to-face interpersonal role situation in which a person called the
interviewer asks a person being interviewed questions designed to gather information
about a problem area. The interview is the oldest and most often used device for
gathering information in system work. It can be used for two main purposes:
As an exploratory device to identify relations or verify information.
To capture information as it exists.

The systems analyst collects information from individuals or groups by interviewing. The
analyst can be formal, legalistic, play politics, or be informal; the success of an
interview depends on the skill of the analyst as an interviewer.
Advantages of Interviewing
This method is frequently the best source of gathering qualitative information.
It is useful for those who do not communicate effectively in writing or who may not have
the time to complete a questionnaire.
Information can easily be validated and cross checked immediately.
It can handle the complex subjects.
It is easy to discover key problems by seeking opinions.
It bridges the gaps in the areas of misunderstandings and minimizes future problems.

The disadvantages of an interview are


The major drawback of the interview is the long preparation time.
Interview also takes a lot of time to conduct, which means time and money.

In an interview, since the analyst and the person interviewed meet face to face, there is
an opportunity for greater flexibility in eliciting information. The interviewer is also in a
natural position to observe the subjects and the situation to which they are responding. In
contrast the information obtained through a questionnaire is limited to the written
responses of the subjects to predefined questions.
The art of interviewing:
Interviewing is an art. The analyst learns the art by experience. The interviewer's art
consists of creating a permissive situation in which the answers offered are
reliable. Respondents' opinions are offered with no fear of being criticized by others.
Primary requirements for a successful interview are to create a friendly atmosphere and
to put the respondent at ease. Then the interview proceeds with asking questions
properly, obtaining reliable responses and recording them accurately and completely.
Arranging the interview:
The interview should be arranged so that the physical location, time of the interview and
order of interviewing assure privacy and minimal interruption. A common area that is
non- threatening to the respondent is chosen. Appointments should be made well in
advance and a fixed time period adhered to as closely as possible. Interview schedules
generally begin at the top of the organization structure and work down so as not to
offend anyone.

Guides to a successful interview:

In an interview the following steps should be taken.


Set the stage for the interview.
Establish rapport: put the interviewee at ease.
Phrase questions clearly and briefly
Be a good listener; avoid arguments.
Evaluate the outcome of the interview.

Stage setting: This is a relaxed, informal phase where the analyst opens the interview by
focusing on
The purpose of the interview
Why the subject was selected
The confidential nature of the interview.
After a favorable introduction, the analyst asks the first question and the respondent
answers it and goes right through the interview. The job of the analyst should be that of a
reporter rather than a debater. Discouraging distracting conversation controls the
direction of the interview.
Establishing rapport:
Some of the pitfalls to be avoided are
Do not deliberately mislead the user staff about the purpose of the study. A careful
briefing is required. Too many technical details will confuse the user, and hence only
information that is necessary has to be given to the participants.
Assure interviewees of confidentiality: no information they offer will be released to
unauthorized personnel. The promise of anonymity is very important.
Avoid personal involvement in the affairs of the user's department or identification with
one section at the cost of another.
Avoid showing off your knowledge or sharing information received from other sources.
Avoid acting like an expert consultant and confidant. This can reduce the objectivity of
the approach and discourage people from freely giving information
Respect the time schedules and preoccupations of your subjects. Do not make an
extended social event out of the meeting.
Do not promise anything you cannot or should not deliver, such as advice or feedback.
Dress and behave appropriately for the setting and the circumstances of the user contact.
Do not interrupt the interviewee. Let him/her finish talking.
Asking the questions: Except in unstructured interviews, it is important that each
question is asked exactly as it is worded. Rewording may provoke a different answer. The
questions must also be asked in the same order as they appear on the interview schedule.
Reversing the sequence destroys the comparability of the interviews. Finally each
question must be asked unless the respondent, in answering the previous question, has
already answered the next one.

Obtaining and recording the response: Interviews must be prepared well in order to
collect further information when necessary. The information received during the
interview must be recorded for later analysis.

Data recording and the notebook: Many system studies fail because of poor data
recording. Care must be taken to record the data, their source and the time of collection.
If there is no record of a conversation, the analyst risks not remembering enough
details, attributing information to the wrong source, or distorting the data. The form of
the notebook varies according to the type of study, the amount of data, the number of
analysts, and their individual preferences. The "notebook" may be a card file or a set of
carefully coded file folders. It should be bound and the pages numbered.

Questionnaire
This method is used by the analyst to gather information about various issues of the system
from a large number of persons. This tool is a collection of questions to which individuals
respond.
The advantages of questionnaire are
It is economical and requires less skill to administer than the interview.
Unlike the interview, which generally questions one subject at a time, a questionnaire
can be administered to large numbers of individuals simultaneously.
The standardized wording and order of the questions and the standardized instructions
for reporting responses ensure uniformity of questions.
The respondents feel greater confidence in the anonymity of a questionnaire than in that
of an interview. In an interview, the analyst usually knows the user staff by name,
job function or other identification. With a questionnaire, respondents give opinions
without fear that the answer will be connected to their names.
The questionnaire places less pressure on subjects for immediate responses. Respondents
have time to think the questions over and do calculations to provide more accurate data.

Types of interviews and questionnaires

Interviews and Questionnaires vary widely in form and structure. Interviews range from
highly unstructured to the highly structured alternative in which the questions and
responses are fixed.
The unstructured Interview:
The Unstructured interview is non-directive information gathering technique. It allows
respondents to answer questions freely in their own words. The responses in this case are
spontaneous and self-revealing. The role of the analyst as an interviewer is to encourage
the respondent to talk freely and serve as a catalyst to the expression of feelings and
opinions. This method works well in a permissive atmosphere in which subjects have no
feeling of disapproval.
The structured Interview:
In this alternative the questions are presented with exactly the same wordings and in the
same order to all subjects. Standardized questions improve the reliability of the
responses by ensuring that all subjects are responding to the same questions.
Structured interviews and questionnaires may differ in the amount of structuring of the
questions.
Questions may be either
Open-ended questions
Close-ended questions
An open-ended question requires no response direction or specific response. In the
questionnaire it is written with space provided for the response. Such questions are more
often used in interviews than in questionnaires because scoring takes time.
Close-ended questions are those, in which the responses are presented as a set of
alternatives.
There are five major types of closed questions.
Fill-in-the-blanks: in which questions request specific information. These responses can
be statistically analyzed.
Dichotomous (Yes/No type): in which questions will offer two answers. This has
advantages similar to those of the multiple-choice type. Here the question sequence and
content are also important
Ranking scales questions: Ask the respondent to rank a list of items in order of
importance or preference
Multiple-choice questions: Offer respondents specific answer choices. This offers the
advantage of faster tabulation and less analyst bias due to the order in which the
questions are given. Respondents have a favorable bias toward the first alternative
item. Alternating the order in which answer choices are listed may reduce bias but at the
expense of additional time to respond to the questionnaire.
Rating scales: These questions are an extension of the multiple-choice design. The
respondent is offered a range of responses along a single dimension.
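As a purely illustrative sketch (not part of the questionnaire material above), the closed-question types can be recorded in a small data structure so that responses are easy to tabulate; the question texts, field names, and the tabulate helper below are hypothetical Python examples.

    # Illustrative sketch: closed-question types as data, plus a tiny tally helper.
    # All question texts and field names are hypothetical.
    questions = [
        {"type": "fill_in", "text": "How many invoices do you process per day?"},
        {"type": "dichotomous", "text": "Do you use the current system daily?",
         "choices": ["Yes", "No"]},
        {"type": "ranking", "text": "Rank these reports by importance.",
         "items": ["Sales", "Inventory", "Payroll"]},
        {"type": "multiple_choice", "text": "Which module do you use most?",
         "choices": ["Billing", "Stock", "Accounts"]},
        {"type": "rating", "text": "Rate the response time of the system.", "scale": (1, 5)},
    ]

    def tabulate(responses, question):
        # Count how often each alternative of a dichotomous/multiple-choice question was chosen.
        counts = {choice: 0 for choice in question["choices"]}
        for response in responses:
            counts[response] += 1
        return counts

    print(tabulate(["Yes", "No", "Yes"], questions[1]))   # {'Yes': 2, 'No': 1}

Because the alternatives are fixed, such responses can be tallied mechanically, which is exactly why closed questions are quicker to analyze than open-ended ones.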

Open-ended questions are ideal in exploratory situations where new ideas and
relationships are sought.
Disadvantages of open-ended questions:
The main drawback is the difficulty of interpreting the subjective answers and the tedium of
responding to open-ended questions.
Other drawbacks are potential analyst bias in interpreting the data and time-consuming
tabulation.
Closed questions are quick to analyze.
Disadvantages of close-ended questions:
They are costly to prepare.
However, they have the advantage of ensuring that answers are given in a frame of
reference consistent with the line of inquiry.

Procedure for questionnaire construction:


There are six steps for constructing a questionnaire
Decide what data should be collected; this defines the problem to be
investigated.
Decide which type of questionnaire should be used (closed or open-ended).
Outline the topics for the questionnaire and then write the questions.
Edit the questionnaire for technical defects and for wording that reflects personal values.
Pretest the questionnaire to see how well it works.
Do a final editing to ensure that questionnaire is ready for administration. This includes
a close look at the content, form and sequence of questions as well as the appearance and
clarity of the procedure for using the questionnaire.
The important thing in questionnaire construction is the formulation of reliable and valid
questions. To do a satisfactory job, the analyst must focus on question content, wording
and format. The following criteria have to be considered for constructing a questionnaire:
Question content (Is the question necessary? Does it cover the area intended? Do the
participants have proper information to answer the question? Is the question biased?)
Question wording (Is the question worded for the participant's background and
experience? Can the question be misinterpreted? Is the question clear and direct?)
Question format (How should the question be asked? Is the response form easy or
adequate for the job? Is there any contamination effect?)

Reliability of data from respondents:


The data collected from the user staff is assumed to correspond to the way in which
events occur. If such reports are used, then there can be several sources of error, such as:
Reports of a given event from several staff members who have little training in observation
will not be accurate.
The respondent's tendency to forget things.
Reluctance of the person being interviewed.
Inability of the participants to communicate their ideas or the analyst to get required
information from the participants.
The reliability-validity issue:
An information-gathering instrument faces two major tests – reliability and validity. The term
reliability is synonymous with dependability, consistency and accuracy. Concern for
reliability comes from the necessity for dependability in measurement. Reliability may be
approached in three ways:
If we administer the same questionnaire to the same subject, will we get similar or the
same results? This question implies a definition of reliability as stability, dependability
and predictability.
Does the questionnaire measure the true variables? This question focuses on the
accuracy aspect of reliability.
How much error of measurement is there in the proposed questionnaire? Errors of
measurement are random errors.
The most common question that defines validity is: does the instrument measure what we
think it is measuring? It refers to the notion that the questions asked are worded to
produce the information sought. In contrast, reliability means that the information gathered
is dependable enough to be used for decision making. In validity, the emphasis is on what
is being measured. Thus the adequacy of an information-gathering tool is judged by the
criteria of validity and reliability. Both depend on the design of the instrument as well as
the way it is administered.
The main aim of fact-finding techniques is to determine the information requirements of
an organization, which analysts use to prepare a precise System Requirements Specification
(SRS) understood by the user.
An ideal SRS document should
be complete, unambiguous, and jargon-free.
specify operational, tactical, and strategic information requirements.
resolve possible disputes between users and the analyst.
use graphical aids which simplify understanding and design.
EXPERIMENT: 7
AIM: Perform feasibility study and to create feasibility report for application.

Feasibility Study
A feasibility study can be considered as a preliminary investigation that helps
management decide whether the proposed system is feasible for
development or not.
It identifies the possibility of improving an existing system or developing a new system, and
produces refined estimates for further development of the system.
It is used to obtain an outline of the problem and decide whether a feasible or appropriate
solution exists or not.
The main objective of a feasibility study is to acquire the problem scope instead of solving
the problem.
The output of a feasibility study is a formal system proposal that acts as a decision document
which includes the complete nature and scope of the proposed system.
Steps Involved in Feasibility Analysis
The following steps are to be followed while performing feasibility analysis −
Form a project team and appoint a project leader.
Develop system flowcharts.
Identify the deficiencies of current system and set goals.
Enumerate the alternative solution or potential candidate system to meet goals.
Determine the feasibility of each alternative such as technical feasibility, operational
feasibility, etc.
Weigh the performance and cost effectiveness of each candidate system (a small scoring
sketch is shown after this list).
Rank the alternatives and select the best candidate system.
Prepare a system proposal of final project directive to management for approval.
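The weighing and ranking steps above amount to a simple scoring calculation. The sketch below is only an illustration in Python; the criteria, weights, and candidate scores are assumed example values, not prescribed ones.

    # Illustrative sketch: weighing performance/cost criteria and ranking candidate systems.
    # Criteria, weights and scores are assumed example values.
    criteria_weights = {"technical": 0.3, "operational": 0.3, "economic": 0.4}

    candidates = {
        "Candidate A": {"technical": 8, "operational": 6, "economic": 7},
        "Candidate B": {"technical": 6, "operational": 9, "economic": 8},
    }

    def weighted_score(scores):
        return sum(weight * scores[criterion] for criterion, weight in criteria_weights.items())

    ranking = sorted(candidates, key=lambda name: weighted_score(candidates[name]), reverse=True)
    for name in ranking:
        print(name, round(weighted_score(candidates[name]), 2))
    # The top-ranked candidate goes into the system proposal submitted for approval.

The numeric scores are normally agreed with the project team during the feasibility study; the calculation itself stays the same regardless of the criteria chosen.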
Types of Feasibilities
Economic Feasibility
It evaluates the effectiveness of the candidate system by using the cost/benefit analysis
method.
It demonstrates the net benefit from the candidate system in terms of benefits and costs to
the organization.
The main aim of economic feasibility analysis is to estimate the economic
requirements of the candidate system before investment funds are committed to the proposal.
It prefers the alternative which will maximize the net worth of organization by earliest
and highest return of funds along with lowest level of risk involved in developing the
candidate system.
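As an illustration of the cost/benefit idea (all figures below are hypothetical), the net benefit and a simple payback period of a candidate system can be worked out as follows in Python:

    # Illustrative cost/benefit sketch for economic feasibility; every figure is a made-up example.
    development_cost = 500_000       # one-time cost of building the candidate system
    annual_operating_cost = 50_000   # recurring cost per year
    annual_benefit = 200_000         # estimated savings/extra revenue per year
    years = 5                        # evaluation horizon

    total_cost = development_cost + annual_operating_cost * years
    total_benefit = annual_benefit * years
    net_benefit = total_benefit - total_cost
    payback_years = development_cost / (annual_benefit - annual_operating_cost)

    print("Net benefit over", years, "years:", net_benefit)             # 250000
    print("Simple payback period:", round(payback_years, 2), "years")   # 3.33

A candidate whose net benefit is negative, or whose payback period is longer than the organization is willing to accept, would normally be rejected on economic grounds.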
Technical Feasibility
It investigates the technical feasibility of each implementation alternative.

It analyzes and determines whether the solution can be supported by existing technology
or not.
The analyst determines whether the current technical resources can be upgraded or
supplemented to fulfill the new requirements.
It ensures that the candidate system provides appropriate responses and determines to what
extent it can support technical enhancement.
Operational Feasibility
It determines whether the system will operate effectively once it is developed and
implemented.
It ensures that the management should support the proposed system and its working
feasible in the current organizational environment.
It analyzes whether the users will be affected and whether they will accept the modified or
new business methods, which in turn affects the possible system benefits.
It also ensures that the computer resources and network architecture of candidate system
are workable.
Behavioral Feasibility
It evaluates and estimates the user attitude or behavior towards the development of new
system.
It helps in determining whether the system requires special effort to educate, retrain or
transfer employees, or changes in employees' job status, for the new ways of conducting business.
Schedule Feasibility
It ensures that the project should be completed within given time constraint or schedule.
It also verifies and validates whether the deadlines of project are reasonable or not.

EXPERIMENT: 8
AIM: To create Data Dictionary for some applications.
Data Dictionary

A data dictionary lists all data items appearing in the DFD model of a system. The data
items listed include all data flows and the contents of all data stores appearing on the
DFDs in the DFD model of a system. A data dictionary lists the purpose of all data items
and the definition of all composite data items in terms of their component data items. For
example, a data dictionary entry may represent that the data grossPay consists of the
components regularPay and overtimePay.

grossPay = regularPay + overtimePay


For the smallest units of data items, the data dictionary lists their name and their type.
Composite data items can be defined in terms of primitive data items using the following
data definition operators:
+: denotes composition of two data items, e.g. a+b represents data a and b.
[,,]: represents selection, i.e. any one of the data items listed in the brackets can occur.
For example, [a,b] represents either a occurs or b occurs.
(): the contents inside the bracket represent optional data which may or may not appear.
e.g. a+(b) represents either a occurs or a+b occurs.
{}: represents iterative data definition, e.g. {name}5 represents five name data. {name}*
represents zero or more instances of name data.
=: represents equivalence, e.g. a=b+c means that a represents b and c.
/* */: Anything appearing within /* and */ is considered as a comment.

Example 1: Tic-Tac-Toe Computer Game


Tic-tac-toe is a computer game in which a human player and the computer make
alternate moves on a 3×3 square. A move consists of marking a previously unmarked
square. The player who first places three consecutive marks along a straight line on the
square (i.e. along a row, column, or diagonal) wins the game. As soon as either the
human player or the computer wins, a message congratulating the winner should be
displayed. If neither player manages to get three consecutive marks along a straight line,
but all the squares on the board are filled up, then the game is drawn. The computer
always tries to win a game.
Fig. 10.2: (a) Level 0 and (b) Level 1 DFDs for the Tic-Tac-Toe game
It may be recalled that the DFD model of a system typically consists of several DFDs:
level 0, level 1, etc. However, a single data dictionary should capture all the data
appearing in all the DFDs constituting the model. Figure 10.2 represents the level 0 and
level 1 DFDs for the tic-tac-toe game. The data dictionary for the model is given below.

Data Dictionary for the DFD model in Example 1

move: integer /* number between 1 and 9 */
display: game + result
game: board
board: {integer}9
result: [“computer won”, “human won”, “draw”]
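One informal way to make such a data dictionary machine-checkable (purely an illustrative Python sketch; the entry names follow the example above, while the validity check is an added convenience, not part of the notation) is:

    # Illustrative sketch: the tic-tac-toe data dictionary entries recorded as data,
    # plus a tiny check that a board really is {integer}9, i.e. nine integer squares.
    data_dictionary = {
        "move":    "integer  /* number between 1 and 9 */",
        "display": "game + result",
        "game":    "board",
        "board":   "{integer}9",
        "result":  '["computer won", "human won", "draw"]',
    }

    def is_valid_board(board):
        # board = {integer}9 : exactly nine integer-valued squares
        return len(board) == 9 and all(isinstance(square, int) for square in board)

    sample_board = [0, 1, 2, 0, 1, 0, 2, 0, 0]
    print(is_valid_board(sample_board))   # True

Keeping the entries in one place like this also supports the consistent-vocabulary role discussed below.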

Importance of Data Dictionary


A data dictionary plays a very important role in any software development process
because of the following reasons:
A data dictionary provides a standard terminology for all relevant data for use by the
engineers working in a project. A consistent vocabulary for data items is very important,
since in large projects different engineers of the project have a tendency to use different
terms to refer to the same data, which unnecessarily causes confusion.
The data dictionary provides the analyst with a means to determine the definition of
different data structures in terms of their component elements.
EXPERIMENT: 9
AIM: To understand and create use case diagram.
USE CASE DIAGRAM
Use Case Model
The use case model for any system consists of a set of “use cases”. Intuitively, use cases
represent the different ways in which a system can be used by the users. A simple way to
find all the use cases of a system is to ask the question: “What can the users do using the
system?” Thus for the Library Information System (LIS), the use cases could be:
issue-book
query-book
return-book
create-member
add-book, etc
Use cases correspond to the high-level functional requirements. The use cases partition
the system behavior into transactions, such that each transaction performs some useful
action from the user’s point of view. Completing each transaction may involve either a
single message or multiple message exchanges between the user and the system.
Purpose of use cases
The purpose of a use case is to define a piece of coherent behavior without revealing the
internal structure of the system. The use cases do not mention any specific algorithm to be
used or the internal data representation, internal structure of the software, etc. A use case
typically represents a sequence of interactions between the user and the system. These
interactions consist of one mainline sequence. The mainline sequence represents the
normal interaction between a user and the system. The mainline sequence is the most
frequently occurring sequence of interaction. For example, the mainline sequence of the
withdraw-cash use case supported by a bank ATM would be: insert the card, enter the
password, specify the amount to be withdrawn, complete the transaction, and collect the
amount. Several variations to the mainline sequence may also exist. Typically, a
variation from the mainline sequence occurs when some specific conditions hold. For the
bank ATM example, variations or alternate scenarios may occur if the password is
invalid or the amount to be withdrawn exceeds the account balance. The variations are
also called alternative paths. A use case can be viewed as a set of related scenarios tied
together by a common goal. The mainline sequence and each of the variations are called
scenarios or instances of the use case. Each scenario is a single path of user events and
system activity through the use case
Representation of Use Cases
Use cases can be represented by drawing a use case diagram and writing an
accompanying text elaborating the drawing. In the use case diagram, each use case is
represented by an ellipse with the name of the use case written inside the ellipse. All the
ellipses (i.e. use cases) of a system are enclosed within a rectangle which represents the
system boundary. The name of the system being modeled (such as Library Information
System) appears inside the rectangle.
The different users of the system are represented by using the stick person icon. Each stick
person icon is normally referred to as an actor. An actor is a role played by a user with
respect to the system use. It is possible that the same user may play the role of multiple
actors. Each actor can participate in one or more use cases. The line connecting the actor
and the use case is called the communication relationship. It indicates that the actor
makes use of the functionality provided by the use case. Both the human users and the
external systems can be represented by stick person icons. When a stick person icon
represents an external system, it is annotated by the stereotype <<external system>>.
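To make the notation concrete, the fragment below records part of a use case model as plain data in Python. This is only an illustration, not a UML tool or API; the actor names (Library Member, Librarian) are assumed for the LIS example mentioned earlier.

    # Illustrative sketch: a use case model fragment as plain data.
    # System and use case names come from the LIS example; actor names are assumed.
    use_case_model = {
        "system": "Library Information System",
        "use_cases": ["issue-book", "query-book", "return-book", "create-member", "add-book"],
        "actors": ["Library Member", "Librarian"],
        "communications": [          # (actor, use case) pairs, i.e. communication relationships
            ("Library Member", "query-book"),
            ("Librarian", "issue-book"),
            ("Librarian", "return-book"),
            ("Librarian", "create-member"),
            ("Librarian", "add-book"),
        ],
    }

    def use_cases_for(actor):
        # One actor may participate in several use cases.
        return [uc for a, uc in use_case_model["communications"] if a == actor]

    print(use_cases_for("Librarian"))

Listing the communication relationships explicitly mirrors the lines drawn between the stick person icons and the ellipses in the diagram.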
Example 1:Tic-Tac-Toe Computer Game
Tic-tac-toe is a computer game in which a human player and the computer make
alternate moves on a 3×3 square. A move consists of marking a previously unmarked
square. The player who first places three consecutive marks along a straight line on the
square (i.e. along a row, column, or diagonal) wins the game. As soon as either the
human player or the computer wins, a message congratulating the winner should be
displayed. If neither player manages to get three consecutive marks along a straight line,
but all the squares on the board are filled up, then the game is drawn. The computer
always tries to win a game.

The use case model for the Tic-tac-toe problem is shown in fig. 13.1. This software has
only one use case, “play move”. Note that the use case “get-user-move” is not used here.
The name “get-user-move” would be inappropriate because the use cases should be
named from the user’s perspective.
Fig. 13.1: Use case model for tic-tac-toe game
Text Description
Each ellipse on the use case diagram should be accompanied by a text description. The
text description should define the details of the interaction between the user and the
computer and other aspects of the use case. It should include all the behavior associated
with the use case in terms of the mainline sequence, different variations to the normal
behavior, the system responses associated with the use case, the exceptional conditions
that may occur in the behavior, etc. The behavior description is often written in a
conversational style describing the interactions between the actor and the system. The text
description may be informal, but some structuring is recommended. The following are
some of the information which may be included in a use case text description in addition
to the mainline sequence, and the alternative scenarios.
Contact persons: This section lists the personnel of the client organization with whom the
use case was discussed, date and time of the meeting, etc.
Actors: In addition to identifying the actors, some information about actors using this use
case which may help the implementation of the use case may be recorded.
Pre-condition: The preconditions would describe the state of the system before the use
case execution starts.
Post-condition: This captures the state of the system after the use case has successfully
completed.
Non-functional requirements: This could contain the important constraints for the design
and implementation, such as platform and environment conditions, qualitative statements,
response time requirements, etc.
Exceptions, error situations: This contains only the domain-related errors such as lack of
user’s access rights, invalid entry in the input fields, etc. Obviously, errors that are not
domain related, such as software errors, need not be discussed here.
Sample dialogs: These serve as examples illustrating the use case.
Specific user interface requirements: These contain specific requirements for the user
interface of the use case. For example, it may contain forms to be used, screen shots,
interaction style, etc.
Document references: This part contains references to specific domain-related documents
which may be useful to understand the system operation
Example 2:
A supermarket needs to develop the following software to encourage regular customers.
For this, the customer needs to supply his/her residence address, telephone number, and
the driving license number. Each customer who registers for this scheme is assigned a
unique customer number (CN) by the computer. A customer can present his CN to the
checkout staff when he makes any purchase. In this case, the value of his purchase is
credited against his CN. At the end of
each year, the supermarket intends to award surprise gifts to 10 customers who make the
highest total purchase over the year. Also, it intends to award a 22 carat gold coin to
every customer whose purchase exceeds Rs. 10,000. The entries against the CN are
reset on the last day of every year after the prize winners’ lists are generated.
The use case model for the Supermarket Prize Scheme is shown in fig. 13.2. As discussed
earlier, the use cases correspond to the high-level functional requirements. From the
problem description, we can identify three use cases: “register-customer”, “register-sales”,
and “select-winners”. As a sample, the text description for the use case “register-customer”
is shown.

Fig. 13.2 Use case model for Supermarket Prize Scheme


Text description
U1: register-customer: Using this use case, the customer can register himself by
providing the necessary details.
Scenario 1: Mainline sequence
1. Customer: select the register-customer option.
2. System: display prompt to enter name, address, and telephone number.
3. Customer: enter the necessary values.
4. System: display the generated id and the message that the customer has been
successfully registered.
Scenario 2: At step 4 of mainline sequence
1. System: displays the message that the customer has already registered.
Scenario 3: At step 4 of mainline sequence
1. System: displays the message that some input information has not been entered. The
system displays a prompt to enter the missing value.
The description for other use cases is written in a similar fashion.
Utility of use case diagrams
From the use case diagram, the use cases are represented by ellipses. They, along with the
accompanying text description, serve as a type of requirements specification of the system
and form the core model to which all other models must conform. But what about the actors
(stick person icons)? One possible use
of identifying the different types of users (actors) is in identifying and implementing a
security mechanism through a login system, so that each actor can invoke only those
functionalities to which he is entitled. Another possible use is in preparing the
documentation (e.g. users’ manual) targeted at each category of user. Further, actors
help in identifying the use cases and understanding the exact functioning of the system.
Factoring of use cases
It is often desirable to factor use cases into component use cases. Actually, factoring of
use cases is required under two situations. First, complex use cases need to be factored
into simpler use cases. This would not only make the behavior associated with the use
case much more comprehensible, but also make the corresponding interaction diagrams
more tractable. Without decomposition, the interaction diagrams for complex use cases
may become too large to be accommodated on a single standard-sized (A4) sheet. Secondly, use
cases need to be factored whenever there is common behavior across different use cases.
Factoring would make it possible to define such behavior only once and reuse it whenever
required. It is desirable to factor out common usage such as error handling from a set of
use cases. This makes analysis of the class design much simpler and elegant. However, a
word of caution here. Factoring of use cases should not be done except for achieving the
above two objectives. From the design point of view, it is not advantageous to break up a
use case into many smaller parts just for the sake of it.
UML offers three mechanisms for factoring of use cases as follows:
1. Generalization
Use case generalization can be used when one use case that is similar to another, but
does something slightly differently or something more. Generalization works the same
way with use cases as it does with classes. The child use case inherits the behavior and
meaning of the parent use case. The notation is the same too (as shown in fig. 13.3). It is
important to remember that the base and the derived use cases are separate use cases and
should have separate text descriptions.

Fig. 13.3: Representation of use case generalization

2. Includes
The includes relationship in the older versions of UML (prior to UML 1.1) was known as
the uses relationship. The includes relationship involves one use case including the
behavior of another use case in its sequence of events and actions. The includes
relationship occurs when there is a chunk of behavior that is similar across a number of use
cases. The factoring of such behavior will help in not repeating the specification and
implementation across different use cases. Thus, the includes relationship explores the
issue of reuse by factoring out the commonality across use cases. It can also be gainfully
employed to decompose a large and complex use case into more manageable parts. As
shown in fig. 13.4 the includes relationship is represented using a predefined stereotype
<<include>>. In the includes relationship, a base use case compulsorily and automatically
includes the behavior of the common use cases. As shown in the example in fig. 13.5, issue-book
and renew-book both include check-reservation use case. The base use case may include
several use cases. In such cases, it may interleave their associated common use cases
together. The common use case becomes a separate use case and the independent text
description should be provided for it.

Fig. 13.4 Representation of use case inclusion

Fig. 13.5: Example use case inclusion


3. Extends
The main idea behind the extends relationship among the use cases is that it allows you to
show optional system behavior. An optional system behavior is extended only under
certain conditions. This relationship among use cases is also predefined as a stereotype
as shown in fig. 13.6. The extends relationship is similar to generalization. But unlike
generalization, the extending use case can add additional behavior at an extension
point only when certain
conditions are satisfied. The extension points are points within the use case where
variation to the mainline (normal) action sequence may occur. The extends relationship is
normally used to capture alternate paths or scenarios.

Fig. 13.6: Example use case extension

Organization of use cases


When the use cases are factored, they are organized hierarchically. The high-level use
cases are refined into a set of smaller and more refined use cases as shown in fig. 13.7.
Top-level use cases are super-ordinate to the refined use cases. The refined use cases are
sub-ordinate to the top-level use cases. Note that only the complex use cases should be
decomposed and organized in a hierarchy. It is not necessary to decompose simple use
cases. The functionality of the super-ordinate use cases is traceable to their sub-ordinate
use cases. Thus, the functionality provided by the super-ordinate use cases is composite of
the functionality of the sub-ordinate use cases. In the highest level of the use case model,
only the fundamental use cases are shown. The focus is on the application context.
Therefore, this level is also referred to as the context diagram. In the context diagram, the
system limits are emphasized. In the top-level diagram, only those use cases with which the
external users of the system interact are shown. The subsystem-level use cases specify the
services offered by
the subsystems. Any number of levels involving the subsystems may be utilized. In the
lowest level of the use case hierarchy, the class-level use cases specify the functional
fragments or operations offered by the classes.
Fig. 13.7: Hierarchical organization of use cases
EXPERIMENT:10
AIM: Case study (MIS and DSS)
MIS:-
Case Summary:
A waiter takes an order at a table, and then enters it online via one of the six
terminals located in the restaurant dining room. The order is routed to a printer
in the appropriate preparation area: the cold item printer if it is a salad, the
hot-item printer if it is a hot sandwich or the bar printer if it is a drink. A
customer’s meal check (bill), listing the items ordered and the respective prices,
is automatically generated. This ordering system eliminates the old three-
carbon-copy guest check system as well as any problems caused by a waiter’s
handwriting. When the kitchen runs out of a food item, the cooks send out an
‘out of stock’ message, which will be displayed on the dining room terminals
when waiters try to order that item. This gives the waiters faster feedback,
enabling them to give better service to the customers. Other system features aid
management in the planning and control of their restaurant business. The
system provides up-to-the-minute information on the food items ordered and
breaks out percentages showing sales of each item versus total sales. This helps
management plan menus according to customers’ tastes. The system also
compares the weekly sales totals versus food costs, allowing planning for
tighter cost controls. In addition, whenever an order is voided, the reasons for
the void are keyed in. This may help later in management decisions, especially
if the voids are consistently related to food or service. Acceptance of the system by
the users is exceptionally high since the waiters and waitresses were involved in
the selection and design process. All potential users were asked to give their
impressions and ideas about the various systems available before one was
chosen.
Questions:
In the light of the system, describe the decisions to be made in the area of
strategic planning, managerial control and operational control? What
information would you require to make such decisions?
What would make the system a more complete MIS rather than just doing
transaction processing?
Explain the probable effects that making the system more formal would have on
the customers and the management.
Solution of Management Information System in Restaurant Case Study:
1. A management information system (MIS) is an organized combination of
people, hardware, communication networks and data sources that collects,
transforms and distributes information in an organization. An MIS helps
decision making by providing timely, relevant and accurate information to
managers. The physical components of an MIS include hardware, software,
database, personnel and procedures.
Management information is an important input for efficient performance of
various managerial functions at different organization levels. The information
system facilitates decision making. Management functions include
planning, controlling and decision making. Decision making is the core of
management and aims at selecting the best alternative to achieve an objective.
The decisions may be strategic, tactical or technical. Strategic decisions are
characterized by uncertainty. They are future oriented and relate directly to
planning activity. Tactical decisions cover both planning and controlling.
Technical decisions pertain to implementation of specific tasks through
appropriate technology. Sales region analysis, cost analysis, annual budgeting,
and relocation analysis are examples of decision-support systems and
management information systems.
There are 3 areas in the organization. They are strategic, managerial and
operational control.
Strategic decisions are characterized by uncertainty. The decisions to be made
in the area of strategic planning are future oriented and relate directly to
planning activity. Here basically planning for future that is budgets, target
markets, policies, objectives etc. is done. This is basically a top level where up-
to-the-minute information on the food items ordered, and percentages showing
sales of each item versus total sales, is provided. The top level
where strategic planning is done compares the weekly sales totals versus food
costs, allowing planning for tighter cost controls. Executive support systems
function at the strategic level, support unstructured decision making, and use
advanced graphics and communications. Examples of executive support systems
include sales trend forecasting, operating plan development, budget
forecasting, profit planning, and manpower planning.
The decisions to be made in the area of managerial control are largely
dependent upon the information available to the decision makers. It is basically
a middle level where planning of menus is done and whenever an order is
voided, the reasons for the void are keyed in which later helps in management
decisions, especially if the voids are related to food or service. The managerial
control that is middle level also gets customer feedback and is responsible for
customer satisfaction.
The decisions to be made in the area of operational control pertain to
implementation of specific tasks through appropriate technology. This is
basically a lower level where the waiter takes the order and enters it online via
one of the six terminals located in the restaurant dining room and the order is
routed to a printer in the appropriate preparation area. The item’s ordered list
and the respective prices are automatically generated. The cooks send ‘out of
stock’ message when the kitchen runs out of a food item, which is basically
displayed on the dining room terminals when waiter tries to order that item.
This basically gives the waiters faster feedback, enabling them to give better
service to the customers. Transaction processing systems function at the
operational level of the organization. Examples of transaction processing
systems include order tracking, order processing, machine control, plant
scheduling, compensation, and securities trading.
The information required to make such decision must be such that it highlights
the trouble spots and shows the interconnections with the other functions. It
must summarize all information relating to the span of control of the manager.
The information required to make these decisions can be strategic, tactical or
operational information.
Advantages of an online computer system:
Eliminates carbon copies
Waiters’ handwriting issues
Out-of-stock message
Faster feedback, helps waiters to service the customers
Advantages to management:
Sales figures and percentages item-wise
Helps in planning the menu
Cost accounting details
2. If the management provides sufficient incentive for efficiency and results to
their customers, it would make the system a more complete MIS and so the MIS
should support this culture by providing such information which will aid the
promotion of efficiency in the management services and operational system. It
is also necessary to study the keys to successful Executive Information System
(EIS) development and operation. Decision support systems would also make
the system a complete MIS as it constitutes a class of computer-based
information systems including knowledge-based systems that support decision-
making activities. DSSs serve the management level of the organization and
help to take decisions, which may be rapidly changing and not easily specified
in advance.
Improving personal efficiency, expediting problem solving (speed up the
progress of problems solving in an organization), facilitating interpersonal
communication, promoting learning and training, increasing organizational
control, generating new evidence in support of a decision, creating a
competitive advantage over competition, encouraging exploration and
discovery on the part of the decision maker, revealing new approaches to
thinking about the problem space and helping automate the managerial
processes would make the system a complete MIS rather than just doing
transaction processing.
3. The management system should be an open system and MIS should be so
designed that it highlights the critical business, operational, technological and
environmental changes to the concerned level in the management, so that the
action can be taken to correct the situation. To make the system a success,
knowledge will have to be formalized so that machines worldwide have a
shared and common understanding of the information provided. The systems
developed will have to be able to handle enormous amounts of information very
fast.
An organization operates in an ever-increasing competitive, global
environment. Operating in a global environment requires an organization to
focus on the efficient execution of its processes, customer service, and speed to
market. To accomplish these goals, the organization must exchange valuable
information across different functions, levels, and business units. By making the
system more formal, the organization can more efficiently exchange
information among its functional areas, business units, suppliers, and
customers.
As the transactions are taking place every day, the system stores all the data
which can be used later on when the hotel is in need of some financial help
from financial institutes or banks. As the inventory is always entered into the
system, any frauds can be easily taken care of and if anything goes missing then
it can be detected through the system.

Decision support systems (DSS) are interactive software-based systems
intended to help managers in decision-making by accessing large volumes of
information generated from various related information systems involved in
organizational business processes, such as office automation system,
transaction processing system, etc.
DSS uses the summary information, exceptions, patterns, and trends using the
analytical models. A decision support system helps in decision-making but does
not necessarily give a decision itself. The decision makers compile useful
information from raw data, documents, personal knowledge, and/or business
models to identify and solve problems and make decisions.
Programmed and Non-programmed Decisions
There are two types of decisions - programmed and non-programmed decisions.
Programmed decisions are basically automated processes, general routine
work, where −
These decisions have been taken several times.
These decisions follow some guidelines or rules.
For example, selecting a reorder level for inventories is a programmed
decision.
Non-programmed decisions occur in unusual and non-addressed situations, so

It would be a new decision.
There will not be any rules to follow.
These decisions are made based on the available information.
These decisions are based on the manager's discretion, instinct, perception and
judgment.
For example, investing in a new technology is a non-programmed decision.
Decision support systems generally involve non-programmed decisions.
Therefore, there will be no exact report, content, or format for these systems.
Reports are generated on the fly.
Attributes of a DSS
Adaptability and flexibility
High level of Interactivity
Ease of use
Efficiency and effectiveness
Complete control by decision-makers
Ease of development
Extendibility
Support for modeling and analysis
Support for data access
Standalone, integrated, and Web-based
Characteristics of a DSS
Support for decision-makers in semi-structured and unstructured problems.
Support for managers at various managerial levels, ranging from top executive
to line managers.
Support for individuals and groups. Less structured problems often require the
involvement of several individuals from different departments and organizational
levels.
Support for interdependent or sequential decisions.
Support for intelligence, design, choice, and implementation.
Support for variety of decision processes and styles.
DSSs are adaptive over time.
Benefits of DSS
Improves efficiency and speed of decision-making activities.
Increases the control, competitiveness and capability of futuristic decision-
making of the organization.
Facilitates interpersonal communication.
Encourages learning or training.
Since it is mostly used in non-programmed decisions, it reveals new approaches
and sets up new evidences for an unusual decision.
Helps automate managerial processes.
Components of a DSS
Following are the components of the Decision Support System −
Database Management System (DBMS) − To solve a problem the necessary
data may come from internal or external database. In an organization, internal
data are generated by a system such as TPS and MIS. External data come from
a variety of sources such as newspapers, online data services, databases
(financial, marketing, human resources).
Model Management System − It stores and accesses models that managers use
to make decisions. Such models are used for designing manufacturing facility,
analyzing the financial health of an organization, forecasting demand of a
product or service, etc.
Support Tools − Support tools like online help; pulls down menus, user
interfaces, graphical analysis, error correction mechanism, facilitates the user
interactions with the system.
Classification of DSS
There are several ways to classify DSS. Holsapple and Whinston classify
DSS as follows −
Text Oriented DSS − It contains textually represented information that could
have a bearing on decision. It allows documents to be electronically created,
revised and viewed as needed.
Database Oriented DSS − Database plays a major role here; it contains
organized and highly structured data.
Spreadsheet Oriented DSS − It contains information in spreadsheets that
allows the user to create, view and modify procedural knowledge, and also instructs the
system to execute self-contained instructions. The most popular tools are Excel and Lotus
1-2-3.
Solver Oriented DSS − It is based on a solver, which is an algorithm or
procedure written for performing certain calculations and particular program
type.
Rules Oriented DSS − It follows certain procedures adopted as rules; an expert
system is an example.
Compound DSS − It is built by using two or more of the five structures
explained above.
Types of DSS
Following are some typical DSSs −
Status Inquiry System − It helps in taking operational, management level, or
middle level management decisions, for example daily schedules of jobs to
machines or machines to operators.
Data Analysis System − It needs comparative analysis and makes use of
formula or an algorithm, for example cash flow analysis, inventory analysis etc.
Information Analysis System − In this system data is analyzed and the
information report is generated. For example, sales analysis, accounts
receivable systems, market analysis etc.
Accounting System − It keeps track of accounting and finance related
information, for example, final account, accounts receivables, accounts
payables, etc. that keep track of the major aspects of the business.
Model Based System − Simulation models or optimization models used for
decision-making are used infrequently and create general guidelines for
operation or management.
EXPERIMENT:11
AIM: To understand and apply concepts of project management.
The job pattern of an IT company engaged in software development can be seen
split in two parts:
Software Creation
Software Project Management
A project is a well-defined task, which is a collection of several operations done
in order to achieve a goal (for example, software development and delivery). A
project can be characterized as follows:
Every project has a unique and distinct goal.
A project is not a routine activity or day-to-day operation.
A project comes with a start time and an end time.
A project ends when its goal is achieved; hence it is a temporary phase in the
lifetime of an organization.
A project needs adequate resources in terms of time, manpower, finance, material
and knowledge-bank.
Software Project
A Software Project is the complete procedure of software development from
requirement gathering to testing and maintenance, carried out according to the
execution methodologies, in a specified period of time to achieve intended
software product.
Need of software project management
Software is said to be an intangible product. Software development is a relatively
new stream in world business and there is very little experience in building
software products. Most software products are tailor-made to fit the client’s
requirements. Most important is that the underlying technology changes
and advances so frequently and rapidly that the experience of one product may not
be applied to the other one. All such business and environmental constraints
bring risk to software development; hence it is essential to manage software
projects efficiently.

The image above shows the triple constraints for software projects. It is an
essential part of a software organization to deliver a quality product, keeping the
cost within the client’s budget constraint and delivering the project as per schedule.
There are several factors, both internal and external, which may impact this
triple-constraint triangle. Any of the three factors can severely impact the other two.
Therefore, software project management is essential to incorporate user
requirements along with budget and time constraints.
Software Project Manager
A software project manager is a person who undertakes the responsibility of
executing the software project. The software project manager is thoroughly aware
of all the phases of SDLC that the software would go through. The project manager
may never be directly involved in producing the end product, but he controls and
manages the activities involved in production.
A project manager closely monitors the development process, prepares and
executes various plans, arranges necessary and adequate resources, maintains
communication among all team members in order to address issues of cost,
budget, resources, time, quality and customer satisfaction.
Let us see few responsibilities that a project manager shoulders -
Managing People
Act as project leader
Liaison with stakeholders
Managing human resources
Setting up reporting hierarchy etc.
Managing Project
Defining and setting up project scope
Managing project management activities
Monitoring progress and performance
Risk analysis at every phase
Take necessary step to avoid or come out of problems
Act as project spokesperson
Software Management Activities
Software project management comprises a number of activities, which
include planning of the project, deciding the scope of the software product, estimation of
cost in various terms, scheduling of tasks and events, and resource
management. Project management activities may include:
Project Planning
Scope Management
Project Estimation
Project Planning
Software project planning is a task which is performed before the production of
software actually starts. It is there for the software production but involves no
concrete activity that has any direct connection with software production;
rather, it is a set of multiple processes which facilitate software production.
Project planning may include the following:
Scope Management
It defines the scope of the project; this includes all the activities and processes that
need to be done in order to make a deliverable software product. Scope management is
essential because it creates boundaries of the project by clearly defining what
would be done in the project and what would not be done. This makes the project
contain limited and quantifiable tasks, which can easily be documented and in
turn avoids cost and time overrun.
During Project Scope management, it is necessary to -
Define the scope
Decide its verification and control
Divide the project into various smaller parts for ease of management.
Verify the scope
Control the scope by incorporating changes to the scope
Project Estimation
For effective management, accurate estimation of various measures is a
must. With correct estimation, managers can manage and control the project
more efficiently and effectively.
Project estimation may involve the following:
Software size estimation
Software size may be estimated either in terms of KLOC (Kilo Lines of Code) or
by calculating the number of function points in the software. Lines of code depend
upon coding practices, and function points vary according to the user or
software requirements.
Effort estimation
The managers estimate effort in terms of personnel requirements and the man-hours
required to produce the software. For effort estimation, the software size should be
known. This can either be derived by managers’ experience, organization’s
historical data or software size can be converted into efforts by using some
standard formulae.
Time estimation
Once size and efforts are estimated, the time required to produce the software
can be estimated. The effort required is segregated into sub-categories as per the
requirement specifications and the interdependency of various components of the
software. Software tasks are divided into smaller tasks, activities or events using a
Work Breakdown Structure (WBS). The tasks are scheduled on a day-to-day
basis or in calendar months.
The sum of time required to complete all tasks in hours or days is the total time
invested to complete the project.
Cost estimation
This might be considered as the most difficult of all because it depends on more
elements than any of the previous ones. For estimating project cost, it is
required to consider -
Size of software
Software quality
Hardware
Additional software or tools, licenses etc.
Skilled personnel with task-specific skills
Travel involved
Communication
Training and support
Project Estimation Techniques
We discussed various parameters involving project estimation such as size,
effort, time and cost.
The project manager can estimate the listed factors using two broadly recognized
techniques –
Decomposition Technique
This technique views the software as a product of various components.
There are two main models -
Line of Code: estimation is done on the basis of the number of lines of code in the
software product.
Function Points: estimation is done on the basis of the number of function points in
the software product.
Empirical Estimation Technique
This technique uses empirically derived formulae to make estimations. These
formulae are based on LOC or FPs.
Putnam Model
This model was developed by Lawrence H. Putnam and is based on Norden’s
frequency distribution (Rayleigh curve). The Putnam model maps the time and effort
required with software size.
COCOMO
COCOMO stands for COnstructive COst MOdel, developed by Barry W.
Boehm. It divides the software product into three categories of software:
organic, semi-detached and embedded.
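A minimal sketch of the basic COCOMO formulas, Effort = a*(KLOC)^b person-months and Time = c*(Effort)^d months, using the standard basic-model coefficients for the three categories; the 32 KLOC size below is only an example value.

    # Basic COCOMO sketch: effort and development time from estimated size in KLOC.
    # The table holds the standard basic-COCOMO constants; the size is an example.
    COEFFICIENTS = {
        # category: (a, b, c, d)
        "organic":       (2.4, 1.05, 2.5, 0.38),
        "semi-detached": (3.0, 1.12, 2.5, 0.35),
        "embedded":      (3.6, 1.20, 2.5, 0.32),
    }

    def basic_cocomo(kloc, category):
        a, b, c, d = COEFFICIENTS[category]
        effort = a * (kloc ** b)   # person-months
        time = c * (effort ** d)   # months
        return effort, time

    effort, time = basic_cocomo(32, "organic")
    print(f"Effort: {effort:.1f} person-months, Time: {time:.1f} months")

More refined COCOMO variants add cost drivers on top of these base equations, but the structure of the calculation stays the same.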
Project Scheduling
Project scheduling refers to the roadmap of all activities to be done
in a specified order and within the time slot allotted to each activity. Project
managers tend to define the various tasks and project milestones and arrange them
keeping various factors in mind. They look for tasks that lie on the critical path in the
schedule, which must be completed in a specific manner (because of task
interdependency) and strictly within the time allocated. Tasks
which lie outside the critical path are less likely to impact the overall schedule of the
project.
For scheduling a project, it is necessary to -
Break down the project tasks into smaller, manageable form
Find out various tasks and correlate them
Estimate time frame required for each task
Divide time into work-units
Assign adequate number of work-units for each task
Calculate total time required for the project from start to finish
Resource management
All elements used to develop a software product may be considered resources
for that project. These may include human resources, productive tools and
software libraries.
The resources are available in limited quantity and stay in the organization as a
pool of assets. The shortage of resources hampers the development of project
and it can lag behind the schedule. Allocating extra resources increases
development cost in the end. It is therefore necessary to estimate and allocate
adequate resources for the project.
Resource management includes -
Defining proper organization project by creating a project team and allocating
responsibilities to each team member
Determining resources required at a particular stage and their availability
Manage Resources by generating resource request when they are required and
de-allocating them when they are no more needed.
Project Risk Management
Risk management involves all activities pertaining to identification, analyzing
and making provision for predictable and non-predictable risks in the project.
Risk may include the following:
Experienced staff leaving the project and new staff coming in.
Change in organizational management.
Requirement change or misinterpreting requirement.
Under-estimation of required time and resources.
Technological changes, environmental changes, business competition.
Risk Management Process
There are following activities involved in risk management process:
Identification - Make note of all possible risks, which may occur in the project.
Categorize - Categorize known risks into high, medium and low risk intensity
as per their possible impact on the project.
Manage - Analyze the probability of occurrence of risks at various phases.
Make plan to avoid or face risks. Attempt to minimize their side-effects.
Monitor - Closely monitor the potential risks and their early symptoms. Also
monitor the effects of steps taken to mitigate or avoid them.
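As an illustration of the Categorize step above (the risk list echoes the examples given earlier, while the probabilities, impact scores and thresholds are assumed), risk exposure can be computed as probability times impact and then bucketed:

    # Illustrative sketch of the Categorize step: exposure = probability x impact,
    # bucketed into high/medium/low. Probabilities, impacts and thresholds are assumed.
    risks = [
        {"name": "Experienced staff leaving the project", "probability": 0.3, "impact": 9},
        {"name": "Requirement change or misinterpretation", "probability": 0.6, "impact": 6},
        {"name": "Under-estimation of time and resources", "probability": 0.4, "impact": 4},
    ]

    def category(exposure):
        if exposure >= 3.0:
            return "high"
        if exposure >= 1.5:
            return "medium"
        return "low"

    for risk in risks:
        exposure = risk["probability"] * risk["impact"]
        print(f'{risk["name"]}: exposure={exposure:.1f} ({category(exposure)})')

Such a table can then be re-evaluated at every phase, which is what the Manage and Monitor steps call for.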
Project Execution & Monitoring
In this phase, the tasks described in project plans are executed according to
their schedules.
Execution needs monitoring in order to check whether everything is going
according to the plan. Monitoring is observing to check the probability of risk
and taking measures to address the risk or report the status of various tasks.
These measures include -
Activity Monitoring - All activities scheduled within some task can be
monitored on day-to-day basis. When all activities in a task are completed, it is
considered as complete.
Status Reports - The reports contain status of activities and tasks completed
within a given time frame, generally a week. Status can be marked as finished,
pending or work-in-progress etc.
Milestones Checklist - Every project is divided into multiple phases where
major tasks are performed (milestones) based on the phases of SDLC. This
milestone checklist is prepared once every few weeks and reports the status of
milestones.
Project Communication Management
Effective communication plays vital role in the success of a project. It bridges
gaps between client and the organization, among the team members as well as
other stake holders in the project such as hardware suppliers.
Communication can be oral or written. Communication management process
may have the following steps:
Planning - This step includes the identifications of all the stakeholders in the
project and the mode of communication among them. It also considers if any
additional communication facilities are required.
Sharing - After determining various aspects of planning, manager focuses on
sharing correct information with the correct person on correct time. This keeps
every one involved the project up to date with project progress and its status.
Feedback - Project managers use various measures and feedback mechanism
and create status and performance reports. This mechanism ensures that input
from various stakeholders is coming to the project manager as their feedback.
Closure - At the end of each major event, end of a phase of SDLC or end of the
project itself, administrative closure is formally announced to update every
stakeholder by sending email, by distributing a hardcopy of document or by
other mean of effective communication.
After closure, the team moves to next phase or project.
Configuration Management
Configuration management is a process of tracking and controlling the changes
in software in terms of the requirements, design, functions and development of
the product.
IEEE defines it as “the process of identifying and defining the items in the
system, controlling the change of these items throughout their life cycle,
recording and reporting the status of items and change requests, and verifying
the completeness and correctness of items”.
Generally, once the SRS is finalized there is less chance of requirement of
changes from user. If they occur, the changes are addressed only with prior
approval of higher management, as there is a possibility of cost and time
overrun.
Baseline
A phase of SDLC is assumed to be over if it is baselined, i.e. a baseline is a measurement
that defines the completeness of a phase. A phase is baselined when all activities
pertaining to it are finished and well documented. If it is not the final phase,
its output is used in the next immediate phase.
Configuration management is a discipline of organization administration,
which takes care of occurrence of any change (process, requirement,
technological, strategical etc.) after a phase is baselined. CM keeps check on
any changes done in software.
Change Control
Change control is a function of configuration management, which ensures that all
changes made to software system are consistent and made as per
organizational rules and regulations.
A change in the configuration of product goes through following steps -
Identification - A change request arrives from either internal or external
source. When change request is identified formally, it is properly documented.
Validation - Validity of the change request is checked and its handling
procedure is confirmed.
Analysis - The impact of change request is analyzed in terms of schedule, cost
and required efforts. Overall impact of the prospective change on system is
analyzed.
Control - If the prospective change either impacts too many entities in the
system or it is unavoidable, it is mandatory to take approval of high authorities
before change is incorporated into the system. It is decided if the change is
worth incorporation or not. If it is not, change request is refused formally.
Execution - If the previous phase determines to execute the change request, this
phase takes appropriate actions to execute the change, doing a thorough revision
if necessary.
Close request - The change is verified for correct implementation and merging
with the rest of the system. This newly incorporated change in the software is
documented properly and the request is formally closed.
Project Management Tools
The risk and uncertainty rise multifold with respect to the size of the project,
even when the project is developed according to set methodologies.
There are tools available which aid effective project management. A few are
described below.
Gantt Chart
The Gantt chart was devised by Henry Gantt (1917). It represents the project
schedule with respect to time periods. It is a horizontal bar chart with bars
representing the activities and the time scheduled for the project activities.
PERT Chart
PERT (Program Evaluation & Review Technique) chart is a tool that depicts the
project as a network diagram. It is capable of graphically representing the main
events of the project in both parallel and consecutive ways. Events which occur one
after another show the dependency of the later event on the previous one.
Events are shown as numbered nodes. They are connected by labeled arrows
depicting the sequence of tasks in the project.
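PERT charts are usually accompanied by a three-point time estimate for each
activity. The formula below is the standard PERT convention rather than something
given in this manual: the expected duration is te = (o + 4m + p) / 6, where o, m and
p are the optimistic, most likely and pessimistic estimates.

    # Standard PERT three-point estimate (conventional formula, not from this manual).
    def pert_estimate(o: float, m: float, p: float) -> tuple:
        expected = (o + 4 * m + p) / 6     # weighted mean of the three estimates
        std_dev = (p - o) / 6              # rough spread of the estimate
        return expected, std_dev

    print(pert_estimate(o=2, m=4, p=8))    # -> (4.33..., 1.0)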
Resource Histogram
This is a graphical tool whose bars represent the number of resources (usually
skilled staff) required over time for a project event (or phase). The resource
histogram is an effective tool for staff planning and coordination.
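As a rough illustration, the data behind a resource histogram is simply the number
of staff needed per period, summed across overlapping activities. The activities and
numbers in this sketch are invented.

    # Tiny sketch of the data behind a resource histogram: staff needed per week.
    from collections import Counter

    # (activity, staff needed, weeks it runs)
    plan = [("design", 2, [1, 2]), ("coding", 4, [2, 3, 4]), ("testing", 3, [4, 5])]

    staff_per_week = Counter()
    for _, staff, weeks in plan:
        for week in weeks:
            staff_per_week[week] += staff

    for week in sorted(staff_per_week):
        bar = "#" * staff_per_week[week]
        print(f"week {week}: {bar} ({staff_per_week[week]})")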
Critical Path Analysis
This tool is useful in recognizing interdependent tasks in the project. It also
helps to find the critical path, which determines the shortest possible time in which
the project can be completed. Like the PERT diagram, each event is allotted a
specific time frame. This tool shows the dependency of events, assuming an event
can proceed to the next only if the previous one is completed.
The events are arranged according to their earliest possible start time. The path
between the start and end nodes is the critical path, which cannot be further
reduced, and all events on it must be executed in the same order.
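A minimal sketch of the idea, assuming a small invented task graph: the earliest the
project can finish is determined by the longest chain of dependent activities, i.e. the
critical path.

    # Minimal critical-path sketch; task names, durations and dependencies are invented.
    tasks = {
        "A": {"duration": 3, "depends_on": []},
        "B": {"duration": 5, "depends_on": ["A"]},
        "C": {"duration": 2, "depends_on": ["A"]},
        "D": {"duration": 4, "depends_on": ["B", "C"]},
    }

    earliest_finish = {}

    def finish_time(name: str) -> int:
        """Earliest finish = duration + latest earliest-finish of all predecessors."""
        if name not in earliest_finish:
            deps = tasks[name]["depends_on"]
            start = max((finish_time(d) for d in deps), default=0)
            earliest_finish[name] = start + tasks[name]["duration"]
        return earliest_finish[name]

    project_length = max(finish_time(t) for t in tasks)
    print("Project cannot finish earlier than:", project_length)   # 12, via A -> B -> D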
EXPERIMENT:12
AIM: To make decision whether to buy/lease/develop the software.
A make-or-buy decision is an act of choosing between manufacturing a product
in-house or purchasing it from an external supplier.
Also referred to as an outsourcing decision, a make-or-buy decision compares
the costs and benefits associated with producing a necessary good or service
internally to the costs and benefits involved in hiring an outside supplier for the
resources in question.
To compare costs accurately, a company must consider all aspects regarding
the acquisition and storage of the items versus creating the items in-house,
which may require the purchase of new equipment, as well as storage costs.
Four Numbers You Should Know
When you are supposed to make a make-or-buy decision, there are four
numbers you need to be aware of. Your decision will be based on the values of
these four numbers. Let's have a look at the numbers now. They are quite self-
explanatory.
The volume
The fixed cost of making
Per-unit direct cost when making
Per-unit cost when buying
Now, there are two formulas that use the above numbers: 'Cost to Buy' and
'Cost to Make'. The higher value loses, and the decision maker can go ahead with
the less costly option.
Cost to Buy (CTB) = Volume x Per-unit cost when buying
Cost to Make (CTM) = Fixed costs + (Per-unit direct cost x volume)
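These two formulas translate directly into code. The numbers in the example below
are invented purely for illustration; with them, making turns out cheaper than buying.

    # Direct translation of the CTB and CTM formulas above; example figures are invented.
    def cost_to_buy(volume: int, unit_buy_cost: float) -> float:
        return volume * unit_buy_cost

    def cost_to_make(fixed_cost: float, unit_make_cost: float, volume: int) -> float:
        return fixed_cost + unit_make_cost * volume

    volume = 10_000
    ctb = cost_to_buy(volume, unit_buy_cost=12.0)                            # 120,000
    ctm = cost_to_make(fixed_cost=30_000, unit_make_cost=7.5, volume=volume)  # 105,000
    print("Make" if ctm < ctb else "Buy")                                    # -> Make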
Reasons for Making
There are a number of reasons a company would consider making in-house.
The following are a few:
Cost concerns
Desire to expand the manufacturing focus
Need of direct control over the product
Intellectual property concerns
Quality control concerns
Supplier unreliability
Lack of competent suppliers
Volume too small to attract a supplier
Reduction of logistic costs (shipping etc.)
To maintain a backup source
Political and environment reasons
Organizational pride
Reasons for Buying
Following are some of the reasons companies may consider when it comes to
buying from a supplier:
Lack of technical experience
Supplier's expertise on the technical areas and the domain
Cost considerations
Need of small volume
Insufficient capacity to produce in-house
Brand preferences
Strategic partnerships
The Process
Make-or-buy decisions occur at many scales. If the decision is small in nature
and has little impact on the business, then even one person can make the decision.
The person can weigh the pros and cons of making versus buying and arrive at a
decision.
When it comes to larger and high impact decisions, usually organizations
follow a standard method to arrive at a decision. This method can be divided
into four main stages as below.
1. Preparation
Team creation and appointment of the team leader
Identifying the product requirements and analysis
Team briefing and aspect/area distribution
2. Data Collection
Collecting information on various aspects of make-or-buy decision
Workshops on weightings, ratings, and cost for both make-or-buy
3. Data Analysis
Analysis of data gathered
4. Feedback
Feedback on the decision made
By following the above structured process, the organization can make an
informed make-or-buy decision. Although this is a standard process for making
the make-or-buy decision, organizations can have their own variations.
EXPERIMENT: 13
AIM: To assure Quality of Software, Statistical Software Quality Assurance,
Reliability of Software
Software Quality Assurance (SQA) is simply a way to assure quality in
the software. It is the set of activities which ensure that processes, procedures
and standards are suitable for the project and are implemented correctly.
Software Quality Assurance is a process that works in parallel with the development
of software. It focuses on improving the development process so that problems
can be prevented before they become a major issue. Software Quality Assurance
is a kind of umbrella activity that is applied throughout the software process.
Software Quality Assurance has: 
1. A quality management approach 
2. Formal technical reviews 
3. Multi testing strategy 
4. Effective software engineering technology 
5. Measurement and reporting mechanism 
Major Software Quality Assurance Activities: 

1. SQA Management Plan:
Make a plan for how you will carry out SQA throughout the project. Think
about which set of software engineering activities is best for the project, and
check the skill level of the SQA team.

2. Set The Check Points:
The SQA team should set checkpoints and evaluate the performance of the
project on the basis of data collected at the different checkpoints.

3. Multi testing Strategy:
Do not depend on a single testing approach; when multiple testing approaches
are available, use them.

4. Measure Change Impact:
A change made to correct an error sometimes reintroduces more errors, so
keep measuring the impact of each change on the project. Re-test the changed
software to check the compatibility of the fix with the whole project.

5. Manage Good Relations:
In the working environment, maintaining good relations with the other teams
involved in project development is mandatory. Poor relations between the SQA
team and the programming team will have a direct, negative impact on the
project. Don't play politics.
 
Benefits of Software Quality Assurance (SQA): 
 
1. SQA produces high quality software. 
2. High quality application saves time and cost. 
3. SQA is beneficial for better reliability. 
4. SQA is beneficial when the software must run without maintenance for a long time. 
5. High quality commercial software increases the market share of the company. 
6. Improving the process of creating software. 
7. Improves the quality of the software. 
 
Disadvantage of SQA: 
There are a number of disadvantages of quality assurance. Some of them
include adding more resources and employing more workers to help maintain
quality, both of which increase cost and effort.
Statistical Quality Assurance (SQA)

As brands and retailers experience growing demand for the latest consumer
products, the resulting increase in production and batch sizes makes quality
control more challenging for companies.

Traditional compliance testing techniques can sometimes provide only limited
pass/fail information, which results in insufficient measurement of the batch's
quality, poor identification of the root cause of failures, and weaker overall
quality assurance (QA) in the production process.
Intertek combines legal, customer and essential safety requirements to
customize a workable QA process, called Statistical Quality Assurance (SQA).
SQA is used to identify the potential variations in the manufacturing process
and predict potential defects on a parts-per-million (PPM) basis. It provides a
statistical description of the final product and addresses quality and safety issues
that arise during manufacturing.
SQA consists of three major methodologies:
1. Force Diagram - A Force Diagram describes how a product should be
tested. Intertek engineers base the creation of Force Diagrams on their
knowledge of foreseeable use, critical manufacturing processes and critical
components that have a high potential to fail.
2. Test-to-Failure (TTF) - Unlike legal compliance testing, TTF tells
manufacturers how many defects they are likely to find in every
million units of output. This information is incorporated into the process
and indicates whether a product needs improvement in quality or whether it
is being over-engineered, which can eventually lead to cost savings.
3. Intervention - Products are separated into groups according to the total
production quantity and the production lines. Each group then undergoes an
intervention. The end result is measured by a Z-value, which is an
indicator of the quality and consistency of a product against its
specification. Intervention allows manufacturers to pinpoint a defect to a
specific lot and production line, thus saving time and money in corrective
actions.
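As a generic illustration of how a Z-value relates to a defect rate in parts per
million, the sketch below uses a standard process-capability calculation based on the
normal distribution. This is not Intertek's proprietary method, and the sample
measurements and specification limit are invented.

    # Generic Z-value / PPM illustration; sample data and spec limit are invented.
    from statistics import NormalDist, mean, stdev

    measurements = [9.8, 10.1, 9.9, 10.0, 10.2, 9.7, 10.1, 9.9]   # e.g. pull strength
    upper_spec_limit = 10.6

    mu, sigma = mean(measurements), stdev(measurements)
    z = (upper_spec_limit - mu) / sigma                # distance to the limit in std devs
    defect_fraction = 1 - NormalDist().cdf(z)          # share expected beyond the limit
    print(f"Z = {z:.2f}, predicted defects ~ {defect_fraction * 1e6:.0f} PPM")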

Software Reliability

Software Reliability means operational reliability. It is described as the
ability of a system or component to perform its required functions under
stated conditions for a specified period of time.

Software reliability is also defined as the probability that a software
system fulfills its assigned task in a given environment for a predefined
number of input cases, assuming that the hardware and the input are free
of error.
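One common way to quantify this probability (assumed here only for illustration;
the manual does not prescribe a particular model) is the exponential reliability
model R(t) = e^(-t / MTBF), where MTBF is the mean time between failures
observed during testing.

    # Exponential reliability model sketch; the figures used are invented.
    import math

    def reliability(t_hours: float, mtbf_hours: float) -> float:
        """Probability of failure-free operation for t_hours, given the observed MTBF."""
        return math.exp(-t_hours / mtbf_hours)

    print(f"{reliability(t_hours=100, mtbf_hours=2_000):.3f}")   # ~0.951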

Software Reliability is an essential component of software quality, together
with functionality, usability, performance, serviceability, capability,
installability, maintainability, and documentation. Software reliability is
hard to achieve because the complexity of software tends to be high. Any
system with a high degree of complexity, including software, is hard to bring
to a certain level of reliability, yet system developers tend to push
complexity into the software layer, driven by the rapid growth of system size
and the ease of doing so by upgrading the software.

For example, large next-generation aircraft will have over 1
million source lines of software on-board; next-generation air traffic
control systems will contain between one and two million lines; the
upcoming International Space Station will have over two million lines on-
board and over 10 million lines of ground support software; several
significant life-critical defense systems will have over 5 million source
lines of software. While the complexity of software is inversely
associated with software reliability, it is directly related to other vital
factors in software quality, especially functionality, capability, etc.
