
Manipal

BCA REVAMPED PROGRAMME


IV SEMESTER
ASSIGNMENT
Name

VELMURUGAN C
____________________________________________________

Registration No.

531210112
____________________________________________________

Learning Center

KUWAIT EDUCATIONAL CENTER


____________________________________________________

Learning Center Code

2527
____________________________________________________

Course/Program

BCA
____________________________________________________

Semester

IV Semester
____________________________________________________

Subject Code

BC0049
____________________________________________________

Subject Title

SOFTWARE ENGINEERING
____________________________________________________

Date of Submission

26.02.2014
____________________________________________________

Marks Awarded

:
____________________________________________________

Average marks of both assignments

_________________________________________________

_______________________________________________

Signature of Center Coordinator

Signature of Evaluator

Directorate of Distance Education


Sikkim Manipal University
II Floor, Syndicate House, Manipal 576 104

SMU

_________________________________________

Sikkim Manipal University

Directorate of Distance Education

Question 1: Define System software, Business Software, Embedded software, Web based software and Software Crisis.
Ans.:
SYSTEM SOFTWARE:
System software is a collection of programs written to service other
programs. Some system software processes complex information
structures; other systems applications process largely indeterminate
data. System software is characterized by heavy interaction with hardware,
heavy usage by multiple users, concurrent operation that requires
scheduling, resource sharing and sophisticated process management,
complex data structures, and multiple external interfaces.
System software is a type of computer program that is designed to run a
computer's hardware and application programs. If we think of the
computer system as a layered model, the system software is the interface
between the hardware and user applications.
The operating system (OS) is the best-known example of system software.
The OS manages all the other programs in a computer.
BUSINESS SOFTWARE:
Business software or business application is any software or set of
computer programs that are used by business users to perform various
business functions. These business applications are used to increase
productivity, to measure productivity and to perform business functions
accurately. Business information processing is the largest single software
application area. Discrete systems like payroll and accounts
receivable/payable have evolved into Management Information Systems
(MIS) software that accesses one or more large databases containing
business information.
EMBEDDED SOFTWARE:
Embedded software is computer software written to control machines or
devices that are not typically thought of as computers. Embedded
software resides in read-only memory and is used to control products
and systems for the consumer and industrial markets. Embedded software
can provide very limited and esoteric functions or provide significant
function and control capability.

WEB BASED SOFTWARE:


The web pages retrieved by a browser are software that incorporates
executable instructions and data. In essence, the network becomes a
massive computer providing an almost unlimited software resource that
can be accessed by anyone with a modem.
SOFTWARE CRISIS:
Software crisis was a term used in the early days of computing science.
The term was used to describe the impact of rapid increases in computer
power and the complexity of the problems that could be tackled. The set
of problems that are encountered in the development of computer
software is not limited to software that does not function properly;
rather, the affliction encompasses problems associated with how we develop
software, how we support a growing volume of existing software, and how
we can expect to keep pace with a growing demand for more software.

Question 2: What are the drawbacks of Rapid Application Development (RAD)?
Ans.:
RAD model stands for Rapid Application Development model. It is a type of
incremental model. In the RAD model, the components or functions are
developed in parallel as if they were mini projects. The developments are
time-boxed, delivered, and then assembled into a working prototype. This
can quickly give the customer something to see and use and to provide
feedback regarding the delivery and their requirements.

DRAWBACKS OF THE RAD MODEL:


For large but scalable projects, RAD requires sufficient human resources to
create the right number of RAD teams.
RAD requires developers and customers who are committed to the rapid-fire
activities necessary to get a system complete in a much-abbreviated
time frame. If commitment is lacking from either side, RAD projects will fail.

Not all types of applications are appropriate for RAD. If a system cannot
be properly modularized, building the components necessary for RAD will
be problematic. If high performance is an issue and performance is to be
achieved through tuning the interfaces to system components, the RAD
approach may not work.
RAD is not appropriate when technical risks are high. This occurs when a
new application makes heavy use of new technology or when the new
software requires a high degree of interoperability with existing computer
programs.
Short iteration may not add enough functionality, leading to significant
delays in final iterations. Since Agile emphasizes real-time communication
(preferably face-to-face), utilizing it is problematic for large multi-team
distributed system development. Agile methods produce very little written
documentation and require a significant amount of post-project
documentation.
Programmers are required to work in pairs (which may be difficult for
some developers). There is no up-front detailed design, which could
result in more redesign effort in the long run. The business champion
attached to the project full time can potentially become a single point of
failure for the project and a major source of stress for the team.
The client may create an unrealistic product vision and request extensive
gold-plating, leading the team to over- or under-develop functionality.
The product may lose its competitive edge because of insufficient core
functionality and may exhibit poor overall quality.

Depends on strong team and individual performances for identifying
business requirements.
Only systems that can be modularized can be built using RAD.
Requires highly skilled developers/designers.
High dependency on modeling skills.
Inapplicable to cheaper projects, as the cost of modeling and automated
code generation is very high.

Question 3: Reliability is more important than efficiency. State 5 reasons to justify.
Ans.:

RELIABILITY IS MORE IMPORTANT THAN EFFICIENCY:


The reliability of a software system is a measure of how well users think it
provides the services that they require. Reliability is usually defined as the
probability of failure-free operation for a specified time in a specified
environment for a specific purpose.
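
To make the definition concrete, here is a minimal sketch in Python. The exponential failure model and the failure rate below are illustrative assumptions, not part of the text: with a constant failure rate, the probability of failure-free operation for a given time t is R(t) = exp(-rate * t).

import math

def reliability(failure_rate, hours):
    # Probability of failure-free operation for the given time,
    # assuming a constant failure rate (exponential model).
    return math.exp(-failure_rate * hours)

# Hypothetical figures: 0.001 failures per hour over a 100-hour period.
print(reliability(0.001, 100))  # ~0.905, about a 90.5% chance of no failure
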
Improved programming techniques, better programming languages and
better quality management have led to very significant improvements in
reliability for most software.
However, for some systems, such as those which control unattended
machinery, these normal techniques may not be enough to achieve the
level of reliability required.
In these cases special programming techniques may be necessary to
achieve the required reliability.
Improved reliability is one of the benefits of software reuse. Software
reliability is a function of the number of failures experienced by a
particular user of that software. A software failure occurs when the
software is executing; it is a situation in which the software does not
deliver the service expected by the user.
REASONS TO JUSTIFY THAT RELIABILITY IS MORE IMPORTANT THAN
EFFICIENCY:
i. Computers are now cheap and fast: there is little need to maximize
equipment usage. Paradoxically, however, faster equipment leads to
increasing expectations on the part of the user, so efficiency
considerations cannot be completely ignored.

ii. Unreliable software is liable to be discarded by users: if a company
attains a reputation for unreliability because of a single unreliable
product, it is likely to affect future sales of all of that company's
products.

iii. System failure costs may be enormous: for some applications, such as
a reactor control system or an aircraft navigation system, the cost of
system failure is orders of magnitude greater than the cost of the
control system.

iv. Unreliable systems are difficult to improve: it is usually possible to
tune an inefficient system because most execution time is spent in small
program sections. An unreliable system is more difficult to improve, as
unreliability tends to be distributed throughout the system.

v. Inefficiency is predictable: programs take a long time to execute and
users can adjust their work to take this into account. Unreliability, by
contrast, usually surprises the user.

vi. Unreliable systems may cause information loss: information is very
expensive to collect and maintain; it may sometimes be worth more than
the computer system on which it is processed.

Question 4: Explain Data-flow design.
Ans.:
The data flow model is a set of policies and diagrams representing the
design requirements that you need to implement in order to meet the
goals in your solution proposal. Data flow model policies are simple
statements of the business requirements found in your solution proposal.
You refine each statement so that it evolves into a precise policy of your
business requirements.
Data flow design is concerned with designing a sequence of functional
transformations that convert system inputs into the required outputs. The design
is represented as data-flow diagrams. These diagrams illustrate how data
flows through a system and how the output is derived from the input
through a sequence of functional transformations. A data flow diagram
(DFD) is a graphical representation of the "flow" of data through an
information system, modeling its process aspects. Often they are a
preliminary step used to create an overview of the system which can later
be elaborated. DFDs can also be used for the visualization of data
processing (structured design).
Data-flow diagrams are a useful and intuitive way of describing a system.
They are normally understandable without special training, especially if
control information is excluded.
They show end-to-end processing; that is, the flow of processing from
when data enters the system to where it leaves the system can be traced.

Data-flow design is an integral part of a number of design methods, and
most CASE tools support data-flow diagram creation.
Different methods may use different icons to represent data-flow diagram
entities, but their meanings are similar. The notation used here is based
on the following symbols:

ROUNDED RECTANGLES represent functions, which transform inputs
to outputs. The transformation name indicates its function.
RECTANGLES represent data stores. Again, they should be given a
descriptive name.
CIRCLES represent user interactions with the system which provide
input or receive output.
ARROWS show the direction of data flow. Their name describes the
data flowing along that path.
The KEYWORDS "and" and "or" have their usual meanings as in Boolean
expressions. They are used to link data flows when more than one data
flow may be input to or output from a transformation.
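
As a minimal illustration in Python (the transformation names and sample data are hypothetical), a data-flow design is simply a chain of functional transformations from input to output:

def read_input(raw):
    # Source: raw data entering the system.
    return raw.split(",")

def validate(fields):
    # Transformation: trim whitespace and discard empty fields.
    return [f.strip() for f in fields if f.strip()]

def compute_total(fields):
    # Transformation: convert the fields to numbers and sum them.
    return sum(float(f) for f in fields)

# The output is derived from the input through a sequence of
# transformations, just as the arrows of a data-flow diagram would show.
print(compute_total(validate(read_input("10, 20, ,30"))))  # 60.0
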

Question 5: Explain White-box testing.
Ans.:
White-box testing (also known as clear box testing, glass box testing,
transparent box testing, and structural testing) is a method of testing
software that tests internal structures or workings of an application, as
opposed to its functionality (i.e. black-box testing).
White-box testing, sometimes called glass-box testing, is a test-case
design method that uses the control structure of the procedural design to
derive test cases. Using white-box testing methods, the software engineer
can derive test cases that (1) guarantee that all independent paths within
a module have been exercised at least once, (2) exercise all logical
decisions on their true and false sides, (3) execute all loops at their
boundaries and within their operational bounds, and (4) exercise internal
data structures to ensure their validity.
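
A minimal sketch of the idea in Python (the function under test and the test values are hypothetical): the test cases below are derived from the control structure, exercising the decision on its true and false sides and running the loop at its boundaries (zero, one, and many iterations):

def max_positive(values):
    # Return the largest positive number in values, or None.
    best = None
    for v in values:              # loop exercised at its boundaries
        if v > 0:                 # decision exercised on both sides
            if best is None or v > best:
                best = v
    return best

assert max_positive([]) is None          # loop: zero iterations
assert max_positive([5]) == 5            # loop: one iteration
assert max_positive([3, -1, 7, 2]) == 7  # loop: many iterations
assert max_positive([-4, -2]) is None    # decision: false side only
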
A reasonable question might be posed at this juncture: why spend time
and energy worrying about (and testing) logical minutiae when we might
better expend effort ensuring that program requirements have been met?
Stated another way, why don't we spend all of our energy on black-box
tests? The answer lies in the nature of software defects:
Logic errors and incorrect assumptions are inversely proportional to the
probability that a program path will be executed. Errors tend to creep into
our work when we design and implement functions, conditions, or controls
that are out of the mainstream. Everyday processing tends to be well
understood (and well scrutinized), while 'special case' processing tends to
fall through the cracks.
We often believe that a logical path is not likely to be executed when, in
fact, it may be executed on a regular basis. The logical flow of a program
is sometimes counterintuitive, meaning that our unconscious assumptions
about flow of control and data may lead us to make design errors that are
uncovered only once path testing commences.
Typographical errors are random. When a program is translated into
programming language source code, it is likely that some typing errors will
occur.
Many will be uncovered by syntax and type-checking mechanisms, but
others may go undetected until testing begins. It is as likely that a typo
will exist on an obscure logical path as on a mainstream path.
Each of these reasons provides an argument for conducting white-box
tests. Black-box testing, no matter how thorough, may miss the kinds of
errors noted here. White-box testing is far more likely to uncover them.

Question 6: What is Top-down integration and Bottom-up integration?
Ans.:
Integration testing (sometimes called integration and testing, abbreviated
I&T) is the phase in software testing in which individual software modules
are combined and tested as a group. It occurs after unit testing and before
validation testing. Integration testing takes as its input modules that have
been unit tested, groups them in larger aggregates, applies tests defined
in an integration test plan to those aggregates, and delivers as its output
the integrated system ready for system testing.
TOP-DOWN INTEGRATION:
Top-Down Testing is an approach to integration testing where the top-level
integrated modules are tested first and the branches of the module are then
tested step by step until the end of the related module. Top-down integration testing
is an incremental approach to construction of program structure. Modules
are integrated by moving downward through the control hierarchy,
beginning with the main control module (main program). Modules
subordinate (and ultimately subordinate) to the main control module are
incorporated into the structure in either a depth-first or breadth-first
manner.
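
A minimal sketch in Python (the module names are hypothetical): the main control module is exercised first, with a stub standing in for a subordinate module that has not yet been integrated:

def payment_stub(order):
    # Stub: stands in for the real, not-yet-integrated payment module
    # that is subordinate to the main control module.
    return {"status": "approved", "order": order}

def process_order(order, payment=payment_stub):
    # Main control module: tested first in top-down integration.
    result = payment(order)
    return result["status"] == "approved"

# The main module's control logic is verified before the real payment
# module exists; the stub is replaced by the real module later.
assert process_order({"id": 1}) is True
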
Advantages

Having the framework, we can test major or supreme functions early
in the development process.
At the same time, we can also test any interfaces that we have
considered, and thus identify any errors in that area very early.
The supreme or major benefit of this practice is that we have a
partially working framework to show to the clients and to top
management.

BOTTOM-UP INTEGRATION:
Bottom-Up Testing is an approach to integration testing where the lowest
level components are tested first, then used to facilitate the testing of
higher level components. The process is repeated until the component at
the top of the hierarchy is tested.

All the bottom or low-level modules, procedures or functions are
integrated and then tested. After the integration testing of the lower-level
integrated modules, the next level of modules will be formed and can be
used for integration testing. This approach is helpful only when all or most
of the modules of the same development level are ready. This method also
helps to determine the levels of software developed and makes it easier to
report testing progress in the form of a percentage.
Bottom-up integration testing, as its name implies, begins construction
and testing with atomic modules (i.e., components at the lowest levels in
the program structure). Because components are integrated from the
bottom up, processing required for components subordinate to a given
level is always available and the need for stubs is eliminated.
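
By way of contrast, a minimal Python sketch (again with hypothetical names): the atomic module at the bottom of the hierarchy is tested first through a temporary driver, so no stubs are required:

def tax(amount, rate=0.05):
    # Atomic module at the lowest level of the program structure.
    return round(amount * rate, 2)

def driver():
    # Driver: temporary code that exercises the atomic module before
    # the higher-level modules that will call it are integrated.
    assert tax(100) == 5.0
    assert tax(0) == 0.0

driver()
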
