SE - Solution - Sample Questions


SE QB SOLUTION

1. Explain COCOMO II model with a suitable example. A project size of 200 KLOC is to be developed.
Software development team has average experience on similar type of project. The project schedule was
not very tight. Calculate Effort, development time, average staff size and productivity of the project.
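The numeric part of this question can be sketched in code. A minimal sketch, assuming the basic COCOMO coefficients for a semidetached project (a = 3.0, b = 1.12, c = 2.5, d = 0.35), which fit a team with average experience and a schedule that is not tight; a full COCOMO II estimate would instead use scale factors and cost drivers.

```python
# Hedged sketch: basic COCOMO, semidetached mode (assumed coefficients).
def cocomo_semidetached(kloc):
    a, b, c, d = 3.0, 1.12, 2.5, 0.35
    effort = a * kloc ** b          # effort in person-months (PM)
    dev_time = c * effort ** d      # development time in months
    staff = effort / dev_time       # average staff size
    productivity = kloc / effort    # KLOC per person-month
    return effort, dev_time, staff, productivity

effort, dev_time, staff, productivity = cocomo_semidedetached = cocomo_semidetached(200)
print(f"Effort       = {effort:.1f} PM")          # 1133.1 PM
print(f"Dev. time    = {dev_time:.1f} months")    # 29.3 months
print(f"Staff size   = {staff:.1f} persons")      # 38.7
print(f"Productivity = {productivity:.3f} KLOC/PM")  # 0.177
```

The same four formulas answer each part of the question in turn: effort, then schedule, then staff as their ratio, then productivity as size over effort.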

2. Explain different size estimation metrics with their advantages and disadvantages.
Estimation of the size of the software is an essential part of Software Project Management. It helps the
project manager to further predict the effort and time which will be needed to build the
project. Various measures are used in project size estimation. Some of these are:

● Lines of Code
● Number of entities in ER diagram
● Total number of processes in detailed data flow diagram
● Function points
1. Lines of Code (LOC): As the name suggests, LOC counts the total number of lines of source code in a
project. The units of LOC are:
● KLOC- Thousand lines of code
● NLOC- Non-comment lines of code
● KDSI- Thousands of delivered source instruction
The size is estimated by comparing it with the existing systems of the same kind. The experts use it to
predict the required size of various components of software and then add them to get the
total size.
It is tough to estimate LOC by analysing the problem definition; an accurate LOC count is possible only after the whole code has been developed. This statistic is therefore of little utility to project managers, because project planning must be completed before development activity can begin.
Two separate source files with a similar number of lines may not require the same effort: a file with complicated logic takes longer to write than one with simple logic, so proper estimation may not be attainable based on LOC alone.
LOC counts also differ greatly from one programmer to the next: a seasoned programmer can write the same logic in fewer lines than a novice coder.
Advantages:
● Universally accepted and is used in many models like COCOMO.
● Estimation is closer to the developer’s perspective.
● Simple to use.
Disadvantages:
● Different programming languages contain a different number of lines.
● No proper industry standard exists for this technique.
● It is difficult to estimate the size using this technique in the early stages of the project.
2. Number of entities in ER diagram: The ER model provides a static view of the project. It describes the entities and their relationships. The number of entities in the ER model can be used to estimate the size of the project, because more entities need more classes/structures, leading to more coding.
Advantages:
● Size estimation can be done during the initial stages of planning.
● The number of entities is independent of the programming technologies used.
Disadvantages:
● No fixed standards exist, and some entities contribute more to project size than others.
● Just like FPA, it is rarely used directly in cost estimation models; hence, it must be converted to LOC.
3. Total number of processes in detailed data flow diagram: The Data Flow Diagram (DFD) represents the functional view of software. The model depicts the main processes/functions involved in the software and the flow of data between them. The number of processes in the DFD is used to predict software size: already existing processes of a similar type are studied and used to estimate the size of each process, and the sum of the estimated sizes of the processes gives the final estimated size.
Advantages:
● It is independent of the programming language.
● Each major process can be decomposed into smaller processes. This will increase the accuracy of the estimation.
Disadvantages:
● Studying similar kinds of processes to estimate size takes additional time and effort.
● Construction of a DFD is not required for all software projects.
4. Function Point Analysis: In this method, the number and type of functions supported by the software are utilized to find the FPC (function point count). The steps in function point analysis are:
● Count the number of functions of each proposed type.
● Compute the Unadjusted Function Points (UFP).
● Find the Total Degree of Influence (TDI).
● Compute the Value Adjustment Factor (VAF).
● Find the Function Point Count (FPC).
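The steps above can be sketched in code. A minimal sketch, assuming IFPUG-style average-complexity weights (EI=4, EO=5, EQ=4, ILF=10, EIF=7) and made-up counts; the standard formulas VAF = 0.65 + 0.01 × TDI and FPC = UFP × VAF are used.

```python
# Assumed average-complexity weights per function type (IFPUG-style).
WEIGHTS = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}

def function_points(counts, tdi):
    """counts: function type -> number of functions; tdi: Total Degree of Influence (0-70)."""
    ufp = sum(WEIGHTS[t] * n for t, n in counts.items())  # Unadjusted Function Points
    vaf = 0.65 + 0.01 * tdi                               # Value Adjustment Factor
    return ufp, vaf, ufp * vaf                            # FPC = UFP x VAF

# Hypothetical counts for illustration only
ufp, vaf, fpc = function_points(
    {"EI": 10, "EO": 8, "EQ": 5, "ILF": 4, "EIF": 2}, tdi=42)
print(ufp, round(vaf, 2), round(fpc, 2))  # 154 1.07 164.78
```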

3. Illustrate design principles


Problem Partitioning
For a small problem, we can handle the entire problem at once, but for a significant problem we divide and conquer: the problem is divided into smaller pieces so that each piece can be handled separately. For software design, the goal is to divide the problem into manageable pieces.
These pieces cannot be entirely independent of each other as they together form the system. They have
to cooperate and communicate to solve the problem. This communication adds complexity.

Abstraction
An abstraction is a tool that enables a designer to consider a component at an abstract level without bothering about the internal details of the implementation. Abstraction can be used for existing elements as well as for the component being designed.
Here, there are two common abstraction mechanisms
1. Functional Abstraction
2. Data Abstraction
Functional Abstraction
A module is specified by the function it performs.
The details of the algorithm to accomplish the functions are not visible to the user of the function.
Functional abstraction forms the basis for Function oriented design approaches.

Data Abstraction
Details of the data elements are not visible to the users of data. Data Abstraction forms the basis for
Object Oriented design approaches.

Modularity
Modularity refers to the division of software into separate modules which are differently named and addressed and are integrated later to obtain the completely functional software. It is the only property that allows a program to be intellectually manageable. Single large programs are difficult to understand and read due to the large number of reference variables, control paths, global variables, etc.

The desirable properties of a modular system are:


1. Each module is a well-defined system that can be used with other applications.
2. Each module has a single specified objective.
3. Modules can be separately compiled and saved in the library.

Strategy of Design
A good system design strategy is to organize the program modules in such a way that they are easy to develop and, later, to change. Structured design methods help developers to deal with the size and complexity of programs. Analysts generate instructions for the developers about how code should be written and how pieces of code should fit together to form a program.

To design a system, there are two possible approaches:


Top-down Approach
Bottom-up Approach
1. Top-down Approach: This approach starts with the identification of the main components and then decomposes them into their more detailed sub-components.



2. Bottom-up Approach: A bottom-up approach begins with the lower-level details and moves up the hierarchy, as shown in the figure. This approach is suitable in the case of an existing system.

4. Why software design should exhibit an architecture? Justify


Software architecture in software engineering helps to expose the structure of a system while hiding some implementation details. Architecture focuses on relationships and on how the elements and components interact with each other, as does software engineering. In fact, software architecture and software engineering often overlap; they are combined because many of the same rules govern both practices. The difference sometimes comes when decisions are focused on software engineering and the software architecture follows. The software architect is able to distinguish between what is just detail in the software engineering and what is important to the internal structure. Further, architecture involves a set of significant decisions about the organization of software development, and each of these decisions can have a considerable impact on quality, maintainability, performance, and the overall success of the final product. These decisions comprise −

● Selection of structural elements and their interfaces by which the system is composed.
● Behavior as specified in collaborations among those elements.
● Composition of these structural and behavioral elements into a large subsystem.
● Architectural decisions align with business objectives.
● Architectural styles guide the organization.

5. Explain different types of coupling and Cohesion with example.


A good design is one that has low coupling. Coupling is measured by the number of relations between modules; that is, coupling increases as the number of calls between modules increases or as the amount of shared data grows. Thus, it can be said that a design with high coupling will have more errors.

Types of Module Coupling


1. No Direct Coupling: There is no direct coupling between M1 and M2.
In this case, modules are subordinates to different modules. Therefore, no direct coupling.
2. Data Coupling: When data of one module is passed to another module, this is called data
coupling.
3. Stamp Coupling: Two modules are stamp coupled if they communicate using composite data
items such as structure, objects, etc. When the module passes non-global data structure or entire
structure to another module, they are said to be stamp coupled. For example, passing structure
variable in C or object in C++ language to a module.
4. Control Coupling: Control coupling exists between two modules if data from one module is used to direct the order of instruction execution in another.
5. External Coupling: External coupling arises when two modules share an externally imposed data format, communication protocol, or device interface. It is related to communication with external tools and devices.
6. Common Coupling: Two modules are common coupled if they share information through some
global data items.
7. Content Coupling: Content Coupling exists among two modules if they share code, e.g., a branch
from one module into another module.
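Three of the coupling types above can be contrasted in a short sketch; the modules, names, and data are hypothetical.

```python
# Data coupling: only the elementary data actually needed is passed.
def net_salary(gross, tax_rate):
    return gross * (1 - tax_rate)

# Stamp coupling: a whole composite structure is passed, even though
# the callee uses only part of it.
def print_address_label(employee):  # employee is a composite (dict)
    return f"{employee['name']}, {employee['city']}"

# Common coupling: modules communicate through a shared global data item.
CURRENT_USER = {"name": "alice"}

def audit_log(action):
    return f"{CURRENT_USER['name']} did {action}"  # reads the global

print(net_salary(1000, 0.2))                                # 800.0
print(print_address_label({"name": "Bo", "city": "Pune"}))  # Bo, Pune
print(audit_log("login"))                                   # alice did login
```

Data coupling is the most desirable of the three; the global in the common-coupled pair makes both modules harder to test and change independently.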

Cohesion is an ordinal type of measurement and is generally described as "high cohesion" or "low
cohesion."
Types of Modules Cohesion
1. Functional Cohesion: Functional Cohesion is said to exist if the different elements of a module,
cooperate to achieve a single function.
2. Sequential Cohesion: A module is said to possess sequential cohesion if the element of a
module forms the components of the sequence, where the output from one component of the
sequence is input to the next.
3. Communicational Cohesion: A module is said to have communicational cohesion, if all tasks of
the module refer to or update the same data structure, e.g., the set of functions defined on an
array or a stack.
4. Procedural Cohesion: A module is said to have procedural cohesion if its elements are all parts of a procedure in which a particular sequence of steps has to be carried out to achieve a goal, e.g., the algorithm for decoding a message.
5. Temporal Cohesion: When a module includes functions that are associated by the fact that all of them must be executed within the same span of time, the module is said to exhibit temporal cohesion.
6. Logical Cohesion: A module is said to be logically cohesive if all the elements of the module
perform a similar operation. For example Error handling, data input and data output, etc.
7. Coincidental Cohesion: A module is said to have coincidental cohesion if it performs a set of
tasks that are associated with each other very loosely, if at all.
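The two ends of the cohesion scale above can be contrasted in a short, hypothetical sketch.

```python
# Functional cohesion: every statement cooperates toward one single function.
def compute_mean(values):
    return sum(values) / len(values)

# Coincidental cohesion: unrelated tasks lumped together in one function.
def misc_utilities(values, text):
    mean = sum(values) / len(values)   # a statistics task...
    shouted = text.upper()             # ...and an unrelated string task
    return mean, shouted

print(compute_mean([1, 2, 3]))       # 2.0
print(misc_utilities([2, 4], "hi"))  # (3.0, 'HI')
```

Functional cohesion is the most desirable; coincidental cohesion is the least, since changes to one task risk breaking the unrelated other.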

6. Explain software design principle based with relevant examples.


Principles Of Software Design :
Should not suffer from “Tunnel Vision” –
While designing the process, it should not suffer from “tunnel vision”, which means it should not focus only on completing or achieving the aim but also on other effects.
Traceable to analysis model –
The design process should be traceable to the analysis model which means it should satisfy all the
requirements that software requires to develop a high-quality product.
Should not “Reinvent The Wheel” –
The design process should not reinvent the wheel, meaning it should not waste time or effort creating things that already exist; otherwise, overall development time will increase.
Minimize Intellectual distance –
The design process should reduce the gap between real-world problems and software solutions for
that problem meaning it should simply minimize intellectual distance.
Exhibit uniformity and integration –
The design should display uniformity which means it should be uniform throughout the process
without any change. Integration means it should mix or combine all parts of software i.e. subsystems
into one system.
Accommodate change –
The software should be designed in such a way that it accommodates the change implying that the
software should adjust to the change that is required to be done as per the user’s need.
Degrade gently –
The software should be designed in such a way that it degrades gracefully, which means it should continue to work properly, possibly with reduced functionality, even if an error occurs during execution.
Assessed or quality –
The design should be assessed or evaluated for the quality meaning that during the evaluation, the
quality of the design needs to be checked and focused on.
Review to discover errors –
The design should be reviewed which means that the overall evaluation should be done to check if
there is any error present or if it can be minimized.
Design is not coding and coding is not design –
Design means describing the logic of the program to solve any problem and coding is a type of
language that is used for the implementation of a design.

7. What are different Architectural styles?


The use of architectural styles is to establish a structure for all the components of the system.
Taxonomy of Architectural styles:

Data centered architectures:


A data store will reside at the center of this architecture and is accessed frequently by the other
components that update, add, delete or modify the data present within the store.
The figure illustrates a typical data-centered style. The client software accesses a central repository. In a variation of this approach, the repository is transformed into a blackboard that sends notifications to client software when data of interest to a client changes.
This data-centered architecture promotes integrability. This means that existing components can be changed and new client components can be added to the architecture without concern to other clients.
Data can be passed among clients using blackboard mechanism.
Data flow architectures:
This kind of architecture is used when input data is to be transformed into output data through a series of computational or manipulative components.
The figure represents a pipe-and-filter architecture: it has a set of components called filters connected by pipes.
Pipes are used to transmit data from one component to the next.
Each filter will work independently and is designed to take data input of a certain form and produces
data output to the next filter of a specified form. The filters don’t require any knowledge of the
working of neighboring filters.
If the data flow degenerates into a single line of transforms, then it is termed as batch sequential.
This structure accepts the batch of data and then applies a series of sequential components to
transform it.
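The pipe-and-filter flow described above can be sketched with generator-based filters; the three filters shown are hypothetical.

```python
# Each filter consumes a stream and produces a new one, with no
# knowledge of the workings of its neighbouring filters.
def strip_blanks(lines):  # filter 1: drop empty lines, trim whitespace
    return (ln.strip() for ln in lines if ln.strip())

def to_upper(lines):      # filter 2: normalize case
    return (ln.upper() for ln in lines)

def number(lines):        # filter 3: prefix line numbers
    return (f"{i}: {ln}" for i, ln in enumerate(lines, 1))

raw = ["  hello ", "", "world"]
pipeline = number(to_upper(strip_blanks(raw)))  # function nesting acts as the pipes
print(list(pipeline))  # ['1: HELLO', '2: WORLD']
```

Because each filter only agrees on the form of its input and output streams, filters can be reordered or replaced without touching the others.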
Call and Return architectures: It is used to create a program that is easy to scale and modify. Many
sub-styles exist within this category. Two of them are explained below.
Remote procedure call architecture: The components of a main program or subprogram architecture are distributed across multiple computers on a network.
Main program or Subprogram architectures: The main program decomposes into a number of subprograms or functions organized in a control hierarchy. The main program invokes a number of subprograms, which can in turn invoke other components.

8. Illustrate design issues


9. Explain Integration testing and system testing in detail.
System Testing: While developing a software or application product, it is tested at the final stage as a whole by combining all the product modules; this is called System Testing. The primary aim of conducting this test is to verify that the product fulfills the customer/user requirement specification. It is also called an end-to-end test, as it is performed at the end of development. This testing does not depend on system implementation; in simple words, the system tester doesn’t know which technique, procedural or object-oriented, was implemented. The testing is classified into the functional and non-functional requirements of the system. Functional testing is similar to black-box testing, since it is based on specifications instead of the code and syntax of the programming language used. Non-functional testing, on the other hand, checks for performance and reliability by generating test cases in the corresponding programming language.

Integration Testing: This testing exercises the collection of the modules of the software, where the relationships and the interfaces between the different components are also tested. It needs coordination between project-level activities for integrating the constituent components a few at a time. The integration and integration testing must adhere to a build plan for well-defined integration and identification of bugs in the early stages. An integrator or integration tester must have programming knowledge, unlike a system tester.

10. Find Cyclomatic Complexity and independent path of the code


Cyclomatic complexity of a code section is the quantitative measure of the number of linearly
independent paths in it. It is a software metric used to indicate the complexity of a program. It is
computed using the Control Flow Graph of the program.
For example, if source code contains no control flow statement then its cyclomatic complexity will be
1 and source code contains a single path in it. Similarly, if the source code contains one if condition
then cyclomatic complexity will be 2 because there will be two paths one for true and the other for
false.
Steps that should be followed in calculating cyclomatic complexity and designing test cases are:
● Construction of a graph with nodes and edges from the code.
● Identification of independent paths.
● Calculation of cyclomatic complexity.
● Design of test cases.
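The calculation step can be sketched from a control-flow graph using the formula V(G) = E − N + 2; the graph below is a hypothetical CFG for a single if-else.

```python
# Cyclomatic complexity from a control-flow graph: V(G) = E - N + 2
def cyclomatic_complexity(edges, nodes):
    return len(edges) - len(nodes) + 2

# CFG for: if (cond) { A } else { B }; end  -> nodes 1..4
nodes = [1, 2, 3, 4]
edges = [(1, 2), (1, 3), (2, 4), (3, 4)]
print(cyclomatic_complexity(edges, nodes))  # 2 -> two linearly independent paths
# Equivalent shortcut: V(G) = number of predicate (decision) nodes + 1
```

Here the two independent paths are 1→2→4 (condition true) and 1→3→4 (condition false), matching the V(G) = 2 result.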

11. Differentiate Verification and Validation with examples. Which comes first
Verification is the process of reviewing the intermediate work products of a software development
lifecycle to ensure that we are on track to complete the final result.
In other words, verification is a process of evaluating software mediation products to see whether
they meet the requirements set out at the start of the phase.
These may comprise documentation like requirements specifications, design documents, database
table design, ER diagrams, test cases, traceability matrix, and so on that are created throughout the
development stages.
Verification uses review or non-executable techniques to guarantee that the system (software,
hardware, documentation, and staff) meets an organization's standards and procedures.
Validation is the process of assessing the completed product to see whether it fits the business
requirements. To put it another way, the test execution that we undertake on a daily basis is
essentially a validation activity that comprises smoke testing, functional testing, regression testing,
systems testing, and so on.
Validation encompasses all types of testing that include interacting with the product and putting it
to the test.
The validation procedures are listed below −
● Unit Testing
● Integration Testing
● System Testing
● User Acceptance Testing
Verification comes first: it is applied to the intermediate work products throughout development, whereas validation is performed on the completed product at the end.

12. Describe the various testing strategies for a conventional system


Test strategies for conventional software
Following are the four strategies for conventional software:
1) Unit testing
2) Integration testing
3) Regression testing
4) Smoke testing

1) Unit testing
Unit testing focus on the smallest unit of software design, i.e module or software component.
Test strategy conducted on each module interface to access the flow of input and output.
The local data structure is accessible to verify integrity during execution.
Boundary conditions are tested.
In which all error handling paths are tested.
An Independent path is tested.

2) Integration testing
Integration testing is used for the construction of software architecture.

There are two approaches to integration testing:


i) Non incremental integration testing
ii) Incremental integration testing

i) Non incremental integration testing


Combines all the components in advance.
If a set of errors occurs, correction is difficult because isolating the cause is complex.
ii) Incremental integration testing
The programs are built and tested in small increments.
The errors are easier to correct and isolate.
Interfaces are fully tested and applied for a systematic test approach to it.
Following are the incremental integration strategies:
a. Top-down integration
b. Bottom-up integration
3) Regression testing
In regression testing, previously conducted tests are re-executed every time the software changes, for example when a new module is added as part of integration testing, to ensure that the change has not introduced unintended side effects.
4) Smoke testing
The developed software components are translated into code and merged into a build of the complete product; the build is then exercised to expose show-stopping errors as early as possible.

13. How effective equivalence partitioning method based testing can be done?
14. Compute cyclomatic complexity for given code. Also compute basis set & determine test data for each
basis set
15. Discuss various testing methods applicable for Web application.
Web Testing, or website testing, is checking your web application or website for potential bugs before it is made live and accessible to the general public.
In Software Engineering, the following testing types/techniques may be performed depending on your web testing requirements.
1. Functionality Testing of a Website
Functionality Testing of a Website is a process that includes several testing parameters like user
interface, APIs, database testing, security testing, client and server testing and basic website
functionalities.
2. Usability testing:
Usability Testing has now become a vital part of any web based project. It can be carried out by
testers like you or a small focus group similar to the target audience of the web application.
3.Interface Testing:
Three areas to be tested here are – Application, Web and Database Server

Application: Test requests are sent correctly to the Database and output at the client side is
displayed correctly. Errors if any must be caught by the application and must be only shown to the
administrator and not the end user.
Web Server: Test Web server is handling all application requests without any service denial.
Database Server: Make sure queries sent to the database give expected results.
4. Database Testing:
Database is one critical component of your web application and stress must be laid to test it
thoroughly. Testing activities will include-

Test if any errors are shown while executing queries


Data Integrity is maintained while creating, updating or deleting data in database.
5.Compatibility testing:
Compatibility tests ensures that your web application displays correctly across different devices. This
would include-

Browser Compatibility Test: The same website will display differently in different browsers. You need to test that your web application displays correctly across browsers and that JavaScript, AJAX and authentication work fine.
6.Performance Testing:
This will ensure your site works under all loads. Software testing activities will include, but are not limited to –

Website application response times at different connection speeds


Load test your web application to determine its behavior under normal and peak loads
7. Security testing:
Security Testing is vital for e-commerce websites that store sensitive customer information like credit cards. Testing activities will include-

Test unauthorized access to secure pages should not be permitted


Restricted files should not be downloadable without appropriate access
8. Intruder:
Intruder is a powerful vulnerability scanner that will help you uncover the many weaknesses lurking
in your web applications and underlying infrastructure.
9. Crowd Testing:
You select a large number of people (a crowd) to execute tests which would otherwise have been executed by a select group of people in the company. Crowdsourced testing is an interesting and upcoming concept and helps unravel many otherwise unnoticed defects.
16. Differentiate Alpha and Beta testing

17. Explain different types of maintenance


Software Maintenance is the process of modifying a software product after it has been delivered to the
customer. The main purpose of software maintenance is to modify and update software applications
after delivery to correct faults and to improve performance.
Categories of Software Maintenance –
Maintenance can be divided into the following:
Corrective maintenance:
Corrective maintenance of a software product may be essential either to rectify some bugs observed
while the system is in use, or to enhance the performance of the system.

Adaptive maintenance:
This includes modifications and updations when the customers need the product to run on new
platforms, on new operating systems, or when they need the product to interface with new hardware
and software.
Perfective maintenance:
A software product needs maintenance to support the new features that the users want or to change
different types of functionalities of the system according to the customer demands.

Preventive maintenance:
This type of maintenance includes modifications and updations to prevent future problems of the
software. It aims to address problems which are not significant at this moment but may cause serious issues in the future.

18. Mention reason for project delay. Prepare RMMM plan for the same.
19. Explain steps of Version control and Change control
Version control – Creating versions/specifications of the existing product to build new products, with the help of the SCM system. A description of versioning is given below:

Suppose after some changes, the version of configuration object changes from 1.0 to 1.1. Minor
corrections and changes result in versions 1.1.1 and 1.1.2, which is followed by a major update that
is object 1.2. The development of object 1.0 continues through 1.3 and 1.4, but finally, a noteworthy
change to the object results in a new evolutionary path, version 2.0. Both versions are currently
supported.

Change control – Controlling changes to Configuration items (CI). The change control process is
explained in Figure below:
A change request (CR) is submitted and evaluated to assess technical merit, potential side effects,
overall impact on other configuration objects and system functions, and the projected cost of the
change. The results of the evaluation are presented as a change report, which is used by a change
control board (CCB) —a person or group who makes a final decision on the status and priority of the
change. An Engineering Change Request (ECR) is generated for each approved change.

Also CCB notifies the developer in case the change is rejected with proper reason. The ECR describes
the change to be made, the constraints that must be respected, and the criteria for review and audit.
The object to be changed is “checked out” of the project database, the change is made, and then the
object is tested again. The object is then “checked in” to the database and appropriate version
control mechanisms are used to create the next version of the software.

20. Explain Software configuration item identification


21. Discuss different categories of risks that help to define impact values in a risk table
22. Is risk can be quantified? Justify your answer.
23. Explain SCM process. Differentiate between Quality Assurance and Quality Control.
Software Configuration Management (SCM) is a set of activities which controls change by identifying the work products that are likely to change, establishing relationships among them, defining mechanisms for managing different versions, controlling the changes being implemented in the current system, and auditing and reporting on the changes made. SCM is a fundamental part of all project management activities.
SCM Process
SCM uses tools which ensure that the necessary changes have been implemented adequately in the appropriate components. The SCM process defines a number of tasks:

1. Identification of objects in the software configuration


2. Version Control
3. Change Control
4. Configuration Audit
5. Status Reporting
24. Explain the process of Risk Projection
Risk Management is an important part of project planning activities. It involves identifying and
estimating the probability of risks with their order of impact on the project.

Risk Management Steps:


There are some steps that need to be followed in order to reduce risk. These steps are as follows:

1. Risk Identification:
Risk identification involves brainstorming activities; it also involves the preparation of a risk list. Brainstorming is a group discussion technique where all the stakeholders meet together. This technique produces new ideas and promotes creative thinking. Preparation of the risk list involves identifying risks that have occurred repeatedly in previous software projects.
2. Risk Analysis and Prioritization:
It is a process that consists of the following steps:
Identifying the problems causing risk in projects
Identifying the probability of occurrence of problem
Identifying the impact of problem
Prepare a table consisting of all the values and order risk on the basis of risk exposure factor
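The table in the last step above can be sketched as follows; the risks, probabilities, and cost figures are made up for illustration.

```python
# Hypothetical risk table ordered by risk exposure:
# RE = probability of occurrence x impact (cost of the problem if it occurs).
risks = [
    {"risk": "key staff leave",    "prob": 0.30, "impact": 50000},
    {"risk": "requirements churn", "prob": 0.80, "impact": 20000},
    {"risk": "tool unavailable",   "prob": 0.10, "impact": 10000},
]
for r in risks:
    r["exposure"] = r["prob"] * r["impact"]  # risk exposure factor

# Order the table highest-exposure first, so mitigation effort goes there
risk_table = sorted(risks, key=lambda r: r["exposure"], reverse=True)
for r in risk_table:
    print(f'{r["risk"]:18s} RE = {r["exposure"]:,.0f}')
# "requirements churn" ranks first with RE = 16,000
```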
3. Risk Avoidance and Mitigation:
The purpose of this technique is to eliminate the occurrence of risks altogether. So one method to avoid risks is to reduce the scope of the project by removing non-essential requirements.

4. Risk Monitoring:
In this technique, the risk is monitored continuously by reevaluating the risks, the impact of risk, and
the probability of occurrence of the risk.
This ensures that:
Risk has been reduced
New risks are discovered
Impact and magnitude of risk are measured
