SOFTWARE ENGINEERING Q & A


SEPT 2021

Question 2

i) What are the characteristics of a good software system?

Good software should have several characteristics that are important to consider when
developing a software system. These characteristics include:

Functionality: The software meets the requirements and specifications that it was designed for,
and it behaves as expected when it is used in its intended environment.

Usability: The software is easy to use and understand, and it provides a positive user experience.

Reliability: The software is free of defects and it performs consistently and accurately under
different conditions and scenarios.

Performance: The software runs efficiently and quickly, and it can handle large amounts of data
or traffic.

Security: The software is protected against unauthorized access and malicious attacks.

Maintainability: The software is easy to maintain and update, and it can be modified without
causing unintended effects.

Reusability: The software can be reused in different contexts or for different purposes.

Scalability: The software can handle increasing amounts of work or users without losing
performance.

Testability: The software can be tested to ensure that it meets its requirements and
specifications.

These characteristics are commonly recognized by software engineers and are important for
creating a good software system.

ii) Explain the following terms and also explain the testing techniques that are used for
each term
Verification
Validation

Verification is the process of evaluating software artifacts such as requirements, design, and code to
ensure they meet the specified requirements and standards. It ensures that the software is built
according to the needs and design specifications. Verification tests ensure that all development
elements (software, hardware, documentation, and human resources) adhere to organizational and
team-specific standards and protocols. In practice, verification often means studying the
specifications and checking them against the code logic. Some of the testing techniques used for verification are:

Inspection

Code review

Desk-checking

Walkthroughs

Validation is the process of checking whether the software (end product) has met the client’s true needs
and expectations. It ensures that the software fits its intended purpose and meets the user’s
expectations; in other words, validation compares the actual product against the expected product.
Some of the testing techniques used for validation are:

Black Box Testing

White Box Testing

Non-functional testing

Usability testing

Performance testing

System testing

Security testing

Functionality testing

Unit testing

Integration testing

User acceptance testing

It is important to note that software testing is incomplete until it undergoes verification and
validation processes. Verification and validation are the main elements of the software testing
workflow because they ensure that the end product meets the design requirements.
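The distinction can be sketched with a small validation-style test. The function under test here is hypothetical; the checks treat it as a black box, deriving expected outputs from the specification rather than from the code:

```python
# Hypothetical function under test: applies a percentage discount.
def discounted_price(price, percent):
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Black-box checks: based only on inputs and expected outputs.
assert discounted_price(200.0, 25) == 150.0   # typical case
assert discounted_price(99.99, 0) == 99.99    # boundary: no discount

try:
    discounted_price(100.0, 150)              # invalid input must be rejected
except ValueError:
    pass
else:
    raise AssertionError("expected ValueError for percent > 100")
```

A white-box (verification-style) test of the same function would instead be designed from the code itself, for example to exercise both branches of the range check.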

iii) Explain the difference between projection and partitions using examples.

Partitioning and projection are two techniques used in database management systems to improve
query performance. Here are the differences between the two techniques:

Partitioning

Partitions the table horizontally, dividing a large table into smaller ones based on a partition key.

Helps organize data in a particular sort order that improves performance.

Useful for data purging and query performance.

Can be used to move, swap, and copy partitions between tables.

The partition remains the same for all projections that will be created based on that table.

Example: Partitioning a sales table by date, creating a separate partition for each month's sales data.

Projection

Selects certain required attributes, while discarding other attributes.


Divides the table vertically, operating on columns rather than rows.

Helps reduce query latency and cost by eliminating the need to read irrelevant columns.

Can significantly reduce query runtime for queries that need only a few of a table's
columns.

The columns named in the SELECT clause determine which columns are returned.

Example: Selecting only the customer name and order date from a sales table.

In summary, partitioning divides a large table into smaller ones based on a partition key, while
projection selects certain required attributes and discards others. Both techniques can improve
query performance, but they differ in how they organize and manipulate data.
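The two techniques can be illustrated with a small sketch in plain Python. The sales rows are hypothetical; partitioning groups rows horizontally by a key, while projection keeps only the requested columns:

```python
from collections import defaultdict

# Hypothetical rows standing in for a sales table.
sales = [
    {"customer": "Ann", "order_date": "2023-01-15", "amount": 120.0},
    {"customer": "Ben", "order_date": "2023-01-20", "amount": 80.0},
    {"customer": "Ann", "order_date": "2023-02-03", "amount": 200.0},
]

# Partitioning: split the table horizontally by a partition key (month).
partitions = defaultdict(list)
for row in sales:
    month = row["order_date"][:7]          # e.g. "2023-01"
    partitions[month].append(row)

# A query constrained to January now scans only one partition.
january_rows = partitions["2023-01"]       # 2 rows instead of 3

# Projection: keep only the required columns, discard the rest.
name_and_date = [
    {"customer": r["customer"], "order_date": r["order_date"]} for r in sales
]
```

In SQL terms, the projection step corresponds to naming only `customer` and `order_date` in the SELECT clause, while the partition step corresponds to a WHERE clause on the partition key touching only one partition.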

Question 3

a) Explain the main activities carried out during requirement analysis and negotiation

Requirement analysis and negotiation are critical activities in the software engineering process. Here
are the main activities carried out during requirement analysis and negotiation:

Requirement Analysis:

Identify stakeholders and their needs

Analyze and document requirements

Validate requirements

Manage requirements

Refine user needs and constraints based on gathered information

Establish priorities and record results in the specifications of the requirements

Requirement analysis is a team effort that requires a combination of hardware, software, and human
factors engineering expertise, as well as skills in dealing with people.

Negotiation:

Discuss what is needed and what is to be eliminated

Prioritize requirements and anticipate the conflicts that may arise

Take into consideration the risks of all the requirements

Negotiate a coherent set of requirements acceptable to the stakeholders

Establish priorities and record results in the specifications of the requirements

Negotiation is between the developer and the customer, and they dwell on how to go about the
project with limited business resources. The customer and developer must be satisfied with the
further implementation.
In summary, requirement analysis involves identifying stakeholders, analyzing and documenting
requirements, validating requirements, and managing requirements. Negotiation involves discussing
what is needed and what is to be eliminated, prioritizing requirements, taking into consideration
the risks of all the requirements, and negotiating a coherent set of requirements acceptable to the
stakeholders.

b) Explain the following terms

i) Technical feasibility: This refers to the assessment of whether a proposed project is possible
from a technical perspective. It involves evaluating whether the technical resources available to
the organization are sufficient to complete the project and whether the technical team is
capable of converting the ideas into working systems. Technical feasibility also involves the
evaluation of the hardware, software, and other technical requirements of the proposed system.

ii) Economic feasibility: This refers to the assessment of whether a proposed project is financially
viable. It involves examining the costs and financial benefits of the project. To determine
economic feasibility, a rough order of magnitude (ROM) estimate is commonly performed. The
objective of economic feasibility is to determine whether the project is financially worthwhile
and whether it will generate enough profits to justify the investment.
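A rough cost-benefit calculation of this kind can be sketched in a few lines. All the figures below are illustrative, not taken from any real project:

```python
# Illustrative figures for a rough order of magnitude (ROM) estimate.
development_cost = 50_000.0     # one-off cost to build the system
annual_benefit = 20_000.0       # yearly savings or extra revenue
annual_running_cost = 5_000.0   # yearly maintenance and hosting

net_annual_benefit = annual_benefit - annual_running_cost
payback_years = development_cost / net_annual_benefit

print(f"Payback period: {payback_years:.1f} years")
```

If the payback period is shorter than the expected useful life of the system, the project is a candidate for being judged economically feasible.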

iii) Operational feasibility: This refers to the assessment of whether a proposed project is
operationally viable. It involves analyzing and determining whether the organization's needs can
be met by completing the project. Operational feasibility studies also examine how a project
plan satisfies the requirements identified in the requirements analysis phase of system
development. Operational feasibility is dependent on human resources available for the project
and involves projecting whether the system will be used if it is developed and implemented.

iv) Schedule feasibility: This refers to the assessment of whether a proposed project can be
completed within the given time frame. It involves analyzing the timelines and deadlines for the
proposed project and determining whether they are realistic and achievable. Schedule feasibility
is important because it helps to ensure that the project is completed on time and within budget.

Question 4

a) Discuss the following conversion methods giving situations where they are appropriate to
use, (Include diagrams):

i)Phased conversion

ii)Direct changeover

iii)Pilot changeover

iv)Parallel changeover

There are four main approaches to implementing a new system, each with its own advantages
and disadvantages. These approaches are:
Direct Conversion: This approach involves stopping the use of the old system and immediately
starting to use the new system at a specific point in time. This can be risky because even with
sufficient testing, it is impossible to make sure there are no issues with the new system.

Parallel Conversion: In this approach, the old and new systems are run simultaneously for a
period of time. This process is less risky than the direct approach because the company still has
the old system to rely on. However, this approach duplicates work, since the same work is being
performed in two environments.

Phase-In Conversion: This approach involves slowly implementing the new system in pieces. This
can be useful to make sure that the whole process is not interrupted, which makes it less risky
than the direct approach.

Pilot Conversion: In this approach, the company uses the new system in a test environment for a
period of time to work out all of the bugs. This is less risky than the direct approach since it
allows the company to understand the issues that may become applicable with a new system.

The appropriate conversion method to use depends on the specific situation. For example:

Direct Conversion: This approach may be appropriate when the old system is judged to be
absolutely without value, the new system is very small or simple, and the design of the new
system is completely different from that of the old system.

Parallel Conversion: This approach may be appropriate when the organization needs a high
degree of protection from the failure of the new system.

Phase-In Conversion: This approach may be appropriate when the company wants to make sure
that the whole process is not interrupted.

Pilot Conversion: This approach may be appropriate when the company wants to work out all of
the bugs before implementing the new system.

Below are simple timeline diagrams that illustrate each of the four conversion methods (time
runs left to right):

Direct changeover:
Old system  ==========|
New system            |==========>

Parallel changeover:
Old system  ===============|
New system       |===============>
(both systems run together for a period before the old one is retired)

Phased conversion:
Old system  ======......
New system  ......======>
(the new system replaces the old one module or phase at a time)

Pilot changeover:
Pilot site       |==========>  (new system)
Rest of org  ==========|=====>  (switches over after the pilot succeeds)

b) What is a sprint? What is its significance in the Scrum software development methodology?

A sprint is a short, time-boxed period during which specific work has to be completed and made
ready for review in Agile product development. It is a foundational element of Scrum, which is an
agile project management framework that helps teams structure and manage their work through a
set of values, principles, and practices. The significance of sprints in the Scrum software
development methodology is that they break down big, complex projects into bite-sized pieces,
allowing teams to develop projects in small increments. During a sprint, work is done to create new
features based on the user stories and backlog, and the outcome of a sprint is a potentially
shippable product. Sprint planning is a collaborative event where the team decides on the sprint goal
and specific user stories are added to the sprint from the product backlog. At the end of a sprint, two
meetings are held: the sprint review, where the team shows their work to the product owner, and
the sprint retrospective, where the team discusses what they can do to improve processes. The goal
of sprints is continuous improvement, and they help teams follow the agile principle of "delivering
working software frequently" and live the agile value of "responding to change over following a
plan".

Question 5

a)Describe fully the steps involved in risk management process

The risk management process involves identifying, assessing, and responding to risks that could
negatively impact a project or organization. Here are the five steps involved in the risk management
process:

Identify the risks: The first step is to identify all the events that can negatively (risk) or positively
(opportunity) affect the objectives of the project. These events can be listed in the risk matrix and
later captured in the risk register. There are different types of risks such as legal risks, environmental
risks, market risks, and regulatory risks.

Analyze the risks: During this step, your team will estimate the probability and fallout of each risk to
decide where to focus first. Then you will determine a response plan for each risk. Factors such as
potential financial loss to the organization, time lost, and severity of impact all play a part in
accurately analyzing each risk.

Prioritize the risks: This step gives you a holistic view of your organization's risk exposure. More
importantly, it helps you identify where you should focus more of your team's time and resources.
Based on this ranking information, you can also create workable solutions to manage each risk so
that your operations are not significantly affected during the risk treatment phase.

Treat the risks: Every risk needs to be eliminated or contained. This step involves developing and
implementing a plan to mitigate the risks. The plan should include specific actions to be taken,
timelines, and responsibilities.

Monitor and review the risks: Risk management is a continuous process, especially since the risk
landscape is constantly changing. So, you need to constantly monitor both the results of your risk
control strategy and any new risks that arise, making adjustments as necessary. You also need to
document, analyze, and share the progress of your risk management plan.

By following these five steps, you can create a basic risk management plan for your project or
organization.
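The analyze and prioritize steps above can be sketched as a tiny risk register, where each risk's exposure is estimated as probability times impact on 1-to-5 scales. The entries and scores are illustrative:

```python
# Minimal risk-register sketch; all entries and figures are illustrative.
risks = [
    {"name": "Key developer leaves",     "probability": 2, "impact": 5},
    {"name": "Requirements change late", "probability": 4, "impact": 3},
    {"name": "Server outage",            "probability": 1, "impact": 4},
]

# Analyze: estimate exposure for each risk.
for r in risks:
    r["exposure"] = r["probability"] * r["impact"]

# Prioritize: highest exposure gets attention and resources first.
ranked = sorted(risks, key=lambda r: r["exposure"], reverse=True)
for r in ranked:
    print(f'{r["exposure"]:>2}  {r["name"]}')
```

Treating and monitoring the risks would then work down this ranked list, re-scoring each entry as mitigation plans take effect or circumstances change.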

b)Discuss, citing examples,

i) Technical risk

ii) Personnel risk

iii)Financial risk

Risk management is an essential aspect of running a successful business. The question asks about
three categories of risk: technical, personnel, and financial. Here are some examples
of each type of risk:
i) Technical risk:

Cybersecurity threats

System failures

Data breaches

Software bugs

ii) Personnel risk:

Unauthorized, illegal, unethical, incorrect, or inappropriate actions by employees or managers

Breakdowns in routine operational processes

Employee turnover

Lack of training or experience

iii) Financial risk:

Market risk: substantial changes in the particular marketplace in which a company competes

Credit risk: the risk that a company's customers or counterparties fail to pay what they owe

Liquidity risk: how easily a company can convert its assets into cash if it needs funds

Operational risk: risks that can arise from a company's ordinary business activities, such as fraud,
lawsuits, and personnel issues

Managing pure risk entails the process of identifying, evaluating, and mitigating these risks. It's a
defensive strategy to prepare for the unexpected. The basic methods for risk management—
avoidance, retention, sharing, transferring, and loss prevention and reduction—can apply to all
facets of an individual's life and can pay off in the long run.

In the case of financial risk, organizing risks by categories can be helpful in getting a handle on them.
Guidance from the Committee of Sponsoring Organizations of the Treadway Commission (COSO)
uses the following four risk categories: strategic risk, financial and reporting risk, compliance and
governance risk, and operational risk.

In conclusion, risk management is a crucial aspect of running a successful business, and it is essential
to understand the different types of risks that organizations face. By identifying, assessing, and
mitigating risks, companies can protect their capital, earnings, and operations.

Question 6

a) C.A.S.E tools can either be upper CASE or lower CASE. Give 5 examples of CASE tools and
outline how they aid software development

CASE (Computer Aided Software Engineering) tools are software application programs used to
automate various activities in the Software Development Life Cycle (SDLC). There are three types of
CASE tools: Upper CASE, Lower CASE, and Integrated CASE. Upper CASE tools are used in the
planning, analysis, and design stages of SDLC, while Lower CASE tools are used in implementation,
testing, and maintenance. Integrated CASE tools are helpful in all stages of SDLC, from requirement
gathering to testing and documentation. Here are five examples of CASE tools and how they aid
software development:

Rational Rose: This is an Upper CASE tool used for modeling software systems. It helps in the
creation of UML diagrams, which are used to represent the structure and behavior of software
systems. Rational Rose also generates code from the UML diagrams.

JIRA: This is a Project Management tool used to track and manage software development projects. It
helps in planning, tracking, and reporting on project progress. JIRA also integrates with other tools
used in SDLC, such as Git and Jenkins.

Selenium: This is a Lower CASE tool used for testing software systems. It automates the testing
process and helps in the detection of defects and bugs in software systems. Selenium also integrates
with other tools used in SDLC, such as JIRA and Jenkins.

Visual Paradigm: This is an Integrated CASE tool used for modeling software systems. It helps in the
creation of UML diagrams, flowcharts, and other diagrams used to represent the structure and
behavior of software systems. Visual Paradigm also generates code from the diagrams.

GitHub: This is a Lower CASE tool used for version control and collaboration in software
development projects. It helps in the management of source code and enables collaboration among
developers working on the same project. GitHub also integrates with other tools used in SDLC, such
as JIRA and Jenkins.

In summary, CASE tools aid software development by automating various activities in SDLC, such as
analysis, design, implementation, testing, and maintenance. They help in the creation of UML
diagrams, project management, version control, and collaboration among developers. CASE tools
also help in the detection of defects and bugs in software systems, thereby improving the quality of
the systems developed.

FEB 2023

Question 1

a) Differentiate between functional and non-functional requirements

Functional and non-functional requirements are two types of requirements that are critical in the
success of a software or system project. Here are the differences between the two:

Functional Requirements:

These are the requirements that the end-user specifically demands as basic facilities that the system
should offer.

They are represented or stated in the form of input to be given to the system, the operation
performed, and the output expected.

They define what the system should do.

They are basically the requirements stated by the user which one can see directly in the final
product.
Non-functional Requirements:

These are basically the quality constraints that the system must satisfy according to the project
contract.

They define how the system should perform.

They are also called non-behavioral requirements.

They basically deal with issues like portability, security, maintainability, reliability, scalability,
performance, reusability, and flexibility.

In summary, functional requirements define what the system should do, while non-functional
requirements define how the system should perform. Both types of requirements are important in
ensuring that the delivered product meets the expectations of the client.

b) Explain the various level of testing in detail

There are four main levels of testing in software testing, each with a specific purpose:

Unit Testing: This level of testing checks if software components are fulfilling functionalities or not. It
involves testing individual units or components of the software to ensure that they are fit for use by
the developers. Unit testing is usually done by the developer themselves.

Integration Testing: This level of testing checks the data flow from one module to other modules. It
involves testing two or more independent components together to ensure that they work together
as expected. Integration testing is done by the testing team.

System Testing: This level of testing evaluates both functional and non-functional requirements. It
tests the overall interaction of components and is performed on a complete, integrated
system. System testing is most often the final test to verify that the system meets the specification. It
involves load, performance, reliability, and security testing. System testing is undertaken by
independent testers who haven’t played a role in developing the program.

Acceptance Testing: This level of testing checks whether the requirements of a specification or contract are
met as per its delivery. It is conducted to ensure that the requirements of the users are fulfilled
prior to delivery and that the software works correctly in the user’s working environment. Acceptance
testing is usually done by the user or customer, but other stakeholders can be involved in this
process. Since acceptance testing is the final phase, it needs to validate the end-to-end business flow
and check for things like cosmetic errors, spelling mistakes, and usability.

It is important to note that these levels of testing need to be done with care and should be
methodical and deliberate. These tests need to be completed in order as this sequence will help to
reduce the number of bugs or errors that pop up before the final product is launched.
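The difference between the first two levels can be shown with a small sketch. The two components here are hypothetical: a unit test exercises one of them in isolation, while an integration test exercises the data flow between them:

```python
# Hypothetical components: a tax helper and an invoice builder.
def add_tax(amount, rate=0.16):
    """Unit under test: applies a flat tax rate."""
    return round(amount * (1 + rate), 2)

def build_invoice(items):
    """Second component: combines line items with the tax helper."""
    subtotal = sum(items.values())
    return {"subtotal": subtotal, "total": add_tax(subtotal)}

# Unit test: one component in isolation.
assert add_tax(100.0) == 116.0

# Integration test: the data flow from one component into the other.
invoice = build_invoice({"mouse": 50.0, "keyboard": 50.0})
assert invoice == {"subtotal": 100.0, "total": 116.0}
```

System and acceptance testing would then exercise the assembled application (user interface, database, and all) rather than individual functions like these.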

c) Describe configuration management

Configuration management (CM) is a process used to track and control IT resources and services
across an enterprise. It is a systems engineering process for establishing and maintaining consistency
of a product's performance, functional, and other attributes throughout its life. Configuration
management is used to maintain an understanding of the status of complex assets with a view to
maintaining the highest level of serviceability for the lowest cost.
A configuration management system allows the enterprise to define settings in a consistent manner,
then to build and maintain them according to the established baselines. Configuration management
is typically implemented in the form of software tools, but it is a broad approach to systems
engineering and governance, and it can be codified in standardized frameworks.

Configuration management provides an underlying consistency to the IT environment. When a
device requires service or replacement, an established configuration provides a baseline that can be
preserved and applied to replacement devices. Configuration management involves establishing a
clear approach to documentation, maintenance, and change control so that systems can be
configured consistently and accurately across complex environments.

Typical configuration management tools help teams to:

Classify and manage systems by groups and subgroups.

Centrally modify base configurations.

Roll out new settings to all applicable systems.
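The core idea of maintaining systems against an established baseline can be sketched as a drift check. All the setting names and values below are illustrative:

```python
# Sketch of configuration drift detection against a baseline.
baseline = {
    "ssh_enabled": False,
    "ntp_server": "ntp.example.com",
    "log_level": "INFO",
}

live = {
    "ssh_enabled": True,                 # changed by hand on the device
    "ntp_server": "ntp.example.com",
    "log_level": "INFO",
}

# Report every setting that deviates from the baseline.
drift = {
    key: {"expected": baseline[key], "actual": live.get(key)}
    for key in baseline
    if live.get(key) != baseline[key]
}

print(drift)
```

Real configuration management tools automate exactly this comparison across fleets of systems and can then remediate the drift by reapplying the baseline.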

Question 2

a) Assume that you work for an organization that develops database products for
microcomputer systems. The organization is interested in quantifying its software
development. Identify the appropriate metrics used for quality assessment and explain how
these can be collected.

Metrics are quantitative measurements of a software product or project that can help management
understand software performance, quality, and productivity. Here are some appropriate metrics
used for quality assessment in software development:

Developer productivity metrics: These metrics include active days, assignment scope, efficiency, and
code churn. They can help you understand how much time and work developers are investing in a
software project.

Agile process metrics: These metrics include lead time, cycle time, and velocity. They measure the
progress of a development team in producing working, shipping-quality software features.

Code characteristics metrics: These metrics include static code analysis, code complexity, and lines
of code (LOC). They can help you understand how easy the system is to debug, troubleshoot,
maintain, integrate, and extend with new functionality.

Software quality metrics: These metrics include dependability, functionality, interface accessibility,
system maintainability, and customer satisfaction. They can help you understand how well the
software meets the requirements and expectations of its users.

Product success metrics: These metrics include response time, request rate, user transactions,
virtual users, and error rate. They can help you measure the software product's performance and
identify problems that need to be solved.

Testing metrics: These metrics include test coverage, test execution time, test case pass rate, and
defect density. They can help you understand how well the software is tested and how many defects
are found.
To collect these metrics, you can use various tools and techniques such as automated testing tools,
code analysis tools, project management tools, and customer feedback surveys. You can also use
data visualization tools to analyze and present the collected data in a meaningful way. It is important
to choose the right metrics that link directly to your overarching business goals and to have a
clear-cut roadmap that outlines what milestones to track and report on. Metrics should be used by the
team and should be part of a conversation about process and the challenges the team faces.
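Two of the quality metrics above can be computed directly from raw counts. The figures below are illustrative, not measured:

```python
# Defect density and test pass rate from raw counts (illustrative figures).
defects_found = 12
lines_of_code = 8_000
tests_run = 250
tests_passed = 240

defect_density = defects_found / (lines_of_code / 1_000)   # defects per KLOC
pass_rate = tests_passed / tests_run * 100                  # percent

print(f"Defect density: {defect_density:.2f} defects/KLOC")
print(f"Test pass rate: {pass_rate:.1f}%")
```

Tracking these numbers release over release, rather than reading them in isolation, is what makes them useful for the conversation about process described above.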

b) Describe risk mitigation, monitoring and management

Risk mitigation, monitoring, and management (RMMM) are essential components of risk
management. RMMM is a plan that outlines all risk analysis activities. The following are the key
components of RMMM:

Risk Mitigation

It is an activity used to avoid problems (Risk Avoidance).

Steps for mitigating the risks include finding out the risk, removing causes that are the reason for risk
creation, controlling the corresponding documents from time to time, and conducting timely reviews
to speed up the work.

Risk mitigation is where you create and begin to implement the plan for the best way to reduce the
likelihood and/or impact of each risk.

You may not be able to come up with a mitigation plan for each and every risk, but it’s important to
try to identify what changes in your current processes can be adjusted to reduce risk.

Risk Monitoring

It is an activity used for project tracking.

Its primary objectives include checking if predicted risks occur or not, ensuring proper application of
risk aversion steps defined for risk, collecting data for future risk analysis, and allocating what
problems are caused by which risks throughout the project.

Now that you have identified, assessed, and made a mitigation plan, you need to monitor for both
the effectiveness of your plan and the occurrence of risk events. Monitoring the status of risks,
monitoring the effectiveness of mitigation plans implemented, and consulting with key stakeholders
are all important aspects of risk monitoring.

Risk Management and Planning

It assumes that risks are inevitable and that they can be managed and planned for.

Risk mitigation is one element of risk management, and its implementation will differ by
organization.

When creating a risk mitigation plan, there are a few steps that are fairly standard for most
organizations. Recognizing recurring risks, prioritizing risk mitigation, and monitoring the established
plan are vital aspects to maintaining a thorough risk mitigation strategy.

In summary, risk mitigation, monitoring, and management are essential components of risk
management. Risk mitigation involves avoiding problems, removing causes that are the reason for
risk creation, controlling corresponding documents, and conducting timely reviews. Risk monitoring
involves checking if predicted risks occur or not, ensuring proper application of risk aversion steps,
collecting data for future risk analysis, and allocating what problems are caused by which risks
throughout the project. Risk management and planning assume that risks are inevitable and can be
managed and planned for.

Question 3

a) Build a web-based order-processing system for a computer store and describe the data
objects, relationships, and attributes visible on the class diagram

To build a web-based order-processing system for a computer store, a UML class diagram can be
used to model the data structure of the system. The class diagram will show the classes, their
attributes, operations (or methods), and the relationships among objects. The main classes of the
order processing system are Product, Customer, Order, Bill, Company, and Payment. The following
are the classes and their attributes:

Product Class: Manages all the operations of Product. Attributes include product_id,
product_customer_id, product_items, product_number, product_type, and product_description.

Customer Class: Manages all the operations of Customer. Attributes include customer_id,
customer_name, customer_mobile, customer_email, customer_username, customer_password, and
customer_address.

Order Class: Manages all the operations of Order. Attributes include order_id, order_customer_id,
order_type, order_number, and order_description.

Bill Class: Manages all the operations of Bill. Attributes include bill_id, bill_customer_id, bill_number,
bill_type, bill_receipt, and bill_description.

Company Class: Manages all the operations of Company. Attributes include company_id,
company_product_id, company_name, company_type, company_description, and
company_address.

Payment Class: Manages all the operations of Payment. Attributes include payment_id,
payment_customer_id, payment_date, payment_amount, and payment_description.

The relationships among the classes are as follows:

A customer can place many orders, but an order can only be placed by one customer.

An order can contain many products, and a product can be in many orders.

A bill is associated with one order, but an order can have many bills.

A company can have many products, but a product can only belong to one company.

A payment is associated with one bill, but a bill can have many payments.

The class diagram will show the relationships between the objects and describe what those objects
do.
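A slice of this class diagram can be sketched as code. The attribute names follow the diagram, trimmed to a few per class for brevity, and the sample objects are hypothetical:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Product:
    product_id: int
    product_type: str
    product_description: str

@dataclass
class Order:
    order_id: int
    order_customer_id: int
    products: List[Product] = field(default_factory=list)  # order *..* product

@dataclass
class Customer:
    customer_id: int
    customer_name: str
    orders: List["Order"] = field(default_factory=list)    # customer 1..* order

# A customer places one order containing one product.
ann = Customer(customer_id=1, customer_name="Ann")
laptop = Product(product_id=7, product_type="laptop",
                 product_description="14-inch ultrabook")
order = Order(order_id=100, order_customer_id=ann.customer_id,
              products=[laptop])
ann.orders.append(order)
```

The list-valued fields carry the multiplicities from the diagram: a customer holds many orders, and an order holds many products.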
b) Explain the generic concepts and principles that apply to software life cycle models

Generic concepts and principles that apply to software life cycle models include the following:

Phases: Software life cycle models are typically divided into phases, each of which represents a stage
in the development process. These phases may include requirements gathering, design,
implementation, testing, and maintenance.

Activities: Within each phase, there are specific technical and management activities that are
performed. These activities may include tasks such as coding, testing, documentation, and project
management.

Iterative process: Many software life cycle models follow an iterative process, where each
development cycle produces an incomplete but deployable version of the software. The first
iteration implements a small set of the software requirements, and each subsequent version adds
more requirements. The last iteration contains the complete requirement set.

Documentation: Documentation is crucial in software life cycle models, regardless of the type of
model chosen for any application, and is usually done in parallel with the development process. This
documentation may include requirements documents, design documents, test plans, and user
manuals.

Verification and validation: Many software life cycle models include phases for verification and
validation. Verification ensures that the software meets its requirements, while validation ensures
that the software meets the needs of its users.

Risk management: Risk management is an important aspect of software life cycle models. This
involves identifying potential risks and taking steps to mitigate them throughout the development
process.

Tailoring: Software life cycle models may need to be tailored to fit the specific needs of a project or
organization. This may involve selecting specific phases or activities, or modifying existing phases or
activities to better suit the project.

These concepts and principles apply to a variety of software life cycle models, including the Waterfall
model, the Spiral model, and Agile software development, among others.

Question 4

a)Explain the golden rules that must be followed when designing a User Interface.

When designing a user interface, there are several golden rules that should be followed to ensure
that the interface is effective and user-friendly. Here are some of the most important ones:

Put users in control: The user should always feel like they are in control of the interface. This means
that the interface should be designed in a way that allows the user to easily navigate and interact
with it.

Create familiar and consistent interfaces: The design throughout a web page or app should be
consistent, from the design of buttons to the layout of pages. This helps users to quickly learn how to
use the interface and reduces cognitive load.

Reduce cognitive load: The interface should be designed in a way that reduces the amount of mental
effort required to use it. This can be achieved by using visual cues, providing defaults, undo, and
redo options, and minimizing the need for the user to remember information.

Offer informative feedback: The interface should provide feedback to the user that is clear and
informative. This can include error messages, progress indicators, and confirmation messages.

Design dialogs to yield closure: Dialogs should be designed in a way that allows the user to easily
complete the task at hand. This can include providing clear options for the user to choose from and
ensuring that the user can easily exit the dialog.

Prevent errors: The interface should be designed in a way that minimizes the likelihood of errors
occurring. This can include providing clear instructions, using simple language, and ensuring that the
user can easily undo any actions they have taken.

Accommodate diverse users: The interface should be designed in a way that accommodates users
with different skill levels, disabilities, and technological backgrounds. This can include providing
explanations for novices and shortcuts for experts, as well as allowing users to customize the
interface.

Make the interface transparent: The interface should be designed in a way that is transparent to the
user. This means that the user should not have to think about how to use the interface, but should
be able to focus on the task at hand.

By following these golden rules, designers can create interfaces that are effective, user-friendly, and
easy to use.

b)Draw a use case diagram for a student assessment management system and explain

A use case diagram for a student assessment management system would include the following
actors and use cases:

Actors:

Student

Teacher

Administrator

Use Cases:

Login

View grades

Submit assignments

Grade assignments

Manage courses

Manage students

Generate reports

The "Login" use case would be associated with all three actors, as they would need to log in to
access the system. The "View grades" use case would be associated with the Student actor, while
the "Grade assignments" use case would be associated with the Teacher actor. The "Submit
assignments" use case would be associated with the Student actor, and the "Manage courses" and
"Manage students" use cases would be associated with the Administrator actor. Finally, the
"Generate reports" use case would be associated with all three actors, as they would all need to
generate reports for different purposes.

This is just one example of a use case diagram for a student assessment management system, and
there may be other actors and use cases that could be included depending on the specific
requirements of the system.
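One lightweight way to sanity-check the actor/use-case associations described above, before drawing the diagram, is to record them as plain data and query them. This is only an illustrative sketch; the mapping simply restates the associations given in the answer.

```python
# Actor/use-case associations from the answer, recorded as data so they
# can be reviewed or validated before the diagram is drawn.
associations = {
    "Login": {"Student", "Teacher", "Administrator"},
    "View grades": {"Student"},
    "Submit assignments": {"Student"},
    "Grade assignments": {"Teacher"},
    "Manage courses": {"Administrator"},
    "Manage students": {"Administrator"},
    "Generate reports": {"Student", "Teacher", "Administrator"},
}

def use_cases_for(actor):
    """Return the use cases a given actor participates in."""
    return sorted(uc for uc, actors in associations.items() if actor in actors)

print(use_cases_for("Teacher"))  # ['Generate reports', 'Grade assignments', 'Login']
```

A table like this also makes it easy to spot use cases with no associated actor, a common review check before finalizing the diagram.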

c)Illustrate the tasks involved in the requirements engineering

Tasks involved in requirements engineering are as follows:

Inception: This is the first phase of the requirements analysis process. In this phase, all the basic
questions are asked on how to go about a task or the steps required to accomplish a task. A basic
understanding of the problem is gained, and the nature of the solution is addressed. Effective
communication is very important in this stage, as this phase is the foundation as to what has to be
done further.

Elicitation: This is the process of gathering information about the needs and expectations of
stakeholders for the software system. Requirements are identified with the help of customers and
existing systems processes, if available. The aim is to identify the features and functionalities that
the software system should provide.

Elaboration: In this stage, the requirements are analyzed to determine their feasibility, consistency,
and completeness. The main task in this phase is to indulge in modeling activities and develop a
prototype that elaborates on the features and constraints using the necessary tools and functions.

Negotiation: This phase emphasizes discussion and exchanging conversation on the requirements.
The requirements are analyzed to identify inconsistencies, defects, omission, etc. We describe
requirements in terms of relationships and also resolve conflicts if any.

Specification: This is the process of defining, documenting, and maintaining the requirements. The
requirements are formalized in both graphical and textual formats. The work product is in the form
of software requirement specification.

Validation: This is the process of ensuring that the requirements are complete, consistent, and
correct. The requirements are validated to ensure that they meet the needs and expectations of
stakeholders.

Requirements Management: This is a set of activities that help the project team identify, control,
and track the requirements. These tasks start with identifying and assigning a unique identifier to
each requirement. Once the requirements are finalized, a requirement traceability table is developed.

The requirements engineering process is an iterative process that involves several steps, including
requirements elicitation, analysis, specification, verification, and validation. The process is disciplined
and involves the application of proven principles, methods, tools, and notations to describe a
proposed system's intended behavior and its associated constraints.

Question 5

a) Explain the various estimation techniques in software testing

Software test estimation is a critical task in the software development process that determines or
assesses the time and cost needed during the testing process. There are various estimation
techniques in software testing, and some of the most effective and widely used types of estimation
techniques are:

Work Breakdown Structure (WBS): This technique involves dividing the whole project into smaller
sub-tasks and then estimating each sub-task's time, cost, team member(s) needed, and skills
required. The results are easily understandable by not only the QA team but also clients.

3-Point Software Estimation Test: This technique takes into consideration three possible scenarios:
the best-case estimate, the most likely estimate, and the worst-case estimate. First, the tasks are
divided into smaller sub-tasks, and then each sub-task is estimated using these three-point
scenarios.

Wideband Delphi Technique: This technique involves a group of experts who anonymously estimate
the time and cost required for the testing process. The results are then collected, and the group
discusses the estimates until a consensus is reached.

Function Point/Testing Point Analysis: This technique involves calculating the number of function
points in the software and then estimating the time and cost required for testing based on the
number of function points. The formula used for this technique is Number of Test Cases = (Number
of Function Points) × 1.2.

Percentage Distribution: This technique involves estimating the time and cost required for each
testing activity based on the percentage of the total effort required for the testing process.

Ad-hoc Method: This technique involves estimating the time and cost required for the testing
process based on the tester's experience and intuition.

UCP Method: This technique involves calculating the number of use cases in the software and then
estimating the time and cost required for testing based on the number of use cases.

Each of these techniques has its advantages and disadvantages, and the choice of technique
depends on the project's requirements and constraints. It is essential to re-check test estimates
frequently in the early stages of the project and adjust them if needed. An estimate is only an
estimate and may prove wrong, so once it is fixed it should not be extended unless there are major
changes in requirements or a re-estimation is negotiated with the customer.

b) Describe the client-server architectural pattern


The client-server architectural pattern is a distributed application structure that partitions tasks or
workloads between the providers of a resource or service, called servers, and service requesters,
called clients. In this model, the server hosts, delivers, and manages most of the resources and
services requested by the client. The client-server architecture is also known as the networking
computing model or client-server network.

In the client-server architecture, clients and servers communicate over a computer network on
separate hardware, but both client and server may reside in the same system. Clients usually do not
share any of their resources, but they request content or service from a server. Servers, on the other
hand, run one or more server programs, which share their resources with clients.

The client-server architecture is especially effective when clients and the server each have distinct
tasks that they routinely perform. Typically, client-server architecture is arranged in a way that
clients are often situated at workstations or on personal computers, while servers are located
elsewhere on the network, usually on more powerful machines.

Examples of computer applications that use the client-server model are email, network printing, and
the World Wide Web. The Web is a classic instance of this pattern: when a user accesses a website,
the browser (the client) sends a request to the web server, which applies its business logic to the
request and returns the appropriate response for the browser to display.

Advantages of the client-server model include a centralized system, with all data held in a single
place, and cost efficiency, as it requires less maintenance; centralized storage also simplifies data
backup and recovery.
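The request/response exchange described above can be sketched in a few lines. This is a minimal illustration, not a production pattern: a TCP server shares a service, and a client (which shares nothing of its own) connects and requests it. The message format is invented for the example.

```python
import socket
import threading

def run_server(sock):
    """Serve exactly one client: read its request, return a response."""
    conn, _ = sock.accept()            # wait for a client to connect
    request = conn.recv(1024)          # read the client's request
    conn.sendall(b"HELLO " + request)  # the server provides the service
    conn.close()

# The server listens on separate "hardware" (here, just a separate thread).
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
t = threading.Thread(target=run_server, args=(server,))
t.start()

# The client does not share resources; it only requests content or service.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.sendall(b"alice")
reply = client.recv(1024)
client.close()
t.join()
server.close()

print(reply.decode())  # HELLO alice
```

Note that, as the answer says, client and server may reside on the same machine (here they do, via the loopback address) while still communicating through the network stack.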

AUG 2022

Question 1

Case Study of Use case diagram

You need to develop a web-based application with the following functionalities: Users can buy
products online by placing the online order. The user can pay bill by credit card or through Paypal.

Draw the use case diagram and describe the inheritance principles for this situation

To draw the use case diagram for the given situation, we can start by identifying the actors and use
cases involved. In this case, the actors are the users, and the use cases are "buy products online" and
"pay bill". The "buy products online" use case can be further broken down into sub-use cases such as
"browse products", "add to cart", and "place order". Similarly, the "pay bill" use case can be broken
down into sub-use cases such as "enter payment information", "authorize payment", and "confirm
payment".

To show the inheritance principles for this situation, we can use the inheritance relationship
between actors. Inheritance between actors is a way to indicate that one actor inherits the behavior
of another actor. In this case, we can use inheritance to show that the "user" actor inherits the
behavior of the "customer" actor. The "customer" actor can have additional behaviors that are not
relevant to the "user" actor, such as "view order history" or "manage account".

Overall, the use case diagram for this situation would be organized as follows:

In this diagram, the "User" actor is shown as inheriting the behavior of the "Customer" actor. The
"Buy Products Online" and "Pay Bill" use cases are shown as being part of the "Web Application".
The "Buy Products Online" use case is further broken down into sub-use cases, and the "Pay Bill" use
case is also broken down into sub-use cases.

Note that this is just one possible way to represent the use cases and actors for this situation. The
exact diagram may vary depending on the specific requirements and design choices.

Question 2

a) Give four(4) possible definitions of software engineering

Here are four possible definitions of software engineering:

Software engineering is the process of analyzing user requirements and then designing, building, and
testing software applications that will satisfy those requirements.

Software engineering is the application of engineering principles to the design, development, and
maintenance of software. It involves a systematic, disciplined, and quantifiable approach to the
development, operation, and maintenance of software.

Software engineering is an engineering discipline that is concerned with all aspects of software
production from the early stages of system specification to maintaining the system after it has gone
into use. It involves the establishment and use of sound engineering principles in order to
economically obtain software that is reliable and works efficiently on real machines.

Software engineering is the process of building, testing, and maintaining digital products like
websites and apps. It involves capturing user requirements, designing, building, and testing a
software application that will satisfy those customer needs. Software engineers apply engineering
principles at every stage of the development process.

Note that there are many other possible definitions of software engineering, and these are just a few
examples.

b) A good software process must ensure effectiveness, maintainability, predictability,
repeatability, quality, improvement and tracking. Discuss the above characteristics.

A good software process must ensure several characteristics to produce high-quality software. Here
is a discussion of these characteristics:

Effectiveness: A software process must be effective in meeting the requirements of the software
product. It should ensure that the software meets the needs of the users and stakeholders.

Maintainability: A software process must ensure that the software is easy to maintain and modify.
This includes making sure that the code is well-organized, documented, and easy to understand.

Predictability: A software process must be predictable in terms of the time and resources required to
complete the project. This includes setting realistic deadlines and budgets and ensuring that the
project stays on track.

Repeatability: A software process must be repeatable, meaning that it can be used to produce
similar results in different projects. This includes using standardized processes and procedures.

Quality: A software process must ensure that the software product is of high quality. This includes
testing the software thoroughly and ensuring that it meets all the requirements.

Improvement: A software process must be continuously improved to ensure that it stays up-to-date
with the latest technologies and best practices. This includes regularly reviewing and updating the
process.

Tracking: A software process must include tracking mechanisms to monitor the progress of the
project and identify any issues that arise. This includes using project management tools and metrics
to measure progress and identify areas for improvement.

Overall, a good software process should ensure that the software product is effective, maintainable,
predictable, repeatable, of high quality, continuously improving, and well-tracked.

Question 4

Explain each of the following major factors involved in software cost estimation

i) Programmer ability
ii) Product complexity
iii) Product size
iv) Available time.
v) Required reliability

Here are the major factors involved in software cost estimation:
i) Programmer ability: The skill and experience of the programmers strongly affect the cost of
development. Productivity varies widely between individual programmers, so a more capable and
experienced team can deliver the same product with less effort, which lowers the cost.

ii) Product complexity: The complexity of the software product is a major factor that affects the
cost of software development. The more complex the software, the more time and resources it
will take to develop, which will increase the cost.

iii) Product size: The size of the software product is another important factor that affects the cost
of development. The larger the software, the more time and resources it will take to develop,
which will increase the cost.

iv) Available time: The available time to complete the software development project is also a
major factor that affects the cost. If there is a short deadline, the development company will
likely need to expand resources to accommodate the time constraints, which will increase the
cost.

v) Required reliability: The level of reliability required for the software product is another factor
that affects the cost of development. The higher the level of reliability required, the more time
and resources it will take to develop, which will increase the cost.

In summary, the major factors involved in software cost estimation are programmer ability, product
complexity, product size, available time, and required reliability.

Question 5

a) Discuss the following types of software testing


i. Unit testing
ii. Module testing
iii. User acceptance testing
iv. Integration testing
v. System testing

Software testing is an essential part of the software development process. There are different types
of software testing that are performed to ensure that the software is of high quality and meets the
customer's requirements. Here are the different types of software testing:

Unit Testing: This type of testing is performed at the lowest level of the software hierarchy. It
involves testing individual methods and functions of the classes, components, or modules used by
the software. Unit tests are generally quite cheap to automate and can run very quickly by a
continuous integration server.

Module Testing: This type of testing verifies a module, a collection of related units such as the
classes and functions that make up a single component, as a whole. It sits between unit testing and
integration testing: the units inside the module are exercised together, while other modules are
typically replaced by stubs or drivers so that the module can be tested in isolation.

User Acceptance Testing: This type of testing is performed to ensure that the software meets the
customer's requirements and is ready for release. It is usually performed by the customer or end-
users of the software.

Integration Testing: This type of testing verifies that different modules or services used by the
software work well together. For example, it can test the interaction with the database or make
sure that microservices work together as expected. These tests are more expensive to run, as they
require multiple parts of the application to be up and running.

System Testing: This type of testing is performed to ensure that the software meets the
requirements of the system as a whole. It involves testing the entire system, including all the
modules and services used by the software.

Each type of testing has its own features, advantages, and disadvantages. It is important to use a
combination of different types of testing to ensure that the software meets the needs of its users.
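As an illustration of the unit testing level described above, the sketch below tests a single function in isolation with Python's standard `unittest` framework. The function and its names are invented for the example; the point is the shape of a unit test: one small unit, several focused assertions, fast to run.

```python
import unittest

# A tiny function under test: in unit testing, the "unit" is a single
# function or method exercised in isolation.
def apply_discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_normal_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(10.0, 150)

# Run the test case programmatically (a CI server would do this automatically).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ApplyDiscountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```

Because such tests need no database, network, or other modules, they are cheap to automate and quick to run, which is exactly why the unit level sits at the bottom of the testing hierarchy.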

Question 6

a) Explain in detail about


i. Structural modeling
ii. functional modeling

Structural Modeling

Structural modeling is a technique used in system analysis and design to describe the structure of
objects that support business processes in an organization. It builds upon the methods used in
functional modeling and describes the underlying structure of an object-oriented system. Structural
modeling identifies the objects, attributes, and relationships of a system. It provides an internal
static view of the evolving system, showing how the objects are organized in the system. Structural
modeling does not identify how objects are stored, created, or manipulated so that analysts can
focus on the business without being distracted by technical details.

Functional Modeling

Functional modeling is a technique used in system analysis and design to give a process perspective
of the object-oriented analysis model and an overview of what the system is supposed to do. It
defines the function of the internal processes in the system with the aid of Data Flow Diagrams
(DFDs). Functional modeling depicts the functional derivation of the data values without indicating
how they are derived when they are computed or why they need to be computed. It is represented
through a hierarchy of DFDs, which illustrate the series of transformations or computations
performed on the objects or the system, and the external controls and objects that affect the
transformation. The last step in functional modeling is to verify that all use cases, activity diagrams,
and use-case descriptions align without contradiction.

How do they work together?

Functional, structural, and behavioral models work together to describe a system by identifying the
business processes (functional), objects (structural), and their behavior (behavioral). After the
system requirements are known, functional modeling translates the requirements into use cases.
Next, structural modeling takes the refined requirements in functional modeling to identify the
systems’ objects, attributes, and relationships. After that, behavior modeling uses the information
identified in functional and structural modeling to communicate how objects interact and
communicate based on the use cases. Before behavioral modeling is applied, the analyst must first
understand the functional and structural models of the system. In some instances, the analyst will
make changes to the functional and structural models as a result of gaining new insight during the
behavioral stage.

b) Name five principles for software design

Here are five principles for software design:

Simplicity: Code should reflect the functionality that is required, and not account for future ‘possible’
functionality. ‘Extra’ code only makes the software harder to test, maintain, extend, and comprehend,
and it wastes the most valuable resource: your time.

The SOLID Principles: SOLID is an acronym for software design principles, where each letter
represents one of the following principles: Single Responsibility Principle (SRP), Open/Closed
Principle (OCP), Liskov Substitution Principle (LSP), Interface Segregation Principle (ISP), and
Dependency Inversion Principle (DIP).

Minimize Intellectual Distance: The design process should reduce the gap between real-world
problems and software solutions for that problem meaning it should simply minimize intellectual
distance.

Exhibit Uniformity and Integration: The design should display uniformity which means it should be
uniform throughout the process without any change.

Documentation: Good documentation makes information easily accessible, helps new users onboard
quickly, and lowers support and maintenance costs. In software development, we usually document
different aspects of the development process, from business requirements, business specifications,
and technical specifications to code, test scenarios, and user manuals.

These principles help developers produce a good system design, and when applied together they
help create code that is easy to maintain and extend over time.
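As a sketch of the Single Responsibility Principle from the SOLID list above: each class below has exactly one reason to change. The class and method names are invented for illustration.

```python
# Single Responsibility Principle: one class, one reason to change.
# A hypothetical ReportFormatter only formats; ReportSaver only persists.

class ReportFormatter:
    def format(self, data):
        """Turn a dict of results into display text (formatting concern only)."""
        return "\n".join(f"{key}: {value}" for key, value in data.items())

class ReportSaver:
    def save(self, text, path):
        """Write formatted text to disk (persistence concern only)."""
        with open(path, "w") as f:
            f.write(text)

# Formatting can change (say, to JSON) without touching persistence, and
# storage can change (say, to a database) without touching formatting.
report = ReportFormatter().format({"tests": 42, "failures": 0})
print(report)
```

Had formatting and saving lived in one class, a change to either concern would risk breaking the other; splitting them is the SRP in its simplest form.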
