1. Functional Requirements:
o Definition: Describe what the system should do.
o Example: A user login feature where users enter their credentials to access
their accounts.
2. Non-Functional Requirements:
o Definition: Describe how the system performs a function, focusing on quality
attributes such as performance, usability, and reliability.
o Example: The system must load the user dashboard within 3 seconds after
login.
3. Domain Requirements:
o Definition: Derived from the application domain of the system and reflect
characteristics of that domain.
o Example: For a banking application, transactions must be processed following
financial regulations and compliance standards.
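A non-functional requirement like the dashboard example above can be checked with an automated performance test. The sketch below is illustrative only: `load_dashboard` is a hypothetical stand-in for the real call, and the simulated delay is invented.

```python
import time

def load_dashboard(user_id):
    """Hypothetical stand-in for the real dashboard-loading call."""
    time.sleep(0.05)  # simulate the work of building the dashboard
    return {"user": user_id, "widgets": []}

def test_dashboard_load_time(max_seconds=3.0):
    """Non-functional requirement: dashboard must load within 3 seconds."""
    start = time.perf_counter()
    load_dashboard(user_id=42)
    elapsed = time.perf_counter() - start
    assert elapsed < max_seconds, f"took {elapsed:.2f}s (limit {max_seconds}s)"
    return elapsed

elapsed = test_dashboard_load_time()
```

A functional requirement, by contrast, would assert *what* the call returns (for example, that the dashboard belongs to the right user), not how fast it runs.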
Agile Principles are the core philosophies of Agile methodology that guide teams in their
work:
1. Customer Satisfaction:
o Continuous delivery of valuable software to satisfy the customer.
2. Welcome Change:
o Adapt to changing requirements, even late in development.
3. Frequent Delivery:
o Deliver working software frequently, typically in weeks rather than months.
4. Collaboration:
o Close, daily cooperation between business people and developers.
5. Motivated Individuals:
o Projects should be built around motivated individuals, who should be trusted.
6. Face-to-Face Conversation:
o The most efficient and effective method of conveying information.
7. Working Software:
o The primary measure of progress.
8. Sustainable Development:
o Promote sustainable development, able to maintain a constant pace.
9. Technical Excellence:
o Continuous attention to technical excellence and good design.
10. Simplicity:
o Maximize the amount of work not done.
11. Self-Organizing Teams:
o The best architectures, requirements, and designs emerge from self-organizing
teams.
12. Reflect and Adjust:
o At regular intervals, the team reflects on how to become more effective, then
tunes and adjusts its behavior accordingly.
W5HH Principle is a project management framework developed by Barry Boehm that poses
seven questions:
1. Why is the system being developed?
2. What will be done?
3. When will it be accomplished?
4. Who is responsible for each function?
5. Where are they located organizationally?
6. How will the job be done technically and managerially?
7. How much of each resource is needed?
Project Scheduling Tools and Techniques help manage and plan project timelines and
activities. Some common ones include:
1. Gantt Charts:
o Visual representation of a project schedule, showing start and end dates of
tasks.
o Example: Microsoft Project.
2. PERT (Program Evaluation and Review Technique):
o Uses a statistical approach to estimate project timeframes.
o Example: Estimating time required for tasks based on optimistic, pessimistic,
and most likely scenarios.
3. CPM (Critical Path Method):
o Identifies the longest path of planned activities to determine the shortest
possible project duration.
o Example: Calculating the critical path in a construction project to ensure
timely completion.
4. Work Breakdown Structure (WBS):
o Decomposes the project into smaller, manageable components.
o Example: Breaking down software development into modules, features, and
tasks.
5. Kanban:
o Visual tool for managing workflow and tracking progress.
o Example: Using a Kanban board in tools like Trello or Jira to monitor tasks in
different stages.
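PERT and CPM are both simple enough to sketch in code. The task names, durations, and dependencies below are invented for illustration; a real project plan would supply its own.

```python
def pert_estimate(optimistic, most_likely, pessimistic):
    """PERT three-point estimate: expected duration and standard deviation."""
    expected = (optimistic + 4 * most_likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return expected, std_dev

def project_duration(tasks):
    """CPM: the longest dependency path determines the shortest possible
    project duration. tasks maps name -> (duration, prerequisite names)."""
    earliest_finish = {}
    def finish(name):
        if name not in earliest_finish:
            duration, deps = tasks[name]
            earliest_finish[name] = duration + max(
                (finish(d) for d in deps), default=0)
        return earliest_finish[name]
    return max(finish(t) for t in tasks)

# PERT: a task estimated at 4 days (optimistic), 6 (most likely), 14 (pessimistic)
te, sd = pert_estimate(4, 6, 14)  # te = (4 + 24 + 14) / 6 = 7.0 days

# CPM: critical path is design -> code -> test = 3 + 5 + 4 = 12 days
tasks = {
    "design": (3, []),
    "code": (5, ["design"]),
    "docs": (2, ["design"]),
    "test": (4, ["code"]),
}
duration = project_duration(tasks)  # 12
```

Note how "docs" (finishing at day 5) does not affect the result: only the critical path constrains the schedule.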
Code Restructuring:
• Definition: Reorganizing source code to improve its internal structure and
readability without changing its external behavior.
• Example: Replacing deeply nested conditionals with guard clauses to make a
function easier to read and test.
Data Restructuring:
• Definition: Modifying the organization of data structures to improve efficiency and
maintainability.
• Example: Normalizing a database schema to reduce redundancy and improve data
integrity.
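The normalization example above can be shown with plain data structures. This is a toy sketch (the customer and order records are invented): repeated customer details are split out of the order rows into their own table, keyed by `customer_id`.

```python
# Denormalized rows: customer details repeated on every order.
orders_flat = [
    {"order_id": 1, "customer_id": 10, "customer_name": "Ada",   "total": 50},
    {"order_id": 2, "customer_id": 10, "customer_name": "Ada",   "total": 30},
    {"order_id": 3, "customer_id": 11, "customer_name": "Grace", "total": 20},
]

# Normalized: each customer stored once; orders reference them by key.
customers = {}
orders = []
for row in orders_flat:
    customers[row["customer_id"]] = {"name": row["customer_name"]}
    orders.append({"order_id": row["order_id"],
                   "customer_id": row["customer_id"],
                   "total": row["total"]})
```

Updating a customer's name now touches one record instead of every order row, which is exactly the redundancy and integrity benefit normalization aims for.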
The Spiral Model is a risk-driven software development process model that combines
iterative development with systematic aspects of the waterfall model. It consists of four main
phases:
1. Planning:
o Objective: Determine objectives, alternatives, and constraints.
o Activities: Requirements gathering, feasibility studies, resource allocation,
and risk analysis.
o Example: Defining project goals, identifying potential risks, and exploring
possible solutions.
2. Risk Analysis:
o Objective: Identify and evaluate risks, develop strategies to mitigate them.
o Activities: Prototyping, simulations, and alternative analysis.
o Example: Creating a prototype to test the feasibility of a new technology.
3. Engineering:
o Objective: Develop and verify the next level product by implementing the
identified solution.
o Activities: Design, coding, testing, and integration.
o Example: Writing and testing code for a specific module of the software.
4. Evaluation:
o Objective: Assess the progress and plan the next iteration.
o Activities: Customer evaluation, feedback collection, and review.
o Example: Presenting the developed module to stakeholders for feedback and
making necessary adjustments.
Each cycle of the spiral represents a phase of the software development lifecycle, with the
product gradually evolving through repeated iterations.
The Cleanroom Model emphasizes defect prevention through formal methods,
incremental development, and statistical quality control. Its main activities are:
1. Specification:
o Objective: Formally specify the system requirements using mathematical
methods.
o Activities: Create formal specifications that describe the system behavior
precisely.
o Example: Writing a formal specification for a user authentication system.
2. Development:
o Objective: Incrementally develop the software using a box structure approach.
o Activities: Develop code incrementally, performing thorough reviews and
inspections at each step.
o Example: Implementing small, incremental code units and verifying their
correctness through peer reviews.
3. Verification:
o Objective: Verify correctness through mathematical proof and rigorous
testing.
o Activities: Conduct formal proofs to ensure the software meets its
specifications.
o Example: Using formal methods to prove the correctness of an algorithm
implemented in the software.
4. Statistical Quality Control:
o Objective: Monitor and control the quality of the software using statistical
methods.
o Activities: Perform statistical testing to estimate software reliability.
o Example: Using statistical models to predict the number of potential defects
in the software.
5. Certification:
o Objective: Certify the software for release based on its reliability metrics.
o Activities: Review and certify the software based on statistical quality control
results.
o Example: Issuing a certification that the software meets the required
reliability standards for deployment.
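The statistical quality control and certification steps above can be sketched in a few lines. This is a minimal illustration of the idea, not Cleanroom's actual certification models; the pass counts and reliability target are invented.

```python
def estimated_reliability(passed, total):
    """Point estimate of reliability from randomly sampled usage scenarios."""
    return passed / total

def certify_for_release(passed, total, target=0.99):
    """Certify the software only if observed reliability meets the target."""
    return estimated_reliability(passed, total) >= target

# 995 of 1000 statistically sampled usage tests passed -> reliability 0.995
release_ok = certify_for_release(passed=995, total=1000, target=0.99)
```

In practice Cleanroom certification uses statistical models fitted to usage-profile test data, but the decision structure is the same: release is gated on a measured reliability figure, not on developer judgment.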
Black Box Testing is a software testing method that focuses on evaluating the functionality
of an application without peering into its internal structures or workings. Common techniques
include:
1. Equivalence Partitioning:
o Objective: Divide input data into equivalent partitions that can be tested.
o Example: For an input field accepting values 1-100, creating partitions for
valid ranges (1-50, 51-100) and invalid ranges (0, 101).
2. Boundary Value Analysis:
o Objective: Focus on values at the boundaries of input domains.
o Example: Testing values at the edge of the input range, such as 0, 1, 100, and
101 for a field that accepts 1-100.
3. Decision Table Testing:
o Objective: Use decision tables to represent complex business rules and
conditions.
o Example: Creating a table with combinations of conditions and actions for an
online order system to ensure all scenarios are covered.
4. State Transition Testing:
o Objective: Test the different states of an application and the transitions
between them.
o Example: Verifying the behavior of a login system as it transitions from
logged out to logged in states and vice versa.
5. Error Guessing:
o Objective: Use intuition and experience to guess the most likely error-prone
areas.
o Example: Testing boundary cases, invalid inputs, and stress conditions based
on prior knowledge of similar systems.
6. Cause-Effect Graphing:
o Objective: Identify the relationships between causes (inputs) and effects
(outputs).
o Example: Creating a cause-effect graph to ensure that all logical conditions
and their effects are tested for a feature like user access control.
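Equivalence partitioning and boundary value analysis, the first two techniques above, can be shown concretely against the 1-100 input field used in the examples. The validator below is a hypothetical stand-in for the system under test.

```python
def accepts(value):
    """Hypothetical input validator: accepts integers 1-100 inclusive."""
    return 1 <= value <= 100

# Equivalence partitioning: one representative per class is enough.
assert accepts(25)         # valid partition (1-100)
assert not accepts(-5)     # invalid partition below the range
assert not accepts(150)    # invalid partition above the range

# Boundary value analysis: exercise the edges of each partition.
for value, expected in [(0, False), (1, True), (100, True), (101, False)]:
    assert accepts(value) == expected
```

Both techniques are black-box: the tests are derived purely from the stated input domain, with no knowledge of how `accepts` is implemented.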
Risk Mitigation involves identifying, assessing, and taking steps to reduce the impact of
risks on a project. Common strategies include:
1. Risk Avoidance:
o Definition: Changing the project plan to eliminate the risk or its impact.
o Example: Choosing a technology with a proven track record over an
experimental one to avoid technical risks.
2. Risk Reduction:
o Definition: Taking actions to reduce the likelihood or impact of the risk.
o Example: Implementing additional security measures to reduce the risk of a
data breach.
3. Risk Transfer:
o Definition: Shifting the impact of the risk to a third party.
o Example: Purchasing insurance to cover potential losses from project delays.
4. Risk Acceptance:
o Definition: Acknowledging the risk and preparing to deal with its impact if it
occurs.
o Example: Accepting the risk of minor schedule delays due to external
dependencies and planning accordingly.
Additional risk management techniques include:
1. Prototyping:
o Definition: Creating prototypes to identify and mitigate risks early in the
project.
o Example: Developing a prototype of a critical module to test its feasibility.
2. Contingency Planning:
o Definition: Developing plans to deal with potential risks if they materialize.
o Example: Having a backup supplier in case the primary supplier fails to
deliver on time.
3. Risk Audits:
o Definition: Regularly reviewing and assessing risks and the effectiveness of
mitigation strategies.
o Example: Conducting quarterly risk audits to ensure all identified risks are
being managed appropriately.
4. Root Cause Analysis:
o Definition: Identifying the root causes of potential risks and addressing them.
o Example: Analyzing past project failures to understand their root causes and
implementing measures to prevent recurrence.
5. Risk Workshops:
o Definition: Bringing together stakeholders to discuss and address potential
risks.
o Example: Organizing a workshop with the project team to brainstorm and
plan for potential risks.
Key features of a Software Configuration Management (SCM) repository include:
1. Version Control:
o Definition: Tracks and manages changes to software artifacts.
o Example: Keeping track of different versions of source code files to manage
updates and rollbacks.
2. Baselining:
o Definition: Establishes a set of configuration items at a specific point in time,
serving as a reference.
o Example: Creating a baseline for a software release that includes specific
versions of code, documentation, and test scripts.
3. Access Control:
o Definition: Manages permissions and access rights to the repository.
o Example: Granting read/write access to developers and read-only access to
testers and stakeholders.
4. Audit Trails:
o Definition: Maintains records of changes, including who made the change,
when, and why.
o Example: Logging all changes to a configuration item to provide a history of
modifications.
5. Automated Builds:
o Definition: Facilitates automated building and integration of software
components.
o Example: Using continuous integration tools to automatically build and test
the software whenever changes are committed to the repository.
6. Dependency Management:
o Definition: Manages dependencies between different configuration items.
o Example: Tracking dependencies between libraries and applications to ensure
compatibility.
7. Change Management:
o Definition: Supports the process of requesting, evaluating, and implementing
changes.
o Example: Implementing a change request system that allows tracking and
approval of changes to configuration items.
8. Branching and Merging:
o Definition: Supports parallel development efforts by allowing multiple
branches of development.
o Example: Creating a branch for a new feature while maintaining the main
branch for bug fixes, and later merging the changes back into the main branch.
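The first and fourth features above, version control and audit trails, can be illustrated with a toy repository. This is a deliberately simplified sketch, not how a real VCS like Git stores data: each commit snapshots the full content and records who changed it, when, and why.

```python
import datetime

class TinyRepo:
    """Toy illustration of version control with an audit trail."""

    def __init__(self):
        self.versions = []  # snapshots of content, one per commit
        self.log = []       # audit trail: (author, timestamp, reason)

    def commit(self, content, author, message):
        """Record a new version and log who changed it, when, and why."""
        self.versions.append(content)
        self.log.append((author, datetime.datetime.now(), message))
        return len(self.versions) - 1  # version number

    def checkout(self, version):
        """Retrieve any earlier version, enabling rollbacks."""
        return self.versions[version]

repo = TinyRepo()
v0 = repo.commit("print('hello')", "alice", "initial version")
v1 = repo.commit("print('hello, world')", "bob", "expand greeting")
old = repo.checkout(v0)  # roll back to the earlier snapshot
```

Real SCM tools add the remaining features on top of this core: baselines tag a set of versions, access control restricts who may commit, and branching lets several version histories evolve in parallel before merging.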