
INFORMATICS 3B

Laaiqah Seedat
Table of Contents
Chapter 15 – Quality Concepts
  What is Quality?
    5 Views of Quality
  Software Quality
    Quality Factors
    Characteristics of high software quality
    Qualitative Quality Assessment
    Quantitative Quality Assessment
  The software quality dilemma
    Cost of Quality
  Achieving software quality
Legal Matters (POPIA)
  Terms
  Purpose
  The actual act:
  Principles of POPIA
  Rights of data subjects
Chapter 16 Reviews
  Review Metrics (Calculations)
  Steps for a review
  Level of review formality
  Formal technical review
    Objectives
    Constraints
    End of review tasks
Chapter 17 Software Quality Assurance
  Terms
  Elements of SQA (software quality assurance)
  SQA tasks
  SQA Goals
  Statistical SQA
    Steps for statistical SQA
    Causes of problems uncovered
    Six Sigma
  AI modelling reliability
  Software safety
  ISO 9000 standards
  SQA Plan
Chapter 19 Software Testing – Component Level (Unit testing)
  Planning and recordkeeping
  What is in a testing strategy Test Case Design
  White box testing
  Black box testing
  Object Oriented Testing
  Semester Test Example
Chapter 20 Software Testing – Integration Level (Joining units)
  Black Box testing (External/Functional testing)
  White box testing (Internal/Structural testing)
  Integration
    Top Down
    Bottom up
  Continuous integration
  Regression testing and AI
  Integration testing in Object orientation
    Fault based test case design
    Scenario based test case
    Validation testing
    Testing Patterns
  Integration testing semester test 1 2020 Q5 (Past paper)
Chapter 21 Software Testing – Mobility
  Testing strategies
  Alerts
  Web testing strategies
  Security Testing
  Performance testing
  Real time testing
  Testing AI Systems
Chapter 22 – Software Configuration Management
  SCM Features
  Change categories
  2020 → Question 1 Semester test 2 (software configuration management)
Chapter 23 – Software Metrics and Analytics
  Requirements Model Metrics
  Maintenance Metrics
  Metrics for Software Quality
Chapter 24 – Project Management Concepts
  People
  Product
  Process
  Project
  W5HH Principle
Chapter 25 – Viable Software Plan
Chapter 26 – Risk Management
Chapter 27 – A strategy for software support
Semester Test 1 Revised
REVISION (150 Marks)
  Chapter 26 Risk management: 20 Marks
  Chapter 15 (15 Marks)
  Reviews (Chap 16) 10 Marks
  Software quality assurance (10 marks)
  Chapter 25 (Creating a viable software plan)
  Component level testing chapter 19 15 marks
  Integration testing (Chapter 20) 10 Marks
  Chapter 21 (testing for mobility)
  Chapter 22 (config management)
  Chapter 27 (a strategy for software support)
Guest Lecturer Notes
  Trust In Technology (KPMG)
    Emerging technology
  Entelect (Software engineering)
  Embracing Technology (PwC)
  Play with your food: A Guide to thriving in the software development industry (BBD)
  DVT (Microservices → Stay hungry, stay foolish)
Chapter 15 – Quality Concepts
Why is something of good or bad quality?

The standard : https://iso25000.com/index.php/en/iso-25000-standards/iso-25010

What is Quality?
• Everyone is responsible for quality.
• Quality of design – Characteristics that designers specify for the product. These are:
o The grade of materials
o Tolerances
o Performance specification
• The higher the grade of materials, the tighter the tolerances, and the greater the levels of
performance specified.

5 Views of Quality
1. Transcendental View – Quality you recognise but cannot define.
2. User View – It is quality because it meets the end users’ goals.
3. Manufacturer’s view – The product conforms to the original specifications.
4. Product View – Quality can be tied to the functions or features of the product.
5. Value based view – How much a customer is willing to pay for the product.

User Satisfaction = Compliant product + Good Quality + Delivery within budget and schedule

I.e.: customer satisfaction is important, because without it nothing else matters.

Software Quality
• Software Quality – An effective software process applied in a manner that creates a useful
product that provides measurable value for those who produce it and those who use it.
o Project chaos is a key contributor to poor quality.
o To build high quality software
▪ Analyse the problem
▪ Design a solid solution
o A useful product delivers the end users’ desires in a reliable, error-free way.
o Producer gains from good software because high quality software requires less
maintenance, fewer bug fixes and low levels of customer support.
• Software quality results in
1. Greater revenue
2. Better profitability
3. Improved availability of information.

Quality Factors
• Product Operation → Operation characteristics
▪ Correctness
▪ Reliability
▪ Usability
▪ Integrity
▪ Efficiency
• Product Transition → Adaptability to new environments
▪ Portability
▪ Reusability
▪ Interoperability – the ability of computer systems or software to exchange and make
use of information.
• Product Revision → Ability to undergo change
▪ Maintainability
▪ Flexibility
▪ Testability

Characteristics of high software quality


Quality in Use Model
When considering using the product in a particular context.

Importance of customer satisfaction.

1. Effectiveness. Accuracy and completeness with which users achieve goals


2. Efficiency. Resources spent to achieve user goals completely with desired accuracy
3. Satisfaction. Usefulness, trust, pleasure, comfort
4. Freedom from risk. Mitigation of economic, health, safety, and environmental risks
5. Context coverage. Completeness, flexibility

Product Quality
Focuses on the static and dynamic nature of computer systems.

Importance of functional and non-functional requirements.

1. Functional suitability. Complete, correct, appropriate


2. Performance efficiency. Timing, resource utilization, capacity
3. Compatibility. Coexistence, interoperability
4. Usability. Appropriateness, learnability, operability, error protection, aesthetics, accessibility
5. Reliability. Maturity, availability, fault tolerance, recoverability
6. Security. Confidentiality, integrity, accountability, authenticity
7. Maintainability. Modularity, reusability, modifiability, testability
8. Portability. Adaptability, installability, replaceability

Qualitative Quality Assessment


• ISO 25010
▪ Appropriateness
▪ Learnability
▪ Operability
▪ Error protection
▪ Aesthetics
▪ Accessibility

Quantitative Quality Assessment

Quantitative assessment measures structural indicators of poor quality, for example:

• High coupling
• Unnecessary levels of complexity
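
A common quantitative indicator is McCabe's cyclomatic complexity. The notes do not prescribe a tool, so this is only a sketch: it approximates complexity as 1 + the number of decision points found in Python source, using the standard-library ast module.

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe cyclomatic complexity as 1 + the number of
    decision points (if/for/while/except and boolean operators)."""
    tree = ast.parse(source)
    decisions = 0
    for node in ast.walk(tree):
        if isinstance(node, (ast.If, ast.For, ast.While, ast.ExceptHandler)):
            decisions += 1
        elif isinstance(node, ast.BoolOp):
            # each extra operand of an and/or chain adds one decision
            decisions += len(node.values) - 1
    return decisions + 1

sample = """
def grade(mark):
    if mark >= 75:
        return "distinction"
    elif mark >= 50:
        return "pass"
    return "fail"
"""
print(cyclomatic_complexity(sample))  # 3: two branches + 1
```

A higher number flags unnecessary complexity; coupling would need a separate measure (e.g. counting imports between modules).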
The software quality dilemma
• If we release too quickly, no one will buy it because the quality will be poor; yet if we spend
too much time on it, we lose the market and it costs too much, because we have spent a lot
of resources on the “perfect software”.
• The solution is to meet in the middle with “good enough” software.
• Good enough software → High-quality functions and features that meet the end users’
desires, while other obscure or specialised functions and features ship with known bugs.
• Good enough may not work for smaller companies, or for application domains such as aircraft,
where software is vital and works with other hardware components.

Cost of Quality
Gaining quality comes at a cost, and low quality also comes at a cost.

Cost of Quality → all costs incurred in the pursuit of quality.

Can be subdivided into 3 Categories:

• Prevention Cost
Costs to prevent defects
▪ Quality control and assurance management activities.
▪ Added technical activities
▪ Test planning
▪ Training for the above activities.

It is good to have high prevention costs

• Appraisal cost
Actions that assess software to determine their quality.
▪ Cost of technical reviews
▪ Data collection and metrics evaluation
▪ Cost of testing and debugging
• Failure Cost
▪ Internal Failure
Error in Product prior to shipment
➢ Cost to rework/ repair error
➢ Cost when rework generates side effects
➢ Collection of quality metrics that assess the mode of failure
▪ External Failure
Defects found after shipment
➢ Complaint resolution
➢ Product return & replacement
➢ Help-line support, labour costs associated with warranty
➢ Poor reputation for company
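
The three categories add up to a single cost-of-quality figure. A minimal sketch with hypothetical amounts (the rand values below are invented purely for illustration):

```python
# Hypothetical cost figures (in rands) for one project, grouped into the
# three cost-of-quality categories described above.
prevention = {"quality planning": 40_000, "training": 15_000, "test planning": 10_000}
appraisal = {"technical reviews": 25_000, "metrics collection": 5_000, "testing": 30_000}
failure = {"internal rework": 20_000, "warranty and help-line support": 35_000}

# Cost of quality = prevention + appraisal + failure costs.
cost_of_quality = sum(prevention.values()) + sum(appraisal.values()) + sum(failure.values())
failure_share = sum(failure.values()) / cost_of_quality

print(cost_of_quality)          # 180000
print(round(failure_share, 2))  # 0.31
```

Tracking the failure share over time shows whether higher prevention spending is paying off.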

• Poor quality software can lead to many risks, even death in certain circumstances.
• Low quality can be a security risk because it can be easier to hack.
• Two software problems
▪ Bugs – Implementation problems
▪ Flaws – Architectural problems in the design.
People pay too much attention to the bugs and not enough attention to the flaws.
• Management actions can impact the quality of software:
▪ They could set an unrealistic delivery date, forcing the developers to rush to meet the
deadline and compromise quality.
▪ They could create unrealistic schedules, in which case testing may not occur the way it
was meant to, and overall software quality will suffer.
▪ They should develop a contingency plan in case something does go wrong.

Achieving software quality


• Good management decisions :
o Using estimation to verify if delivery dates are achievable
o Scheduled dependencies are understood and no shortcuts are taken
o Risk planning is conducted
• Quality Control → Ensures each work product meets its quality goals.
Allows the software team to tune the process if anything fails.
• Quality Assurance → Set of auditing and reporting functions that assess the effectiveness
and completeness of quality control actions.
Goal is to provide management with insight and confidence of product quality.
Legal Matters (POPIA)
• Privacy is the right to be left alone.
• POPIA – Protection of Personal Information Act (for South Africa).
o Regulates how our personal information is handled.
• GDPR – General Data Protection Regulation (for the EU).

Terms
• Personal information – Any information relating to an identifiable, living, natural person, and
where it is applicable, an identifiable, existing juristic person.
o Obvious personal information
▪ Name
▪ ID Number
o Less obvious
▪ Belief system
▪ Opinions
▪ Sexual orientation
• Data subject – The person to whom the personal information relates.
• Responsible party – Public / private body or any other person that determines the purpose
and means for processing personal information.

Purpose
• Gives effect to the right to privacy.
• Justifiable limitations are aimed at:
o Balancing the right to privacy against the right of access to information.
o Protecting other important interests.
• Regulates the way PI may be processed.

The actual act:


1. To promote the protection of personal information processed by public and private bodies
2. To introduce certain conditions that establish minimum requirements for the processing of
personal information
3. To provide for the establishment of an Information Regulator to exercise certain powers and
to perform certain duties and functions in terms of this Act and the Promotion of Access to
Information Act, 2000
4. To provide for the issuing of codes of conduct
5. To provide for the rights of persons regarding unsolicited electronic communications and
automated decision making
6. To regulate the flow of personal information across the borders of the Republic
7. And to provide for matters connected therewith.
Principles of POPIA
Likely exam question: advise what you would do for the company in the scenario to ensure POPIA compliance.

• Accountability – Responsible parties must ensure compliance with the 8 conditions


• Processing limitation – Personal information must be processed lawfully and responsibly so
as not to infringe on the privacy of a person. It should be relevant and not excessive. Consent
should be granted by the data subject.
o Cookies are one way to obtain consent from the data subject on the web.
▪ Judges will ask about POPIA compliance.
▪ Ask users for consent when they sign up with the app.
• Purpose specification – Records must not be retained for a period longer than needed for the
initial purpose.
o Records should only be kept for as long as we need them.
• Further processing limitation – Further processing must be compatible with the purpose of
collection, failing which consent must be obtained.
o Ask for consent to use the data for another reason,(further processing)
• Information Quality – Personal information must be complete, accurate , not misleading
and updated.
• Openness – Responsible party must maintain records, notification to data subject when
collecting personal information.
• Security safeguards – A responsible party must secure the integrity and confidentiality of
personal information.

o Information processed by an operator or a person acting under authority must be
processed with the knowledge/authority of the responsible party.
o Information must be treated as confidential.

• Data subject participation – The data subject must have access to personal information, and
to correction or deletion of PI that is inaccurate, irrelevant, outdated, excessive, incomplete,
misleading, or unlawfully obtained.
o Allow users to update their information.
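
The processing-limitation and purpose-specification conditions above can be sketched in code. This is illustrative only: the class and function names are invented for the sketch, not a POPIA-mandated API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    """Illustrative record of a data subject's consent for one purpose."""
    subject_id: str
    purpose: str      # the specific purpose the subject consented to
    granted: bool
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def may_process(records: list[ConsentRecord], subject_id: str, purpose: str) -> bool:
    """Processing is allowed only for a purpose the subject consented to;
    any other (further) processing needs fresh consent."""
    return any(r.subject_id == subject_id and r.purpose == purpose and r.granted
               for r in records)

records = [ConsentRecord("u1", "account administration", True)]
print(may_process(records, "u1", "account administration"))  # True
print(may_process(records, "u1", "direct marketing"))        # False - needs fresh consent
```

A real implementation would also record withdrawal of consent and retention periods, per the conditions above.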

Rights of data subjects


• To be notified that information is being collected or personal information has been accessed
by an unauthorised person
• To establish whether a responsible party holds personal information and to request access
to it.
• To request correction, destruction, or deletion of PI.
• To not allow processing of PI for direct marketing, or in general.

EFF → Electronic Frontier Foundation

Defends digital privacy


Chapter 16 Reviews
• OBJECTIVE : Formal Technical Review → Find errors before they are passed on to another
software engineering activity or released to the end user.
• Use the limited resources to develop quality software.
• Technical reviews are an effective filter.
• Reviews save time on rework and help pick up errors that would cost more to fix down the line.
o A single error could also become multiple errors down the line.
o The cost of finding and fixing an error grows the later it is found.
• Terms
o Bugs
o Cost Effectiveness
o Defect amplification – A single error introduced early and not uncovered and
corrected can amplify into multiple errors later in the process.
o Defect propagation – The impact an undiscovered error has on future development
activities or product behaviour.
o Defect – A quality problem found only after the software has been released.
o Error density
o Error- A quality problem found before the software is released.
o Informal review – A review with no advance planning or preparation, no agenda or
meeting structure, and no follow-up on errors.
o Record keeping
o Review reporting
o Technical reviews
o Pair Programming – Can be characterised as a continuous desk check.

Review Metrics (Calculations)


∙ Preparation effort, Ep. The effort (in person-hours) required to review a
work product prior to the actual review meeting

∙ Assessment effort, Ea. The effort (in person-hours) that is expended during
the actual review

∙ Rework effort, Er. The effort (in person-hours) that is dedicated to the
correction of those errors uncovered during the review

∙ Review effort, Ereview. Represents the sum of the effort measures for reviews:
Ereview = Ep + Ea + Er
∙ Work product size (WPS). A measure of the size of the work product that
has been reviewed (e.g., the number of UML models, the number of document
pages, or the number of lines of code)

∙ Minor errors found, Errminor. The number of errors found that can be categorized
as minor (requiring less than some prespecified effort to correct)

∙ Major errors found, Errmajor. The number of errors found that can be categorized
as major (requiring more than some prespecified effort to correct)

∙ Total errors found, Errtot. Represents the sum of the errors found:
Errtot = Errminor + Errmajor
∙ Error density. Represents the errors found per unit of work product reviewed:
Error density = Errtot / WPS
∙ Effort saved per error. Represents the effort reviews save relative to finding the
same errors in testing:
Effort saved per error = Etesting – Ereviews

• Keep the review metrics consistent with how they were calculated in the past (i.e. if error density was
calculated per UML model, calculate it per UML model this time too; this helps to compare between
reviews).
• If not enough errors were found, the review approach may not have been thorough enough, or the
team did a great job developing the solution / setting up the requirements, etc.

• Person hour – Number of hours each person puts towards a project.
o Technical review hours are also included in the main hours.
• Review Effort → Preparation Efforts + Assessment Efforts + Rework Efforts
o Semester test 1 (Q2a)
▪ Prep Efforts (EP)= in person hours of everyone added up
• 12+10+11+10+9.5 = 52.5 hours
▪ Assessment efforts (EA) = 6 hours * 5 people = 30 hours
▪ Rework efforts (Er) = effort to rework the errors
• (9*2) + (6*6) = 54 hours
▪ FINAL: 52.5 + 30 + 54 = 136.5 hours
• Error Density → Total errors found (Errtot) / Work product size (WPS)
o Semester test 1 (Q2b)
▪ Error total = Major errors + minor Errors
• 9+6=15
▪ WPS = 75
• We know there are 0.2 errors per page (historically), stick with what
they have done in the past because we can only compare with the
past.
• And the question says so.
▪ FINAL: 15/75 = 0.2 error density.
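
The worked example above can be checked with a few lines of code. This is a sketch restating the chapter's formulas; the numbers come from the Semester Test 1 question.

```python
# Review effort: Ereview = Ep + Ea + Er (all values in person-hours).
prep_hours = [12, 10, 11, 10, 9.5]   # each reviewer's preparation effort
E_p = sum(prep_hours)                # 52.5
E_a = 6 * 5                          # 6-hour meeting x 5 reviewers = 30
E_r = (9 * 2) + (6 * 6)              # rework effort as given in the question = 54
E_review = E_p + E_a + E_r

# Error density = Errtot / WPS.
err_tot = 9 + 6                      # major + minor errors found = 15
wps = 75                             # work product size: 75 pages
error_density = err_tot / wps

print(E_review)       # 136.5
print(error_density)  # 0.2
```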

Steps for a review


1. Planning
2. Preparation
3. Structuring the meeting
4. Noting errors
5. Making corrections
6. Verify the corrections were performed properly.

Level of review formality


• This increases when:
1. distinct roles are explicitly defined for the reviewers,
2. there is enough planning and preparation for the review,
3. a distinct structure for the review (including tasks and internal work products) is
defined, and
4. follow-up by the reviewers occurs for any corrections that are made.

Formal technical review


• Allows juniors to learn.
• Successful if it is properly controlled, attended and planned.
• Focusses on a small part of the overall software.

Objectives
(1) to uncover errors in function, logic, or implementation for any representation of the
software;
(2) to verify that the software under review meets its requirements;
(3) to ensure that the software has been represented according to predefined
standards;
(4) to achieve software that is developed in a uniform manner; and
(5) to make projects more manageable.
Constraints
• Between three and five people (typically) should be involved in the review.
• Advance preparation should occur but should require no more than 2 hours of work for each
person.
• The duration of the review meeting should be less than 2 hours.

End of review tasks


(1) accept the product without further modification,
(2) reject the product due to severe errors (once corrected, another review must be
performed), or
(3) accept the product provisionally (minor errors have been encountered and must be
corrected, but no additional review will be required).
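
The three end-of-review outcomes above can be expressed as a simple decision rule. This is illustrative only: mapping outcomes purely onto major/minor error counts is an assumption of the sketch, not a rule from the notes.

```python
# Illustrative end-of-review decision based on the error counts recorded
# during the review (severity thresholds are hypothetical).
def review_outcome(major_errors: int, minor_errors: int) -> str:
    if major_errors > 0:
        return "reject"                 # severe errors: fix, then another review
    if minor_errors > 0:
        return "accept provisionally"   # minor errors: fix, no further review
    return "accept"                     # accept without further modification

print(review_outcome(2, 0))  # reject
print(review_outcome(0, 3))  # accept provisionally
print(review_outcome(0, 0))  # accept
```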

A review summary report answers three questions:


1. What was reviewed?
2. Who reviewed it?
3. What were the findings and conclusions?

Review Issue list Purposes


(1) to identify problem areas within the product and
(2) to serve as an action item checklist that guides the producer as corrections are made

REVIEW PROCESS ON PG 333

• PME → Post Mortem Evaluation


o Examines the entire software project, focusing on excellences (Achievements) and challenges
(problems).
o Attended by members of software team and stakeholders.
o Intent
▪ Identify excellences and challenges and extract lessons from both.
o Objective
▪ Suggest improvements to both process and practice going forward.

• Semester test 1 (Q2c)


▪ The error density remains the same.
▪ If the error density had decreased, it is likely the reviewers did not check
properly.
▪ If the error density had increased, there is a possibility the team did not fully
understand the solution.
▪ There is consistency: the error density remains the same, and the review was
done to the same level as the team is used to.
▪ The review was therefore effective.
▪ However, advance preparation should take no more than 2 hours per person.
▪ The duration of the review was only supposed to be 2 hours, yet it was 6.
▪ One review covering a 75-page document is too much work; formal technical
reviews are supposed to review small components of the whole solution,
focusing on one aspect at a time.
▪ To sum up
• In terms of the review, it was effective at finding errors.
• There were extra hours for the review, and extra hours for the preparation.
• It did not focus on a single scope or component; the team should have
worked per component/scope.
• It was good that there were 5 people; 3–5 is the recommended number
of reviewers.
Chapter 17 Software Quality Assurance
Terms
1. Bayesian Inference - Method of statistical inference in which Bayes’ theorem is used
to update the probability for a hypothesis as more evidence/Information becomes
available
2. Formal approaches
3. Genetic algorithm – Heuristic search method to find near optimal solutions to search
problems based on the theory of natural selection and evolutionary biology.
▪ Used to grow reliability models.
4. Goals
5. Six sigma
6. Software reliability
7. Software safety –A software quality assurance activity that focuses on the
identification and assessment of potential hazards that may affect software
negatively and cause an entire system to fail.
8. SQA Plan
9. SQA Tasks

• What is software quality assurance?


o Planned and systematic pattern of actions that are required to ensure high quality.
o Identify what is software quality
o Create a set of activities to ensure all software engineering work exhibits that
quality.
o Perform quality control and assurance.
• We want to find errors early, or else they will bug you later.
o It is always cheaper to do the job right the first time.
• Technical debt – deferring fixes until later.
• Assurance of quality – You can guarantee the expected requirements are met.
o To ensure quality → software quality assurance encompasses:
1. Define process
2. Change control (Technical reviews + multitiered testing strategy)
3. Review and test
4. Methods and tools
5. Audits and compliance – ensure compliance with standards
6. Measurements and reporting
• A software organisation should do the right things, at the right time, in the right way.

Elements of SQA( software quality assurance)


• Standards – Ensure that adopted standards are followed and that work products conform to them.
• Reviews and audits – technical reviews are to uncover errors (stop them before they are
implemented). Audits ensure quality guidelines are being followed.
• Testing – intent is to find errors (these may already have been implemented). SQA plans
testing and ensures it is properly conducted to find errors.
• Error/defect collection and analysis – SQA collects and analyses error and defect data to
determine how errors are introduced and the best way to eliminate them.
• Change management – If there is a change it can lead to confusion and poor quality, SQA
ensures adequate change management processes.
• Education – SQA leads in software process improvement through educational programs.
• Vendor management – SQA should ensure high quality software results by suggesting
quality practices that the vendor should follow
• Security management – SQA ensures appropriate process and technology are used for
software security. This is in line with government regulations to protect privacy and data.
• Safety – SQA is responsible for assessing the impact of software failure and initiating steps
to reduce risk.
• Risk management – SQA organization ensures that risk management activities are properly
conducted and that risk-related contingency plans have been established.

SQA tasks
• Prepare an SQA plan for the project
• Participate in the development of the project's software process description.
• Reviews SE activities and verifies compliance
o Identifies, documents and tracks deviations from the process and verifies
corrections.
• Audit to verify compliance
o Reviews selected work products
o Reports results to manager
• Ensures deviations are documented and handled accordingly.
• Records and reports non-compliance to management.
o This is tracked till it is resolved.
• Manage change and help to collect software metrics.
SQA Goals
• Requirements quality
• Design quality
• Code quality
• Quality control effectiveness

Statistical SQA
Steps for statistical SQA
1. Information about software errors and defects is collected and categorized.
2. An attempt is made to trace each error and defect to its underlying cause (e.g.,
nonconformance to specifications, design error, violation of standards, poor communication
with the customer).
3. Using the Pareto principle (80 percent of the defects can be traced to 20 percent of all
possible causes), isolate the 20 percent (the vital few).
4. Once the vital few causes have been identified, move to correct the problems that have
caused the errors and defects.

These are done to improve elements that introduce errors.
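The Pareto step above can be sketched in code. The defect log, cause codes, and the 80 percent threshold below are hypothetical, purely to illustrate how the vital few are isolated:

```python
from collections import Counter

# Hypothetical defect log: each entry is the cause code assigned in step 2.
defects = ["IES", "MCC", "IES", "EDR", "VPS", "IES", "MCC", "EDR",
           "IES", "IID", "MCC", "IES", "EDR", "ICI", "IES"]

def vital_few(defect_causes, threshold=0.80):
    """Return the smallest set of causes accounting for `threshold`
    of all defects (the Pareto 'vital few')."""
    counts = Counter(defect_causes)
    total = sum(counts.values())
    selected, covered = [], 0
    for cause, n in counts.most_common():
        if covered / total >= threshold:
            break
        selected.append(cause)
        covered += n
    return selected

print(vital_few(defects))
```

Step 4 then targets only the causes this returns, rather than all twelve categories.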

Causes of problems uncovered


▪ Incomplete or erroneous specifications (IES)
▪ Misinterpretation of customer communication (MCC)
o Solution: implement requirements-gathering techniques.
▪ Intentional deviation from specifications (IDS)
▪ Violation of programming standards (VPS)
▪ Error in data representation (EDR)
o Solution: acquire tools for data modelling, and perform stringent data design
reviews.
▪ Inconsistent component interface (ICI)
▪ Error in design logic (EDL)
▪ Incomplete or erroneous testing (IET)
▪ Inaccurate or incomplete documentation (IID)
▪ Error in programming language translation of design (PLT)
▪ Ambiguous or inconsistent human/computer interface (HCI)
▪ Miscellaneous (MIS)

These (the vital few) account for 53% of all errors

• Spend time focusing on things that matter but first understand what really matters !

Six Sigma
• First created for manufacturing, now a strategy for statistical quality assurance
• Used to improve an existing process or to create a solution. (Define, measure, and analyse are
the core steps)
o CORE (In both)
▪ Define – gather requirements and project goals through customer
communication
▪ Measure – Determine current quality performance. (i.e.: Collect defect metrics)
▪ Analyse – Analyse the defect metrics and determine the vital few causes
o Improve (DMAIC Approach)
▪ Define , Measure, Analyse
▪ Improve – eliminate root causes of defects
▪ Control – control process to ensure future work does not reintroduce the causes
for defects.
o Creation (DMADV Approach)
▪ Define , Measure, Analyse
▪ Design
• To avoid the root causes of defects
• Meet customer requirements
▪ Verify
• That the model does avoid root defects and meets customers’
requirements.
• Apply six sigma in a scenario
o Know steps and what they entail.
▪ Think of you solving an issue with code.
• Software reliability → the probability of failure-free operation of a computer program in a
specified environment for a specified time.
• Failure is non-compliance to software requirements.

Mean Time Between Failure = Mean Time to Failure + Mean Time to Repair
Based on CPU time, not wall clock time; measured in clock ticks / clock seconds.

Why mean time between failure can be problematic.


1. Projects a time span between failures but does not provide a projected failure rate.
2. Can be misinterpreted to mean average life span.

• Software availability → The probability that a program is operating according to its requirements
at a given point in time

Availability = [Mean Time to Failure / Mean Time Between Failure] * 100
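Putting the two formulas together with hypothetical figures (68 hours MTTF and 2 hours MTTR, both measured in CPU time):

```python
# Hypothetical CPU-time measurements (hours) for one program.
mttf = 68.0   # Mean Time to Failure
mttr = 2.0    # Mean Time to Repair

mtbf = mttf + mttr                  # Mean Time Between Failure
availability = (mttf / mtbf) * 100  # percentage of time operating per requirements

print(mtbf, availability)
```

Note that availability folds in repair time, so two systems with the same MTTF can have very different availability if one takes much longer to repair.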

AI modelling reliability
• Why don’t we use AI to model reliability?
• We can use AI to determine where we could make errors in code and could help us code
better.
• Finding the optimal solutions, use evolutionary means.
• Use past data to measure , assess and evaluate where your problems may occur.

Software safety
• Software must be analysed in the context of the entire system.
• Examines ways that a failure can result in a mishap, failures are considered in context of the
entire computer system and not alone.

ISO 9000 standards


A quality assurance system may be defined as the organizational structure, responsibilities,
procedures, processes, and resources for implementing quality management

Establish the elements of a quality management system.
• Develop, implement, and improve the system.
• Define a policy that emphasizes the importance of the system.
Document the quality system.
• Describe the process.
• Produce an operational manual.
• Develop methods for controlling (updating) documents.
• Establish methods for record keeping.
Support quality control and assurance.
• Promote the importance of quality among all stakeholders.
• Focus on customer satisfaction.
• Define a quality plan that addresses objectives, responsibilities, and authority.
• Define communication mechanisms among stakeholders.
Establish review mechanisms for the quality management system.
• Identify review methods and feedback mechanisms.
• Define follow-up procedures.
Identify quality resources
• including personnel, training, and infrastructure elements.
Establish control mechanisms.
• For planning.
• For customer requirements.
• For technical activities (e.g., analysis, design, testing).
• For project monitoring and management.
Define methods for remediation.
• Assess quality data and metrics.
• Define approach for continuous process and quality improvement.

SQA Plan
• Provides a roadmap for instituting SQA.
• A standard for SQA plans is published by IEEE; its structure includes:
• the purpose and scope of the plan,
• a description of all software engineering work products (e.g., models, documents, source
code) that fall within the purview of SQA,
• all applicable standards and practices that are applied during the software process,
• SQA actions and tasks (including reviews and audits) and their placement throughout
the software process,
• the tools and methods that support SQA actions and tasks,
• software configuration management procedures,
• methods for assembling, safeguarding, and maintaining all SQA-related records, and
• organizational roles and responsibilities relative to product quality.
Chapter 19 Software Testing – Component Level (Unit testing)
• Technical reviews – speak as a team and make review plans.
• Begin at a component level.
• As you combine two components, do integration testing.
• There are various ways to test.
• Developers test as well.
• Ensure progress is measurable.
• Testing is one element that forms part of validation and verification.
o Verification → Are we building the product right?
▪ The set of tasks that ensure that software correctly implements a specific
function.
o Validation → are we building the right product? (Requirements wise) *
▪ A different set of tasks that ensure the software that has been built is
traceable to customer requirements.
• During development, quality should be taken into consideration.
• When you test as a developer, you do not want to show something that does not work
well.
• When you test something, you try to break it.
• Independent Test Groups (ITG) are paid to break the system.
• Software, once validated, should be integrated with other elements.
• You are never done with testing; there is only software that is good enough to use.
• Test the main components.
• You are done with testing once you run out of money or time; another view is that
testing never ends, it is only passed on to the end user.

Planning and recordkeeping


• Test in the middle: once enough work has been completed, test.
• Start with unit tests; test a certain part.
o Combine this unit with another unit.
• Then do integration testing.
• Then do system testing.
• Driver → Main program that accepts test data.
• Stub → Serves to replace modules that are invoked by the component to be tested. This is a
dummy subprogram.
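A minimal sketch of a driver and a stub. The `compute_discount` component, its pricing module, and all the values are hypothetical, invented to show how the two pieces fit around a component under test:

```python
# Component under test: calls a pricing module that is not implemented yet.
def compute_discount(customer_id, lookup_price):
    price = lookup_price(customer_id)   # normally a call into the real module
    return price * 0.9 if price > 100 else price

# Stub: dummy subprogram replacing the invoked pricing module.
def price_stub(customer_id):
    return 200.0                        # canned answer, no real lookup

# Driver: main program that feeds test data to the component and checks results.
def driver():
    result = compute_discount("C-001", price_stub)
    assert result == 180.0, f"unexpected discount: {result}"
    return result

print(driver())
```

When the real pricing module is finished, it replaces `price_stub` and the same driver is rerun.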

Testing Strategy and Test Case Design


• What is the purpose of the system?
• Each user should be tested in their own profile.
• Build a robust system design.
• Use technical reviews to test properly.
• You can do reviews at every level and see what is important or not.
• You are going to test the system continuously.
• Keep track of what is being tested.
o You can use a Google Doc for this.
o Classify the tests.
o Tests can pass and fail.
o Record why a test failed.
• Look at cost effective testing :
o Testers should work smarter.
• Test for things that a user should not be able to do
• Antibugging → Anticipating error conditions and establishing error handling paths to reroute
or clearly terminate processing when an error does occur.
• Anti-requirements → Write test cases to test that a component does not do things it is not
supposed to do, also known as negative test cases and should be included to ensure the
component behaves correctly.
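Anti-requirements can be sketched as negative test cases. The `withdraw` component and its validation rules below are hypothetical, showing both antibugging (error-handling paths) and tests that the component does NOT do what it must not:

```python
def withdraw(balance, amount):
    """Antibugging: validate inputs and fail cleanly via an error path
    instead of silently corrupting state."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

# Negative test cases (anti-requirements): the component must NOT allow these.
def test_rejects_negative_amount():
    try:
        withdraw(100, -5)
    except ValueError:
        return True    # error path taken, as required
    return False       # the forbidden behaviour happened

def test_rejects_overdraw():
    try:
        withdraw(100, 500)
    except ValueError:
        return True
    return False
```

Positive tests (e.g. that `withdraw(100, 30)` returns `70`) still belong in the suite; the negative cases complement them rather than replace them.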

White box testing


• Test all boundary conditions, test for main paths.
• Predicate node – a decision node; two or more paths follow from it
• white box testing methods, you can derive test cases that :
(1) guarantee that all independent paths within a module have been exercised at
least once
(2) exercise all logical decisions on their true and false sides,
(3) execute all loops at their boundaries and within their operational bounds, and
(4) exercise internal data structures to ensure their validity.
• Cyclomatic Complexity → Tells you how many test cases should be exercised in order to test
each independent logical path.
o Given a flowchart (Has diamonds and squares)
▪ Count the diamonds/ decision nodes → Predicate nodes.
▪ Cyclomatic complexity = Predicate Nodes +1
▪ V(G) = Predicate nodes + 1
o Given a flowgraph (has nodes, circles and edges, lines)
▪ V(G)= E – N + 2
▪ Cyclomatic complexity = Edges – Nodes +2
• Modules with high cyclomatic complexity tend to be more error prone than those with
lower cyclomatic complexity.
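Both ways of computing V(G) can be written down directly. The 8-edge, 7-node flow graph with 2 predicate nodes used here is an assumed example (it matches the figures that appear in the semester-test discussion in these notes):

```python
def cyclomatic_complexity(edges, nodes):
    """V(G) = E - N + 2, from a flow graph's edge and node counts."""
    return edges - nodes + 2

def cyclomatic_from_predicates(predicate_nodes):
    """V(G) = P + 1, from the count of decision (predicate) nodes."""
    return predicate_nodes + 1

# Assumed flow graph: 8 edges, 7 nodes, 2 decision diamonds.
print(cyclomatic_complexity(8, 7))        # number of independent paths
print(cyclomatic_from_predicates(2))      # must agree with the above
```

Both formulas must give the same answer for the same graph; a mismatch usually means an edge or predicate node was miscounted.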

Black box testing


• Applied in the later stages of testing.
• Also known as behavioural or functional testing.
• Black box testing attempts to find errors in the following categories :
(1) incorrect or missing functions,
(2) interface errors,
(3) errors in data structures or external database access,
(4) behaviour or performance errors, and
(5) initialization and termination errors.
• Focuses on functional requirements.
• Testing designed to answer questions
o How is functional validity tested?
o How are system behaviour and performance tested?
o What classes of input will make good test cases?
o Is the system particularly sensitive to certain input values?
o How are the boundaries of a data class isolated?
o What data rates and data volume can the system tolerate?
o What effect will specific combinations of data have on system operation?
• Remember to remove debug messages.
• Validate your inputs and outputs.
• In object-oriented software, a unit test focuses on an entire class.
• You need to use both white box and black box testing, because they complement each
other.
• Interface Testing → Used to check that the program accepts information and returns
information in the proper datatypes and proper data formats.
• Equivalence partitioning → divides the input domain into classes of data from which test
cases can be derived.
o These are selected so that the largest number of equivalence classes are exercised at
once.
• Boundary Value Analysis → The selection of test cases that exercise bounding values.
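A sketch of both techniques for a hypothetical age field that must accept values 18 to 65 inclusive (the field, its range, and the chosen representatives are all assumptions for illustration):

```python
# Hypothetical input domain: an "age" field that must accept 18..65 inclusive.
def is_valid_age(age):
    return 18 <= age <= 65

# Equivalence partitioning: one representative value per class of data.
partitions = {
    "below range (invalid)": 10,
    "in range (valid)":      40,
    "above range (invalid)": 70,
}

# Boundary value analysis: exercise values at and just around each bound.
boundary_cases = [17, 18, 19, 64, 65, 66]

for label, value in partitions.items():
    print(label, is_valid_age(value))
print([is_valid_age(v) for v in boundary_cases])
```

Three partition tests plus six boundary tests replace exhaustively testing every integer, which is the point of both techniques.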

Object Oriented Testing


• The appropriate testing mechanisms when developing an object-oriented solution.
• Class Testing
o Drive your testing through the operations of the class.
o Look at the main path, then move a little off it to get alternative paths
(i.e., look at alternative paths that differ from the shortest path).
o Generate valid test cases to see if these work (no anti-testing, i.e., testing
whether the program responds in a wrong way).
• Behavioural testing
o Test cases should be defined to achieve coverage of every state.
o Test cases should ensure all behaviours for the class have been adequately
exercised.
o The state model can be traversed using breadth first.
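The breadth-first traversal of a state model can be sketched as follows. The states and transitions below are a hypothetical class state model, used only to show how a coverage order for test cases can be derived:

```python
from collections import deque

# Hypothetical state model for a class: state -> states reachable by one event.
states = {
    "empty":     ["open"],
    "open":      ["computing", "empty"],
    "computing": ["open", "closed"],
    "closed":    [],
}

def states_in_bfs_order(model, start):
    """Traverse the state model breadth-first, yielding an order in which
    test cases can be derived so that every state is covered."""
    visited, order = {start}, []
    queue = deque([start])
    while queue:
        state = queue.popleft()
        order.append(state)
        for nxt in model[state]:
            if nxt not in visited:
                visited.add(nxt)
                queue.append(nxt)
    return order

print(states_in_bfs_order(states, "empty"))
```

Any state missing from the returned list is unreachable from the start state, which is itself a finding worth reviewing.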

Semester Test Example


• 3 ways to answer a flow chart question
o Look at regions in the flow graph.
▪ Draw the graph

▪ 3 regions
• The whole graph, between 3 and 2
• Between (4,5,6) , 7 , (10,11) , (8,9)
o Cyclomatic complexity: V(G) = E - N + 2
▪ Look at the edges and nodes from the drawn graph and do the calculation.
▪ V(G) = E - N + 2 = 8 - 7 + 2 = 3
o Look at how many predicate nodes there are (any node where the flow branches
off) (BEST OPTION)
▪ 2 diamonds + 1 = 3
• Final answer is 3
▪ V(G) = Predicate nodes + 1

• List the independent paths:


o 1,2, 3,2,4,5,6,7,10,11
o 1,2,4,5,6,8,9,10,11
o 1,2,4,5,6,7,10,11
• ALWAYS TERMINATE AT THE END , NUMBER 11
• Predicate nodes are if statements or loops

Chapter 20 Software Testing – Integration Level (Joining units)
• A good test is one that has a high probability of uncovering an error → testing in general is
used to find errors.
• Test cases should not be redundant.
• Execute tests that will identify as many errors as possible, since we have finite resources in
terms of time and money.
• A test should not be too simple or complex.

Black Box testing (External/ Functional testing)


• Looks at a specific function and how it should work.
• Does not worry about what is inside the function; just checks whether the correct input
produces the correct output.
• Should also cater for bad inputs (i.e., should not crash if given a bad input).

White box testing (Internal / Structural testing )


• Exhaustive testing causes logistical problems, hence only a limited number of important
logical paths should be considered.
• Look at all the individual separate paths and if each path executes correctly.
• Looks at everything in the function and checks if it executes successfully.
• Important data structures should also be tested for validity after component integration.

Integration
• Combination of completed work being joined into a final product.
• Uses both black and white box testing.
• Looks at the communication between the modules.
• The big bang approach → Take everything, combine it to the final solution and then test it.
o BAD IDEA !!!
• Do integration testing incrementally.

Top Down
• Depends on stubs. (A stub is a module that acts as a temporary replacement for a called
module and gives the same output as the actual product; stubs can also be called mocks.)
• Begin with the main control module and move down (i.e., integrate by moving downward
through the control hierarchy).
• Stubs are replaced by the actual modules one at a time.
• Tests are conducted each time a module is integrated.
• On completion of tests, the next stub is replaced.
• Regression testing is then conducted.

Depth first search


• Go down one path to the bottom.
• Start at level 1 (the root) and go down to the last level: level 1 → level 2(a) → level 3(a)
→ … → level n(a).
• Moves down vertically, starting from the left and working right.
• If this is selected, complete functionality can be demonstrated early, which can boost
stakeholder confidence.
Breadth first search
• Start at the top, then do the whole next level (Everything on level 1) , then everything on
level 2.
• Level 1 → level 2(a) → level 2(b) → level 2(c) → …. → level 2(z)
• Moves down horizontally, hence a level will be completed at a time.
• Move from left to right.

Integration process steps for top-down integration


1. The main control module is used as a test driver, and stubs are substituted for all
components directly subordinate to the main control module.
2. Depending on the integration approach selected (i.e., depth or breadth first), subordinate
stubs are replaced one at a time with actual components.
3. Tests are conducted as each component is integrated.
4. On completion of each set of tests, another stub is replaced with the real component.
5. Regression testing (discussed later in this section) may be conducted to ensure that new
errors have not been introduced.
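The stub-replacement steps above can be sketched as follows. The `total_due` control module, its subordinate tax component, and the 25% rate are hypothetical:

```python
# Main control module calls a subordinate component through a reference
# that starts as a stub and is later swapped for the real component.
def tax_stub(amount):
    return 0.0                      # canned answer until the real module exists

def tax_real(amount):
    return amount * 0.25            # the actual component, written later

def total_due(amount, tax_fn):
    """Main control module: integrates whatever tax component it is given."""
    return amount + tax_fn(amount)

# Steps 1-3: test with the stub substituted for the subordinate component...
assert total_due(100.0, tax_stub) == 100.0
# Step 4: ...then replace the stub with the real component and test again.
assert total_due(100.0, tax_real) == 125.0
```

Rerunning the first assertion after the swap is the regression-testing idea of step 5 in miniature: earlier behaviour is rechecked after each replacement.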

Bottom up
• Begins test at the lowest level of the structure (atomic level).
• Eliminates the need for complex stubs
• As integration moves upwards the need for separate test drivers lessens.

Integration strategy for bottom up


1. Low-level components are combined into clusters (sometimes called builds) that perform a
specific software subfunction.
2. A driver (a control program for testing) is written to coordinate test-case input and output.
a. A driver is used to feed the cluster with information. (A driver is a module
that acts as a temporary replacement for a calling module.)
3. The cluster is then tested.
4. Drivers are removed and clusters are combined, moving upward in the program structure.

Continuous integration
• Practice of merging components into the evolving software increment once or more each day.
• Test every day to check when the errors are happening.
• Smoke Testing → Is an integration testing approach that can be used when product software
is developed by an agile team using short incremental build times.
o Can be characterised as a rolling or continuous integration strategy.
• Smoke testing approach :
1. Software components that have been translated into code are integrated into a build. A
build includes all data files, libraries, reusable modules, and engineered components that are
required to implement one or more product functions.
2. A series of tests is designed to expose errors that will keep the build from properly
performing its function. The intent should be to uncover “show-stopper” errors that have
the highest likelihood of throwing the software project behind schedule.
3. The build is integrated with other builds, and the entire product (in its current form) is
smoke tested daily. The integration approach may be top down or bottom up.

• Allows for assessment of the project on a frequent and realistic basis.


• Focuses on what will break your system, and makes you see what will make you fall behind
schedule.
• Smoke Testing benefits
o Minimises integration risk.
o Quality is improved.
o Error diagnosis and correction is simplified.
▪ Allows us to avoid adverse effects when components are integrated.
o Progress is easier to assess.
• Tests are documented in a test specification.
• Helps us configure software.

Regression testing And AI


• Re-execution of some subset of tests that have already been conducted to ensure that
changes have not caused side effects.
• Executed every time a major change is done to the software.
• There are AI tools that do regression testing for you when you add new components.
• Regression test Suite → subset of tests to be executed.
• Regression test suite classes of test cases :
1. A representative sample of tests that will exercise all software functions
2. Additional tests that focus on software functions that are likely to be affected by the
change
3. Tests that focus on the software components that have been changed
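The three classes of test cases can be sketched as a selection over a hypothetical tagged test registry. Here the "likely affected" and "changed component" classes are both approximated by component tags, which is a simplification:

```python
# Hypothetical test registry: each test is tagged with the components it
# exercises and whether it belongs to the representative cross-function sample.
tests = [
    {"name": "test_login",       "components": {"auth"},    "representative": True},
    {"name": "test_checkout",    "components": {"cart"},    "representative": True},
    {"name": "test_cart_totals", "components": {"cart"},    "representative": False},
    {"name": "test_report_pdf",  "components": {"reports"}, "representative": False},
]

def regression_suite(tests, changed_components):
    """Select the regression subset: the representative sample plus every
    test touching a changed (or likely affected) component."""
    return [t["name"] for t in tests
            if t["representative"] or t["components"] & changed_components]

print(regression_suite(tests, {"cart"}))
```

The point is that the suite stays small: unrelated tests (here `test_report_pdf`) are skipped rather than re-executing everything after every change.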

Integration testing in Object orientation


• Thread based testing
o Integrates a set of classes required to respond to one input or event for the system.
o Each thread is tested individually.
o Combine the class and test it together on the thread.
o Regression testing is applied to ensure no side effects occur.
o (Moves vertically)
• Use-based testing
o Independent classes (those that use very few, if any, server classes) are tested
first.
o Dependent classes, which use the independent classes, are tested next.
o Moves from the classes with the fewest dependencies to those with the most.
o Focuses on classes that do not collaborate heavily with other classes; however,
stubs can be used if there is a need to collaborate with a class that has not yet
been fully implemented.
o (Moves horizontally)

Fault based test case design


• Looks at test cases with the most plausible faults.
• Looks at the requirements.
• Finds high number of errors with low expenditures.
• Ensures the executions are correct (white box testing).
• Focus on it from the client's perspective.
• Random test case design steps

1. For each client class, use the list of class operations to generate a series of random test
sequences. The operations will send messages to other server classes.

2. For each message that is generated, determine the collaborator class and the corresponding
operation in the server object.

3. For each operation in the server object (that has been invoked by messages sent from the client
object), determine the messages that it transmits.

4. For each of the messages, determine the next level of operations that are invoked and
incorporate these into the test sequence.

Scenario based test case


• Uncovers errors in what the user does, not what the product does.
o Captures the tasks the user must perform, then applies them and their variants as tests.
• Uncovers interaction errors.
o Needs to be more complex and more realistic than fault-based tests.
• Tends to exercise multiple subsystems in a single test.
• This will test collaboration between classes.

Validation testing
• Begins after integration testing has come to an end.
• Focuses on the requirements level, things immediately apparent to the end user.
• All errors found so far have been corrected, so validation looks at the customer's
acceptance criteria.
• A configuration audit report ensures the system has been built properly.
• Use the use cases to test.

Testing Patterns
• These describe common testing problems and solutions that can assist you in dealing with
the problem at hand.
• Three patterns
o Pair testing
o Separate test interface
o Scenario Testing

Integration testing semester test 1 2020 Q5 (Past paper)


• A
o White box and black box testing will work at every level of testing (unit and
integration).
o White box testing guarantees that all the independent paths within a module have
been exercised, so the base requirements of the system can be tested with the
intent of working rather than feeding it incorrect data and testing its ability to
handle that data. It also tests all logical decisions on their true and false sides.
o Black box testing reduces the number of test cases that must be performed, saving
time and money. It tests for incorrect data, for incorrect or missing functions and
interface errors, and it identifies behaviour, performance and termination errors.
o Both white and black box testing have merit: black box focuses on the external
view and white box on the internal view of the system, and both should be
implemented to ensure in-depth testing of the system.
o Therefore, you will need to use both.
• B
o “Heavily into agile”
o Smoke testing can be used for agile development.
Chapter 21 Software Testing – Mobility
• Focused on systems that are deployed in different environments (such as mobiles, watches,
card scanners etc.)
• Testing on different computers, mobiles, systems, etc.
• Test in real-world, uncontrolled conditions.
• Attributes of a good test
o Understand the network landscape and device landscape.
o Conduct testing in uncontrolled real-world test conditions.
o Select the right automation tool.
o Identify the most critical …

Testing strategies
• User experience testing → users are involved in the early stages of development.
• Device compatibility testing → testers verify it works on many different hardware and
software combinations.
• Performance testing → Testers check non-functional requirements unique to mobile apps.
• Connectivity testing → Can access needed networks or web services and can tolerate weak
or interrupted access.
• Security testing → Should not compromise the privacy or security requirements of its users.
• Testing in the wild → tested under realistic conditions.
• Certification testing → testers ensure that the standards established by the app stores that
will distribute the app are met.

Alerts
• Popups can lose the user's attention; use them sparingly.
• These should be presented in a way that the user can see.
• The system should be fault tolerant.

Web testing strategies


• Content testing → Ensure everything is spelt correctly.
• Semantic testing → Ensure the information is correct.

Security Testing
• Designed to probe vulnerabilities.

Performance testing
• If your system can handle a lot of people.
• This can impact the system security.
• Load Testing
o N – Number of concurrent users
o T – number of online transactions per unit of time.
o D – data load processed by server per transaction.
o P – Overall throughput: P = N x T x D
• Worked example: P = [(20 000 / 2) x 3] / 60 = 500 kB/s = 4 Mbps
• NB: 1 Mbps = 125 kB/s
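The worked example can be checked in code. Its figures are reconstructed here as N = 20 000/2 users, T = 3 transactions per user per minute, and D = 1 kB per transaction; the per-minute T and 1 kB payload are assumptions inferred from the arithmetic:

```python
def overall_throughput(n_users, transactions_per_user_per_min, kb_per_transaction):
    """P = N x T x D, converted to kB per second."""
    per_minute = n_users * transactions_per_user_per_min * kb_per_transaction
    return per_minute / 60.0

# Reconstructed figures from the worked example above.
p_kbps = overall_throughput(20_000 / 2, 3, 1)
p_mbps = p_kbps / 125          # since 1 Mbps = 125 kB/s

print(p_kbps, p_mbps)
```

Doubling any one of N, T, or D doubles P, which is why load tests usually vary them one at a time.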
Real time testing
• Weighted device platform matrices. (WDPM)
o Purpose: to ensure quality in a system while using time wisely. It is a tool
for deciding where to put your effort to achieve the greatest result.
o Helps you concentrate your efforts where they matter most.
o Ranking: the highest value is the highest ranking.
o How to calculate it (Q6 past paper 2020 ST2)
▪ List the columns with the operating systems based off the scenario
(populate from the right to the left)
▪ List the targeted devices from row 3 down in the first column
▪ Write ranking in the second row, second column.
▪ Rankings should be from 0 to 10
• Enter rankings for devices and operating systems

                          Android   iOS
             Ranking         7       5
Android phone      7        49      N/A
Apple phone        5        N/A     25
Windows phone      1        N/A     N/A
Desktop machine    3        N/A     N/A

Interpret the table :

Looking at this table, I would focus testing effort first on the Android phone and second on the
Apple phone, and would not focus much on the desktop machine or the Windows phone.
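The table's cell values (device ranking multiplied by operating-system ranking) can be reproduced as follows; the rankings are taken from the table above, and `None` stands in for N/A:

```python
# WDPM cell value = device ranking x operating-system ranking.
os_rank = {"Android": 7, "iOS": 5}
device_rank = {"Android phone": 7, "Apple phone": 5,
               "Windows phone": 1, "Desktop machine": 3}
# Which OS each device actually runs in this scenario; missing == N/A.
supported = {"Android phone": "Android", "Apple phone": "iOS"}

def wdpm_scores(devices, oses, support):
    """Compute each device's WDPM score; None marks an N/A combination."""
    scores = {}
    for device, d in devices.items():
        os = support.get(device)
        scores[device] = d * oses[os] if os else None
    return scores

print(wdpm_scores(device_rank, os_rank, supported))
```

Sorting the non-`None` scores descending gives the testing priority order: Android phone (49), then Apple phone (25).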

Testing AI Systems
• Tests that can be run on an AI program to check if it does what it should do
o Static testing → software verification technique focusing on the source code
without executing it; checks whether the way the information is developed is
correct according to experts in that field. (An AI miner should perform tasks
as proficiently as, or better than, a current human miner.)
o Dynamic testing → validation technique focusing on execution; shows the AI
conforms to the behaviours specified by human experts.
o Model based testing →

Chapter 22 – Software Configuration Management
• Change is inevitable, we need to manage change effectively.
• When change occurs, how do we ensure current development is affected as little as
possible?
• How do we minimise the impact of change?
• Configuration management → making changes and managing the change.
• Software Configuration Item (SCI) → A named element of information that can vary in size
(any item that the project can be divided into; a unit).
• SCM Activities are developed to:
(1) identify change,
(2) control change,
(3) ensure that change is being properly implemented, and
(4) report changes to others who may have an interest.
• Software support → Software engineering activities that occur once the software has been
delivered to the customer and put into operation.
• Software Configuration management → Software configuration management is a set of
tracking and control activities that are initiated when a software engineering project begins
and terminates only when the software is taken out of operation.
• Information from SCM may be divided into three categories
1. Computer program
2. Work products describing the computer program
3. Data or content
• Sources of change
1. New business or market conditions
2. New stakeholder needs demand changes.
3. Reorganisation or business growth or downsizing
4. Budgetary or scheduling constraints.
• Elements of CM system
o Component elements
o Process elements
o Construction elements
o Human elements
• Baselines
o Stabilised form of the project, this is the baseline.
o IEEE → Institute of Electrical and Electronic Engineers (it’s a Body)
o The best working version of the project.
o Formal definition
▪ A specification or product that has been formally reviewed and agreed
upon, that thereafter serves as the basis for further development, and that
can be changed only through formal change control procedures.
o Before a product is made part of the baseline, changes can occur quickly and
informally, but once it is a part of the baseline, changes need to be made formally
and each change needs to be evaluated and verified.
• SCI’s can have an influence on one another; therefore, you need to make sure the changes
you make in one SCI do not impact the other components or if it does , you need to ensure
the components now interact well with this new component.
• When changes are introduced, involve team members.
• Best SCM Practices
o Keep code variant numbers small
o Test early and often
o Integrate early and often
o Use tools to automate testing, building and integration.
• Continuous integration advantages
o Accelerated feedback
o Increased quality
o Reduced risk
o Improved reporting
o CI improves quality by reducing the likelihood of defects escaping the development
team.
• GitHub is an SCM repository.
o Read the features and think of those features in GitHub and if you are using it.
• A software engineer can ensure change has been properly implemented by
o Technical reviews
o Software configuration audit
• Configuration status Reporting (CSR)
• Engineering Change Order (ECO)
• When you roll out to a small group of users, it is beta testing (people outside the company test it); alpha testing is internal testing.
• Once the change is tested on a beta level, then the change can be sent out at a final level.
• Be realistic and motivate why you chose that answer.
• Page 449, change control process, understand figure 22.6 and 22.7

SCM Features
Must provide support for the following features

• Versioning
• Dependency tracking and change management
• Requirements tracing
• Configuration management
• Audit trails

Change categories
Class 1 → Small change that does not impact other components; tested informally.

Class 2 → Small change that does impact other components; tested informally.

Class 3 → Change with a broad impact across the application; reviewed by all members of the team.

Class 4 → Design change that will be noticeable to a user; reviewed by all stakeholders.


2020 → Question 1 Semester test 2 (software configuration management)
o Effectiveness → Does stuff from start to finish.
o Efficiency → The system should respond quickly, it gives you an alert once the
request has been put through.
o Satisfaction → Ease of use for users.
o Freedom from risk → Do not ask clients to provide information not important to
system operation. (Not always a security problem, can involve reputational risk and
other forms of risk)
o Context coverage → Should cover all the different types of users, different features
should be available to different users, allow a user to get emails based off the
response, or not get emails as it goes down, allows for different users to choose
what they would like.
• Classes are based on impact: the lower the impact, the lower the class number.
• Can you handle the change that will happen in the software?
• Checking in and out and baselines is important.
Chapter 23 – Software Metrics and Analytics
• Needs to be quantifiable.
• Qualitative data is difficult to interpret; quantitative data is easier to analyse.
• Measurements give you an estimate of where you are at the present time.
• Qualitative → feedback using words
• Quantitative →Numbers
• Measure → just a Number , is a quantitative indication of the extent, amount, dimension,
capacity, or size of some attribute of a product or process
o 300 new covid cases
• Metric → a ratio, A quantitative measure of the degree to which a system, component or
process possesses a given attribute.
o Out of 350 tests, 300 came back positive
• Indicator → Metric or a combination of metrics that provide insight into the software
process, a software project, or the product itself.
o In the past week day one had a 70% positive rate in covid tests, day 2 had a 60%
positive rate in the covid tests, day 3 had an 80% positive rate in covid tests.
• Effective metrics
o Simple and computable
▪ Metric should be understandable.
o Empirically and intuitively persuasive
▪ Everyone should be able to make sense of it.
▪ Know what to do more or less of.
o Consistent and objective
▪ Ensure the chosen metric is collected consistently and objectively.
o Consistent in its use of units and dimensions
▪ Use kg throughout if you use it.
o Programming language independent
▪ Metrics should not depend on the programming language used.
o Effective mechanism for quality feedback
▪ Provide the software engineer with quality feedback.
• KPI (Key performance Indicator) → Used to track performance and trigger remedial action
when their values fall in a predetermined range.
• Software analytics → Allows software engineers to make decisions based on the statistics gathered.
• Summarise all the metrics on a page

Requirements Model Metrics


• Number of requirements
o Nr = Nf + Nnf
o Number of requirements = Number of functional requirements + Number of non-
Functional requirements.
• Requirement specificity
o Q1 = Nui/Nr
o Q1= Number of requirements for which all reviewers agreed on / Number of all
requirements (Total requirements).
o For Q1, closer it is to 1 the better it is.
▪ 0.5, is not good
▪ Requirements provided to team was poorly worded since not all team
members could extract the requirements properly.
• Customisation index
o C= N dp /(N dp+ N sp)
o C = Number of Dynamic screens / (Number of dynamic screens + Number of static
screens)
o More towards 0, less customisable
o More towards 1 , more customisable
o Larger value of C, the more customisable it is.
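The requirements-model formulas above can be combined in a short sketch; the figures passed in are made up purely to illustrate the calculation:

```python
def requirements_metrics(n_functional, n_nonfunctional, n_unambiguous,
                         n_dynamic_screens, n_static_screens):
    """Compute Nr, Q1 and the customisation index C from the notes."""
    nr = n_functional + n_nonfunctional   # Nr = Nf + Nnf
    q1 = n_unambiguous / nr               # Q1 = Nui / Nr
    c = n_dynamic_screens / (n_dynamic_screens + n_static_screens)  # C = Ndp / (Ndp + Nsp)
    return nr, q1, c

nr, q1, c = requirements_metrics(30, 10, 36, 6, 14)
print(nr)   # 40 requirements in total
print(q1)   # 0.9 -> close to 1, so the requirements were clearly worded
print(c)    # 0.3 -> mostly static screens, not very customisable
```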
• Structural complexity
o S(i) = F out(i)
o Look at the module
o Fan out → How many modules are directly subordinate to the current module
(Modules directly invoked by module(i))
• Data Complexity
o D(i)=v(i)/[f out(i) + 1]
o V(i), input and output variables
• System Complexity
o C(i) =S(i) +D(i)
o The sum of the structural complexity and the data complexity.
o Greater complexity will lead to more requirements from integration, once you
integrate, there is a lot more to take into consideration.
o For all three above , as the number increases, overall complexity increases, hence
integration and testing effort will also increase
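These three measures can be sketched as below. Note that in Pressman's presentation of the Card and Glass metrics, structural complexity is the square of fan-out, so the sketch uses f_out(i)²; the module figures are invented:

```python
def structural_complexity(fan_out):
    # Card & Glass (as presented in Pressman): S(i) = f_out(i) ** 2
    return fan_out ** 2

def data_complexity(num_io_vars, fan_out):
    # D(i) = v(i) / (f_out(i) + 1)
    return num_io_vars / (fan_out + 1)

def system_complexity(num_io_vars, fan_out):
    # C(i) = S(i) + D(i)
    return structural_complexity(fan_out) + data_complexity(num_io_vars, fan_out)

# Hypothetical module i: fan-out of 3 and 8 input/output variables.
print(system_complexity(8, 3))  # 9 + 8/4 = 11.0
```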
• Morphology Metrics
o Look at the shape of the different program architectures.
o Look at number of nodes
o Look at arcs (Number of lines)
o Calculate the depth (Longest path from the root
o Width (Maximum nodes at any level of the architecture)
o Size → Nodes + Arcs
o Arc-to-Node ratio
• R = a/n
• Ratio = Arcs / Nodes
• Shows how connected the system’s architecture is and may give an indication of the coupling of the architecture.
• A ratio greater than one indicates a lot of connectivity between the nodes.
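The morphology metrics can be checked against a small, made-up architecture, where each node maps to the nodes directly subordinate to it:

```python
# Hypothetical architecture: each node maps to its directly subordinate nodes.
tree = {
    "a": ["b", "c", "d"],
    "b": ["e", "f"],
    "c": [],
    "d": ["g"],
    "e": [], "f": [], "g": [],
}

nodes = len(tree)                             # 7 nodes
arcs = sum(len(k) for k in tree.values())     # 6 arcs (connecting lines)

def depth(node):
    # longest path (counted in arcs) from this node down to a leaf
    kids = tree[node]
    return 0 if not kids else 1 + max(depth(k) for k in kids)

def width(root):
    # maximum number of nodes found at any single level
    level, widest = [root], 0
    while level:
        widest = max(widest, len(level))
        level = [child for n in level for child in tree[n]]
    return widest

size = nodes + arcs     # size = nodes + arcs
ratio = arcs / nodes    # arc-to-node ratio r = a / n
print(nodes, arcs, depth("a"), width("a"), size)  # 7 6 2 3 13
```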
• CK Metrics Suite ********** TODO for EXAM
• Number of children (NOC) → the number of subclasses immediately subordinate to the parent class (direct children only).

Maintenance Metrics
• Software maturity index can be calculated as follows :
• SMI = [Mt – (Fa + Fc + Fd)] / Mt
o Mt = modules in the current release; Fa = modules added, Fc = modules changed, Fd = modules deleted (relative to the previous release).
o A value approaching 1 means the system is stabilising.
• Measuring Software
Metrics for Software Quality
• Defect Removal Efficiency
o DRE = E / E+ D
o E = errors before delivery
o D = Defects after delivery
o Ideal value is 1, yet it would possibly approach one.
o Bad if close to zero.
o Indicator of the filtering ability of quality control and assurance activities.
• DRE can also be done at a specific instance in the process, before it is passed on to the next
process
o DRE(i) = E(i) / E(i)+ E(i+1)
o Where the E(i+1) means the next process step.
• If DRE is low, consider bettering technical reviews.
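Both forms of DRE can be sketched with invented error counts:

```python
def dre(errors_before, defects_after):
    # DRE = E / (E + D): filtering ability of quality control activities
    return errors_before / (errors_before + defects_after)

def dre_step(errors_in_step, errors_next_step):
    # Per-step variant: DRE(i) = E(i) / (E(i) + E(i+1))
    return errors_in_step / (errors_in_step + errors_next_step)

print(dre(90, 10))       # 0.9 -> 90% of problems were removed before delivery
print(dre_step(40, 10))  # 0.8 for this process step
```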
Chapter 24 – Project Management Concepts
• What do we do?
o Plan
o Organise
o Monitor
o Control
• 4 P’s of the project
o People – The stakeholders of the project, the project manager, software needs people to
factor it in, the team leader- Something you can control and manage
▪ People must be organised to perform software work effectively.
o Product – Scope and requirements must be understood.
o Process – A process appropriate should be selected.
o Project – Understand what can go wrong , planned by estimating effort and calendar
time to accomplish work tasks.

People
• Maximise each person’s skills and abilities. The team leader organises this.
• The people (Stakeholders) :
1. Senior managers (product owners) who define the business issues that often have a
significant influence on the project.
2. Project (technical) managers (Scrum masters or team leads) who must plan, motivate,
organize, and coordinate the practitioners who do software work.
3. Practitioners who deliver the technical skills that are necessary to engineer a product or
application.
4. Customers who specify the requirements for the software to be engineered and other
stakeholders who have a peripheral interest in the outcome.
5. End users who interact with the software once it is released for production use.

• Team leader practices for exemplary leadership:


1. Model the way – Practice what you preach
2. Inspire a shared vision – Involve stakeholders early in the goal setting process and
motivate the team.
3. Challenge the process – Encourage team members to experiment and take risks by
helping them generate frequent small successes while learning from failures.
4. Enable others to act – Share decision making and goal setting.
5. Encourage the heart – Build team spirit by celebrating shared goals and victories,
both inside and outside the team. Let everyone know their input and contributions
are valued.

• Project factors to be considered when structuring an SE team :


1. difficulty of the problem to be solved,
2. “size” of the resultant program(s) in lines of code or function points,
3. time that the team will stay together (team lifetime),
4. degree to which the problem can be modularized,
5. quality and reliability of the system to be built,
6. rigidity of the delivery date, and
7. degree of sociability (communication) required for the project.
• Toxic Team Environment factors:
1. a frenzied work atmosphere,
▪ Solution:
✓ The project manager should ensure the whole team has access to all information required.
✓ Major goals should not be redefined unless necessary.
2. high frustration that causes friction among team members,
▪ Solution
✓ Give the team responsibility for decision making.
3. a “fragmented or poorly coordinated” software process,
▪ Solution
✓ Understand the product to be built & people doing the work
✓ Allow team to select process model
4. an unclear definition of roles on the software team, and
▪ Solution
✓ Establish mechanisms for accountability
✓ Define corrective processes if a team member fails to perform.
5. “Continuous and repeated exposure to failure.”
▪ Solution
✓ Establish team-based techniques for feedback and problem solving

Product
• Quantitative estimates and an organised plan are required, but solid information is often unavailable early on.
• Software scope
o Context
▪ How does it fit into the larger system?
▪ Constraints imposed?
o Information objectives
▪ Inputs and outputs?
o Function and performance
▪ What happens to transform input data into output data.
▪ Any performance characteristics?
• Decomposition areas
(1) the functionality and content (information) that must be delivered and
(2) the process that will be used to deliver it.
• This can be accomplished using a list of functions or with use cases or for agile work, user
stories.

Process
• Decide a process model based on :
1. the customers who have requested the product and the people who will do the
work,
2. the characteristics of the product itself, and
3. the project environment in which the software team works.
Project
• Characteristics in successful software projects:
1. Clear and well-understood requirements accepted by all stakeholders
2. Active and continuous participation of users throughout the development process
3. A project manager with required leadership skills who can share project vision with
the team
4. A project plan and schedule developed with stakeholder participation to achieve
user goals
5. Skilled and engaged team members
6. Development team members with compatible personalities who enjoy working in a
collaborative environment
7. Realistic schedule and budget estimates which are monitored and maintained
8. Customer needs that are understood and satisfied
9. Team members who experience a high degree of job satisfaction
10. A working product that reflects desired scope and quality

W5HH Principle
Questions to define key project characteristics and formulate the project plan :

1. Why is the system being developed? All stakeholders should assess the validity of business
reasons for the software work. Does the business purpose justify the expenditure of people,
time, and money?
2. What will be done? The task set required for the project is defined.
3. When will it be done? The team establishes a project schedule by identifying when project
tasks are to be conducted and when milestones are to be reached.
4. Who is responsible for a function? The role and responsibility of each member of the
software team is defined.
5. Where are they located organizationally? Not all roles and responsibilities reside within
software practitioners. The customer, users, and other stakeholders also have
responsibilities.
6. How will the job be done technically and managerially? Once product scope is established,
a management and technical strategy for the project must be defined.
7. How much of each resource is needed? The answer to this question is derived by
developing estimates based on answers to earlier questions
Chapter 25 – Viable Software Plan
• For chapters 25 and 27, only learn what is in the slides; do not go over textbook work that is not in the slides.
• If you fail to plan, you are planning to fail.
• When you plan, you rough-sketch first, then figure out how to build it.
• Ensure these are realistic.
• You want to learn from your mistakes.
• Keep record of what you did as a developer.
• Keeping this record helps with making decisions and can help on estimating future projects.
• Over estimation can be bad.
• Estimating software cost and efforts.
• Software sizing
o Direct : Lines of code
▪ Good because
o Indirect : Function Points

Chapter 26 – Risk Management
• Identify the risks.
• Once it is identified, you can decide if action should be taken or not.
• Identify and manage risk, for 20 marks in the exam!
• Technical debt → the costs associated with putting off activities like documentation and
refactoring.
• Technical debt can lead to inadequate functionality, erratic behaviour, poor quality,
insufficient documentation, and unnecessary complexity.
• Characteristics of risk
o Uncertainty
o Loss
• Categories of risk
o Project risk
o Technical risk
o Business risk
• Risk management principles
o Maintain a global perspective
o Take a forward-looking view
o Encourage open communication
o Emphasise a continuous process
o Develop a shared product vision
o Encourage teamwork
• To identify a risk (look at these if you are not sure what the risk is)
o Product size
o Business impact
o Customer characteristics
o Process definition
o Development environment
o Technology to be built
o Staff size and experience
• Risk components
o Performance risk
o Cost risk
o support risk
o Schedule risk
• Risk impact categories
o Negligible → Inconvenience and cost is low
o Marginal → affect secondary mission objectives with medium costs
o Critical → Question mission success , high cost
o Catastrophic → Mission failure , cost will be unacceptable
• Look at both of these to estimate risk
o Likelihood or probability that the risk is real
o Consequences of the problems associated with the risk
o Estimate impact
o Assess overall accuracy to avoid misunderstandings
• Risk projection steps
o Establish a scale
o Delineate the consequences of the risk
o Estimate the impact of the risk on the project and the product
o Note the overall accuracy of the risk projection
• RE = P * C
o Risk Exposure = Probability of the risk * Cost (use the risk impact total if a total cost is not provided)
o This gives the expected cost of the risk occurring, not just its probability.
• Risk impact: in the worst case, how much would it cost (the probability is assumed to be 100% and the cost is calculated)
o In the example, Risk impact = Components x LOC x cost per LOC
o By refining the causes behind the risk impact you can reduce it
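A minimal sketch of the exposure calculation, using an invented 30% probability and R412 000 impact:

```python
def risk_exposure(probability, cost):
    # RE = P * C: expected cost of the risk, useful for sizing contingency budgets
    return probability * cost

print(round(risk_exposure(0.30, 412_000)))  # 123600
```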
• Risk refinement
• Mitigation – Avoid the risk
• Monitoring – factors we can track that will enable us to determine risk probability
• Management – What contingency plans do we have if the risk becomes a reality.
• Mitigation tries to prevent the risk or reduce its probability.
• Risk information sheet:
o Probability : Percentage
o Impact : (Cost value)
o Describe the risk (Description)
o Give details , context of sub conditions
▪ Give a further description of the risk in context (by giving sub conditions)
o Mitigation and monitoring
▪ Mitigation is to prevent; monitoring looks at how to reduce possibility
o Management /Contingency plan / trigger
▪ RE computed to be …
▪ Management is about keeping the business running: the risk has happened or is happening, and now you need to manage it.
▪ Allocate this amount
▪ Ensure that there are plans in place to address the issue if it happens.
• EXAM QUESTION 2020
Chapter 27 – A strategy for software support
• Law of continuing change
• Law of increasing complexity
• Law of conservation of familiarity
• Law of continuing growth
• Law of declining quality
• Security issues can occur if systems stop being serviced.
• The people (users should know when to move to the new systems ) ***Might be in exam
• You must support a system till people stop using it, even if it is not in active development
(no new security updates).
• Reverse engineering – Create representations of a system at a higher level of abstraction
• Refactoring – the process of changing a software system so that its external behaviour is preserved while its internal structure is improved
• Reengineering (evolution)
• Agile maintenance
• Cost of support → You need to know if it is worth your while to do support.
Semester Test 1 Revised
• Read the chapter and articulate the answer. Can you apply the rules to the question?
• Look up what effectiveness is.
• Purpose specification → What information is used and why.

REVISION (150 Marks)


Chapter 26 Risk management : 20 Marks
• Risk management plays a role in delivering quality software to customers
o Risk management is being able to identify risks and dealing with the risks.
o To deal with it we can
▪ solve it, or
▪ reduce it or
▪ pass it on to someone else(Let someone else handle the risk)
• Risk = Probability x Cost
• PG 547 → Risk information sheet
• Describe the risk → e.g. “You will not successfully submit the exam”; then break the risk down.
o Refinement
▪ Loadshedding could hit during the paper and cut the Wi-Fi, leaving too little time to submit; or your laptop battery could die halfway through and you cannot continue.
▪ Not keeping time for the exam: Windows might update while I am writing, and time will run out before I can successfully complete and submit the paper.
o Probability and Risk impact
▪ The risk impact might be R12 000 because the module costs 12k, but it could also cost about R400k, a beginner’s salary lost because I will not be working next year. Overall it will be R412 000.
▪ Risk probability , 30% because loadshedding might happen and time might
be an issue, yet it might not happen.
▪ Risk exposure : 30% x 412 000
= R123 600
o Mitigation → Avoiding the risk (You can reduce the probability or reduce the
impact), This informs you of what your monitoring should include.
▪ Writing on campus will help avoid the risk of loadshedding hitting because
campus has generators.
▪ Having a UPS or Generator at home will reduce the probability of not
submitting
▪ To ensure updates do not stop you from submitting, you can make sure updates are completed beforehand or that automatic updates are disabled.
o Monitoring → Keeping an eye on things to ensure mitigation is working.
▪ Keep an eye on the loadshedding schedule; it will tell you if a power outage is planned in your area.
▪ Checking your laptop is updated before the exam.
o Management / Contingency → Contingency plan , if it does take place , what can
you do
▪ Switch to mobile data if loadshedding does hit.
▪ If time is being lost to an update, write the answers down on paper; try to print the paper beforehand, as soon as it is released.
• Example with covidify (2020 Exam), Risk : People are not using the covidify app
o Context and refinement
▪ People do not find it easy to use
▪ People do not know about the app
▪ App is not available on most types of devices
▪ Having Bluetooth on constantly drains users’ batteries.
o Mitigation
▪ Make the user interface friendly and simple to use.
▪ Market the app well throughout South Africa to ensure people are aware of the application.
▪ The developers should ensure the app is compatible with most devices
o Monitoring
▪ Look at how users of all ages use the application to ensure it is usable for everyone.
▪ See how many people have downloaded the app compared to the population of SA.
▪ Look at the most common devices in SA and ensure the application works on those devices.

Chapter 15 (15 Marks)


• Explain the characteristic as defined by the ISO 25010
• Understand the meaning of each of the characteristics.
• Make a list of each of the characteristics.
o Quality in use → Depends on the user, be specific on who the users are
▪ For covidify, it is the general public and citizens of South Africa.
o Effectiveness → The user should be able to sign up completely with details being
correct. Once I have signed up, the sign up should be complete, and given
permissions, the app should be able to give access to Bluetooth or if the user does
not give access then the app should not have access to the Bluetooth. There should
be a popup message for the user to know that the sign up has been completed, to
ensure completeness of the system.
o Efficiency → (Do not use more resources than necessary) , The resources that are
expanded are exactly what is necessary for the sign up process. The users are not
required to enter any information that is not required for the application.
o Satisfaction → Getting a confirmation would lead to a users trust in the system , The
sign up of the system, if there is a mistake, only the field that was wrong should be
highlighted in some way, it would not be useful if all users details are deleted if they
make one mistake.
o Freedom from risk→ (Not only information security, lowers possibility of users
economic, health, safety, and environmental risks )
o Context coverage → flexibility (Giving the user the freedom of choice) , Allow the
users to sign up with a third party account, signing up with phone number rather
than an email.
• Different users have different things to look out for in terms of quality, when you describe
the quality ensure you describe the different aspects of the criteria as well as how it impacts
different users.
• Has a link to others chapters.
• Quality is useful for stakeholders and users.
• Benefits of quality
• Qualitative, Quantitative , Good enough software.
• Good enough means it delivers the main features, and does them properly.
• Quality costs money, since it takes time.
• Do not be afraid to spend a lot on prevention costs.

Reviews (Chap 16) 10 Marks


• Understand purpose of reviews + how to conduct reviews correctly
• 2020 paper (Question 2)
o 2.1) Error density = total errors / WPS
▪ 3/14 = 0.21 per UML diagram
o Ignore preparation values, only look at the final findings.
o Perform a calculation based off the ratio given to you (per page, per UML diagram,
per 100 lines of code, etc)
o Preparation effort = 2 hours , only one person reviewed the code and hence they are
the only person that participated (Each persons’ preparation time added)
o Assessment effort = 3.5 hours (30 mins per person * 7)
o Although Donald would like Nosipho to learn, it goes against technical review
guidelines for someone to conduct technical reviews without meaningful training.
The guidelines are not followed as there are 7 people in the meeting but only
Nosipho has preparation hours recorded, so it’s either Nosipho was the only one to
prepare for the technical review or the notes were not recorded correctly.
The older error density was 2.06 per UML diagram yet the latest is 0.21 errors per
UML diagram. This could mean that the preparation was not done correctly and this
should not be based off of only one persons review. Looking at her original values,
she would have had an error density of 1.93 ( 27/14). A lot of the errors Nosipho
picked up was not considered.
o Nosipho prepared well since she spent 2 hours however her errors were being
dismissed during the review process.
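The review metrics used in that question can be reproduced in a couple of lines (3 errors over 14 UML diagrams; a 30-minute meeting with 7 participants):

```python
def error_density(total_errors, work_product_size):
    # errors found per unit of work product (here, per UML diagram)
    return total_errors / work_product_size

def assessment_effort(meeting_hours, participants):
    # every participant's meeting time counts toward review effort
    return meeting_hours * participants

print(round(error_density(3, 14), 2))  # 0.21 errors per UML diagram
print(assessment_effort(0.5, 7))       # 3.5 hours
```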
• You should do reviews even as an agile developer, since you review continuously anyway.

Software quality assurance (10 marks)


• Six sigma (6 marks) → Narrow the issue down to what it’s about.
• Step 1: Understand what the error is.
o Problem : People who are not in contact with people who had covid are still getting
notifications.
• Step 2: Which part of the system might this issue be in
o Notification
o Bluetooth
o Distance calculation
o Data clearing
• Step 3: How will you collect data to monitor this process.
o Check the distance between two people using the application (explain in detail for
everything)
• Analysis: check how long it takes before someone is added to my DB, then check whether the person is removed from the DB after 14 days.
• Ensure future work does not reintroduce the cause of the defect → monitor that newly added features do not change the way the system reports on people who have tested positive and notifies the people who encountered them.
• Statistical SQA (Causes on page 348)
• Looks at serious errors

Chapter 25 (Creating a viable software plan)


• Ability to see if you are on track with the project.
o Could be use case points, function points or lines of code.
• Use this to decide the cost and whether you will complete it on time.
• Ties into project management. If you are not going to finish on time, will adding two new
devs solve the problem
o Time will be wasted teaching the new developers about the project.
• Look at the circumstances; there are ways to bring new people onto the work.
• Will you be able to finish it in time? If not, will bringing in new people actually help?
• Per person-month → lines of code per person per month; multiply it by the number of people on the team to get the team’s monthly output.

Component level testing chapter 19 15 marks


• Independent path testing, to test conditions
• Object-oriented testing techniques are important
o Generic testing uses drivers or stubs; these do not apply to composition or inheritance
• If the component is a class, use the object-oriented testing styles.
• Work out independent paths , flow chart
o Number of diamonds + 1
• Number of regions
o You need a flowgraph to calculate the regions
• Set of independent paths: navigate through all the nodes, and do not repeat unless in a loop
o You will know there are a certain number of paths based on the answer above.
o Every edge needs to be visited at least once.
• Remember each path has to finish at the end node, not in the middle.
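The path counts can be cross-checked on a small invented flow graph: cyclomatic complexity from the number of decision nodes ("diamonds") should match V(G) = E − N + 2:

```python
# Hypothetical flow graph for a routine with two decisions (one is a loop test).
edges = [(1, 2), (2, 3), (2, 4), (3, 5), (4, 5), (5, 2), (5, 6)]
nodes = {n for edge in edges for n in edge}
# "Diamonds": nodes with more than one outgoing edge.
decisions = [n for n in nodes if sum(1 for a, _ in edges if a == n) > 1]

v_by_predicates = len(decisions) + 1      # number of diamonds + 1
v_by_edges = len(edges) - len(nodes) + 2  # V(G) = E - N + 2
print(v_by_predicates, v_by_edges)        # 3 3 -> three independent paths
```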
• Black box testing complements white box testing; they address different types of errors.
• Checking an ID number
o Check that the ID number stays the same for Black Box testing , (13-digit Number,
it’s a string and not a number)
• Interface testing , what to look out for in terms of information being stored.
• Give actual examples of the values. For equivalence partitioning and boundary value
analysis.
• If a range is 0 to 8, test values at and just beyond each boundary, e.g. -1, 0, 8 and 9.
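A sketch of boundary value analysis for a hypothetical routine that accepts values in the range 0 to 8: probe at and just beyond each boundary, plus a representative value from inside the partition:

```python
def in_range(x):
    # hypothetical routine under test: accepts 0..8 inclusive
    return 0 <= x <= 8

# Boundary values (-1, 0, 8, 9) plus one mid-partition value (4).
cases = {-1: False, 0: True, 4: True, 8: True, 9: False}
for value, expected in cases.items():
    assert in_range(value) == expected
print("all boundary cases pass")
```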
• Object oriented testing
o Each class has methods, calling one method may call other methods that belong to
other classes.
o Behavioural testing.

Integration testing (Chapter 20) 10 Marks


• When you integrate a component into the rest of the working system
o Three approaches
▪ Top down
• Start off at one component, this component calls other components.
Every time you add a new component, you must test.
▪ Bottom up
• Put components in clusters, add the clusters to something larger, then test
▪ Continuous integration
• The more agile approach , smoke testing is an example.
o All three are continuous ,
• Information going in comes from a driver (which simulates the caller)
• Information going out goes to a stub; the stub then gives a canned answer back.
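A toy sketch of the driver/stub idea (all names invented): the stub stands in for a component the module calls, and the driver stands in for the caller that feeds it test inputs:

```python
def pricing_stub(item_id):
    # Stub: replaces the real pricing component and returns a canned answer.
    return 100.0

def discounted_price(item_id, rate, get_price=pricing_stub):
    # Module under test: depends on a pricing component that may not exist yet.
    return get_price(item_id) * (1 - rate)

def driver():
    # Driver: simulates the caller, feeding inputs and checking outputs.
    assert discounted_price("sku-1", 0.25) == 75.0
    return "component passed"

print(driver())  # component passed
```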

Chapter 21 (testing for mobility)


• 21.6 internationalisation
• Weighted device platform matrix.

Chapter 22 (config management)


• Pay attention to st2 , figure 22.7
o You cannot reject the change
• Classify change and explain.

Chapter 27 (a strategy for software support)


• Leave a buffer of time after the new version of the software is released
• DO NOT retire the old version of the software right away, give people time to adjust.
• These are appropriate for different circumstances
• Reverse engineering → You have the system but no documentation, and you need to maintain it; the code does not make sense either. The system was badly documented and badly written, and someone else is taking over and trying to figure out what it does. (You need to understand the system but can’t.)
• Refactoring → You understand the system but need to restructure it. This applies when it is outdated or structurally poor, or the code is messy but understandable.
• Use calculation from slides and not textbook
• Do it if the cost of refactoring/reengineering is cheaper than the cost of maintenance
• You cannot refactor code if you do not understand what is happening in the code.
Cmaint = [P3 – (P1 + P2)] * L
Creeng = (P6 – (P4 + P5)) * (L -P8) – (P7* P9)
Cost benefit = Creeng - Cmaint
• Calculation from the 2020 paper (exam) Question 9
o Cmaint = [P3 – (P1 + P2)] * L
= [1 000 000 – (300 000 + 302 400)] x 1.5
= R596 400
o Creeng = (P6 – (P4 + P5)) * (L -P8) – (P7* P9)
= (1 250 000 –(150 000+133 900)) x(1.5 -0.167) –(275 000 x 1)
= R 1 012 811
o Cost benefit = Creeng – Cmaint
= 1 012 811 -596 400
= R 416 411
o If this answer is positive, then do it , however if it is very low, it is not necessarily
good because small values are negligible
Guest Lecturer Notes

Trust In Technology (KPMG)


• Ethics and best practice are factored into the technological environment to ensure trust in
technology.
• Your solution and tech space solve an old problem; assurance will help you solve the new problems that come with it.

Emerging technology
• Technology will never be perfect, but you can release it and fix issues as you learn and improve.

Entelect (Software engineering)


• A software engineer is a problem solver.
• Imposter syndrome → Feeling like you must be someone else in a space. Understand where
you are and communicate well.
• Entelect has dojos where seniors help you with code.
• Have a story of why we did it and what we did (tell a story with it).

Embracing Technology (PwC)


• We can learn from data and what went wrong in the past. This can be used for data
analytics.

Play with your food : A Guide to thriving in the software development industry (BBD)
• Playing is growth.
• Prepare yourself for the transition.
• Being a software engineer is:
o Fun
o Tough
o It is what you make it.
• Play with stuff you are passionate about.
• Use Visual Studio Live Share to pair program online.
• It is important to take rest so that you are not constantly in a state of burnout.
• Security should not be thought of as an afterthought.
• Writing code is easy, building software is not.

DVT (Microservices → Stay hungry, stay foolish)


• Architecture is what the users want and where
• Design is how the users want it
• Less cruft means less complexity and is less to maintain and remember
• TDD → test driven development
• BDD → Behavioural driven development
• Steeltoe can be used for C# microservices
• Spring Boot and Spring Cloud can be used to build microservices in Java.
• Kubernetes → helps you manage and orchestrate containers.
• Finding a proper problem statement is very important.
