
ISO 26262 TRAINING

Day 3 – Hardware Development – Software Development


CONTENTS

1. Hardware Development
a. Overview
b. Hardware Safety Requirements
c. Hardware Design
d. Evaluation of Hardware Metrics
e. Hardware Tests

2. Software Development
a. Overview
b. Software Safety Requirements
c. Software Architecture and Design
d. Software Integration and Tests

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 2
CONTENTS

1. Hardware Development
a. Overview
b. Hardware Safety Requirements
c. Hardware Design
d. Evaluation of Hardware Metrics
e. Hardware Tests

2. Software Development
a. Overview
b. Software Safety Requirements
c. Software Architecture and Design
d. Software Integration and Tests

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 3
CONSIDERED PARTS

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 4
RESPONSIBILITIES AND TARGETS

Responsibilities
 The hardware development phase is typically the responsibility of the
hardware suppliers (including chip and IP suppliers), who have the
knowledge to implement safety mechanisms at hardware level
 Chips and IPs can be developed either as a so-called Safety Element out of
Context (SEooC) or, in line with given customer requirements, as a Safety
Element in Context (SEiC)

Targets
 In the hardware development phase an electronic circuit is designed in
accordance with the required safety integrity of the safety requirements derived
from the system development phase. The achieved safety integrity is
evaluated by calculating probabilistic hardware metrics.

 Functional Safety of hardware is mainly based on the evaluation of
probabilistic metrics
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 5
GENERAL WORKFLOW DURING THE
HARDWARE DEVELOPMENT PHASE

 In-context development: input is the system development acc. to ISO 26262-4,
i.e. the Technical Safety Concept
 Out-of-context development: input is an application assumption document with
the assumptions for technical safety requirements and component design

Both inputs feed the same workflow:
Initiation of the Hardware Development
 Hardware Safety Requirements Specification
 Hardware Design (incl. Design Verification)
 Hardware Tests
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED
INITIATION OF HW-DEVELOPMENT

 Definition of scope of development


 Development type
– Safety Element out of Context
– Safety Element in Context
 In case of SEooC
– Use “Application Assumptions” as input
 In case of SEiC
– Use Component Design (TSC) from customer as input
 Identification of development category
– New, Modification, Reuse
– In case of modification or reuse a delta analysis shall be performed,
identifying possible impacts on Functional Safety
– Only the safety-relevant changes have to be considered
 Tailoring of activities
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 7
CONTENTS

1. Hardware Development
a. Overview
b. Hardware Safety Requirements
c. Hardware Design
d. Evaluation of Hardware Metrics
e. Hardware Tests

2. Software Development
a. Overview
b. Software Safety Requirements
c. Software Architecture and Design
d. Software Integration and Tests

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 8
ALLOCATION OF HW REQUIREMENTS

In the Technical Safety Concept, safety requirements are allocated to a System Element
and, within the element, to Hardware and to Software:
 Allocation to Hardware  refinement of the requirement  HW specification
 Allocation to Software  refinement of the requirement  SW specification
 The reconcilement of both sides is documented in the Hardware-Software Interface (HSI)

The interface between hardware and software has to be clarified
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 9
DERIVATION OF
HW-SAFETY REQUIREMENTS

System level: the system assumptions (or the system safety requirements from the
customer) and the Component Design (TSC) are the input.

Refinement to circuit level: HW safety requirements are derived for the HW circuit
design (example circuit: VBAT supply, input control, driver control, diagnostics,
high-side driver with Out1 and low-side driver with Out2 of an IC).

Refinement to block level: the HW safety requirements are further refined for the
block design of a part (e.g. IC / IP).

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 10
HARDWARE SAFETY REQUIREMENTS
EXAMPLE INTRODUCTION

 The refinement of the already allocated component safety requirements of the
training example is introduced
 Input requirements are taken from the TSC

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 11
CONTENTS

1. Hardware Development
a. Overview
b. Hardware Safety Requirements
c. Hardware Design
d. Evaluation of Hardware Metrics
e. Hardware Tests

2. Software Development
a. Overview
b. Software Safety Requirements
c. Software Architecture and Design
d. Software Integration and Tests

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 12
HARDWARE DESIGN

 Hardware Design is derived from:
 Component Design (TSC)
 HW safety requirements
 Hardware Design consists of:
 Circuit diagram / circuit design of the particular system element
 Assembly diagram (layout)
 Bill of Material (BOM)
 Hardware Design is generated in collaboration with the
Software Design

 Hardware safety requirements are a refinement
of component safety requirements
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 13
HW DESIGN PROPERTIES
ACC. TO ISO 26262-5

Properties (ratings for ASIL A / B / C / D):
1 Hierarchical design: + + + +
2 Precisely defined interfaces of safety-related hardware components: ++ ++ ++ ++
3 Avoidance of unnecessary complexity of interfaces: + + + +
4 Avoidance of unnecessary complexity of hardware components: + + + +
5 Maintainability (service): + + ++ ++
6 Testability (a): + + ++ ++
a Testability includes testability during development and operation.

Source: ISO 26262-5, Table 2

 There are no detailed hardware design rules in ISO 26262-5


ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 14
HARDWARE DESIGN
EXAMPLE INTRODUCTION

 Introduction of possible
hardware design for the
example component

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 15
CONTENTS

1. Hardware Development
a. Overview
b. Hardware Safety Requirements
c. Hardware Design
d. Evaluation of Hardware Metrics
e. Hardware Tests

2. Software Development
a. Overview
b. Software Safety Requirements
c. Software Architecture and Design
d. Software Integration and Tests

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 16
DEFINITIONS

 Fault
 abnormal condition that can cause an element or an item to fail

 Error
 discrepancy between a computed, observed or measured
value or condition and the true, specified, or theoretically
correct value or condition

 Failure
 termination of the ability of an element or an item to perform a
function as required

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 17
DIFFERENCE:
FAULT – ERROR - FAILURE

ISO 26262-10, Figure 5

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 18
MORE DEFINITIONS

 safety related element


 element which has the potential to contribute to the violation or
achievement of a safety goal
 safety mechanism
 Function implemented by E/E elements, or by other
technologies, to detect faults or control failures in order to
achieve or maintain a safe state

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 19
RANDOM HARDWARE FAULT CATEGORIES

ISO 26262-5, Figure B.1

 Random hardware faults are categorized


acc. to their impact on each safety goal
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 20
SAFE FAULT

 A fault which does not significantly increase the probability of a
safety goal violation.
 Independent multiple-point faults of an order greater than 2 (e.g.
triple faults, quadruple faults, etc.) are classified as "safe faults".

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 21
SINGLE POINT FAULT

 A fault in a HW element which is not covered by a safety mechanism and
therefore leads directly to a safety goal violation.

 For safety goals with an ASIL C or D classification, a "single point fault" is
only permissible if its probability is low and its improbable occurrence is
safeguarded by "dedicated measures" (e.g. a burn-in test).

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 22
EXAMPLE „SINGLE POINT FAULT“

Example: Microcontroller with external Window Watchdog (connected via SPI)

The microcontroller shall activate the Output if Input 1 has the state low and Input 2 the state high.
(Block diagram: inputs IN_1 to IN_3, µC with pins A1/A2/A3, output driver and external window
watchdog; the diagram marks safety-related and non-safety-related HW elements.)

 Failure mode "stuck-at high" at A3 is a "single point fault"
 Failure mode "short-circuit" at the output driver is a "single point fault"

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 23
RESIDUAL FAULT

 The fraction of a fault in a hardware element that is not covered by the
diagnosis (diagnostic coverage < 100 %) and immediately leads to a safety
goal violation.
 For safety goals with an ASIL C or D classification, a diagnostic coverage of
less than 90 % is only permissible if the improbable occurrence of the
residual fault is safeguarded by "dedicated measures" (e.g. a burn-in test).

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 24
EXAMPLE „RESIDUAL FAULT“

Example: Microcontroller with external Window Watchdog (connected via SPI)

The microcontroller shall activate the Output if Input 1 has the state low and Input 2 the state high.
(Same block diagram as on the previous slide.)

 "Residual faults" are failure modes of the µC which are only partly covered by the
implemented diagnostic measures (e.g. RAM test, external watchdog).

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 25
DUAL POINT FAULT

 A fault which leads to a safety goal violation only in combination with one
additional fault.
 The combination of a hardware fault that would violate the safety goal and a
hardware fault that leads to the loss/malfunction of the corresponding safety
mechanism is classified as a "dual point fault".

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 26
EXAMPLE „DUAL POINT FAULT“
Example: Microcontroller with external Window Watchdog (connected via SPI)
The microcontroller shall activate the Output if Input 1 has the state low and Input 2 the state high.
(Same block diagram as before; two faults are annotated as examples, "stuck-at low" and
"stuck-at high", each marked as a dual point fault.)

 A fault of a diagnostic mechanism (safety mechanism) of the µC leads, in combination
with an otherwise diagnosed fault in the µC, to a safety goal violation. It is a
"dual point fault".

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 27
MULTIPLE POINT FAULT

 The "dual point fault" is a "multiple point fault" of order two
 "Multiple point faults" of higher order with sufficient independence can
usually be neglected ( safe fault)
 If "multiple point faults" can be detected (e.g. by a start-up test), they are
classified as "detected multiple point faults"
 If "multiple point faults" can be perceived and controlled by the driver
(e.g. failure of the low-beam headlight lamp), they are classified as
"perceived multiple point faults"
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 28
EXAMPLE „MULTIPLE POINT FAULT“

Example: Microcontroller with external Window Watchdog (connected via SPI)

The microcontroller shall activate the Output if Input 1 has the state low and Input 2 the state high.
(Same block diagram as before, extended by L1, which is driven by the "nok" output of the watchdog.)

 A fault in the scheduling of the µC is detected by the watchdog and the driver is
informed via L1  "detected multiple point fault"

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 29
LATENT FAULT

 The fraction of the "multiple point faults" that is neither detected nor perceived
and therefore remains hidden in the system element is called a "latent fault".
 A "latent fault" has the potential to violate a safety goal (usually the timing is
not critical, because it becomes relevant only in combination with an additional
fault, but it has to be analysed for ASIL C and D).

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 30
SUMMARY OF
HW FAULT CATEGORIES

The total failure rate λtotal of a safety-related HW element is split into the following fractions:

Safe fault (λS)
 Fault that leads to a safe condition or has no impact on the respective safety goal

Detected multiple point fault (λMPF,detected)
 Detected multiple-point fault
 No safety goal violation

Perceived multiple point fault (λMPF,perceived)
 Multiple-point fault perceived by the driver
 No safety goal violation

Residual fault (λRF)
 Share of an undetected single fault due to non-perfect diagnostics
 Leads to a safety goal violation

Latent multiple point fault (λMPF,latent)
 Undetected multiple-point fault, or share of an undetected multiple-point fault due to non-perfect diagnostics
 Can contribute to a safety goal violation in combination with a further fault

Single point fault (λSPF)
 Undetected single fault
 Leads to a safety goal violation
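
Expressed as an equation, this split of the total failure rate of a safety-related HW element reads (a compact restatement of the fractions named on this slide):

$$\lambda_{total} = \lambda_{SPF} + \lambda_{RF} + \lambda_{MPF,detected} + \lambda_{MPF,perceived} + \lambda_{MPF,latent} + \lambda_{S}$$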

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 31
QUANTITATIVE DESIGN VERIFICATION IN
PRACTICE

Step 1: Verification of the "hardware architecture metrics"

Step 2a: Verification of the safety goal violating failure rate referring to the item

Step 2b: Verification of the failure classes per hardware component

Note: Steps 2a and 2b are alternatives
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 32
STEP 1
VERIFICATION OF HARDWARE
ARCHITECTURE METRICS

 Verification of the robustness of a HW architecture regarding random hardware
failures
 Calculation of the "Single Point Fault Metric (SPFM)" and the "Latent Fault Metric (LFM)"
 ASIL-dependent target values have to be reached

Calculation of SPFM? / Calculation of LFM?
ASIL A: no verification required / no verification required
ASIL B: recommended / recommended
ASIL C: highly recommended / recommended
ASIL D: highly recommended / highly recommended

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 33
STEP 1
VERIFICATION OF HARDWARE
ARCHITECTURE METRICS

 Calculation of the Single Point Fault Metric (SPFM)

 Complement of the relative fraction of single point faults and residual faults,
which violate a safety goal, related to all possible faults in the safety-related
hardware elements of an item
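
In formula form (the usual formulation from ISO 26262-5, Annex C; the sums run over all safety-related hardware elements of the item):

$$\mathrm{SPFM} = 1 - \frac{\sum_{\mathrm{SR,HW}}\left(\lambda_{SPF} + \lambda_{RF}\right)}{\sum_{\mathrm{SR,HW}}\lambda}$$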

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 34
STEP 1
VERIFICATION OF HARDWARE
ARCHITECTURE METRICS

 Calculation of the Latent Fault Metric (LFM)

 Complement of the relative fraction of latent multiple point faults, related to
all faults in the safety-related hardware elements of an item that are not
already single point faults or residual faults
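
In formula form (ISO 26262-5, Annex C):

$$\mathrm{LFM} = 1 - \frac{\sum_{\mathrm{SR,HW}}\lambda_{MPF,latent}}{\sum_{\mathrm{SR,HW}}\left(\lambda - \lambda_{SPF} - \lambda_{RF}\right)}$$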

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 35
STEP 1
VERIFICATION OF HARDWARE
ARCHITECTURE METRICS

 3 ways to establish the target values
 Use of a predecessor design (values must not be worse)
 Expert judgement (determination by assessors, for example)
 Use of the target values from Tables 4 and 5 (typically used)

Single Point Fault Metric (SPFM): ASIL B >= 90 %, ASIL C >= 97 %, ASIL D >= 99 %
Latent Fault Metric (LFM): ASIL B >= 60 %, ASIL C >= 80 %, ASIL D >= 90 %

ISO 26262-5, Table 4 + 5

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 36
DAY 2
EXERCISE 1

 Go back to the TSC from Day 2


 Are the defined target values
for SPFM and LFM correct?

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 37
STEP 1
VERIFICATION OF HARDWARE
ARCHITECTURE METRICS

 6 activities to determine the architecture metrics:
1.1 Listing of all HW elements (assembly components) and determination of the
base failure rate per HW element
1.2 Identification whether a HW element is safety-relevant or not
1.3 Identification of the failure modes and their statistical distribution
1.4 Classification of the fault categories related to the considered safety goal
(safe fault, residual fault, etc.)
1.5 Determination of the diagnostic coverage for residual faults and multiple
point faults
1.6 Totalling up of the failure rates and calculation of the architecture metrics
according to the formulas

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 38
STEP 1.1
DETERMINE THE ARCHITECTURE METRIC

1.1  Listing of all HW elements (assembly components) and
determination of the base failure rate per HW element
 Failure rates from recognised industry sources and/or
manufacturers' information:
– IEC 62380, SN 29500, MIL-HDBK-217F notice 2, RAC HDBK-217Plus,
NPRD-95, EN 50129 Annex C, EN 62061 Annex D, RAC FMD-97 and
MIL-HDBK-338
 Procedure: correct the failure rate under reference conditions with the
operating conditions of the application:
– λref: failure rate under reference conditions
– pU: voltage dependency
– pI: current dependency
– pT: temperature dependency
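
As a minimal sketch of this correction step, assuming a simple multiplicative factor model in the style of SN 29500 (the exact model and the factors depend on the chosen source and the component class):

$$\lambda = \lambda_{ref} \cdot p_U \cdot p_I \cdot p_T$$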

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 39
DAY 2
EXERCISE 2

Base failure rates in the
example are taken from
SN 29500.
What impact does a change of
the assumed operating
temperature have?

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 40
STEP 1.2
DETERMINE THE ARCHITECTURE METRIC

1.2  Identification whether a HW element is safety-relevant or not
 By definition a HW element is safety-relevant if it contributes to a violation
or to the achievement of a safety goal.
In other words: the component is part of a signal path of the safety goal.

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 41
DAY 3
EXERCISE 3

Please establish whether or not


a Hardware component is
safety-relevant regarding the
safety goal 1.
Give reasons for your choice.

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 42
STEP 1.3
DETERMINE THE ARCHITECTURE METRIC

1.3  Identification of the failure modes and their statistical distribution
 E.g. for a resistor the failure modes "short", "open" and "drift" must be
considered.
 The failure mode "functional" refers to the specified function of the hardware
component (e.g. filter, amplifier, etc.)
 Possible resources:
– IEC 62061:2005, Annex D (IEC 62061: "Safety of machinery – Functional
safety of safety-related electrical, electronic and programmable electronic
control systems")
– A. Birolini: e.g. "Reliability Engineering - Theory and Practice"
Example from Birolini:


ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 43
DAY 3
EXERCISE 4

Complete the missing failure


modes in the template.

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 44
STEP 1.4
DETERMINE THE ARCHITECTURE METRIC

1.4  Classification of random HW fault categories
 Determination of the fault effects per safety goal in the following sequence:
– Direct safety goal violation as single point failure?
– Is there a possibility of failure detection and control by a safety mechanism?
– Safety goal violation in combination with a failure which slips through the
safety mechanism?
– Is there a possibility to detect or identify the slipping "multiple point fault"
and to control it?
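
A minimal C sketch of this decision sequence, purely for illustration: the flags, the function name and the simplified mapping to fault categories are assumptions of this example, not definitions taken from ISO 26262 or from the training project.

/* Illustrative only: simplified classification helper following the decision
 * sequence above. Real classifications are done per failure-mode fraction. */
typedef enum {
    FAULT_SAFE,
    FAULT_SINGLE_POINT,
    FAULT_RESIDUAL,
    FAULT_MPF_DETECTED_OR_PERCEIVED,
    FAULT_MPF_LATENT
} fault_class_t;

fault_class_t classify_fault(int safety_related,
                             int violates_sg_directly,
                             int covered_by_safety_mechanism,
                             int fully_covered,
                             int detectable_or_perceivable)
{
    if (!safety_related) {
        return FAULT_SAFE;                 /* no impact on the safety goal        */
    }
    if (violates_sg_directly) {
        if (!covered_by_safety_mechanism) {
            return FAULT_SINGLE_POINT;     /* no safety mechanism at all          */
        }
        if (!fully_covered) {
            return FAULT_RESIDUAL;         /* fraction slipping through diagnosis */
        }
    }
    /* violation only possible in combination with a further fault */
    if (detectable_or_perceivable) {
        return FAULT_MPF_DETECTED_OR_PERCEIVED;
    }
    return FAULT_MPF_LATENT;
}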

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 45
DAY 3
EXERCISE 5

 Classify the „Fault categories“


regarding the safety goal 1 in
the example project
(some examples).
 In a first step, check whether
the considered fault has the
potential to become a single
point fault
 In a second step, check
whether the considered fault
has the potential to become a
latent fault

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 46
STEP 1.5
DETERMINE THE ARCHITECTURE METRIC

1.5  Determination of the achievable Diagnostic Coverage (DC) for residual faults and
multiple point faults, in accordance with the safety mechanisms defined in the TSC
 Refer also to ISO 26262-5, Annex D
 Typical DC levels: Low  DC = 60 %, Medium  DC = 90 %, High  DC = 99 %

Example entries (safety mechanism/measure / see overview of techniques / typical diagnostic
coverage considered achievable / notes):
 Failure detection by on-line monitoring / D.2.1.1 / Low / depends on the diagnostic coverage of the failure detection
 Test pattern / D.2.6.1 / High / -
 Input comparison/voting (1oo2, 2oo3 or better redundancy) / D.2.6.5 / High / only if the data flow changes within the diagnostic test interval
 Sensor valid range / D.2.10.1 / Low / detects shorts to ground or power and some open circuits
 Sensor correlation / D.2.10.2 / High / detects in-range failures
 Sensor rationality check / D.2.10.3 / Medium / -

Example table ISO 26262-5, D.11


ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 47
DAY 3
EXERCISE 6

 Which diagnostic coverage


may be assumed for the used
diagnostic measure (safety
mechanism)?

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 48
STEP 1.6
DETERMINE THE ARCHITECTURE METRIC

1.6  Totalling up of the failure rates and calculation of the metrics
 Total up the failure rates per failure mode and fault category
 Fill the sums into the formulas for SPFM and LFM
 Compare the results with the required target values
 If necessary, optimize the HW architecture
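
A minimal C sketch of this totalling step, assuming a hypothetical parts list in which every safety-related failure-mode share has already been classified (steps 1.2 to 1.5) and carries its failure rate in FIT; the structure, the field names and the numbers are invented for illustration:

#include <stdio.h>

/* One classified failure-mode share of a safety-related HW element (rates in FIT) */
typedef struct {
    double lambda;        /* total failure rate share of this failure mode              */
    double spf_or_rf;     /* portion that violates the safety goal directly             */
    double mpf_latent;    /* portion that stays latent (neither detected nor perceived) */
} fm_share_t;

int main(void)
{
    const fm_share_t shares[] = {
        { 10.0, 0.0, 1.0 },   /* e.g. a µC block with 90 % latent-fault coverage          */
        {  2.0, 0.2, 0.0 },   /* e.g. an output driver with 90 % DC for residual faults   */
        {  5.0, 0.0, 0.0 },   /* e.g. a block fully detected by the start-up test         */
    };
    double total = 0.0, spf_rf = 0.0, latent = 0.0;

    for (unsigned i = 0u; i < sizeof shares / sizeof shares[0]; i++) {
        total  += shares[i].lambda;
        spf_rf += shares[i].spf_or_rf;
        latent += shares[i].mpf_latent;
    }

    /* SPFM and LFM as defined in ISO 26262-5, Annex C */
    double spfm = 1.0 - spf_rf / total;
    double lfm  = 1.0 - latent / (total - spf_rf);

    printf("SPFM = %.1f %%, LFM = %.1f %%\n", 100.0 * spfm, 100.0 * lfm);
    return 0;
}

With these invented numbers the result is SPFM = 98.8 % and LFM = 94.0 %, which would meet the ASIL B and C targets but not the ASIL D SPFM target of >= 99 %.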

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 49
DAY 3
EXERCISE 7

 Determine the „SPFM“ and


„LFM“ for the example
component (system).
 Can the defined target values
be reached?
 What could be done, if the
target values are not reached?

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 50
STEP 2A
VERIFICATION OF SAFETY GOAL VIOLATING
FAILURE RATE

 In a second step the "absolute" safety goal violating failure rate of the item
has to be evaluated
 Determination of the failure rate of the item that is caused by random
hardware faults and violates a safety goal
 Determination of the PMHF value (Probabilistic Metric for random Hardware
Failures)

When to do?
ASIL A: no verification required
ASIL B: verification recommended
ASIL C: verification highly recommended
ASIL D: verification highly recommended

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 51
STEP 2A
VERIFICATION OF SAFETY GOAL VIOLATING
FAILURE RATE

 3 ways to establish the target values
 Use of a predecessor design for orientation (values must not be worse)
 Expert judgement (determination by assessors, for example)
 Use of the target values from Table 6 (typically used)

 ISO 26262 establishes target values for the calculation of the PMHF
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 52
STEP 2A
VERIFICATION OF SAFETY GOAL VIOLATING
FAILURE RATE

 Target values for the verification of the PMHF
 For each safety goal it must be demonstrated that the cumulative safety goal
violating failure rate of all HW elements of the item with respect to this
safety goal meets the values stated in the table
(compare the PFH value acc. to IEC 61508)

Random hardware failure target values (RHFT):
ASIL D: < 10⁻⁸ h⁻¹
ASIL C: < 10⁻⁷ h⁻¹
ASIL B: < 10⁻⁷ h⁻¹

ISO 26262-5, Table 6

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 53
DAY 3
EXERCISE 8

 Go back to the TSC from Day 2


 Why is there a budget given for
the PMHF of the example
component?

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 54
STEP 2A
VERIFICATION OF SAFETY GOAL VIOLATING
FAILURE RATE
Note: this formula will be deleted in Edition 2 and substituted by more detailed considerations

 Calculation of the PMHF acc. to ISO 26262-10

Safety goal violating failure rate, composed of contributions from single point faults (Σλ_SPF),
residual faults (Σλ_RF) and latent multiple point faults (Σλ_MPF,latent):

M_PMHF = Σλ_SPF + Σλ_RF + 0,5 · Σλ_m,DPF · Σλ_sm,DPF,latent · T_Lifetime

• M_PMHF is the "value for the probabilistic metric for random hardware failures (PMHF)"
• Simplified formula to calculate the PMHF (refer to ISO 26262-10)
• Partitioning of the latent faults into faults of the "mission block (m)" and of the "safety mechanism (sm)"

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 55
STEP 2A
VERIFICATION OF SAFETY GOAL VIOLATING
FAILURE RATE

 Simplified calculation as a rough estimate

Safety goal violating failure rate, composed of contributions from single point faults (Σλ_SPF),
residual faults (Σλ_RF) and latent multiple point faults (Σλ_MPF,latent):

PMHF = Σλ_SPF + Σλ_RF + β · Σλ_MPF,latent

β is the considered "common cause" fraction (acc. to IEC 61508), typically set to 10 %

Attention: the above formula represents a rough and usually conservative estimate, which is
not given in ISO 26262.
We recommend using mathematical models instead (e.g. a quantitative FTA).
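
A small worked example with invented numbers: assuming Σλ_SPF = 0 FIT, Σλ_RF = 2 FIT, Σλ_MPF,latent = 10 FIT and β = 10 %, the rough estimate yields

$$\mathrm{PMHF} \approx 0 + 2 + 0.1 \cdot 10 = 3\ \mathrm{FIT} = 3 \cdot 10^{-9}\ h^{-1} < 10^{-8}\ h^{-1}$$

which would meet the ASIL D target value from ISO 26262-5, Table 6.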
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 56
STEP 2B
VERIFICATION OF FAILURE CLASSES PER
HARDWARE COMPONENT

 Alternative method (called "method 2"), used instead of the PMHF evaluation
 Determination of a Failure Rate Class (FRC) per HW part
 The achievable failure rate classes depend on the ASIL and on the kind of
fault (SPF, RF, LF)

When to do?
ASIL A: no verification necessary
ASIL B: verification recommended
ASIL C: verification highly recommended
ASIL D: verification highly recommended

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 57
STEP 2B
VERIFICATION OF FAILURE CLASSES PER
HARDWARE COMPONENT
 Process for single point faults (SPF and RF):

Begin  Is the considered fault a single-point fault?
– Yes: does it meet the failure rate class with respect to single-point faults (Table 7)?
If not, add a safety mechanism to mitigate the fault.
– No: is it a residual fault? If not, continue with the evaluation procedure for
dual-point failures.
– For a residual fault: does it meet the failure rate class and the DC with respect to
residual faults (Table 8)? If not, improve the safety mechanism.
 End

ISO 26262-5, Figure 3

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 58
STEP 2B
VERIFICATION OF FAILURE CLASSES PER
HARDWARE COMPONENT

 Process for multiple point faults (LF):

Begin  Is there a potential for dependent failures (see ISO 26262-9, Clause 7)?
– Yes: evaluation and resolution of the dependent failures (see ISO 26262-9, Clause 7)
– No: is a dual-point failure plausible? If not: end.
– For a plausible dual-point failure: does it meet the failure rate class and the DC
with respect to latent faults (Table 9)? If not, add or improve the safety mechanism.
 End

ISO 26262-5, Figure 4
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 59
STEP 2B
VERIFICATION OF FAILURE RATE CLASSES
PER HARDWARE PART

 Definition of failure rate classes (FRC); the reference point is the ASIL D
target value from ISO 26262-5, Table 6:

Failure rate class 1: < 10⁻¹⁰ h⁻¹
Failure rate class 2: < 10⁻⁹ h⁻¹
Failure rate class 3: < 10⁻⁸ h⁻¹
Failure rate class i: < 10⁻⁽¹¹⁻ⁱ⁾ h⁻¹

 The value for failure rate class 1 should be smaller than the value for ASIL D divided by 100.
 The value for failure rate class 2 should be smaller than the ten-fold value for failure rate class 1.
 The value for failure rate class 3 should be smaller than the hundred-fold value for failure rate class 1.
 The values for failure rate classes > 3 result accordingly.

For reference (ISO 26262-5, Table 6): ASIL D < 10⁻⁸ h⁻¹, ASIL C < 10⁻⁷ h⁻¹, (ASIL B) < 10⁻⁷ h⁻¹

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 60
STEP 2B
VERIFICATION OF FAILURE CLASSES PER
HARDWARE COMPONENT

 Target values for Single Point Faults (SPF)
 For each "single point fault" it must be shown that the target values from
ISO 26262-5, Table 7, are met.

ASIL of the safety goal:
 D: failure rate class 1 + dedicated measures
 C: failure rate class 2 + dedicated measures, or failure rate class 1
 B: failure rate class 2, or failure rate class 1

ISO 26262-5, Table 7

Dedicated measures: measures which back up the assumed failure rates, e.g.
» Design features such as over-design
» Initial test of the assembly components used
» "Burn-in" test
» Dedicated control plan
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 61
STEP 2B
VERIFICATION OF FAILURE CLASSES PER
HARDWARE COMPONENT

 Target values for Residual Faults (RF)
 For each "residual fault" it must be shown that the target values from
ISO 26262-5, Table 8, are met.

Required failure rate class depending on the diagnostic coverage (DC) with respect to
residual faults:
 ASIL D: DC >= 99,9 %  class 4; DC >= 99 %  class 3; DC >= 90 %  class 2;
DC < 90 %  class 1 + dedicated measures
 ASIL C: DC >= 99,9 %  class 5; DC >= 99 %  class 4; DC >= 90 %  class 3;
DC < 90 %  class 2 + dedicated measures
 ASIL B: DC >= 99,9 %  class 5; DC >= 99 %  class 4; DC >= 90 %  class 3;
DC < 90 %  class 2

ISO 26262-5, Table 8

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 62
STEP 2B
VERIFICATION OF FAILURE CLASSES PER
HARDWARE COMPONENT

 Target values for Latent Faults (LF)
 For each "latent fault" it must be shown that the target values from
ISO 26262-5, Table 9, are met.

Required failure rate class depending on the diagnostic coverage (DC) with respect to
latent faults:
 ASIL D: DC >= 99 %  class 4; DC >= 90 %  class 3; DC < 90 %  class 2
 ASIL C: DC >= 99 %  class 5; DC >= 90 %  class 4; DC < 90 %  class 3

ISO 26262-5, Table 9

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 63
DAY 2
EXERCISE 9

 Results of PMHF and FRC


evaluations are shown in the
example
 What are the advantages and
disadvantages of each
method?

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 64
CONTENTS

1. Hardware Development
a. Overview
b. Hardware Safety Requirements
c. Hardware Design
d. Evaluation of Hardware Metrics
e. Hardware Tests

2. Software Development
a. Overview
b. Software Safety Requirements
c. Software Architecture and Design
d. Software Integration and Tests

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 65
DERIVING TEST CASES (1)

Methods (ratings for ASIL A / B / C / D):
1a Analysis of requirements: ++ ++ ++ ++
1b Analysis of internal and external interfaces: + ++ ++ ++
1c Generation and analysis of equivalence classes (a): + + ++ ++
1d Analysis of boundary values (b): + + ++ ++
1e Knowledge or experience based error guessing (c): ++ ++ ++ ++
1f Analysis of functional dependencies: + + ++ ++
1g Analysis of common limit conditions, sequences and sources of common cause failures: + + ++ ++
1h Analysis of environmental conditions and operational use cases: + ++ ++ ++
1i Standards if existing (d): + + + +
1j Analysis of significant variants (e): ++ ++ ++ ++
a In order to derive the necessary test cases efficiently, analysis of similarities can be conducted.
b EXAMPLE: values approaching and crossing the boundaries between specified values, and out-of-range values
c "Error guessing tests" can be based on data collected through a lessons learned process, or expert judgment, or both. It can be supported by FMEA.
d Existing standards include ISO 16750 and ISO 11452.
e The analysis of significant variants includes worst case analysis.

ISO 26262, Table 10 — Methods for deriving test cases for hardware integration testing

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 66
CHOICE OF TEST METHODS (1)

ASIL
Methods
A B C D
1 Functional testinga ++ ++ ++ ++
2 Fault injection testing + + ++ ++
3 Electrical testing ++ ++ ++ ++
ISO 26262-5, Table 11 — Hardware integration tests to verify the completeness and correctness of the safety
mechanisms implementation with respect to the hardware safety requirements

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 68
CHOICE OF TEST METHODS (2)

 1 Functional testing: at HW level only the HW aspect of the item should be
considered. Test cases should demonstrate that the HW fulfils the specified
functions ("straightforward tests").
 2 Fault injection testing: the injection of failures can be performed physically
(test adapter) or logically.
In case of physical injection, SW test functions are used together with SW basic
functions (e.g. drivers). Both kinds of functions have to be verified sufficiently to
fulfil the requirements for the highest ASIL. The test SW communicates the HW
failures to the tester.
Logical injection is based on HW simulation (e.g. model-based development,
netlist, etc.).

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 69
CHOICE OF TEST METHODS (3)

 3 Electrical testing: test cases concentrate on compliance with all
non-functional HW safety requirements and all electrical properties
specified in the HW design.

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 70
MORE HW INTEGRATION TESTS

ISO 26262-5, Table 12 — Hardware integration tests to verify robustness and operation under external stresses
ASIL
Methods
A B C D
1a Environmental testing with basic functional verification (a) ++ ++ ++ ++
1b Expanded functional testing (b) o + + ++
1c Statistical testing (c) o o + ++
1d Worst case testing (d) o o o +
1e Over limit testing (e) + + + +
1f Mechanical testing ++ ++ ++ ++
1g Accelerated life test (f) + + ++ ++
1h Mechanical endurance test (g) ++ ++ ++ ++
1i EMC and ESD test (h) ++ ++ ++ ++
1j Chemical testing (i) ++ ++ ++ ++
a During environmental testing with basic functional verification the hardware is put under various environmental conditions during which the hardware requirements are
assessed. ISO 16750-4 (Road vehicles -- Environmental conditions and testing for electrical and electronic equipment -- Part 4: Climatic loads) can be applied.
b Expanded functional testing checks the functional behaviour of the item in response to input conditions that are expected to occur only rarely (for instance extreme mission
profile values), or that are outside the specification of the hardware (for instance an incorrect command). In these situations, the observed behaviour of the hardware element is
compared with the specified requirements.
c Statistical tests aim at testing the hardware element with input data selected in accordance with the expected statistical distribution of the real mission profile. The
acceptance criteria are defined so that the statistical distribution of the results confirms the required failure rate.
d Worst case testing aims at testing cases found during worst case analysis. In such a test, environmental conditions are changed to their highest permissible marginal values
defined by the specification. The related responses of the hardware are inspected and compared with the specified requirements.
e In over limit testing, the hardware elements are submitted to environmental or functional constraints increasing progressively to values more severe than specified until they
stop working or they are destroyed. The purpose of this test is to determine the margin of robustness of the elements under test with respect to the required performance.
f Accelerated life test aims at predicting the behaviour evolution of a product in its normal operational conditions by submitting it to stresses higher than those expected during
its operational lifetime. Accelerated testing is based on an analytical model of failure mode acceleration.
g The aim of these tests is to study the mean time to failure or the maximum number of cycles that the element can withstand. Test can be performed up to failure or by damage
evaluation.
h ISO 11452-2; ISO 11452-4; ISO 7637-2; ISO 10605 and ISO 7637- 3 can be applied for EMC tests and ISO 16750-2 can be applied for ESD tests.
I For chemical test, ISO 16750–5 can be applied.

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 71
HARDWARE TESTS SUMMARY

 Test methods are the same for safety relevant and non-
safety relevant requirements
 Reference to other well-known industry standards
 An analysis of whether the existing test strategies are sufficient is required

 Existing hardware test strategies shall be analyzed and extended
if necessary
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 72
SUMMARY OF DAY 3
HARDWARE DEVELOPMENT

 Functional Safety of hardware is mainly based on the evaluation of probabilistic


metrics
 The interface between hardware and software has to be clarified
 Hardware safety requirements are a refinement of component safety
requirements
 Random hardware faults are categorized acc. to their impact on each safety goal
 ISO 26262 establishes target values for the calculation of SPFM, LFM, PMHF
and FRC
 Existing hardware test strategies shall be analyzed and extended
if necessary

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 73
CONTENTS

1. Hardware Development
a. Overview
b. Hardware Safety Requirements
c. Hardware Design
d. Evaluation of Hardware Metrics
e. Hardware Tests

2. Software Development
a. Overview
b. Software Safety Requirements
c. Software Architecture and Design
d. Software Integration and Tests

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 74
CONSIDERED PARTS
OF THE SOFTWARE DEVELOPMENT

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 75
RESPONSIBILITIES AND TARGETS

Responsibilities
 The software development phase is typically the responsibility of the
software suppliers, who have the knowledge to implement software safety
mechanisms at component level
Targets
 In the software development phase the software is designed in
accordance with the required safety integrity of the safety requirements
derived from the system development phase (TSC)

 Functional Safety in the software development is mainly based on the use of
processes, techniques and methods
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 76
SOFTWARE PHASE MODEL

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 77
INITIATION OF PRODUCT DEVELOPMENT AT
SOFTWARE LEVEL - ACTIVITIES

Activities during this phase:


 Planning of activities during software development
 Definition and documentation of selection criteria for
tools and programming languages
 Selection of suitable tools , techniques and methods
(see also Day 4)
 Definition of company-/project-internal tool application
guidelines

 Initiation of software development means to plan activities and select tools,


techniques and methods along Functional Safety rules of the company
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 78
INTENTION OF SW-GUIDELINES

 Supporting the developer in designing robust and fault-resistant SW code
 Allowing more flexibility in the development teams
 Improving the quality and maintainability of the source code

Example

Without braces (only Process_data() belongs to the loop body, the watchdog is
serviced only once, after the loop has terminated):

while ( new_data_available )
    Process_data();
    Service_watchdog();

With braces (MISRA-C, rule 14.8):

while ( new_data_available )
{
    Process_data();
    Service_watchdog();
}

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 79
CONTENTS

1. Hardware Development
a. Overview
b. Hardware Safety Requirements
c. Hardware Design
d. Evaluation of Hardware Metrics
e. Hardware Tests

2. Software Development
a. Overview
b. Software Safety Requirements
c. Software Architecture and Design
d. Software Integration and Tests

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 80
SPECIFICATION OF SOFTWARE SAFETY
REQUIREMENTS

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 81
SPECIFICATION OF SOFTWARE SAFETY
REQUIREMENTS - ACTIVITIES

Activities during this phase:


 Derivation of the software safety requirements from the technical
safety concept and the system design specification considering the
 Specified hardware and software configurations
 Hardware-software interface
 Hardware design specification (e.g. use of multicore architecture)
 Time-related limitations (e.g. speed of the μprocessors and interfaces)
 External interfaces (e.g. diagnostic interfaces)
 Every operating mode of the vehicle, the system and the hardware which
may have impact on the software.
 Determination and documentation of the interdependencies between
software and hardware
 Verification of the work results/products regarding consistency and
completeness
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 82
SW-SPECIFICATION AND DESIGN

Source: ISO26262-10, Figure 8


ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 83
SPECIFICATION OF SOFTWARE SAFETY
REQUIREMENTS (SWSR)

 SWSR are valid for software-based functions where a


malfunction may lead to the violation of a technical safety
requirement.
 Examples: functions
 which are used to detect, report and treat faults
– of safety-relevant hardware elements (e.g. a faulty sensor);
– of safety-relevant software elements (e.g. a watchdog);
 which serve to achieve a safe system condition;
 which are used to perform self-tests, during operation and in a service case (e.g.
storage tests);
 which permit modifications to be made to the software during production and in a
service case (e.g. download of a new software release);
 which deal with time-critical or performance-relevant operations
 Software Safety Requirements are derived from safety requirements at
component (system) level
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 84
DAY 3
EXERCISE 10

 Create software safety


requirements based on
following technical safety
requirements:

COMPR100: Faults in RAM shall be


detected

COMPR101: The number of faults shall


be counted.

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 85
CONTENTS

1. Hardware Development
a. Overview
b. Hardware Safety Requirements
c. Hardware Design
d. Evaluation of Hardware Metrics
e. Hardware Tests

2. Software Development
a. Overview
b. Software Safety Requirements
c. Software Architecture and Design
d. Software Integration and Tests

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 86
SOFTWARE ARCHITECTURAL DESIGN

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 87
WHAT IS A SOFTWARE ARCHITECTURE?

 Definition based on ISO 26262-1, 1.3 Architecture:
 Representation of the structure or functions or systems or elements
that allows identification of building blocks, their boundaries and
interfaces, and includes the allocation of functions to hardware and
software elements.
 Examples:
– Layered architecture: Component 1 / Interface 1-2 / Component 2 / Interface 2-3 / Component 3
– Event-driven architecture: event generator, dispatcher and handlers (Handler 1, Handler 2, ..., Handler 3)

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 88
SOFTWARE ARCHITECTURE DESIGN
ACTIVITIES

Activities during this phase:


 Description of the software architecture using a suitable level of
abstraction
 Identification of all software units
 Verification of the architecture

ASIL
Methods
A B C D
1a Informal notations ++ ++ + +
1b Semi-formal notations + ++ ++ ++
1c Formal notations + + + +
Table 2 — Notations for software architectural design from ISO 26262-6

 The use of notations shall avoid systematic faults
by ensuring an unambiguous interpretation
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 89
EXAMPLE OF NOTATIONS

 Natural language - e.g.
"The car shall be painted red (RAL 3020)."
 Informal notations - syntax and
semantics are not defined at all or only
partially defined,
e.g. use cases
 Semi-formal notations - the syntax is
defined, the associated semantics at best
partially,
e.g. data flow diagrams
 Formal notations - syntax and semantics
are defined,
e.g. a finite automaton
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 90
EXAMPLE
INFORMAL NOTATION

 Informal Notation
Drawings and diagrams with no fixed syntax and semantic

Example:

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 91
EXAMPLES
SEMI-FORMAL NOTATION

 Semi-formal Notation
The syntax is defined, the semantics are not completely fixed

Examples:
 logic/function block diagrams: described in IEC 61131-3
 sequence diagrams: described in IEC 61131-3
 data flow diagrams: see IEC 61508-3, ref. C.2.2
 finite state machines/state transition diagrams: see IEC 61508-3, ref. B.2.3.2
 time Petri nets: see IEC 61508-3, ref. B.2.3.3
 entity-relationship-attribute Data models: see IEC 61508-3, ref. B.2.4.4
 message sequence charts: see IEC 61508-3, ref. C.2.14
 decision/truth tables: see IEC 61508-3, ref. C.6.1
 UML: see IEC 61508-3, ref. C.3.12
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 92
EXAMPLE:
SEMI-FORMAL NOTATION
DATA FLOW DIAGRAM

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 93
FORMAL NOTATION

 Formal Notation
 Syntax and semantics are defined
 Based on mathematics
 Rarely used, because difficult to apply

 Examples:
 CCS, CSP, HOL, LOTOS, OBJ, temporal logic, VDM
and Z (methods not described in detail here)
 Other techniques like "finite state machines" and "Petri
nets" are often seen as formal methods, depending on
their mathematical basis

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 94
SOFTWARE ARCHITECTURE

Objectives of the architecture description:


 A software structure which is verifiable.
 Progressive refinement ( SW design, SW module design)
 Maintainable/serviceable SW structure
 Allocation of the safety-relevant functionality to subcomponents
 Estimations in the architecture document (runtime, memory (RAM,
ROM, parameters), interface capacities (bus, etc.))

with
 Modular structure
 Encapsulation principle
 Low complexity

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 95
BASIC PRINCIPLES OF SW ARCHITECTURE
Methods (ratings for ASIL A / B / C / D), with example tools/metrics and interpretation
added by SGS-TÜV Saar:
1a Hierarchical structure of software components: ++ ++ ++ ++ (e.g. DoxyGen; is it understandable?)
1b Restricted size of software components (a): ++ ++ ++ ++ (SW modularisation; e.g. McCabe, HIS metrics, LOC metric)
1c Restricted size of interfaces (a): + + + + (e.g. DoxyGen; is it understandable?)
1d High cohesion within each software component (b): + ++ ++ ++ (e.g. LCOM4, cohesion metric; measure of the tightness of the connections between data and subprograms within one module)
1e Restricted coupling between software components (a, b, c): + ++ ++ ++ (e.g. RFC metric; measure of the tightness of the connections between modules)
1f Appropriate scheduling properties: ++ ++ ++ ++ (operating system (OS), TTA architecture; check of the maximum task run time: is it feasible?)
1g Restricted use of interrupts (a, d): + + + ++ (e.g. only 1 timer, 1 receive and 1 transmit interrupt allowed; otherwise interrupts have to be prioritized correctly, interrupts may be disabled only for a short time in critical SW areas, and interrupts have to be documented in detail)
a In methods 1b, 1c, 1e and 1g "restricted" means to minimize in balance with other design considerations.
b Methods 1d and 1e can, for example, be achieved by separation of concerns which refers to the ability to identify, encapsulate, and manipulate those parts of software that are relevant to a particular concept, goal, task, or purpose.
c Method 1e addresses the limitation of the external coupling of software components.
d Any interrupts used have to be priority-based.
ISO 26262-6, Table 3 — Principles for software architectural design (with add-ons from SGS-TÜV Saar)
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 96
ERROR DETECTION (ARCHITECTURE)

Methods (ratings for ASIL A / B / C / D), with comments:
1a Range checks of input and output data: ++ ++ ++ ++ (online check of the valid input and output data range)
1b Plausibility check (a): + + + ++ (e.g. assertion checks, comparison of calculated data with expected data)
1c Detection of data errors (b): + + + + (e.g. checksums, CRC check, dual inverse stored variables)
1d External monitoring facility (c): o + + ++ (e.g. watchdog, 2nd CPU)
1e Control flow monitoring: o + ++ ++ (e.g. watchdog)
1f Diverse software design: o o + ++ (difficult to realize and expensive  a decomposition is often the better choice)
a Plausibility checks can include using a reference model of the desired behaviour, assertion checks, or comparing signals from different sources.
b Types of methods that may be used to detect data errors include error detecting codes and multiple data storage.
c An external monitoring facility can be, for example, an ASIC or another software element performing a watchdog function.

ISO 26262-6, Table 4 — Mechanisms for error detection at the software architectural level (with add-ons from SGS-TÜV Saar)
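
A minimal C sketch of two of these mechanisms, a range check (1a) and dual inverse storage as one form of data error detection (1c); the signal name, the limits and the types are invented for illustration and are not taken from the training example:

#include <stdint.h>
#include <stdbool.h>

#define TEMP_MIN  (-40)   /* illustrative valid range of a temperature signal */
#define TEMP_MAX  (150)

/* 1a: range check of an input value before it is used */
static bool temp_in_valid_range(int16_t temp_celsius)
{
    return (temp_celsius >= TEMP_MIN) && (temp_celsius <= TEMP_MAX);
}

/* 1c: dual inverse storage - the value is stored twice, once bit-inverted,
 * so a corruption of either copy is detected on every read */
typedef struct {
    uint16_t value;
    uint16_t value_inv;
} protected_u16_t;

static void protected_write(protected_u16_t *p, uint16_t v)
{
    p->value     = v;
    p->value_inv = (uint16_t)~v;
}

static bool protected_read(const protected_u16_t *p, uint16_t *out)
{
    if (p->value != (uint16_t)~p->value_inv) {
        return false;     /* data error detected -> caller has to enter the safe state */
    }
    *out = p->value;
    return true;
}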

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 97
ERROR CONTROL (ARCHITECTURE)

Methods (ratings for ASIL A / B / C / D), with comments:
1a Static recovery mechanism (a): + + + + (if a fault is detected, the system is reset to an earlier internal operating condition whose correctness is known, or recovery is achieved by repetition)
1b Graceful degradation (b): + + ++ ++ (the architecture ensures that functions with a higher priority are operated before the lower ones if the resources are not sufficient to perform all system functions)
1c Independent parallel redundancy (c): o o + ++ (N-version programming)
1d Correcting codes for data: + + + + (although only a part of the faults can be corrected in a safety-relevant system, it is often better to reject wrong data)
a Static recovery mechanisms can include the use of recovery blocks, backward recovery, forward recovery and recovery through repetition.
b Graceful degradation at the software level refers to prioritizing functions to minimize the adverse effects of potential failures on functional safety.
c Independent parallel redundancy can be realized as dissimilar software in each parallel path.

ISO 26262-6, Table 5 — Mechanisms for error handling at the software architectural level (with add-ons from SGS-TÜV Saar)
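
A small, self-contained C sketch of graceful degradation (method 1b); the function names, priorities and the budget model are invented for this example and are not part of the training project:

#include <stdio.h>

static void brake_monitoring(void) { puts("brake monitoring executed"); }
static void comfort_lighting(void) { puts("comfort lighting executed"); }

typedef struct {
    void (*run)(void);
    unsigned priority;   /* 0 = highest (safety-relevant); array is ordered by priority */
    unsigned cost;       /* assumed runtime cost per cycle                              */
} sw_function_t;

static const sw_function_t functions[] = {
    { brake_monitoring, 0u, 3u },
    { comfort_lighting, 3u, 5u },
};

/* Lower-priority functions are skipped when the remaining budget of the
 * cycle is not sufficient to run them. */
static void scheduler_cycle(unsigned budget)
{
    for (unsigned i = 0u; i < sizeof functions / sizeof functions[0]; i++) {
        if (functions[i].cost <= budget) {
            functions[i].run();
            budget -= functions[i].cost;
        }
    }
}

int main(void)
{
    scheduler_cycle(10u);   /* enough budget: both functions run        */
    scheduler_cycle(4u);    /* degraded: only the safety function runs  */
    return 0;
}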

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 98
VERIFICATION OF SW ARCHITECTURE:
METHODS
Objectives :
 Compliance with SW SRS
 Compatibility with the target hardware
 Demonstration of guidelines
Methods (ratings for ASIL A / B / C / D), with comments:
1a Walk-through of the design (a): ++ + o o (discussion with a reviewer)
1b Inspection of the design (a): + ++ ++ ++ (review acc. to a defined process, e.g. use of checklists)
1c Simulation of dynamic parts of the design (b): + + + ++ (model-based development)
1d Prototype generation: o o + ++ (model-based development)
1e Formal verification: o o + + (model-based development; needs an abstract model)
1f Control flow analysis (c): + + ++ ++ (analysis of the correct program flow)
1g Data flow analysis (c): + + ++ ++ (e.g. SW-FMEA)
a In the case of model-based development these methods can be applied to the model.
b Method 1c requires the usage of executable models for the dynamic parts of the software architecture.
c Control and data flow analysis may be limited to safety-related components and their interfaces.
ISO 26262-6, Table 6 — Methods for the verification of the software architectural design (add-ons by SGS-TÜV Saar)
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 99
DIGRESSION: SFMEA (NASA-GB-1740.13)

 An SFMEA (Software Failure Modes and Effects Analysis) identifies key
software fault modes for data and software actions.
 It analyzes the effects of abnormalities on the system and components
in the system.
 Each component is examined, and all the ways it can fail are listed.
 SFMEA can potentially identify:
 Hidden and unanticipated failure modes,
 System interactions,
 Dependencies within the SW and between SW and HW
 Unstated assumptions
 Inconsistencies between the requirements and the design
 Safety Analysis at the software architectural level is required
by ISO 26262 (Edt 1), but the method is not described in detail
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 100
EXAMPLE
SOFTWARE FMEA

Columns of the SFMEA: software component (function) / description / failure mode /
functions affected / local effect (function level) / system effect (sensor level,
considering mitigation that is already available) / does the system (vehicle level)
impact map to any of the hazards identified in the HARA? (hazard ID, title and ASIL
ranking).

Example rows for component b1 "Testunit_2" (test of the flash memory at start-up);
the "functions affected" column is still to be clarified for all rows:

 Fails to execute (does not run): flash error not detected, system behaviour can be
undefined  sensor delivers a wrong value if the watchdog does not detect a failure
in the program behaviour  wrong output signal / malfunction of the safety
function / ASIL D
 Incomplete execution: flash test is not completed, no output  sensor does not
provide any output
 Timing error (sum of the start-up timing does not meet the requirement): no local
effect, because the sensor does not take care of timing during initialization 
sensor delivers the first output delayed
 Error in execution (runs, but memory has errors): flash error not detected, system
behaviour can be undefined  sensor delivers a wrong value if the watchdog does not
detect a failure in the program behaviour
 Error in execution (not ok, but memory ok): specified behaviour: reset of the
sensor  sensor does not provide any output

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 101
SOFTWARE UNIT DESIGN AND
IMPLEMENTATION

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 102
SOFTWARE UNIT DESIGN AND
IMPLEMENTATION - ACTIVITIES

ASIL
Methods Comments
A B C D

1a Natural language ++ ++ ++ ++ Always required
1b Informal notations ++ ++ + + Syntax and semantics are not defined
1c Semi-formal notations + ++ ++ ++ Syntax is defined, semantics not completely defined
1d Formal notations + + + + Syntax and semantics are defined

ISO 26262-6, Table 7 — Notations for software unit design (add ons by SGS TÜV Saar)

 To specify the software unit design it is mandatory to use natural language


plus notations (rules for syntax and semantic)

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 103
SOFTWARE UNIT
AND EMBEDDED SOFTWARE

 A software unit is
 an atomic-level software component of the software architecture that
can be subjected to stand-alone testing.

 Embedded software is
 fully integrated software to be executed on a processing element
(e.g. a microcontroller, a Field Programmable Gate Array (FPGA) or an
Application Specific Integrated Circuit (ASIC))

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 104
SOFTWARE UNIT DESIGN AND
IMPLEMENTATION - METHODS
ASIL Umsetzung (Beispiele)
Methods
A B C D Tool Interpretation
One entry and one exit point in
1a ++ ++ ++ ++ MISRA-Rules
subprograms and functions a
No dynamic objects or variables, or else
1b + ++ ++ ++ MISRA-Rules
online test during their creation a,b
1c Initialization of variables ++ ++ ++ ++ MISRA-Rules
a
1d No multiple use of variable names + ++ ++ ++ MISRA-Rules
Avoid global variables or else justify their
1e + + ++ ++ MISRA-Rules
usage a
No pointer arithmetic, no pointres at fucntcions,
no pointer at.
1f Limited use of pointers a o + + ++ MISRA-Rules
Arrayindexing with pointers is allowed, HW access
with pointers is allowed
a,b
1g No implicit type conversions + ++ ++ ++ MISRA-Rules
c No: continue statements, backwards gotos,
1h No hidden data flow or control flow + ++ ++ ++ MISRA-Rules
unreachable code, dead code (e.g. A=A;)
1i No unconditional jumps a,b,c ++ ++ ++ ++ MISRA-Rules No: continue statements, backwards gotos
1j No recursions + + ++ ++ MISRA-Rules
a Methods 1a, 1b, 1d, 1e, 1f, 1g and 1i may not be applicable for graphical modelling notations used in model-based development.
b Methods 1g and 1i are not applicable in assembler programming.
c Methods 1h and 1i reduce the potential for modelling data flow and control flow through jumps or global variables.
ISO 26262-6, Table 8 — Design principles for software unit design and implementation (add ons by SGS TÜV Saar)
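To make the principles above more concrete, the following editor-provided sketch (not taken from ISO 26262 or the MISRA rules; the function and variable names are invented) shows methods 1a, 1c and 1g applied in C++:

#include <cstdint>

// 1a: one entry and one exit point; 1c: variables initialized at declaration;
// 1g: no implicit type conversion (the narrowing conversion is made explicit).
std::int16_t limit_torque(std::int32_t requested, std::int32_t max_allowed)
{
    std::int32_t result = 0;                       // 1c: initialized at declaration

    if (requested > max_allowed) {
        result = max_allowed;
    } else if (requested < 0) {
        result = 0;
    } else {
        result = requested;
    }

    return static_cast<std::int16_t>(result);      // 1a: single exit; 1g: explicit conversion
}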

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 105
SOFTWARE UNIT DESIGN AND
IMPLEMENTATION - VERIFICATION

Methods | ASIL A B C D

1a Walk-through (a) | ++ + o o
1b Inspection (a) | + ++ ++ ++
1c Semi-formal verification | + + ++ ++
1d Formal verification | o o + +
1e Control flow analysis (b,c) | + + ++ ++
1f Data flow analysis (b,c) | + + ++ ++
1g Static code analysis | + ++ ++ ++
1h Semantic code analysis (d) | + + + +

Comments: see next slides.
a In the case of model-based software development the software unit specification design and implementation can be verified at
the model level.
b Methods 1e and 1f can be applied at the source code level. These methods are applicable both to manual code development
and to model-based development.
c Methods 1e and 1f can be part of methods 1d, 1g or 1h.
d Method 1h is used for mathematical analysis of source code by use of an abstract representation of possible values for the
variables. For this it is not necessary to translate and execute the source code.
ISO 26262-6, Table 9 — Methods for the verification of software unit design and implementation (add ons by SGS TÜV Saar)

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 106
SW UNIT DESIGN
VERIFICATION BY INSPECTION

Example: Checklist
Checklist for Code Reviews, (list is not complete)
1. Has the design properly been translated into code? (The results of the procedural design should be
available during this review.)
2. Is the document header complete:
a) the title, referring to the scope of the content,
b) the author and approver,
c) unique identification of each different revision (version) of a document,
d) the change history,
e) the status. E.g.: “draft", "released".
3. Are there misspellings and typos?
4. Is there compliance with coding standards for language style, comments, module prologue?
5. Are there incorrect or ambiguous comments?
6. Are comments useful or are they simply alibis for poor coding?
7. Are data types and data declarations proper?
8. Are physical constants correct?
9. Has maintainability been considered?
and so on ….

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 107
SW UNIT DESIGN
VERIFICATION BY THE USE OF CONTROL
FLOW ANALYSIS

 Objective
Detection of bad or incorrect program structures
 Description
The control flow analysis is a static analysis method. The
program is analysed to derive a flow graph, which is then
checked for:
• Unreachable code, as a result of unconditional jumps
• Knotted code, i.e. badly structured code (see example next page)
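A minimal, deliberately defective C++ fragment of the kind such an analysis reports (editor-provided illustration; the function name is invented); the unconditional returns make the final statement unreachable:

int saturate_speed(int speed)
{
    if (speed > 250) {
        return 250;       // unconditional exit on this path
    } else {
        return speed;     // unconditional exit on this path
    }
    speed = 0;            // unreachable code: flagged by control flow analysis
}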

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 108
SW UNIT DESIGN
VERIFICATION BY THE USE OF CONTROL
FLOW ANALYSIS - EXAMPLE

[Figure: flow graph of well-structured code vs. badly structured (knotted) code.
Legend: knot (node) = instruction, path (edge) = control flow]

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 109
SW UNIT DESIGN
VERIFICATION BY THE USE OF
DATA FLOW ANALYSIS

 Objective
Identification of bad or incorrect program structures
 Description
The data flow analysis is a static method, which is typically combined
with the information from the control flow analysis. The analysis checks
the following:
• Variables which can be read without initialization
• Variables which can be written several times without being read in between. This could be
an indication of skipped code
• Variables which are written but never read. This could be an indication of
redundant code

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 110
SW UNIT DESIGN
VERIFICATION BY THE USE OF
STATIC ANALYSIS

 The goal of static analysis is the detection of existing errors in a


document (e.g. source code).
 Static Analysis
 An evaluation process in which a software program is systematically
assessed without necessarily executing the program.
 All static analysis methods can in principle be performed without tool
support. With the exception of inspection and review techniques, however,
tool support is useful.
 The evaluation is typically computer-aided and usually includes the
analysis of features such as program logic, data paths, interfaces and
variables
 Typically combines Control Flow and Data Flow Analysis

 Static code analysis can be seen as “state of the art” at software unit level
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 111
SW UNIT DESIGN
VERIFICATION BY THE USE OF
DATA FLOW ANALYSIS - EXAMPLE

void MinMax (int &min, int &max)
{
    int hilf;
    if (min > max)
    {
        max = hilf;    // Alarm: variable hilf is not initialized
        max = min;     // Warning: variable max is written several times but never read in between
        hilf = min;    // Warning: variable hilf is written but never read
    }
}
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 112
DAY 3
EXERCISE 11
Static Code Analysis
Please note your comments
01 int main ()
02 {
03 char c = 'x';
04 int y;
05 int i;
06 while (c != 'x');
07 {
08 c = getchar ();
09 if (c == 'e') return 0;
10 switch (c)
11 {
12 case '\n':
13 printf ("Zeilenwechsel\n");
14 default:
15 printf ("%c",c);
16 }
17 }
18 printf ("\nreturn now");
19 return y;
20 }
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 113
CONTENTS

1. Hardware Development
a. Overview
b. Hardware Safety Requirements
c. Hardware Design
d. Evaluation of Hardware Metrics
e. Hardware Tests

2. Software Development
a. Overview
b. Software Safety Requirements
c. Software Architecture and Design
d. Software Integration and Tests

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 114
SOFTWARE-INTEGRATION AND TEST

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 115
PLANNED INTEGRATION

 Software integration shall be a planned integration of all software


units considering the functional interactions/context and interfaces
 Software integration can be processed in one or more steps.
 A big bang strategy can be confusing
 A stepwise procedure is recommended

[Figure: stepwise integration procedure vs. big bang integration]

 Integration means a stepwise building up of the planned software architecture


ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 116
SOFTWARE-UNIT TESTING

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 117
SOFTWARE UNIT TESTING
ACTIVITIES AND AIMS

Objectives:
 Compliance with SW unit design specification
 Compliance with HW/SW interface specification
 Demonstration of correct implementation of the functionality
 Demonstration that no unintended functionality has been
implemented
 Robustness
 Demonstration that sufficient resources for functionality are available

Activities during this phase:


 Selection of suitable test cases with PASS / FAIL criteria
 Execution and documentation of tests
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 118
DERIVING TEST CASES FROM
SPECIFICATIONS

 This is the standard approach for test case derivation in practice.
 Start of the derivation of test cases is possible after release of the specification.
 The quality of the test cases depends on the quality of the requirements.
 Example:
 Specification: One procedure named “InterriorLightning” shall switch
the interior lighting on or off depending on the input value.
 The following test cases are used during testing:

Test Case | Input Value | Expected Output Value
TF1 | (LIGHT_ON) | Interior lighting on
TF2 | (LIGHT_OFF) | Interior lighting off
TF3 | () | ERROR
TF4 | (LIGHT) | ERROR
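A sketch of how these four test cases could be automated in a unit test driver; the signature, input type and return values of the procedure are assumptions made for illustration only:

#include <cassert>
#include <string>

enum class LightState { On, Off, Error };

// Assumed implementation stub of the unit under test (for illustration only).
LightState InterriorLightning(const std::string& input)
{
    if (input == "LIGHT_ON")  { return LightState::On;  }
    if (input == "LIGHT_OFF") { return LightState::Off; }
    return LightState::Error;        // empty or unknown input value
}

int main()
{
    assert(InterriorLightning("LIGHT_ON")  == LightState::On);     // TF1
    assert(InterriorLightning("LIGHT_OFF") == LightState::Off);    // TF2
    assert(InterriorLightning("")          == LightState::Error);  // TF3
    assert(InterriorLightning("LIGHT")     == LightState::Error);  // TF4
    return 0;
}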
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 119
DERIVATION OF TEST CASES – EXAMPLE
OF BOUNDARY VALUE ANALYSIS

 A 10-bit digital-to-analog converter has a valid input value
range from 0 to 1023.
 Create test cases with the following input values:

Test Case No. | Description (test value) | Input value | Expected Output Value
1 | below minimum | -1 | Error
2 | at minimum | 0 | 0
3 | above minimum | 1 | 1
4 | below maximum | 1022 | 1022
5 | at maximum | 1023 | 1023
6 | above maximum | 1024 | Error
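A hedged sketch of these boundary value test cases as an automated unit test; the interface of the converter function (its name and the error signalling via -1) is an assumption for illustration:

#include <cassert>

// Assumed interface of the unit under test: valid 10-bit input range 0..1023,
// invalid inputs are reported as -1 ("Error").
int convert_value(int input)
{
    if (input < 0 || input > 1023) {
        return -1;
    }
    return input;
}

int main()
{
    assert(convert_value(-1)   == -1);      // 1: below minimum
    assert(convert_value(0)    ==  0);      // 2: at minimum
    assert(convert_value(1)    ==  1);      // 3: above minimum
    assert(convert_value(1022) == 1022);    // 4: below maximum
    assert(convert_value(1023) == 1023);    // 5: at maximum
    assert(convert_value(1024) == -1);      // 6: above maximum
    return 0;
}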

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 120
SW UNIT TESTING
METHODS
Methods | ASIL A B C D

1a Requirements-based test (a) | ++ ++ ++ ++
1b Interface test | ++ ++ ++ ++
1c Fault injection test (b) | + + + ++
1d Resource usage test (c) | + + + ++
1e Back-to-back comparison test between model and code, if applicable (d) | + + ++ ++

Comments: see next slides.

a The software requirements at the unit level are the basis for this requirements-based test.
b This includes injection of arbitrary faults (e.g. by corrupting values of variables, by introducing code mutations, or by corrupting
values of CPU registers).
c Some aspects of the resource usage test can only be evaluated properly when the software unit tests are executed on the
target hardware or if the emulator for the target processor supports resource usage tests.
d This method requires a model that can simulate the functionality of the software units. Here, the model and code are stimulated
in the same way and results compared with each other.

ISO 26262-6, Table 10 — Methods for software unit testing (add ons by SGS TÜV Saar)

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 121
SW UNIT TESTING
REQUIREMENTS BASED TEST
EXAMPLE

Requirement | Function | Test Case | Test Status
R1.1 | Create a user ID input box. | TC1.1 | PASSED
R1.2 | Create a Password input box. | TC1.2 | PASSED
R1.3 | Create a Submit button. | TC1.1, TC1.2, TC1.3 | PASSED
R1.4 | Create a Cancel button. | TC1.1, TC1.2, TC1.4 | PASSED
R2.1 | Check User ID value. | TC2.1 | FAILED
R2.2 | Check User Password value. | TC2.2 | EXECUTED
R3.1 | Display Homepage. | TC3.1 | OPEN

 Requirements-based testing means to have at least one test case
per safety requirement
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 122
INTERFACE TESTING

 This method demonstrates the consistency and correct


implementation of interfaces, e.g. HW-SW Interface or
interfaces between software components.
 Interface tests comprise tests of analog and digital inputs
and outputs, boundary tests and tests with equivalence
classes, in order to fully test the specified interface
(including compatibility, timing and the specified design)
 Internal interfaces can be tested by static tests of the
software / hardware compatibility as well as by dynamic
tests.
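An illustrative sketch of a dynamic interface test against a mocked hardware register; the register, the channel layout and the function set_output are assumptions for illustration, in a real project they would come from the HW/SW interface specification:

#include <cassert>
#include <cstdint>

// Mocked hardware register (assumption: in the real HW/SW interface this
// would be a memory-mapped register described in the HSI specification).
static std::uint8_t port_register = 0u;

// Interface function under test (assumed example).
void set_output(std::uint8_t channel, bool on)
{
    if (on) {
        port_register |= static_cast<std::uint8_t>(1u << channel);
    } else {
        port_register &= static_cast<std::uint8_t>(~(1u << channel));
    }
}

int main()
{
    set_output(3, true);                    // switch channel 3 on
    assert((port_register & 0x08u) != 0u);  // expected bit is set

    set_output(3, false);                   // switch channel 3 off
    assert((port_register & 0x08u) == 0u);  // expected bit is cleared
    return 0;
}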

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 123
SW UNIT TESTING
FAULT INJECTION TEST

 Objective: Check for robustness of each SW-unit


 Improvement of test coverage
 Test of code that cannot be reached easily
(e.g. failure-handling code).
 Examples:
 Manipulation of CPU registers by the use of a debugger
 Manipulation of variables by the use of a debugger
 Manipulation of HW signals by the use of a special test board
 Manipulation of source code before compilation (code mutation),
e.g. a = a + 1 becomes a = a - 1 (see the sketch below)
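A small sketch of fault injection at unit level by corrupting an input variable so that the failure-handling branch is executed; the unit under test and its plausibility limit are assumptions for illustration:

#include <cassert>
#include <cstdint>

// Unit under test (assumed): plausibility check with a failure-handling branch.
int check_sensor(std::uint16_t raw_value)
{
    if (raw_value > 1023) {      // failure-handling code, hard to reach in normal operation
        return -1;               // report an implausible value
    }
    return static_cast<int>(raw_value);
}

int main()
{
    std::uint16_t injected = 0xFFFF;          // fault injection: corrupted variable value
    assert(check_sensor(injected) == -1);     // the failure-handling branch is now executed

    assert(check_sensor(512) == 512);         // nominal behaviour unchanged
    return 0;
}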

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 124
RESOURCE USAGE TESTING

 Objective: Evaluation that the requirements resulting from the HW
architecture are fulfilled with sufficient tolerance:
 Minimum and maximum process execution times,
 Memory usage, e.g.:
RAM for stack and heap,
ROM for executable code and non-volatile data.
 Bandwidth for communication resources (e.g. data bus).

 Some aspects of the resource usage test can only be


evaluated properly when the software unit tests are
executed on the target hardware or if the emulator for the
target processor supports resource usage tests.

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 125
SW UNIT TESTING
RESOURCE USAGE TEST
EXAMPLE

 Measured memory use after testing:

Function | ROM (kB) | RAM (kB) | EEPROM (kB)
function 1 | 20.30 | 50.00 | 0.00
function 2 | 60.21 | 38.21 | 0.02
function 3 | 42.43 | 77.02 | 1.27
… | .. | .. | ..
Test sum | 821.21 | 450.23 | 12.21

 Resource usage test analysis:

Memory size (kB) | ROM | RAM | EEPROM
Board size | 1024 | 1024 | 64
Test sum | 821.21 | 450.23 | 12.21
Free | 202.79 | 573.77 | 51.79

 Stack memory usage analysis:

Memory size (kB)
Max stack definition | 256
Stack use | 125.33
Free | 130.67

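The evaluation of such measurements can be automated; the following editor-provided sketch checks the example figures above against the available memory and an assumed project-specific margin (the 10 % value is an illustration, not an ISO 26262 requirement):

#include <cstdio>

// Check the resource usage results against the available memory with a margin.
struct MemoryCheck {
    const char* name;
    double available;   // board size in kB
    double used;        // measured test sum in kB
};

bool has_margin(const MemoryCheck& m, double required_free_ratio)
{
    double free_ratio = (m.available - m.used) / m.available;
    std::printf("%-7s used %7.2f of %7.2f kB, free %5.1f %%\n",
                m.name, m.used, m.available, 100.0 * free_ratio);
    return free_ratio >= required_free_ratio;
}

int main()
{
    const MemoryCheck checks[] = {
        {"ROM",    1024.0, 821.21},
        {"RAM",    1024.0, 450.23},
        {"EEPROM",   64.0,  12.21},
        {"Stack",   256.0, 125.33},
    };

    bool ok = true;
    for (const MemoryCheck& c : checks) {
        ok = has_margin(c, 0.10) && ok;      // require at least 10 % free (assumed margin)
    }
    return ok ? 0 : 1;
}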
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 126
BACK-TO-BACK TESTING

 This method demonstrates the correct implementation of


functional and technical safety requirements.
 Comparison of a simulated behaviour and the behaviour
of the implementation using the same test case.
 Comparison of the results of test cases performed with a
simulation and with the corresponding implementation.
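A minimal back-to-back sketch, assuming the model is available as an executable reference function (e.g. exported from the modelling tool) and the implementation is the corresponding C++ code; both are stimulated with the same inputs and the results are compared (all names and values are invented for illustration):

#include <cassert>

// Reference behaviour, e.g. exported from the simulation model (assumption).
int model_saturate(int x)
{
    return (x > 100) ? 100 : (x < 0 ? 0 : x);
}

// Implementation under test (assumption).
int code_saturate(int x)
{
    if (x > 100) { return 100; }
    if (x < 0)   { return 0;   }
    return x;
}

int main()
{
    // Stimulate model and code with the same test vector and compare the results.
    const int stimuli[] = {-10, 0, 1, 50, 99, 100, 101, 1000};
    for (int x : stimuli) {
        assert(model_saturate(x) == code_saturate(x));
    }
    return 0;
}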

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 127
SW UNIT TESTING
TEST COVERAGE

Methods | ASIL A B C D

1a Statement coverage | ++ ++ + +
1b Branch coverage | + ++ ++ ++
1c MC/DC (Modified Condition/Decision Coverage) | + + + ++

Comments: see next slides.

ISO 26262-6, Table 12 — Structural coverage metrics at the software unit level (add ons by SGS TÜV Saar)

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 128
STRUCTURAL COVERAGE

 At SW unit level a distinction is made between


 Statement Coverage
(i.e. percentage of statements within the software that
have been executed)
 Branch Coverage
(i.e. percentage of branches of the control flow that have
been executed)
 MC/DC (Modified Condition/Decision Coverage) refer to
next slide

 There are no target values given for the structural test coverage.
This means 100% is the target
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 129
MC/DC (MODIFIED CONDITION/DECISION
COVERAGE)

 MC/DC:
 Every point of entry and exit in the program has been invoked at least once,
 every condition in a decision in the program has taken on all possible outcomes at
least once,
 and each condition has been shown to affect that decision outcome independently.
 A condition is shown to affect a decision’s outcome independently by varying
just that condition while holding fixed all other possible conditions. The
condition/decision criterion does not guarantee the coverage of all conditions in
a module because in many test cases, some conditions of a decision are
masked by the other conditions.
 Using the modified condition/decision criterion, each condition must be shown to
be able to act on the decision outcome by itself, everything else being held
fixed. The MC/DC criterion is thus much stronger than the condition/decision
coverage.

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 130
SIMPLE C++ EXAMPLE

‘fct_example’ is part of a bigger program and this program was run with some test suite.

int fct_example (int x, int y)
{
    int z = 0;
    if ((x>0) && (y>0)) {
        z = x;
    }
    return z;
}

Execute the following test cases to reach 100% structural coverage:
 branch coverage: ‘fct_example (0,1)’, ‘fct_example (1,1)’
 condition/decision coverage: the function was called as
‘fct_example (1,1)’, ‘fct_example (0,1)’, ‘fct_example (1,0)’
and ‘fct_example (0,0)’
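For the decision (x>0) && (y>0) above, a possible minimal MC/DC test set consists of three calls, each varying one condition while the other is held fixed (editor-provided illustration, not taken from ISO 26262):

#include <cassert>

int fct_example (int x, int y)
{
    int z = 0;
    if ((x>0) && (y>0)) {
        z = x;
    }
    return z;
}

int main()
{
    // MC/DC for the decision (x>0) && (y>0):
    assert(fct_example(1, 1) == 1);   // both conditions true, decision true
    assert(fct_example(0, 1) == 0);   // only (x>0) changed, decision false: (x>0) shown to act independently
    assert(fct_example(1, 0) == 0);   // only (y>0) changed, decision false: (y>0) shown to act independently
    return 0;
}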
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 131
SOFTWARE INTEGRATION TESTING

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 132
SOFTWARE INTEGRATION AND TEST
METHODS

Methods | ASIL A B C D

1a Requirements-based test (a) | ++ ++ ++ ++
1b Interface test | ++ ++ ++ ++
1c Fault injection test (b) | + + ++ ++
1d Resource usage test (c,d) | + + + ++
1e Back-to-back comparison test between model and code, if applicable (e) | + + ++ ++

Comments: see SW-Unit Testing.

a The software requirements at the architectural level are the basis for this requirements-based test.
b This includes injection of arbitrary faults in order to test safety mechanisms (e.g. by corrupting software or hardware components).
c To ensure the fulfilment of requirements influenced by the hardware architectural design with sufficient tolerance, properties such as average and maximum processor performance, minimum or maximum execution times, storage usage (e.g. RAM for stack and heap, ROM for program and data) and the bandwidth of communication links (e.g. data buses) have to be determined.
d Some aspects of the resource usage test can only be evaluated properly when the software integration tests are executed on the target hardware or if the emulator for the target processor supports resource usage tests.
e This method requires a model that can simulate the functionality of the software components. Here, the model and code are stimulated in the same way and results compared with each other.

ISO 26262-6, Table 13 — Methods for software integration testing (add ons by SGS TÜV Saar)

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 133
SOFTWARE INTEGRATION UND TEST
DERIVATION OF TEST CASES

Methods | ASIL A B C D

1a Analysis of requirements | ++ ++ ++ ++
1b Generation and analysis of equivalence classes (a) | + ++ ++ ++
1c Analysis of boundary values (b) | + ++ ++ ++
1d Error guessing (c) | + + + +

Comments: see SW-Unit Testing.

a Equivalence classes can be identified based on the division of inputs and outputs, such that a representative test value can be
selected for each class.
b This method applies to interfaces, values approaching and crossing the boundaries and out of range values.
c Error guessing tests can be based on data collected through a “lessons learned” process and expert judgment.

ISO 26262-6, Table 14 — Methods for deriving test cases for software integration testing (add ons by SGS TÜV Saar)

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 134
SOFTWARE INTEGRATION AND TEST
TEST COVERAGE

Methods | ASIL A B C D

1a Function coverage (a) | + + ++ ++
1b Call coverage (b) | + + ++ ++

Comments: see next slides.

a Method 1a refers to the percentage of executed software functions. This evidence can be achieved by an appropriate
software integration strategy.
b Method 1b refers to the percentage of executed software function calls.

ISO 26262-6, Table 15 — Structural coverage metrics at the software architectural level (add ons by SGS TÜV Saar)

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 135
STRUCTURAL COVERAGE

 At SW architectural level a distinction is made
between
 Function coverage (acc. to ISO 26262):
(i.e. the percentage of executed software functions)
 Call coverage (acc. to ISO 26262):
(i.e. the percentage of executed software function calls)

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 136
SIMPLE C++ EXAMPLE

‘fct_example’ is part of a bigger program and this program was run with some test suite.

int fct_example (int x, int y)
{
    int z = 0;
    if ((x>0) && (y>0)) {
        z = x;
    }
    return z;
}

Execute the following test cases to reach 100% structural coverage:
 function coverage: the function ‘fct_example’ was called at
least once during the test run.
 statement coverage: the function was called as
‘fct_example (1,1)’
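Call coverage refers to the call sites rather than to the called function itself; in the following editor-provided sketch, ‘fct_example’ has two assumed call sites, and 100 % call coverage requires that both of them are executed during the test run:

#include <cassert>

int fct_example (int x, int y)
{
    int z = 0;
    if ((x>0) && (y>0)) {
        z = x;
    }
    return z;
}

// Two call sites of fct_example in the integrated software (assumed example).
int caller_a(int x) { return fct_example(x, 1); }   // call site 1
int caller_b(int y) { return fct_example(1, y); }   // call site 2

int main()
{
    // 100 % call coverage: both call sites are executed at least once.
    assert(caller_a(2) == 2);
    assert(caller_b(3) == 1);
    return 0;
}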
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 137
VERIFICATION OF SOFTWARE SAFETY REQUIREMENTS

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 138
VERIFICATION OF SOFTWARE SAFETY
REQUIREMENTS – ACTIVITIES AND AIMS

 ISO 26262-6, section 10, Activities during this phase:


– Demonstrate that the embedded software fulfils the software safety
requirements

Methods | ASIL A B C D | Comments

1a Hardware-in-the-loop | + + ++ ++ | HIL-Test
1b Electronic control unit network environments (a) | ++ ++ ++ ++ | Simulation
1c Vehicles | ++ ++ ++ ++ | Tests in the vehicle
a Examples include test benches partially or fully integrating the electrical systems of a vehicle, “lab-cars” or “mule” vehicles,
and “rest of the bus” simulations.

ISO 26262-6, Table 16 — Test environments for conducting the software safety requirements verification (add ons by SGS TÜV Saar)

 Verification tests are typically done together with software integration tests
ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 139
SUMMARY OF DAY 3
SOFTWARE DEVELOPMENT (1)

 Functional Safety in the software development is mainly based on the use of


processes, techniques and methods
 Initiation of software development means to plan activities and select tools,
techniques and methods in line with the Functional Safety rules of the company
 Software Safety Requirements are derived from safety requirements at
component (system) level
 The use of notations shall avoid systematic faults by ensuring a unique
interpretation
 Safety Analysis at the software architectural level is required by ISO 26262
(Edition 1), but the method is not described in detail
 Static code analysis can be seen as “state of the art” at software unit level

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 140
SUMMARY OF DAY 3
SOFTWARE DEVELOPMENT (2)

 Integration means a stepwise building up of the planned software architecture


 Requirements-based testing means to have at least one test case per safety
requirement
 There are no target values given for the structural test coverage.
This means 100% is the target
 Verification tests are typically done together with software integration tests

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 141
PLENUM: OPEN DISCUSSION

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 142
Thank you for your attention!

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 143
CONTACT GCC FS

Munich, Germany (Headquarters)


SGS-TÜV Saar GmbH
Functional Safety
Hofmannstrasse 50, Phone +49 89 787475-280
D-81379 Munich fs@sgs.com

Dortmund, Germany (Branch Office)


SGS-TÜV Saar GmbH
Joseph-von-Fraunhofer-Str. 13, Phone +49 231 9742-7323
D-44227 Dortmund de.do.fs@sgs.com

Japan
SGS Japan Inc.
2-2-1, Minatomirai, Nishi-ku
The Landmark Tower Yokohama 38F Phone +81 45 330 5040
220-8138 Yokohama jp.fs@sgs.com

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 144
Thank you for your attention!

ISO 26262 Training - Day 3 © SGS-TÜV Saar GmbH 2017 ALL RIGHTS RESERVED 145