Deliverable D6.1
Current test condition and Benchmarking report
Authors
1 Executive Summary
The present document constitutes the first issue of Deliverable D6.1 “Current test
condition and benchmarking report” in the framework of the project titled “Start-up
activities for Advanced Signalling and Automation Systems” (Project Acronym: X2Rail-1;
Grant Agreement No 730640).
The key objective of zero on-site testing is to perform functional and non-functional
tests (component tests, integration tests and system tests) in the laboratory instead of
on-site, in order to save time and costs without compromising safety.
A status-quo analysis within the railway sector has been performed by means of a
questionnaire sent to suppliers, infrastructure managers and research institutes, in
order to get an overview of today’s testing practice. Due to the focus on signalling
systems, no railway undertakings participated in the questionnaire. In this analysis,
areas of improvement regarding testing have been identified, even though there is no
harmonised list of tests that should be shifted from on-site to laboratory, as the system
is too complex. Component tests are mainly done in laboratories, where the required
test environment is comparably easy to build up, whereas system tests are done
on-site, as the real environmental conditions, including human behaviour, are
important, e.g. for acceptance tests.
Overall, laboratory testing can enhance the quality of the product, improve bug fixing
and speed up development, as it enables parallelisation of different activities and
increases trust in the safety level.
Moreover, a benchmarking with safety-critical industries outside the railway sector has
been performed, in order to compare lessons learnt for railway system applications
against those of other sectors and give them a wider context. For this purpose, another
questionnaire has been elaborated and sent out via the project partners. It has been
discovered that there are big differences between sectors in how testing feeds into the
standardisation of approval processes. For various reasons, nearly all sectors perform
tests both in a laboratory environment and on-site. Building up a laboratory environment
with realistic input data for the simulations is quite time-consuming and costly, but it
allows the testing of worst-case scenarios, which may not be possible in a real trackside
environment (without safety impacts) and which is necessary to fully understand the
products and services to be developed. As a result, quite good test coverage is
achieved.
GA 730640 Page 3 of 77
Table of Contents
1 EXECUTIVE SUMMARY ............................................................................................................................ 3
3 BACKGROUND ........................................................................................................................................... 8
7 HARMONISATION AND AUTHORISATION ACTIVITIES .................................................................... 48
8 CONCLUSION ........................................................................................................................................... 57
9 REFERENCES ........................................................................................................................................... 59
10 GLOSSARY ............................................................................................................................................... 60
11 APPENDICES ............................................................................................................................................ 64
3 Background
The present document constitutes the first issue of Deliverable D6.1 “Current test
condition and benchmarking report” in the framework of the project titled “Start-up
activities for Advanced Signalling and Automation Systems” (Project Acronym: X2Rail-1;
Grant Agreement No 730640).
Shift2Rail (S2R) is the first joint European rail technology initiative to pursue focused
research and innovation (R&I) and market-driven solutions, by accelerating the
integration of new and advanced technologies into innovative rail product solutions.
Shift2Rail will promote the competitiveness of the European rail industry and will meet
the changing EU transport needs. The R&I activities are carried out under the Horizon
2020 initiative and will develop the technology necessary to complete the Single
European Railway Area (SERA). Further information can be found at
http://shift2rail.org/.
The X2Rail-1 project aims to research and develop six selected key technologies to
foster innovations in the field of railway signalling and automation systems. The project
is part of a longer term Shift2Rail IP2 strategy towards a flexible, real-time, intelligent
traffic management and decision support system.
In particular, Work Package 6 (WP6) “Zero on-site Testing” focuses on testing activities
to be standardised within SERA. System and Integration Test (SIT) is a fundamental
method of system verification across a wide range of industrial sectors. Experience
shows that SIT accounts for 30 % to 50 % of project costs and time. Due to the
complexity of signalling systems and the differences between sites, a large number of
tests must be carried out on-site, which takes about 5 to 10 times the effort of similar
laboratory tests. Reducing on-site tests for signalling systems is hence a reasonable
approach to reducing testing costs. WP6, as part of the X2Rail-1 project, will make
further improvements, and the results will be disseminated on a European level.
This report is the deliverable of Task 6.2 (Assessment of status quo in field testing and
benchmarking) and therefore the first deliverable of WP6.
[Figure: roles of the questionnaire participants — Supplier 50 %, Customer 33 %, Validator 6 %, Research centre 6 %, Impartial Test Laboratory 5 %]
From section 5.6 on, companies which identified themselves as validators have been
classified in the customer category, while companies which identified themselves
The questionnaire continued with topic related questions, starting with ETCS:
1. ETCS test cases have already been specified on a European level (ERA subset-
076 [3]).
1.1. Do you think these test sequences and test cases are sufficient with regard to
interoperability, safety, operational rules, etc.?
1.2. If not, where does the need for further test cases come from?
1.3. What are the risks?
1.4. Which problems occur during operation? Which subsystems were affected?
Please explain the unexpected effects on the system.
2. What products/services do you test besides the ones already mentioned in question
1 (ETCS Subset-076 [3])? Please list the most important tests.
3. Which type of tests do you perform in order to evaluate your product/service?
3.1. Do you perform tests in a laboratory environment or on-site (on real
infrastructure / in a real environment)?
3.2. If laboratory environment: Are you performing the tests in an in-house
laboratory or at an external company/institute? If external, please give the
name of the organization.
3.3. If laboratory environment: Why do you perform tests in the laboratory?
3.4. If on-site: real track, test track or temporary track?
3.5. Which of the following test procedures are automated in your company?
3.6. What is the effort of these types of test?
3.7. How are the types of test created?
4. Why do you perform tests in the laboratory (e.g. legal regulations, time savings,
cost savings)? Please specify the effects.
5. Do you use on-site testing as fall back, if laboratory testing was not finished in
time?
6. Do you see on-site testing as complementary to laboratory testing, with a specific
interest?
7. If your product is updated, do you repeat the full tests? Do you have a strategy to
avoid that?
8. What are you doing to gain trust in the tests executed in the laboratory?
9. Can you think of any ways to increase the quality of your tests? Which ones?
10. Do you think formal verification techniques can replace testing?
11. If yes, which formal verification techniques may replace testing?
12. Do you recommend shifting any further tests from on-site to laboratory? What is
stopping you from doing these tests in the laboratory today?
13. Which tests do you plan to shift? And what are the necessary test system limits
and borders for your shifted tests?
14. Which tests do you think can be replaced by formal verifications?
15. How can you make sure to perform only meaningful laboratory tests, which can
serve as a confirmation of product/service quality and safety level?
16. How do you make sure that at least all necessary meaningful tests are performed
in laboratory?
17. What are the tests, which cannot be shifted and will have to be performed on-site?
The next questions are about the experience and lessons learnt concerning shifting
tests to laboratories:
18. Please provide your experience about shifting tests to laboratory environment.
18.1. What have been the pains or the challenges you had to deal with before
shifting tests to the laboratory environment?
18.2. What system boundaries have to be taken into account?
18.3. What has helped you solve these challenges? What is the effort?
19. How does shifting tests from on-site to laboratory influence company-specific
processes and the organisation of your company (e.g. in the fields of product
management, development process, regulation)?
20. Would you like to participate in the implementation of an independent laboratory
responsible for CCS testing during Shift2Rail (collaboration between different
suppliers in a single laboratory, in remote laboratories, remote testing in distributed
laboratories)?
20.1. If yes: What competences and capacities would you like to contribute to such
a laboratory?
At the end, a few questions concerning harmonisation and approval activities have been
asked. This was not the main focus of our status quo analysis, but should give an idea
about the challenges of European approval processes.
21. Did you participate in any working groups regarding testing, in the past or today?
Which ones?
22. Is there a further need to harmonise test strategies?
23. Please explain and specify the approval process in your home country.
24. Is there a need for a standardised European approval process in the future? Why?
25. What will be the main advantages of a European approval process?
All the questions are intended to get a deeper understanding of the way testing is
addressed in the different companies, focusing mainly on their evolution from field
testing to laboratory testing.
communication. Some of the operational tests are needed for on-site acceptance by the
customer, e.g. EMC tests.
The availability and the costs of appropriate laboratory test equipment are crucial as well.
Taking these factors into consideration, it is important to review the test strategy at the
start of any new project, in order to select a very small set of tests that shall be tested
on-site and plan to shift the remaining ones into the laboratory.
[Figure: number of replies per tested product/service — STM, LEU, DMI, IXL, balises, field devices, GSM-R, RBC, HHT, train interface, TMS/CTC, HW redundancy, positioning system]
Some completed and harmonised test specifications are already available. For example,
in ETCS, the so-called Subset-076 [3] deals with the test specification for compliance
with the system requirements specification (Subset-026 [1]), and thus with the
question of technical interoperability.
The requirements specifications form a good basis for creating test specifications, but
there is a need to specify further tests. Thus, a need for testing also results from the
• Stability tests
• Product tests
• Integration tests
• Additional testing of the above-mentioned in the field of confidence testing
[Figure: number of replies per performed test type — Product, System, Acceptance, Integration, IOT, Interoperability, Safety, Software, Compatibility, Data, EMC, Functional, Net Access, Component, Environmental, Validation, Hardware, IOP (Subset-110), Principle, System Interface, Factory Acceptance, Site Acceptance, ETCS Subset-076]
For a detailed description of the terms and definitions, see the glossary in chapter 10.
Figure 5.4 – Whether laboratory tests are executed in-house or externally [chart: number of replies per test type]
[Chart: number of replies per test type that is contractually defined]
[Chart: number of replies per test type for on-site tests on real track, test track and temporary track]
As shown in Figure 5.7, while some test types offer the possibility of automated test
case creation and analysis, the majority have to be handled manually.
[Figure 5.7 – Automated test procedure steps (selection, execution, analysis, evaluation) per test type, in number of replies]
Figure 5.8 – Effort of types of test (1: low effort, 5: huge effort) [chart: average ratings per test type for “Resources for Performing Tests” and “Test Relevance for Service Operation”]
appear repeatedly. The main sources of test creation are requirements specifications of
all types, especially ETCS Subset-026 [1], the 3GPP and EIRENE (European Integrated
Railway Radio Enhanced Network) standards, the EMC standard and other national
standards. If already available, a test specification is used, e.g. ETCS Subset-076 [3].
Further significant items are operational scenarios, test specifications of the customer
and project specifications. It is also worth mentioning that hands-on experience can be
an important source, too: tester experience, information from training modules or even
knowledge about hazardous situations that have already occurred. Functional, product
and system tests are also created using a model-based approach, which can
significantly speed up the whole testing process. Table 5.1 lists the sources for the
creation of specific tests. For a detailed description of the terms and definitions, see the
glossary in chapter 10.
Although there are many sources from which tests can be derived, as shown in Table
5.1, it seems challenging to clearly define exit criteria for the whole testing process.
Today they are mainly based on personal judgement or on contractual agreements
which make use of lessons learnt from previous tests and projects.
Suppliers mentioned laboratory tests more often than customers, see Figure 5.9. At the
same time, the suppliers mainly enjoy the advantages of high availability, safety and
flexibility in the laboratory environment. The customers, on the other hand, use the
laboratory tests to ensure integration of subsystems and interoperability.
[Figure 5.9 – Replies of customers and suppliers: yes / no / n/a]
The on-site fallback is only required as a last resort, e.g. if further laboratory tests would
be too expensive, and only if these tests do not interfere with already planned on-site
tests. The customers tend towards on-site fallbacks more than the suppliers do, see
Figure 5.10.
[Figure 5.10 – Replies of customers and suppliers: yes / no / n/a]
If a product is updated and a full test has already been made on the previous version,
most of the partners would not repeat the full tests; the detailed answers (in
percentages) are shown in Figure 5.11. Instead, they rely on an impact analysis to
select the tests to be repeated. According to five replies, this is done for changes with
minor impact, by choosing a limited subset of the tests to execute, while for a major
update complete regression tests are performed.
[Figure 5.11 – Replies of customers and suppliers: yes / no / depends]
A high level of automation of the test environment can impact the decision, by lowering
testing costs and increasing the number of selected tests. On the other hand, increasing
experience and confidence in the testing process, may reduce the amount of selected
tests.
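The impact-analysis-based selection described above can be sketched as follows. This is an illustrative sketch only, not taken from the report: the component names, the test-to-component coverage map and the dependency relation are all invented for the example.

```python
# Which (invented) components each test exercises.
TEST_COVERAGE = {
    "T_brake_curve": {"EVC", "Odometry"},
    "T_ma_handling": {"RBC", "EVC"},
    "T_balise_read": {"BTM", "Odometry"},
    "T_level_transition": {"RBC", "EVC", "BTM"},
}

# Components that directly depend on each component (one hop of impact propagation).
DEPENDENTS = {
    "Odometry": {"EVC"},
    "BTM": {"EVC"},
}

def impacted_components(changed):
    """Changed components plus their direct dependents."""
    impacted = set(changed)
    for c in changed:
        impacted |= DEPENDENTS.get(c, set())
    return impacted

def select_regression_tests(changed):
    """Return the tests touching any impacted component (minor-update case).
    For a major update one would instead rerun all tests (full regression)."""
    impacted = impacted_components(changed)
    return sorted(t for t, covered in TEST_COVERAGE.items() if covered & impacted)
```

For example, a change confined to the hypothetical "RBC" component selects only the two tests that exercise it, while a full regression would rerun all four.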
Automatic and configurable simulators are another important aspect, because they can
integrate the highest possible number of real or virtualised elements of the test
environment (RBC, EVC, etc.). Configurable simulators should be equipped with
additional tools that allow tests to be performed according to a configurable sequence
of automated commands, resulting in different test scenarios. In this way many benefits
can be obtained, such as test reproducibility and multi-session testing, so that some
kinds of faults are found earlier. Simulators also increase the ability to compare the
results obtained with the requirements and, moreover, make it possible to perform tests
in degraded conditions.
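The idea of a configurable sequence of automated commands can be sketched as data-driven scenarios replayed against a simulated element. This is a minimal illustration, not the project's actual tooling: the `SimulatedRBC` stand-in, its commands and the scenario are invented.

```python
class SimulatedRBC:
    """Minimal stand-in for a simulated trackside element."""
    def __init__(self):
        self.log = []
        self.ma_end = 0  # movement authority end, in metres

    def extend_ma(self, metres):
        self.ma_end += metres
        self.log.append(f"MA extended to {self.ma_end} m")

    def drop_connection(self):
        self.log.append("connection lost")  # injected degraded condition

# Command table: scenario steps are data, so sequences stay configurable.
COMMANDS = {
    "extend_ma": lambda sim, arg: sim.extend_ma(arg),
    "drop_connection": lambda sim, arg: sim.drop_connection(),
}

def run_scenario(sim, sequence):
    """Replay a configurable command sequence; returns the simulator log."""
    for name, arg in sequence:
        COMMANDS[name](sim, arg)
    return sim.log

# A degraded-mode scenario, expressed as data rather than code:
scenario = [("extend_ma", 500), ("extend_ma", 300), ("drop_connection", None)]
```

Because the scenario is plain data, the same runner can replay it identically across test sessions, which is what makes the reproducibility benefit possible.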
The use of external laboratories offers an independent view on the system and the
laboratories can provide a standardisation in test procedures and databases.
Laboratories can also make use of and share their knowledge of operational rules used
in different countries, different engineering rules, as well as cross-industry sharing of test
results.
Finally, really gaining trust in the test environments, the test equipment and their
capabilities is a key goal. The verification of methods, procedures and equipment by
audits is very helpful.
In order to perform only meaningful laboratory tests, good-quality requirements and
specifications are needed, as well as requirements tracing. Furthermore, the adequacy
and the completeness of the tests have to be verified as well.
The implementation of a well-defined test strategy and test plan, including the methods
used, is very important. The quality of the deliverables has to be continuously
evaluated, followed by improvement actions, and the experiences with previous tests
have to be considered.
The evaluation of previous projects improves the quality of following projects; the
experience with (and the relation to) previous tests, for example issues identified in
on-site tests, should also be considered here. FAT results or agreements with the
stakeholders can be used as well. This gives rise to continuous improvement of the
tools and processes.
To make sure that all necessary meaningful tests are performed in the laboratory, the
tests are checked against the requirements in order to cover the specification
completely. This is defined and planned in the test strategy and test plan as well, which
gives the required ‘holistic approach’.
The process, the methods used and the documentation of the different configurations,
including a definition of specific test scenarios for each (i.e. application-specific)
configuration, are very important.
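Checking tests against the requirements, as described above, amounts to a coverage query over a traceability matrix. The following sketch illustrates the idea; the requirement and test-case identifiers are invented, not taken from any real specification.

```python
# Invented requirement IDs for illustration.
REQUIREMENTS = {"REQ-001", "REQ-002", "REQ-003"}

# Trace matrix: test case -> requirements it verifies.
TRACE = {
    "TC-10": {"REQ-001"},
    "TC-11": {"REQ-001", "REQ-003"},
}

def uncovered_requirements(requirements, trace):
    """Requirements not yet covered by any test case in the trace matrix."""
    covered = set().union(*trace.values()) if trace else set()
    return sorted(requirements - covered)

missing = uncovered_requirements(REQUIREMENTS, TRACE)  # ["REQ-002"]
```

A non-empty result flags gaps in the planned laboratory campaign before testing starts, which is one concrete way to support the ‘holistic approach’ mentioned in the text.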
[Chart: replies of customers and suppliers on replacing tests by formal verification]
Indeed, even though a very few companies (both customers and suppliers) claim that it
is possible to replace tests with formal verification, what they actually aim at is reducing
the total test effort or the number of tests in the laboratory, which is not a real
replacement.
Testing and formal verification are complementary activities, and formal verification will
not be able to completely replace testing: only some portions of complex signalling
systems can be covered by formal verification, which is fit for safety validation rather
than for performance. There is no unified opinion on whether formal verification is able
to cover degraded modes, or whether these can only be tested by operational tests
on-site.
Some replies show that formal verification can be useful for the first steps of software
testing, where generic low-level problems must be identified and corrected, such as
memory usage, overflow, division by zero, out-of-bounds array accesses, state machine
verification and functional requirements verification. However, from the experiences of
recent projects, formal methods can also be applied at system level.
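One simple flavour of the state machine verification mentioned above is exhaustive exploration of a small model against a safety property. The sketch below is purely illustrative: the level-crossing model, its states and its transitions are invented, and real tools would of course handle far larger state spaces symbolically.

```python
from collections import deque

# States: (barrier, train), barrier in {"up", "down"},
# train in {"away", "approaching", "at_crossing"}.
TRANSITIONS = {
    ("up", "away"): [("up", "approaching")],
    ("up", "approaching"): [("down", "approaching")],  # barrier closes first
    ("down", "approaching"): [("down", "at_crossing")],
    ("down", "at_crossing"): [("up", "away")],         # train passed, reopen
}

def violates_safety(state):
    """Safety property: a train may only be at the crossing if the barrier is down."""
    barrier, train = state
    return train == "at_crossing" and barrier != "down"

def check(initial):
    """Breadth-first exploration of all reachable states; returns the first
    unsafe state found, or None if the property holds everywhere."""
    seen, queue = {initial}, deque([initial])
    while queue:
        state = queue.popleft()
        if violates_safety(state):
            return state
        for nxt in TRANSITIONS.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return None
```

Here `check(("up", "away"))` explores every reachable state and confirms the property, illustrating why such verification fits safety validation rather than performance assessment.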
[Chart: replies of customers and suppliers — yes / no / n/a]
Some companies reported additional problems related to the lack of confidence in
ETCS rules and data in different countries (i.e. balise and RBC data and behaviour).
From their point of view, it is therefore much better to perform the tests on-site instead
of trying to reproduce the infrastructure and installation in the laboratory.
Further investigations on the limits and the benefits for shifting to laboratory will be
performed in the test process definition.
automated testing, IOP testing, GSM/GSM-R and GPRS testing, railway operation and
operational planning, staff as laboratory and field tester, track designer, wired and
wireless saboteurs for positioning and communication systems.
As stated, the focus has been set on safety-critical industries which are comparable to
the railway sector, and especially to the command and control systems within it.
Knowing that information about these internal processes is difficult to collect, the broad
variety of X2Rail-1 partners has been used: every partner used its own network to get
in contact with the relevant persons in other industries. The major advantage of this way
of distributing the questionnaire, an increased return rate, came along with the
disadvantage that, due to confidentiality, there is no complete list of the companies that
were addressed and that replied. Therefore, getting in contact with the people who
answered the questionnaire is not easily possible.
All the questions are intended to achieve a deeper understanding of the way the
companies addressed perform testing, focusing mainly on their evolution from field
testing to laboratory testing.
[Chart: roles of the participants — Supplier 27 %, Research institute 27 %, Manufacturer 18 %, Regulator 5 %]
[Chart: sectors of the participants — Aviation 18 %, Multimodal traffic management 9 %, System engineering & validation 9 %, Nuclear 9 %]
Note: The answers coming from the communication industry are analysed in part A, as
they are part of the railway related industry.
Due to the distribution procedure and the fact that nearly all members of X2Rail-1 WP6
are from the transportation sector, the number of replies from that sector is very high,
as can be seen in Figure 6.3. Other sectors, such as nuclear, medical devices, defence
or system engineering consultancies, also participated, but only to a very limited extent.
[Figure 6.3 – Sectors of the replies: Transportation 73 %, System Engineering 9 %, Nuclear 9 %, Defence 5 %, Medical Devices 4 %]
These types of tests are interdependent and follow a specific sequence. First, the
subsystem tests are performed. Once the subsystems have been validated
satisfactorily, the integration tests can be performed. Finally, system validation tests are
carried out when the integration tests have succeeded. Laboratory tests are therefore
not limited to subsystem tests; integration and even system tests may also be
performed in the laboratory.
Besides functional validation tests, laboratory tests are also used to perform
non-functional tests such as safety, security and qualification tests (e.g.
electromagnetic compatibility or temperature tests).
Figure 6.4 shows that most tests performed in the laboratory are subsystem tests,
which are needed before the integration tests. Accordingly, the number of integration
tests is greater than the number of system tests, which require the completion of the
other types of tests.
[Figure 6.4 – Tests performed in the laboratory: Subsystem 50 %, Integration 30 %, System 20 %]
[Chart: tests performed on-site — System 76 %, Non-functional 10 %, Integration 9 %, Acceptance 5 %, Subsystem 0 %]
[Chart: reasons for laboratory testing — Cost savings 19 %, Interoperability 13 %, Contractually defined 11 %, Product-related tests 10 %, Reproducibility 9 %, Safety and security 5 %]
Finally, in order to gain trust in laboratory tests, a “change in mindset” is required to rely
on laboratory tests and their outcomes. Moreover, laboratory test results need to be
accepted as evidence for safety and security.
develop the laboratory testing site. Finally, human behaviour affecting the system needs
to be simulated as well; therefore, this effect should also be included in the laboratory
test site.
• Installation tests: Installation test can only be performed once the installation is on-
site.
• Field acceptance tests: Some tests, which have already been performed in the
laboratory, have to be repeated on-site.
• Area-specific tests: These kinds of tests apply to the space and nuclear sectors and
are related, for example, to the interaction between rocket and launch base, or
between cold and warm reactor, etc.
• Specific on-site tests: Tests required by authorities, standards and/or customers
(e.g. in the avionics and medical areas: “no one will accept an aircraft which is not
flight-tested”).
• Human behaviour tests: Tests including human behaviours are difficult to perform
in the laboratory properly. This is usually the case in the automotive industry.
Second, these seven types of tests are arranged into two categories of reasons for
laboratory testing: functional and requested. These reasons are linked to the main
boundaries for laboratory testing, namely creating a simulation environment and gaining
trust in tests, see Table 6.1.
The processes for each area studied are described in Table 6.2. The main conclusions
are that aviation is a safety-critical area with a large number of approved standards and
regulations; the principles of the aviation sector can therefore be a useful input for the
railway sector. Moreover, part of the communication technology (GSM-R) is already
included in the railway sector.
Area — Explanation of the approval process
Wireless Communication: Already standardised worldwide by 3GPP, IEEE and ETSI.
The Third Generation Partnership Project (3GPP™) was established in 1998 to develop
specifications for advanced mobile communications. It comprises:
o seven regional Standards Development Organizations (SDOs), including ETSI
o market associations
o several hundred companies
The original scope of 3GPP was to produce globally applicable reports and
specifications for a third-generation mobile system based on evolved Global System for
Mobile communication (GSM) core networks and the radio access technologies that
they support.
Defence: Each country and each customer has different requirements, so it is not easy
to harmonise and standardise the approval process, although systems engineering
management is quite similar.
Nuclear: At European or international level, there are quite a lot of bodies dealing with
regulation and control, e.g.:
o IAEA: International Atomic Energy Agency
o WENRA: Western European Nuclear Regulators Association
o EURATOM: European Atomic Energy Community
Standards and guidelines are provided by IEEE and IEC. There are significant
differences among countries, with country-specific regulations, e.g. on safety aspects
and environmental conditions.
Aviation: Aircraft need worldwide flight permission. To reduce the certification effort for
safety-critical systems, equal standards are necessary and do exist:
o The certification process is specified by EASA (European Aviation Safety Agency,
EU) and the FAA (Federal Aviation Administration, USA).
Table 6.2 – Statement of the standardisation level of the approval process per
sector studied
These industries and companies have to deal with the same issues as the railway
sector does.
Replies from many companies and departments have been received.
Although attempt, very limited answers from additional sectors like chemical,
pharma and medical were given. But due to the reasons already mentioned in
section 6.1.4, no further answers have been gathered.
Almost all companies of all areas perform both test: laboratory and on-site.
Taking functional tests into account (subsystem, integration and system tests), all
subsystem tests are performed in labs and a vast majority of system tests are
performed on-site.
For obvious reasons there are tests, which cannot be shifted into laboratory (e.g.
customer acceptance, installation, simulation of human behaviour).
No industry has shown the capacity and capability to perform all relevant tests in
laboratory. On-site tests are still required.
Automotive is showing that some tests currently performed in laboratory will be
shifted back to on-site in the future (Diesel emission affair).
Performance tests take place in the laboratory as well as on-site, but only up to a
certain degree in labs, as they have to be completed on-site for certain boundary
conditions.
The more the companies know about the on-site tests, the more data they can
provide as input for the laboratory environment, in order to gain trust in the
laboratory tests and shift tests from on-site to the laboratory as much as possible.
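The last observation, feeding recorded on-site data into the laboratory environment, can be illustrated with a minimal sketch: recorded on-site measurements are replayed against the system under test (SUT) in the laboratory and deviations beyond a tolerance are flagged. The record format, the SUT stub and the tolerance value are illustrative assumptions only, not part of any railway standard.

```python
# Minimal sketch: replaying data recorded on-site inside a laboratory test
# harness. Record layout, SUT stub and tolerance are assumptions.
from dataclasses import dataclass

@dataclass
class Record:
    timestamp_s: float   # time of the on-site measurement
    speed_kmh: float     # measured train speed

def sut_speed_estimate(record: Record) -> float:
    """Stand-in for the SUT's speed output; a real harness would drive
    the actual subsystem with the recorded stimulus."""
    return record.speed_kmh * 1.01  # simulated 1 % estimation bias

def replay(records, tolerance_kmh=2.0):
    """Feed recorded on-site data to the SUT and flag deviations."""
    failures = []
    for r in records:
        estimate = sut_speed_estimate(r)
        if abs(estimate - r.speed_kmh) > tolerance_kmh:
            failures.append((r.timestamp_s, estimate))
    return failures

recorded = [Record(0.0, 80.0), Record(1.0, 120.0), Record(2.0, 250.0)]
print(replay(recorded))  # only the 250 km/h record exceeds the 2 km/h tolerance
```

The more realistic the recorded stimulus set, the closer the laboratory run approximates on-site behaviour.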
6.4 Recommendations
This section includes a list of recommendations to be applied in the test process and the
test architecture for railway systems coming from the experiences provided by other
industries, see Table 6.3.
Recommendation no. 1
Tests shall be carried out in a safe manner with regard to human, material and
environment.
Recommendation no. 2
Different levels of testing should be defined, such as subsystem tests, integration
tests and system validation tests. Moreover, non-functional tests should also be included.
Laboratory tests should start in the early stages of the product lifecycle, so that
failures and missing features can be identified very early and, therefore, costs and
time can be saved. Thus, these tests should allow testing a number of scenarios
within a short time, leading to a shorter overall testing time. Laboratory tests
should be planned in such a way that logistics, management and communication
processes are reduced compared to on-site testing.
When required, customers and regulators should be included in the assessment of the
whole development process (planning, requirements, basic design, detailed design,
manufacturing, and all testing levels) including certification and laboratory testing
activities.
Recommendation no. 3
Acceptance tests can be considered as additional tests; they are requested by the
customer to be done on-site and not to be moved to the laboratory. Evidence gathered
through laboratory testing may, however, contribute to acceptance tests.
Recommendation no. 4
Laboratory tests should include worst-case scenarios and stress-test scenarios.
Repeatability of laboratory tests should be guaranteed. For that, the right tools should
be selected among those available; for example, test tools which are qualified
against applicable codes and standards should be used. One of the key aspects for
this selection is flexibility and the ease of adapting the tool to many needs.
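The repeatability requirement can be sketched in a few lines: if stress scenarios are generated from a fixed random seed, every laboratory run exercises exactly the same inputs. The scenario shape (a burst of message sizes) is an assumption for illustration only.

```python
# Minimal sketch of repeatable stress-test generation via a fixed seed.
# The scenario content is an illustrative assumption.
import random

def generate_stress_scenario(seed: int, n_events: int = 5):
    """Deterministically generate a worst-case message burst."""
    rng = random.Random(seed)  # local RNG: no hidden global state
    return [rng.randint(0, 1000) for _ in range(n_events)]

run_1 = generate_stress_scenario(seed=42)
run_2 = generate_stress_scenario(seed=42)
assert run_1 == run_2  # identical seed -> identical scenario -> repeatable test
```

Logging the seed alongside the test report makes any failed stress run reproducible on demand.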
Recommendation no. 5
Laboratory tests should be easy to correct and/or modify, allowing e.g.
regression tests. Furthermore, they should be flexible, with the aim of allowing an
easy addition of specific operational modes required by the customer.
Recommendation no. 6
Standardised testing interfaces should be created with the aim of allowing the use of
the same tests and test setup independent of the supplier and end-user. Moreover,
this allows external impartial laboratories to take part in the process.
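One way to realise such a supplier-independent interface is an abstract adapter that every supplier implements, so that a single test setup can drive any implementation. The adapter API below is a hypothetical sketch, not a defined railway interface.

```python
# Minimal sketch of a standardised testing interface: every supplier's
# adapter implements the same abstract interface. Names are assumptions.
from abc import ABC, abstractmethod

class SutAdapter(ABC):
    """Supplier-independent interface between the lab harness and the SUT."""

    @abstractmethod
    def send_command(self, command: str) -> None: ...

    @abstractmethod
    def read_status(self) -> str: ...

class SupplierA(SutAdapter):
    """One supplier's (simulated) implementation of the interface."""
    def __init__(self):
        self._last = "IDLE"
    def send_command(self, command: str) -> None:
        self._last = command.upper()
    def read_status(self) -> str:
        return self._last

def standard_test(adapter: SutAdapter) -> bool:
    """One test, reusable for every adapter that honours the interface."""
    adapter.send_command("start")
    return adapter.read_status() == "START"

print(standard_test(SupplierA()))  # True
```

Because the test only depends on the abstract interface, an impartial laboratory can run it against any supplier's adapter without modification.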
Recommendation no. 7
Laboratory tests should enable a “change in mindset”, with the aim of gaining trust in
laboratory tests and their outcomes. Experience of the real environmental
conditions should be gained first, before transferring the environment to the
laboratory.
At the end, the validation of the test site should be done by comparing
laboratory data with on-site data, to ensure that the environment reproduced in the
laboratory matches the on-site environment and that the behaviour of the system
under test (SUT) in the laboratory matches its behaviour on-site. Realistic data,
recorded under real-life conditions, should be added to the laboratory tests to
increase their quality. External audits of the laboratory and the laboratory tests,
resulting in certifications, should be performed.
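The laboratory-versus-on-site comparison can be sketched as a simple tolerance check over paired measurements; the sample values and the 5 % relative tolerance are illustrative assumptions.

```python
# Minimal sketch of validating a laboratory setup against on-site recordings.
# Sample values and the tolerance are illustrative assumptions.

def lab_matches_site(lab, site, tolerance=0.05):
    """Return True if every lab sample is within `tolerance` (relative)
    of the corresponding on-site sample."""
    if len(lab) != len(site):
        return False  # incomparable data sets
    return all(abs(l - s) <= tolerance * abs(s) for l, s in zip(lab, site))

on_site = [100.0, 200.0, 150.0]   # recorded under real-life conditions
lab     = [ 98.0, 204.0, 151.5]   # reproduced in the laboratory
print(lab_matches_site(lab, on_site))  # True: all within 5 %
```

In practice the tolerance per quantity would itself be agreed with the assessor, since it defines when laboratory evidence may substitute for on-site evidence.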
Recommendation no. 8
Laboratory tests should also demonstrate that their results can contribute to the
acceptance of safety (and security). For that, the test plans/procedures should include
the demonstration of the coverage of as many hazards as possible, showing, if
needed, a complete safety and security test. The definition of the tests and the test
coverage, together with the essential preparation work, should be considered for the
laboratory testing. In order to find the right balance between requirements and over-
specification, clean traceability should be applied. When testing a product in the
laboratory, a Product Risk Analysis should be included. Laboratory tests should
include systematic and transparent execution and documentation, including
traceability.
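Hazard coverage traceability can be sketched as a mapping from hazards to the test cases that cover them, from which a coverage ratio and the list of uncovered hazards are reported. The hazard and test identifiers are hypothetical.

```python
# Minimal sketch of a hazard-to-test traceability matrix with a coverage
# report. Hazard and test identifiers are illustrative assumptions.

traceability = {
    "HAZ-01": ["TC-101", "TC-102"],  # hazard covered by two test cases
    "HAZ-02": ["TC-205"],
    "HAZ-03": [],                    # uncovered hazard -> must be flagged
}

def hazard_coverage(matrix):
    """Return (coverage ratio, list of hazards without any covering test)."""
    covered = [h for h, tests in matrix.items() if tests]
    uncovered = [h for h, tests in matrix.items() if not tests]
    return len(covered) / len(matrix), uncovered

ratio, missing = hazard_coverage(traceability)
print(f"coverage: {ratio:.0%}, uncovered: {missing}")
```

Keeping this matrix under configuration control gives the systematic, transparent documentation the recommendation calls for.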
Recommendation no. 9
When possible, standard tests which have already been accepted by governmental
bodies should be employed. One of the easiest ways to ensure this is to perform the
tests in an impartial and qualified laboratory.
Recommendation no. 10
Quantitative KPIs for each test, when possible agreed upon with the customer,
should be used.
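Such quantitative KPIs can be sketched as measurable figures checked against agreed targets; the KPI names and target values below are illustrative assumptions.

```python
# Minimal sketch of per-test quantitative KPIs checked against agreed
# targets. KPI names and targets are illustrative assumptions.

kpi_targets = {"pass_rate_pct": 95.0, "mean_exec_time_s": 60.0}

def evaluate_kpis(results):
    """results: list of (passed: bool, exec_time_s: float) per test run.
    Returns each KPI value together with whether its target is met."""
    pass_rate = 100.0 * sum(p for p, _ in results) / len(results)
    mean_time = sum(t for _, t in results) / len(results)
    return {
        "pass_rate_pct": (pass_rate, pass_rate >= kpi_targets["pass_rate_pct"]),
        "mean_exec_time_s": (mean_time, mean_time <= kpi_targets["mean_exec_time_s"]),
    }

runs = [(True, 40.0), (True, 55.0), (True, 50.0), (False, 80.0)]
print(evaluate_kpis(runs))
```

Agreeing the target values with the customer up front turns the KPI report into an objective exit criterion for the test campaign.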
Recommendation no. 11
The importance of testing and testing processes should be reflected in the internal
processes. This includes laboratories, people, etc.
Recommendation no. 12
Investments in upgraded test equipment should be made for laboratory testing.
Recommendation no. 13
Laboratory tests should be part of the process for certification or approval.
Table 6.3 – Recommendations for the laboratory testing from the answers
received from outside the railway sector.
[Figure: participation in testing working groups – reported categories: EUG, UNISIG IOP, others, no participation, no answer]
All groups with fewer than two specified answers are displayed as “others” and can
be seen below.
Others:
EMC working groups for ERTMS subsystems
ESA for positioning applied to railway applications
NVIOT / Network Vendors Interoperability Testing
RAN working group
TEN 3rd Call & "ETCS2 on GPRS experimentation" projects
UIC, participating in GPRS testing, Austria
CEF, participating in network test case definition
Infrabel CFL ADIF Holland
Preparation of Project "ETIP"
Subset 036 [2]
ERA working groups under the 4th RP for vehicle authorisation and TS approval
It appears that a large number of different working groups regarding testing exist at
the moment or have existed in the past. There are at least five main groups: EUG,
UNISIG IOP, groups related to TEN-T, the validation & test sub-group of the ERA
“ERTMS stakeholders” platform, and the group related to Subset 076 [3]. The EUG
and UNISIG IOP working groups appear to be the most important ones. Eight
stakeholders have no participation at all. According to the questionnaire results,
there is real interest in working groups, but testing working group activities seem to
be fragmented. Some companies, in particular some suppliers, participate in
different working groups while, at the same time, others indicated having no
participation.
Concerning the question on the further need to harmonise test strategies, most of the
replies confirm that there is a further need for harmonisation of test strategies in the
future, see Figure 7.2. A few also provided a reason, or an area where they see a
specific need, see Figure 7.3.
Figure 7.2 – Need to further harmonise test strategies: yes 75%, no 12%, no answer 13%.
Figure 7.3 – Reasons given for harmonisation: more effective, cost and time saving 28%; interoperability 27%; common authorisation 18%; reduced level of equipment 18%; common process 9%.
27% of the respondents who provided a reason (suppliers and system integrators)
think that interoperability is in need of harmonisation. The other answers are mostly
connected to effectiveness and a common management.
[Figure: need for a standardised European approval process – recoverable slices: no answer 29%, OBU integration 12%, OBU integration without trackside equipment 17%]
A few respondents answered that there is a need for standardisation of the OBU
approval process, but they also thought that this is not needed, or not possible, for
the trackside equipment, since there are big differences between the system
configurations. A standardisation of CCS systems and products will influence the
related tests which are needed for approval. Other possible advantages concern
project aspects such as time and cost reduction or an increase in effectiveness.
The different advantages are listed and evaluated in Figure 7.5.
Figure 7.5 – Main expected advantages of a European approval process (number of replies per category: safety, money, time, OBU, ease, reduced training, competition, uniformity, acceptance).
The main expected advantages of a European approval process are time and competition.
Figure 7.6 shows what the infrastructure managers see as the main advantages of a
European approval process, whereas Figure 7.7 shows the supplier perspective.
Figure 7.6 – Advantages of a European approval process from the infrastructure manager perspective (categories: OBU, uniformity, no answer).
From the IM perspective, the expected advantages are limited to the OBU approval
process and uniformity only. If systems or products are standardised, and the
approval processes as well, the test effort and the number of tests related to
project-specific concerns (e.g. OBU in different countries and from different
suppliers) will be reduced as well.
Figure 7.7 – Advantages of a European approval process from the supplier perspective (categories: time, ease, money, safety, acceptance, uniformity, competition, no answer).
From the supplier perspective, the main advantage of a European approval process is
time saving. The second most shared advantage for suppliers is competition. These
reasons differ from the infrastructure managers’ views.
It is recognised that this area is not in the scope of the X2Rail-1 project, and the
information gathered will be shared with those dealing with it, e.g. ERA.
8 Conclusion
The key objective of zero on-site testing for CCS systems is to perform functional and
non-functional tests (component tests, integration tests and system tests) in the
laboratory instead of testing on-site, in order to save time and costs without
compromising safety. As the focus is on CCS systems, RUs have not been taken into
account.
In the railway industry, most component tests are performed in the laboratory due to
the easier handling of these tests, saving time and costs compared to on-site testing.
Integration tests are done in the laboratory environment as well as on-site, depending
on the possibility to simulate the real environment and worst-case scenarios in
laboratories. Therefore, real environmental data have to be recorded first. The
complexity of the system is important when assessing whether to shift tests from
on-site to the laboratory environment. System tests, and especially performance tests,
are mainly done in the real environment in order to gain trust in the results achieved.
It is important to evaluate the interaction of the product or service with the real
environment, including human behaviour, which is generally quite difficult to simulate.
As system boundaries, especially the (external) interfaces, the hardware and the
GSM-R and network communication have to be considered.
There is a trend to shift as many tests as possible to laboratories, even if many IMs do
not have their own laboratories available. They mainly rent suppliers’ laboratories or
laboratories of impartial institutes. Sometimes, customers require tests to be done in
the real environment instead of laboratory tests, in order to gain trust in the results.
Reducing time and costs by performing laboratory testing is only possible if tooling is
established and the grade of automation is high regarding test case generation, test
execution and evaluation of the results. Moreover, it seems to be challenging to
clearly define exit criteria for tests. Mixed teams of different experts, together with a
harmonisation of tools, are beneficial when shifting tests to the laboratory. Overall, a
complete reduction of on-site tests, according to zero on-site testing, will not be
possible in the near future due to technical restrictions of the laboratory environment.
Moreover, a benchmarking with safety-critical industries outside the railway sector has
been performed, mainly with the avionics, space and automotive sectors. Most of them
state that building up the laboratory test environment has been shown to be expensive
and time-consuming at first, but very useful and helpful in reducing project time and
costs in the subsequent steps.
Almost all companies in all areas perform both laboratory tests and on-site tests. No
industry has shown the capacity and capability to perform all relevant tests in the
laboratory. On-site tests are still required. The more the companies know about the
on-site tests, the more data they can provide as input for the laboratory environment,
in order to gain trust in the laboratory tests and shift tests from on-site to the
laboratory as much as possible. The automotive sector shows that some tests
currently performed in the laboratory will be shifted back on-site in the future.
It is recommended to start laboratory tests in an early stage of the product lifecycle,
so that failures and missing features can be identified very early and, therefore, costs
and time can be saved. Thus, these tests should allow testing a number of scenarios
within a short time, leading to a shorter overall testing time. Laboratory tests should
be planned in such a way that logistics, management and communication processes
are reduced compared to on-site testing.
When required, customers and regulators should be included in the assessment of the
whole development process (planning, requirements, basic design, detailed design,
manufacturing, and all testing levels) including certification and laboratory testing
activities. This can be done by standardising tests and performing standardised tests
afterwards. Laboratory tests are part of the process for certification or approval, e.g. in
medical or aviation sector.
Validation of the test site should be done by comparing laboratory data with on-site
data, to ensure that the environment reproduced in the laboratory matches the on-site
environment and that the behaviour of the system under test (SUT) in the laboratory
matches its behaviour on-site. When testing a product in the laboratory, a Product
Risk Analysis is quite useful. Laboratory tests should include systematic and
transparent execution and documentation, including traceability.
Aviation is an area with a large number of approved standards and regulations, and it
is a safety-critical area. Therefore, the principles of the aviation sector can be useful
input for the railway sector and have to be analysed in more detail in the future.
The recommendations coming from the answers of the benchmarking industries will be
further investigated, in order to evaluate whether some of them will be applicable to
the railway sector as well.
For railway applications, a first attempt at unifying the testing of ETCS systems has
been made by the ETCS Interoperability (IOP) working group of UNISIG, which
developed standards for testing IOP in a laboratory environment.
A large majority of participants from the railway sector consider that there is a need to
harmonise test strategies, and they favour a standardised European approval process,
in particular for the OBU. There is less support for such activities on the wayside, due
to large differences between the systems in each country interfacing with the legacy
systems.
9 References
[1] Subset 026: System Requirements Specification
[2] Subset 036: FFFIS for Eurobalise
[3] Subset 076: Scope of the test specifications, test sequences and test cases
[4] Subset 085: Test Specification for Eurobalise FFFIS
[5] Subset 092: ERTMS EuroRadio Conformance Requirements and test cases safety
layer
[6] Subset 093: GSM-R interfaces
[7] Subset 110: UNISIG Interoperability Test – Guidelines
[8] Subset 111: Interoperability Test Environment Definition
[9] Subset 112: UNISIG Basics for Interoperability Test Scenario Specifications
10 Glossary
Term Definition
Acceptance testing 1) Testing conducted to determine whether a system satisfies
its acceptance criteria and to enable the customer to determine
whether to accept the system
2) Formal testing conducted to enable a user, customer, or
other authorized entity to determine whether to accept a
system or component.
[ISO/IEC/IEEE 24765, 2010]
Compatibility 1) The ability of two or more systems or components to
perform their required functions while sharing the same
hardware or software environment
2) The ability of two or more systems or components to
exchange information.
3) The capability of a functional unit to meet the requirements
of a specified interface without appreciable modification.
[ISO/IEC/IEEE 24765, 2010]
Compatibility tests Tests regarding compatibility.
Component A constituent part of software which has well-defined interfaces
and behaviour with respect to the software architecture and
design and fulfils the following criteria:
- It is designed according to “components”;
- It covers a specific subset of software requirements;
- It is clearly identified and has an independent version inside
the configuration management system or is a part of a
collection of components (e.g. subsystems) which have an
independent version.
Component testing Tests regarding components.
Confidence testing Confidence testing is a term used to define the repetition
(duplication) of any other test beyond the absolute minimum
needed to achieve its given success criteria. The number of
repeated or duplicate tests is an arbitrary number based on the
user’s experience and trust in the system.
[X2Rail-1 WP6 definition]
Data tests Data tests are performed to verify the correctness of the
individual subsystem configuration data (e.g. static speed
profile) and they are related to a single subsystem in order to
test its specific application.
[X2Rail-1 WP6 definition]
Environment Anything affecting a subject system or affected by a subject
system through interactions with it, or anything sharing an
interpretation of interactions with a subject system.
11 Appendices
The next two sections consist of the questionnaires provided by WP6 and distributed
among WP6 members (section 11.1) and among the benchmarking industries (section
11.2).