
BRAVE

BRidging gaps for the adoption of Automated VEhicles


No 723021

D3.4 Vehicle Interaction and Driver Monitoring: Functional Prototypes

Lead Author: Jan-Paul Leuteritz (FhG)
With contributions from: Leonie Terfurth (FhG), Sandra Carrasco Limeros (UAH), Javier Alonso Ruíz (UAH), Niklas Strand (VTI), Ignacio Solis (VTI)

Deliverable nature: Report (R)
Dissemination level (Confidentiality): Public (PU)
Contractual delivery date: 31-August-2020 (updated)
Actual delivery date: 31-August-2020
Version: 1.0
Total number of pages: 26
Keywords: Automated vehicles, HMI, Driver state, interaction, design

Abstract
This document presents the Functional Prototypes developed within WP3, with the cooperation of WP4, in the
BRAVE project. It documents the different virtual and real-world implementations of HMI solutions done in
the context of WP3. This includes the following implementations: (1) the driving simulator implementation
for testing an HMI linked to a VRU monitoring system at VTI’s moving base simulator in Gothenburg,
Sweden, (2) the VR walking simulator implementation to test different external HMI solutions for Vehicle-to-
VRU communication by VTI at Linköping, Sweden, (3) the implementation of one possible realization of the
general BRAVE internal HMI concept described in D3.2 in the driving simulator at Fraunhofer IAO in
Stuttgart, Germany, and (4) the implementation of the GRAIL system in the road testing vehicle by
Universidad de Alcalá de Henares, Spain, which serves to indicate to crossing VRUs whether the Automated
Vehicle is yielding. The document focuses on the HMI details that were implemented and how this was
achieved at the technical level.
[End of abstract]


Executive summary
The BRAVE project has implemented the Vehicle-Driver Interaction concept described in D3.2
(Leuteritz, Fritz, Bischoff, & Kamps, 2020) and the Driver Monitoring Concept depicted in D3.3
(Carrasco et al., 2020) in the following simulators and vehicles:
• The driving simulator and VR simulator at VTI (Gothenburg and Linköping, Sweden)
• The driving simulator at Fraunhofer IAO (Stuttgart, Germany)
• The BRAVE road testing vehicle (Universidad de Alcalá de Henares, Spain)
At VTI, two main simulation implementations have been done: one in a driving simulator and another
in a walking simulator. Both implementations served the purpose of answering a specific research
question to produce insights for the design of internal and external HMI. The driving simulator
implementation was done in a moving-base simulator in Gothenburg. The aim of the test was to find
out whether a specific implementation of a VRU monitoring system would increase drivers' trust in and
reliance on the system and thus enhance their benefit from its safety functionalities, granting more
time to engage in non-driving related tasks. The walking simulator study tested different external
HMIs to see which of the alternatives would best communicate the vehicle’s intentions and thus
facilitate safe interaction between the automated vehicle and VRUs.

At Fraunhofer IAO, the first implementations served to test whether adaptive in-car HMIs would
increase trust and acceptance. For this purpose, the transparency HMI mentioned in D3.2 (Leuteritz,
Fritz, Bischoff, & Kamps, 2020) was implemented in the Head-Up-Display (HUD), indicating the
positions of VRUs and (in some cases) the level of danger (high vs. not high). Once the final HMI
concept of D3.2 was defined, the entire concept was applied to the Fraunhofer driving simulator in
Stuttgart, Germany, thus enhancing these features from previous testing. This includes the installation
of the BRAVE Driver Monitoring System (DMS), which is not described here, since it has been
extensively covered in D3.3 (Carrasco et al., 2020). Additionally included are haptic feedback (seat
vibration), auditory signals and voice output, and visual cues (Display 1 behind the steering wheel,
Display 2 in the centre console, and an LCD array at the inside bottom of the windshield). This
implementation is meant to undergo a final user test as a proof-of-concept in November 2020.

The main implementation in the road-testing vehicle includes the DMS, as well as the GRAIL system,
which is described in the final chapter. The GRAIL serves to indicate to VRUs that are crossing
whether the Automated Vehicle is yielding or not.

In addition to this Deliverable, video material showing the implementations will be provided on the project
website.


Document Information
IST Project Number: 723021    Acronym: BRAVE
Full Title: BRidging gaps for the adoption of Automated VEhicles
Project URL: http://www.brave-project.eu/
EU Project Officer: Damian Bornas-Cayuela

Deliverable: Number D3.4    Title: Vehicle Interaction and Driver Monitoring: Functional Prototypes
Work Package: Number WP3    Title: Vehicle-driver interaction and driver monitoring concepts

Date of Delivery: Contractual M39    Actual M39

Status: version 1.0
Nature: Report
Dissemination level: Public

Authors (Partner): Jan-Paul Leuteritz (FhG), Leonie Terfurth (FhG), Sandra Carrasco Limeros (UAH), Javier Alonso Ruíz (UAH), Niklas Strand (VTI), Ignacio Solis Marcos (VTI)

Responsible Author: Jan-Paul Leuteritz    E-mail: jan-paul.leuteritz@iao.fraunhofer.de
Partner: FhG    Phone: +49 711 970 2122

Abstract (for dissemination): This document presents the Functional Prototypes developed within WP3, with the cooperation of WP4, in the BRAVE project. It documents the different virtual and real-world implementations of HMI solutions done in the context of WP3. This includes the following implementations: (1) the driving simulator implementation for testing an HMI linked to a VRU monitoring system at VTI’s moving base simulator in Gothenburg, Sweden, (2) the VR walking simulator implementation to test different external HMI solutions for Vehicle-to-VRU communication by VTI at Linköping, Sweden, (3) the implementation of one possible realization of the general BRAVE internal HMI concept described in D3.2 in the driving simulator at Fraunhofer IAO in Stuttgart, Germany, and (4) the implementation of the GRAIL system in the road testing vehicle by Universidad de Alcalá de Henares, Spain, which serves to indicate to crossing VRUs whether the Automated Vehicle is yielding. The document focuses on the HMI details that were implemented and how this was achieved at the technical level.
Keywords Automated vehicles, vulnerable road users, interaction, communication,
design, intent

Version Log
Issue Date          Rev. No.  Author                         Change
17th July 2020      0.1       Jan-Paul Leuteritz             First draft of FhG contribution
22nd July 2020      0.2       Sandra Carrasco Limeros        First draft of UAH contribution
25th August 2020    0.3       Jan-Paul Leuteritz             Merging content from VTI and revision of FhG content
28th August 2020    0.4       Jan-Paul Leuteritz             Improvements based on peer review by Ingrid Skogsmo (VTI)
31st August 2020    0.5       Ignacio Solis, Ingrid Skogsmo  Revision of all chapters
31st August 2020    1.0       Jan-Paul Leuteritz             Revision of all chapters


Table of Contents
Executive summary ........................................................................................................................................... 3
Document Information ...................................................................................................................................... 4
Table of Contents .............................................................................................................................................. 5
Abbreviations .................................................................................................................................................... 6
1 Introduction ................................................................................................................................................ 7
1.1 Interrelations to other tasks in BRAVE .............................................................................................. 7
2 Implementation at VTI ............................................................................................................................... 8
2.1 Driving Simulator IV .......................................................................................................................... 8
2.1.1 General description and purpose.................................................................................................. 8
2.1.2 Technical requirements and implementations ............................................................................. 9
2.2 VTI Walking Simulator .................................................................................................................... 11
2.2.1 General description and purpose of installation......................................................................... 11
2.2.2 Technical implementation or requirements ............................................................................... 11
3 Implementation at Fraunhofer IAO .......................................................................................................... 14
3.1 Implemented components and purpose ............................................................................................. 14
3.2 Use case realization........................................................................................................................... 15
3.3 Technical description ........................................................................................................................ 20
4 Road vehicle implementation ................................................................................................................... 21
4.1 Implemented system and purpose ..................................................................................................... 21
4.2 Technical description ........................................................................................................................ 22
5 Discussion ................................................................................................................................................ 26
References ....................................................................................................................................................... 26


Abbreviations

AEB: Automated Emergency Braking


ADAS&ME: EU-funded project, G.A. no. 688900
C: Programming language
C++: Programming language
DMS: Driver Monitoring System
eHMI: external Human-Machine-Interface (dedicated to persons outside the vehicle)
GRAIL: Green Assistant Interfacing Light
HMI: Human-Machine-Interface
HUD: Head-Up-Display
LED: Light Emitting Diode
OEM: Original Equipment Manufacturer
PWM: Pulse-Width-Modulation
SAE: Society of Automotive Engineers
UDP: User Datagram Protocol
VR: Virtual Reality
VRU: Vulnerable Road User


1 Introduction
The overall aim of the BRAVE project is to find out, through multidisciplinary research, what could
be done to achieve high acceptance of, and trust in (SAE) Level 3 Automated Vehicles, not only by
the vehicles’ occupants but also at a societal level. One part of these research activities (in WP3) was
to develop and test HMI (Human-Machine-Interface) concepts, for both internal HMIs (directed at
the driver or other occupants of the Automated Vehicle), and external HMIs (directed at Vulnerable
Road Users, short: VRU). WP3 has also produced a Driver Monitoring System (DMS), which is not
an HMI concept but a highly relevant HMI component.
This deliverable presents the outcomes of task 3.5 of the BRAVE project, which required the BRAVE
WP3 developments to be integrated into driving simulator mock-ups (by VTI and FhG), and also into
a road-testing vehicle (by UAH). This integration serves two main purposes: to allow for the testing
activities of WP 5 to be successfully completed, and to demonstrate the concepts and findings of the
project to stakeholders such as representatives of car OEMs, but also to political decision makers and
the general public.

1.1 Interrelations to other tasks in BRAVE


The implementations described here are related to other work in the BRAVE project:
• They are based on the HMI development activities of task 3.3 (D3.2).
• The Driver Monitoring System developed in T3.4 (D3.3) was also integrated (at Fraunhofer IAO) in the frame of the activities described here.
• They have served / will serve as a basis to realize the testing activities described in T5.3 (D5.2) and T5.4 (D5.3). The list of testing activities is:
  o Test #1: Slovenia, not related to the implementations mentioned here
  o Test #2: Germany, chapter 3 describes the final implementation that includes what was implemented for test #2
  o Test #3.1 and #3.2: Sweden, covered in chapter 2
  o Test #4: France, covered in chapter 4 (road tests)
  o Test #5: France, covered in chapter 4 (road tests)
  o Test #6: Spain, covered in chapter 4 (road tests)
• They serve to create input to dissemination activities (T7.1 and T7.4); particularly with the current Covid-19 situation, photos and video material of the mock-ups are expected to play a key role.

In order to avoid redundancies among the different deliverables that cover the HMI development
(D3.2), the Driver Monitoring System (D3.3), and the testing procedures and outcomes (D5.2), this
deliverable focuses on describing and depicting the actual HMI components that were implemented,
as well as on the technical solutions that were applied to achieve this.
In this sense, this deliverable complements the other deliverables. Specifically, the HMI concept
described in D3.2 is – on purpose – very generic. The idea was to provide a general interaction concept
that still left enough room for OEMs to apply their own technical solutions, to use their own branded
icons, earcons, and other HMI elements, while still producing a somewhat standardized solution that
takes the user needs gathered in BRAVE into account. In this deliverable, we provide an example of
one possible instance of how the interaction concept could be realized – through our implementation
in Fraunhofer IAO’s driving simulator.


2 Implementation at VTI
To realize test #3.1 in task 5.3 Virtual prototype evaluation, technical implementations had to be
performed in VTI’s simulator environment. Implementations were done to be able to test an internal
vehicle HMI for the driving simulator study, including a VRU monitoring concept, but also an
external HMI for the pedestrian studies. Moreover, the automated driving functionality had to be
programmed in the simulated environment in order to mimic the automated driving system in this
environment. The overall purpose of the VTI implementations was to prepare the simulated
environment (both pedestrian- and driving simulator) for the tests carried out in WP5.

2.1 Driving Simulator IV


2.1.1 General description and purpose

An experiment (test #3.1) was conducted in June 2019 in VTI's moving-base simulator located in
Gothenburg (Figure 1). This study aimed at investigating the effects of a VRU monitoring system on
drivers' attention and behaviour during Level 3 automated driving, as well as on their trust in the
automation system. The VRU monitoring system was conceptualized to support drivers during Level
3 automated driving (according to SAE's taxonomy) in mixed environments where automated
vehicles will often interact with other users, e.g., in urban areas. The VRU monitoring system
provides timely information to the drivers (15-20 seconds in advance) about the presence of VRUs in
the vicinity with whom a potential interaction may occur. This information is provided even when
the VRUs are out of the drivers' sight. As hypothesized in our study, this information should
increase drivers' trust in and reliance on the system, allowing them to benefit from the safety
functionalities of the system (e.g., better lateral and longitudinal control), and from the greater spare
time to engage in other non-driving related tasks (e.g., watching a video). At the same time, the system
should promote a better attentional strategy by alerting drivers when their attention to the road is
recommended or even necessary. The results of this study are partially presented in Deliverable 5.2.

Figure 1 – Moving-base simulator at VTI Gothenburg (Driving Simulator IV)


2.1.2 Technical requirements and implementations

To successfully conduct the research described above, a set of technical implementations on the
simulator were necessary. Next, the main components integrated during the preparation stage will be
described.

Level 3 automation
We implemented a level 3 driving automation system for this experiment. When activated, the driving
automation system kept the desired speed at 55 km/h and automatically steered the vehicle to keep it
within the lane. The driving automation was turned on by pushing a button on the steering wheel. The
system could be disengaged by pressing the brake, manually steering the vehicle, or by pressing the
on/off button attached to the steering wheel. The status of the driving automation system was displayed
on the dashboard via a steering-wheel symbol. When inactive, the symbol was blue; when active, it
turned green. Figure 2 illustrates the steering-wheel symbol and its different states.

Figure 2 - The two states of the steering-wheel symbol: blue (left) indicates that the driving
automation is off; green (right) indicates that the vehicle is driving with Level 3 driving
automation
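
To make the activation and deactivation rules described above explicit, the following minimal C++ sketch models them as a small state machine. All identifiers are illustrative assumptions; the actual simulator implementation at VTI is not reproduced here.

```cpp
#include <iostream>

enum class AutomationState { Off, Active };

struct DriverInputs {
    bool onOffButtonPressed;  // on/off button on the steering wheel
    bool brakePressed;        // brake pedal input
    bool manualSteering;      // driver turns the wheel
};

class Level3Automation {
public:
    // Updates the automation state from the driver inputs and returns it.
    AutomationState update(const DriverInputs& in) {
        if (state_ == AutomationState::Off) {
            if (in.onOffButtonPressed) state_ = AutomationState::Active;
        } else if (in.brakePressed || in.manualSteering || in.onOffButtonPressed) {
            // Braking, steering or pressing the button again disengages the system.
            state_ = AutomationState::Off;
        }
        return state_;
    }
    // Dashboard symbol colour: blue when inactive, green when active.
    const char* symbolColour() const {
        return state_ == AutomationState::Active ? "green" : "blue";
    }
    // Speed held by the automation while active (km/h).
    static constexpr double kTargetSpeedKmh = 55.0;
private:
    AutomationState state_ = AutomationState::Off;
};

int main() {
    Level3Automation automation;
    automation.update({true, false, false});         // driver presses the button
    std::cout << automation.symbolColour() << "\n";  // prints "green"
    automation.update({false, true, false});         // driver brakes
    std::cout << automation.symbolColour() << "\n";  // prints "blue"
}
```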

Tablet
Engagement in a video presented on a Lamina tablet running Windows 10 was used as a proxy for drivers' trust in the
system. The rationale is that greater trust in the system's capabilities should be reflected in
greater visual and cognitive engagement in the video. The tablet was placed in the central
instrument cluster of the car (see Figure 4), allowing better discrimination of whether drivers were
actually engaged in the video or monitoring the environment. The video was started at the beginning
of the drive and was kept on throughout the whole condition. Drivers selected their preferred video
based on a pre-defined list presented to them before the start of the experiment.


Figure 3 - Tablet used for video presentation during the experimental sessions

VRU monitoring system and warning system


The active status of the VRU monitoring system was conveyed via a symbol on the dashboard
represented by a magnifying glass (Figure 4). The presence of this symbol indicated to the drivers that
the system was actively detecting VRUs nearby who could potentially interact with them.
The absence of this symbol, on the contrary, indicated that such a function was not available. When
VRUs were detected, a message was shown on the top part of the windshield, prompting drivers to
co-monitor the environment with the system (see right picture in Figure 4). Drivers were required to
confirm the reception of this message by clicking a button attached to the steering wheel. In the
experiment, all drivers confirmed reception of the message.
Besides the VRU monitoring system, drivers were also supported by a multimodal warning (auditory,
visual and haptic) prompting them to take over and react to an imminent collision with a VRU
suddenly crossing the road. This warning was triggered in dangerous situations when the distance to
the VRU was 50 meters or less. Since the passenger car cabin used in the simulator is from a Volvo, Volvo's
existing collision warning design was used. The brake pulse was applied via the moving-base system.
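
As a rough illustration of this warning logic, the sketch below encodes the 50-meter trigger condition in C++. The function name and signature are assumptions for this example; in the study the warning itself was realized with Volvo's existing collision warning design inside the simulator software.

```cpp
#include <iostream>

// Multimodal warning outputs: auditory, visual and haptic (brake pulse through
// the moving-base system). All three are triggered together.
struct WarningOutputs {
    bool auditory;
    bool visual;
    bool brakePulse;
};

// Triggers the warning in dangerous situations when the VRU is 50 m away or closer.
WarningOutputs evaluateVruWarning(double distanceToVruMeters, bool situationDangerous) {
    const double kWarningDistanceMeters = 50.0;  // threshold reported above
    const bool warn = situationDangerous && distanceToVruMeters <= kWarningDistanceMeters;
    return {warn, warn, warn};
}

int main() {
    const WarningOutputs w = evaluateVruWarning(42.0, true);
    std::cout << std::boolalpha << w.auditory << "\n";  // prints "true"
}
```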


Figure 4 - The magnifying glass symbol (left) indicated to drivers that the VRU
monitoring system was active. The “Monitoring zone” message (right) was used to
inform about the presence of VRUs nearby

2.2 VTI Walking Simulator


2.2.1 General description and purpose of installation

A study (test#3.2) was conducted in June 2020 in the walking virtual-reality simulator located at
VTI’s premises in Linköping. The purpose of this study was to investigate how automated vehicles
should convey their intentions to facilitate smooth interactions with VRUs in crossing situations.
More specifically, this work was directed at finding out how the explicit information conveyed by an
external HMI (i.e., the car will/will not stop) and the implicit information provided by the dynamics
of the vehicle (i.e., deceleration), should be temporally integrated during the approaching stage to
provide unambiguous information about the automated car's intention. In the VR simulator, each
participant was presented with a virtual crossing scenario where s/he had to interact with 45
automated vehicles. The vehicles presented the explicit information (via the external HMI) and
decelerated at different distances (i.e., far or close) from the pedestrian when approaching him/her.
Participants were instructed to cross when they felt safe to do so or to indicate their decision not to cross
by pressing a button.

2.2.2 Technical implementation or requirements

Virtual Reality Simulator and Scenario


The VR simulator at VTI’s premises in Linköping consists of a set of 2 infrared cameras that
continuously monitor the participants' position in the virtual environment, an adjustable headset with
a Tobii eye-tracking system integrated, and a hand-held controller with different buttons (See Figure
5). The size of the testing area is 18 square meters. In the experiment, a virtual unsignalized crossing
in an urban area was used as a scenario to increase the uncertainty of the crossing. Participants' task
was to stand on a green circle on the side of the road until a vehicle approached from their left side
(see the bottom picture in Figure 5). Then, based on the explicit (eHMI) and implicit (deceleration)
information provided by the vehicle, pedestrians decided whether to cross or not. If they decided to
cross, pedestrians were asked to walk towards another green circle situated 4 meters ahead, on the
other side of the road. If not crossing, participants were instructed to click on the trigger button in one
of the controllers.


Figure 5 – Components of the virtual reality walking simulator at VTI in Linköping

For this experiment, the following measures were logged: vehicle-pedestrian distance when the
pedestrian crossed, crossing speed (from one green circle to the next), non-crossing decisions (clicks
on the controller), changes in pedestrians' decisions, run-overs and glance behaviour. In addition, after
each interaction, pedestrians were asked to report, on a 5-point Likert scale, how clear the intention
of the vehicle that had just passed was (1 – very unclear, 5 – very clear).

Figure 6 - Virtual crossing scenario used in the experiment

External HMI
The eHMI consisted of a square-shaped display located on the roof of the vehicle. The display
consisted of a black panel with a frame whose colour was varied to provide different information to
the pedestrians about the vehicle's intention. A blue frame indicated the vehicle's intention was to
continue driving without stopping (left picture in Figure 7), whereas an orange frame informed about
the vehicle’s intention to stop when reaching the pedestrian's position (middle picture in Figure 7).
Finally, a dark frame (i.e., a completely dark panel) indicated that the automation functionality was not
available and that the vehicle might or might not stop (right picture in Figure 7). In some conditions, the frames
lit up at a far distance from the pedestrian, and in other conditions, this occurred at a close
distance.
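
The mapping between the vehicle's intention and the frame colour can be summarized in a short sketch. The enum names and the onset-distance parameter are illustrative only, not part of the simulator code.

```cpp
#include <iostream>

// Illustrative mapping of vehicle intention to eHMI frame colour (walking-simulator study).
enum class VehicleIntention { WillNotStop, WillStop, AutomationUnavailable };

const char* frameColour(VehicleIntention intention) {
    switch (intention) {
        case VehicleIntention::WillNotStop:           return "blue";
        case VehicleIntention::WillStop:              return "orange";
        case VehicleIntention::AutomationUnavailable: return "dark";  // panel stays black
    }
    return "dark";
}

// The frame lights up either at a far or at a close distance from the pedestrian,
// depending on the experimental condition.
bool frameLit(double distanceToPedestrianMeters, double onsetDistanceMeters) {
    return distanceToPedestrianMeters <= onsetDistanceMeters;
}

int main() {
    std::cout << frameColour(VehicleIntention::WillStop) << "\n";  // prints "orange"
}
```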

The implemented eHMI was designed as an extension of the eHMI GRAIL (an intelligent interface
between vehicles and vulnerable road users) that has been implemented in the BRAVE test vehicle
used by UAH and ALCALA for BRAVE test #4. Thereby, we advance the physical eHMI concepts
of the test vehicles as other potential additions to GRAIL are prototyped in the simulator (Figure 7).
GRAIL is short for Green Assistant Interfacing Light and aims to enhance road safety as it reassures
vulnerable road users when they are crossing the street. GRAIL has two main features to achieve this
reassurance, namely: (1) decreasing the vehicle's speed as pedestrians are detected, and (2) lighting an
array of green diodes located at the front of the vehicle. See Figure 14 and chapter 4 for an illustration
of GRAIL diodes as they are lit.

Figure 7 - Different status of the external HMI. Blue = not stopping; Orange = stopping; Dark
= No information available


3 Implementation at Fraunhofer IAO


In the Vehicle Interaction Lab’s driving simulator at Fraunhofer IAO (Figure 8), the entire final
BRAVE (interior) HMI-concept was implemented, as depicted in D3.2 (Leuteritz, Fritz, Bischoff, &
Kamps, 2020) – with the exception of the LEDs on the steering wheel, as explained below. Some
components, namely the so-called transparency HMI realized on the HUD, as well as some audio
messages, had already been implemented to realize test #2, as reported in D5.2 (Strand et al., 2020).
With tests #2 in Germany and #3 in Sweden, the simulator testing tasks were officially concluded and
results reported in D5.2. However, once the final BRAVE HMI concept was created, based on the
outcomes of these tests, we deemed it important to implement an exemplary realization of this HMI
concept and to perform a summative evaluation with end users (planned for October 2020). This
implementation of the entire concept follows the logic of iterative development, which should – of
course – end with a test of the entire HMI. Furthermore, the implementation in the driving simulator
may serve to showcase the BRAVE results.

Figure 8 – The Fraunhofer IAO driving simulator

3.1 Implemented components and purpose


The technical components integrated in the driving simulator play an important role in the concept.
These include the display array, which consists of nine small displays that are positioned in a
consecutive line on the car dashboard, at the bottom of the windshield (A in Figure 9). Currently,
only five displays are available, since the others are broken and could not be replaced in time.
However, five displays are enough for implementing the HMI. Also essential are the displays D1 and
D2 (see Figure 9), the steering wheel with the integrated LEDs (W in Figure 9), the vibration system

located behind the driver's seat, the sound output system (S in Figure 9) and the head-up display
(HUD, H in Figure 9). The column switches (C) and the pedals (P) are, of course, meant to be used
during the manual driving periods, yet only in the usual way. Hence, the column switches
and pedals were not affected by the implementation work.

Figure 9 – location of used components in the Fraunhofer IAO driving simulator


The steering wheel with the integrated LEDs has previously been used in the ADAS&ME project
(https://www.adasandme.com/; G.A. no. 688900). We integrated it into the BRAVE HMI concept
since it seemed feasible and in order to achieve consistency across projects. The wheel was a
prototype provided by a third party and did not work properly at the time we integrated the BRAVE
HMI. As a consequence, we developed a solution that transferred the task of the steering wheel to the
display array. This shows that regarding visual output, the BRAVE internal HMI concept has some
redundancies that will allow OEMs to apply their own HMI elements while still following the
interaction flows and the design heuristics. In fact, leaving out the steering wheel even has the benefit
of showcasing a more cost-efficient implementation of the concept. However, the BRAVE internal
HMI concept does not have redundancies regarding other elements (e.g., it would be impossible to
replace audio or haptic feedback). What the solution looks like is described in the following section.
The display D1 shows the speedometer and is positioned behind the steering wheel. The display D2
is positioned in the centre console. If desired, the position of D2 could be changed.
The main purposes of the BRAVE internal HMI are to help the driver handle situations such as the
switching of control (hand-over and take-over), and to help keep the driver in-the-loop by maintaining
the right amount of situation awareness. In each use case, individual components or a combination of
several components are used. The implementation described below takes the driving mode into
account (automated vs. manual), as well as the driver’s state (based on the Driver Monitoring System
– DMS – developed by UAH).
In the following, the implementation of the use cases by means of the HMI components is described
and further elaborated.
3.2 Use case realization
The HMI concept described in Table 7 in D3.2 (Leuteritz, Fritz, Bischoff, & Kamps, 2020) represents
a general approach that is meant to give room to OEMs so that they could use their proprietary sets
of wordings, icons, etc. to realize the use cases. Therefore, it was necessary to create our own set of
these interaction elements when implementing the HMI in the driving simulator at Fraunhofer IAO.
This chapter shows how the generically described interaction sequences were realized.
Since the BRAVE HMI concept is described based on the BRAVE use cases, these same use cases
were also used in creating the implementation described below. In the first group of use cases, the
driver's intervention/reaction is not necessary (since the automation can handle the situation by itself).
An overview of the HMI elements used in these scenarios is given in Table 1.

Mode                                        Visual effect / animation on the array
a) Manual mode (automation not available)   none
b) Manual mode (automation available)       none
c) Transition Manual → Automation           Filling with blue colour (repeatedly, sand clock)
d) Automation mode                          none
e) Automation mode (takeover request)       Flashing (slowly)
f) Transition Automation → Manual mode      Blue colour draining (repeatedly, sand clock)

Figure 10 – Display array indicating automation status
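
As a compact illustration of the array states in Figure 10, the sketch below maps each mode (a-f) to its animation. The enum and the string labels are assumptions made for this example; the actual animations are rendered on the display array itself.

```cpp
#include <iostream>

// Modes a)-f) from Figure 10 and the corresponding array animation.
enum class HmiMode {
    ManualNoAutomation,       // a) no animation
    ManualAutomationReady,    // b) no animation
    TransitionToAutomation,   // c) array repeatedly fills with blue ("sand clock")
    Automation,               // d) steady blue bars, no animation
    TakeoverRequest,          // e) slow flashing
    TransitionToManual        // f) blue colour drains repeatedly
};

const char* arrayAnimation(HmiMode mode) {
    switch (mode) {
        case HmiMode::TransitionToAutomation: return "fill-blue (repeating)";
        case HmiMode::TakeoverRequest:        return "flash-slow";
        case HmiMode::TransitionToManual:     return "drain-blue (repeating)";
        default:                              return "none";
    }
}

int main() {
    std::cout << arrayAnimation(HmiMode::TakeoverRequest) << "\n";  // prints "flash-slow"
}
```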

1.1 Overtaking cyclist/another car


At the beginning of this use case, the automation is active, which is indicated in the display
array (d in Figure 10). The other road user is marked on the head-up display; the colour of the
marking indicates the estimated danger level, which we assume to be low for an overtaking
case (otherwise the automated vehicle should not overtake).
When the overtaking manoeuvre is performed, the brightness changes from left to right. Such
a colour movement (on the array) is shown when overtaking any cyclist or any other car.
When pulling out to the left, a colour gradient is displayed (becoming darker from right
to left), and when merging back to the right, it is displayed in the opposite way.
1.2 Activate automation
At the beginning of this use case, the driver is steering the vehicle in manual mode. Once the
automation conditions are met, "Automation is possible!" is shown with the green background
colour on the display D1. The driver has the possibility to accept or reject the automation
mode by touchpad confirmation on the steering wheel. To visually inform the driver, a bar
indicator is shown on the array.

If the DMS identifies that the driver is tired or distracted and automation conditions are
fulfilled, then the vehicle informs the driver via visualization or voice output: “Automated
mode is available!” (“Automation ist verfügbar!” – in German). This is done every 30
seconds, until either automation conditions are no longer met, or the driver state improves.
1.3 Crossing cyclist while turning left
At the beginning of this use case, the automation is active, which is indicated in the display
array (d in Figure 10). The vehicle wants to turn left but gives the cyclist the
right of way. As in the above-mentioned realizations, it is constantly indicated that the car is
in the automated state. The arrays light up blue with the indicator bars being full. The cyclist
icon is also shown on the head-up display.
1.4 Stationary obstacle on the road
At the beginning of this use case, the automation is active, which is indicated in the display
array (d in Figure 10). The driver is informed that the emergency brake is activated by a
muffled sound (duration of two seconds). Then, the car verbally informs about the AEB
(“AEB activated!” – “Notbremsung eingeleitet!” in German) via voice output.
Furthermore, the array remains filled with blue bars because the vehicle is still in automated driving.
1.5 Small animal crosses the road
At the beginning of this use case, the automation is active, which is indicated in the display
array (d in Figure 10). The vehicle detects a small animal on the road and drives over it (since
this is the safest manoeuvre). The animal icon is shown on the head-up display. The array
remains filled with bars because the car is still in automated driving.
In the following scenarios, the automation system requests the driver to intervene and take control of
the vehicle. When this happens, the bars in the array decrease vertically (from lower to upper row in
Figure 10), in case automated driving is still available. At medium and high levels of distraction, the
vibration system behind the seat is activated.
Two different levels of seat vibration are used. The first stage of the seat vibration is soft (the
strength increases for five seconds). The second stage is a strong vibration. The vibration system
should only be turned on for a short time (duration of five seconds) so that the driver is not disturbed.
After the first stage, the second stage follows immediately, followed by an intermediate pause of
five seconds. The second stage is repeated until the driver intervenes. A sketch of this timing is given
below; an overview of the HMI elements used in these scenarios is given in Table 2.
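
The following is a minimal sketch of the two-stage vibration schedule just described, assuming the driver has not yet intervened; the function is an illustration only, not the simulator's actuator interface.

```cpp
#include <cmath>
#include <iostream>

// Returns the seat-vibration intensity (0.0 to 1.0) at a given time since the
// warning started: a soft five-second ramp-up, then five-second strong bursts
// separated by five-second pauses, repeated until the driver intervenes.
double seatVibrationIntensity(double secondsSinceWarning) {
    const double kSoftRamp = 5.0;  // stage 1: strength increases for five seconds
    const double kStrong   = 5.0;  // stage 2: strong vibration for five seconds
    const double kPause    = 5.0;  // pause before stage 2 repeats

    if (secondsSinceWarning < kSoftRamp)
        return secondsSinceWarning / kSoftRamp;  // soft, increasing
    const double t = std::fmod(secondsSinceWarning - kSoftRamp, kStrong + kPause);
    return (t < kStrong) ? 1.0 : 0.0;            // strong burst, then pause
}

int main() {
    std::cout << seatVibrationIntensity(2.5) << "\n";   // 0.5 (soft ramp)
    std::cout << seatVibrationIntensity(7.0) << "\n";   // 1.0 (strong stage)
    std::cout << seatVibrationIntensity(12.0) << "\n";  // 0.0 (pause)
}
```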
2.1 Stationary obstacle (AEB necessary)
At the beginning of this use case, the automation is active, which is indicated in the display
array (d in Figure 10). The vehicle encounters a situation in which it would have to trigger an
AEB unless the driver takes control. The voice output is "Attention, please take control!"
(“Übernehmen Sie bitte die Kontrolle!” in German) and is repeated every five seconds. An
icon (hands on the steering wheel) shows up on D2. Once the driver uses the touchpad on the
steering wheel, the voice output stops and the icon disappears.
2.2 Ball rolls on street (child follows to get it)
Same as in scenario 2.1.
2.3 Upcoming traffic lights
At the beginning of this use case, the automation is active, which is indicated in the display
array (d in Figure 10). If the traffic light cannot be identified by the vehicle’s automation
system, a high-pitched warning sound is emitted directly (duration of three seconds). This
should happen a maximum of three times. The sound is immediately followed by the voice output
"Please take control!" (“Übernehmen Sie bitte die Kontrolle!” in German). Once the driver
uses the touchpad on the steering wheel, the warning sound stops and the icon (hands on the
steering wheel) disappears.
2.4 End of road marking (e.g. because of a construction site)
At the beginning of this use case, the automation is active, which is indicated in the display
array (d in Figure 10). The same HMI components as in scenario 2.3 are used. In addition, a red
light pulsates on the array every three seconds; this is achieved by controlling the backlight of the
display array. The pulsing of the light runs synchronously with the warning sound. This interaction
sequence stops as soon as the driver has taken control.
2.5 Leaving highway
Same as in scenario 2.4.
2.6 Deactivate automation
At the beginning of this use case, the automation is active, which is indicated in the display
array (d in Figure 10). The vehicle informs the driver that they must take control because the
automation mode will no longer be available and will be deactivated.
If the driver is not distracted or if distraction is low, the sequence of action is as described in
Table 7 of D3.2 (Leuteritz, Fritz, Bischoff, & Kamps, 2020). The speaker output “Automation
ends in X minutes” is implemented as “Ende der Automatisierung in X Minuten” in German.
The time is shown on the displays D1 and D2 by integrating a digital count-down. At the same
time, the brightness on the array increases when the driver is informed via notification. Then
the brightness decreases again.
If the driver is moderately drowsy (middle distraction state), the message “You are starting to
get drowsy. Take over to stay alert.” is implemented in German as follows: “Sie werden müde.
Übernehmen Sie bitte die Kontrolle, um wach zu bleiben.” On D1 and D2, driver state is
depicted with a permanent icon, and the demand to take control with a slowly pulsating icon.
If the driver is in a middle distraction state (moderately stressed, fearful, or angry),
no adaptations were necessary.

Kritischer Fahrerzustand. Das System hält sicher in 10 Minuten an!

Figure 11 – Display message: Critical driver state. The system will stop safely in 10 minutes!

If the driver distraction state is high, and the driver is thus not allowed to take over, the message on D1
and D2 is: "Critical driver state. The system will stop safely in 10 minutes!" (“Kritischer
Fahrerzustand. Das System hält sicher in 10 Minuten an!” in German). The critical driver
state is indicated on displays D1 and D2 by text output and a red background. A high-
pitched warning sound (beep) is emitted every minute and lasts for five seconds. If the driver
tries to control the vehicle by touching the steering wheel, the system does not react to this
input. The bars on the array indicate automated driving.
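
To summarize the driver-state-dependent behaviour of use case 2.6, the sketch below maps the DMS state to the message and behaviour described above. The enum names and the selection function are assumptions made for illustration; only the wording follows the text.

```cpp
#include <iostream>
#include <string>

enum class DriverState { Attentive, ModeratelyDrowsy, HighlyDistracted };

struct DeactivationHmi {
    std::string messageOnD1D2;  // text shown on displays D1 and D2
    bool redBackground;         // only in the critical driver state
    bool takeoverAllowed;       // steering input is ignored when false
};

DeactivationHmi deactivationHmiFor(DriverState state, int minutesLeft) {
    switch (state) {
        case DriverState::Attentive:
            return {"Automation ends in " + std::to_string(minutesLeft) + " minutes",
                    false, true};
        case DriverState::ModeratelyDrowsy:
            return {"You are starting to get drowsy. Take over to stay alert.",
                    false, true};
        case DriverState::HighlyDistracted:
            return {"Critical driver state. The system will stop safely in "
                        + std::to_string(minutesLeft) + " minutes!",
                    true, false};
    }
    return {"", false, true};
}

int main() {
    const DeactivationHmi hmi = deactivationHmiFor(DriverState::HighlyDistracted, 10);
    std::cout << hmi.messageOnD1D2 << "\n";  // critical-state message shown with red background
}
```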


Table 1 – No takeover required

Use case | HMI component | Implementation
1.1 | Array | blue bars for automation (constantly displayed)
1.1 | HUD | road user is shown (icon)
1.2 | Array | blue line for manual mode / blue bars for automation (constantly displayed)
1.2 | D1 | "Automation possible" – text (constantly displayed)
1.2 | Speaker | "Automation is available!" every 30 seconds
1.3 | Array | blue bars for automation (constantly displayed)
1.3 | HUD | road user is shown (icon)
1.4 | Array | blue bars for automation (constantly displayed)
1.4 | HUD | AEB is shown (icon)
1.4 | Speaker | "AEB activated" and muffled (mellow) sound (duration of 2 seconds), 3 times
1.4 | Seat | vibration at high distraction level
1.5 | Array | blue bars for automation (constantly displayed)
1.5 | HUD | animal is indicated (icon)

Table 2 – Takeover required (use-case numbering as above)

Use case | HMI component | Implementation
2.1 | Array | blue bars for automation (constantly displayed) / blue line for manual mode
2.1 | Speaker | "Attention, please take control!" request (every 5 seconds)
2.1 | Seat | vibration at high distraction level
2.2 | Array | blue bars for automation (constantly displayed) / blue line for manual mode
2.2 | Speaker | "Attention, please take control!" request (every 5 seconds)
2.2 | Seat | vibration at high distraction level
2.3 | Array | blue bars for automation (constantly displayed) / blue line for manual mode
2.3 | Speaker | "Attention, please take control!" request and a high-pitched warning sound (3 seconds)
2.3 | Seat | vibration at high distraction level
2.4 | Array | blue bars for automation (constantly displayed) / blue line for manual mode and pulsating red light
2.4 | Speaker | "Attention, please take control!" request and a high-pitched warning sound (3 seconds)
2.4 | Seat | vibration at high distraction level
2.5 | Array | blue bars for automation (constantly displayed) / blue line for manual mode and pulsating red light
2.5 | Speaker | "Attention, please take control!" request and a high-pitched warning sound (3 seconds)
2.5 | Seat | vibration at high distraction level
2.6 | Array | blue bars for automation (constantly displayed) / blue line for manual mode and brightness in-/decreases
2.6 | Speaker | "Attention, please take control!" and a high-pitched warning sound (5 seconds)
2.6 | D1 | "Automation ends in 10/5/1 minute(s)" text and critical driver state is shown
2.6 | D2 | "Attention, please take control" text and critical driver state is shown


3.3 Technical description


Due to restricted access to the driving simulator because of workplace regulations to counter
the Covid-19 pandemic, development has at this stage been done mostly on remote PCs and has not
yet been successfully tested in the simulator. Once this is completed, an updated version of this
deliverable will provide the complete technical description.
The transparency HMI that marks VRUs with a coloured “V” above their heads – with the aim of
indicating to the driver that the automated vehicle has perceived them – has been implemented
using custom features already available in the SILAB software (Figure 12).

Figure 12 – transparency HMI implemented in Fraunhofer IAO’s driving simulator

The implementation of the other HMI components was done in C/C++. The communication with and
control of the individual components is done via UDP (User Datagram Protocol), which means that
there is no need to establish or terminate a connection between the components (e.g. displays) and
the managing PC.
All HMI components are addressed via UDP. An interface exists between the driving
simulator software SILAB and the HMI components. The C/C++ code is compiled into an .exe file
that executes the appropriate interaction sequence for the use case as soon as messages are sent and
received by SILAB. Libraries can be added in this development environment, so existing functions
can be reused in the program.
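
To illustrate how an HMI component can be addressed over UDP without any connection setup, the following minimal C++ sketch sends a single datagram to a component. The host address, port and message format are placeholders; the actual SILAB interface and the Windows build (which uses the Winsock equivalents) are not reproduced here.

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdint>
#include <string>

// Sends a command string as one UDP datagram; no connection is established or terminated.
bool sendHmiCommand(const std::string& host, uint16_t port, const std::string& command) {
    const int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0) return false;

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, host.c_str(), &addr.sin_addr);

    const ssize_t sent = sendto(sock, command.data(), command.size(), 0,
                                reinterpret_cast<const sockaddr*>(&addr), sizeof(addr));
    close(sock);
    return sent == static_cast<ssize_t>(command.size());
}

int main() {
    // Hypothetical command telling the display array to show the automation bars.
    sendHmiCommand("192.168.0.42", 5005, "ARRAY:AUTOMATION_BARS");
}
```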


4 Road vehicle implementation


This section explains and illustrates the implementation of the GRAIL system, an intelligent interface
between vehicles and vulnerable road users (VRUs).

4.1 Implemented system and purpose


The GRAIL system provides a solution for deploying an intelligent and efficient interaction between
vehicles and a key group of Vulnerable Road Users (VRUs), namely pedestrians and cyclists. GRAIL
stands for GReen Assistant Interfacing Light, aiming at increasing road safety and reassuring VRUs
when crossing the street, which is one of the main objectives of the BRAVE project. In the situation
depicted in Figure 13, a couple of pedestrians are standing at the curb of a pedestrian crossing while
looking for eye contact with the driver of the oncoming car. In this scenario, the couple will not start
crossing until they observe some signal indicating that the car is giving way to them.

Figure 13 - Pedestrians waiting to cross at a pedestrian crossing while looking for eye contact
with the driver of an oncoming car.

In order to increase VRUs’ reassurance, the automated system developed by the University of Alcalá
(UAH) performs two actions in parallel that contribute to improve the interaction between the car and
the VRUs. On the one hand, the vehicle starts to decrease its speed significantly as soon as those
pedestrians are detected by the on-board camera system. On the other hand, the GRAIL system, an
array of green diodes located in the front of the vehicle (as shown in Figure 14), is turned on. The rest
of the time, when no VRU is detected, the system will inform its surroundings by lighting red diodes,
hence indicating that it is not safe to cross the street.


Figure 14 – GRAIL system shown in detail. An array of green diodes located in the front part
of the vehicle is turned on whenever a VRU is detected on the curb.

The combination of both actions, vehicle deceleration and green light coming on, provides a distinct
sign to VRUs, indicating that the situation is safe for them to start crossing the street. The operation
of the GRAIL system is graphically illustrated in the video that can be seen at: GRAIL demonstration.

4.2 Technical description


The GRAIL system consists of two distinct parts: the VRU detection algorithm (including the
control of the brake) and the array of diodes.

On the one hand, the array of diodes is composed of 32 RGB LEDs per meter. Each pixel’s colour
(blue, red or green) can be set with 7-bit PWM precision at 1.2 MHz, i.e. 21-bit colour per LED (which
is much more than the eye can easily discern). The LEDs are controlled by shift registers that are
chained up so that the strip can be shortened or lengthened. In order to send the data, two digital output
pins are needed. The PWM is built into each chip (LPD8806) so that, once the colour is set, the chip
keeps driving all the LEDs; there is no need to continuously update or clock them. The strip is made
of flexible PCB material and comes with a weatherproof sheathing. The system is powered with 5 V / 2 A,
for which a voltage stabilizer connected to the battery is installed in the glove box of the vehicle. In
order to control the LPD8806, an Arduino library provided by the manufacturer was used. The data
is sent to the strip via the SPI protocol. In this way, the RGB values of each pixel are sent serially at
each time step.
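
The sketch below shows how such a strip could be driven from an Arduino with the LPD8806 library mentioned above. The pin numbers and the VRU-detection input are placeholders; in the actual system the detection result comes from the on-board perception pipeline rather than a digital pin.

```cpp
#include <LPD8806.h>
#include <SPI.h>

const int nLEDs    = 32;  // one meter of strip at 32 LEDs per meter
const int dataPin  = 2;   // two digital output pins are needed for data and clock
const int clockPin = 3;

LPD8806 strip(nLEDs, dataPin, clockPin);

// Fills the whole strip with one colour; components use the 7-bit range (0-127).
void fillStrip(uint8_t r, uint8_t g, uint8_t b) {
  for (int i = 0; i < strip.numPixels(); i++) {
    strip.setPixelColor(i, strip.Color(r, g, b));
  }
  strip.show();  // once set, the chips keep the colours without further clocking
}

void setup() {
  strip.begin();
  fillStrip(127, 0, 0);  // default: red, not safe to cross
}

void loop() {
  // Placeholder for the VRU-detection output of the perception system.
  bool vruDetected = digitalRead(4) == HIGH;
  if (vruDetected) {
    fillStrip(0, 127, 0);  // GRAIL green: the vehicle is yielding
  } else {
    fillStrip(127, 0, 0);  // red: do not cross
  }
  delay(100);
}
```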

On the other hand, the detection of the VRUs and their pose is done using a Mask R-CNN framework,
which detects persons’ keypoints, as shown in Figure 15. This method extends Faster R-CNN by
adding a branch for predicting an object mask in parallel with the existing branch for bounding box
recognition. With this technique, it can be detected whether or not the pedestrian is going to cross the
street. Details regarding the detection of VRUs’ intentions have been reported in D4.4 (Sotelo et al.,
2020).


Figure 15 - Detection of Pedestrians using UTAC dummies

Some tests were performed at the UTAC facilities (BRAVE test #4), using dummies with a rotating
head. These tests entail two different scenarios:

1. In the first scenario, the pedestrian stops at the curb, waits to see the green lights, reassures
him-/herself that the vehicle is decreasing its speed and, only then, starts to cross. A video
demonstration showing the operation of the system in this first use case can be seen at:
GRAIL_Scenario1

Figure 16 depicts a sequence of frames, which showcase the recognition of the dummy’s
keypoints and the detection of its glance towards the vehicle.


Figure 16 - GRAIL tests in UTAC: Dummy’s gaze detection.

2. In the second scenario, the pedestrian follows their path parallel to the vehicle, so that the car
must carefully overtake the dummy. Some frame samples showing this overtaking are
depicted in Figure 17. A video demonstration showing the operation of one of these tests can
be seen at: GRAIL_Scenario2.


Figure 17 - GRAIL tests in UTAC: Scenario 2.


5 Discussion
The implementations by VTI and UAH have been successfully completed. The implementations by
VTI served for conducting studies that have already been finished and reported (D5.2), with the exception
of the walking simulator study, which has been conducted but not yet reported. The road testing by
UAH is planned for the last quarter of 2020, and the car is ready.

With regard to eHMI, significant developments in the state of the art have been observed, which are
reported in the recently published D4.5 (Leuteritz, Fritz, Widlroither et al., 2020). Based on this
updated state of the art, the GRAIL concept needs to be modified to be compliant with the current
standard ISO/TR 23049. The basic idea is to transform it into a frontal braking light. Regarding the
technical implementation described above, this is a minor change towards the final demonstration
(test #6). Until then, the current implementation of the GRAIL serves as a proof of concept regarding
the combination of VRU intention detection and eHMI.

The implementation in Fraunhofer IAO’s driving simulator is still ongoing; when the interaction
concept of D3.2 was finalized, the Covid-19 pandemic started, and access to the driving simulator has
been limited since then. The implementation is planned to be completed by September 2020, and an
additional final user testing of the mock-up is planned for November 2020, with preparations starting
in October 2020.

References
Carrasco, S., Sotelo, M. Á., Alonso, J., & Salinas, C. (2020). BRAVE D3.3: Driver Monitoring Concept Report. http://www.brave-project.eu/wp-content/uploads/2020/05/D3.3-Driver-Monitoring-Concept-Report.pdf
ISO/TR 23049. Road Vehicles – Ergonomic aspects of external visual communication from automated vehicles to other road users.
Leuteritz, J.‑P., Fritz, N., Bischoff, S., & Kamps, M. (2020). BRAVE D3.2: Vehicle-Driver Interaction Concept Report. http://www.brave-project.eu/wp-content/uploads/2020/05/D3.2-Vehicle-Driver-Interaction-Concept-Report.pdf
Leuteritz, J.‑P., Fritz, N., Widlroither, H., Terfurth, L., Strand, N., & Solis Marcos, I. (2020). BRAVE D4.5: Vehicle-VRU Interaction Concept Report. http://www.brave-project.eu/wp-content/uploads/2020/07/D4.5-Vehicle-VRU-Interaction-Concept-Report.pdf
Sotelo, M. Á., Izquierdo, R., Carrasco, S., Salinas, C., & Alonso, J. (2020). BRAVE D4.4: Model for the prediction of VRUs' intentions.
Strand, N., Leuteritz, J.‑P., & Solis Marcos, I. (2020). BRAVE D5.2: Results and HMI recommendations based on virtual prototyping. http://www.brave-project.eu/wp-content/uploads/2020/05/D5.2-Results-and-HMI.pdf
