
BAHIR DAR UNIVERSITY

BAHIR DAR INSTITUTE OF TECHNOLOGY

SCHOOL OF RESEARCH AND GRADUATE STUDIES

FACULTY OF MECHANICAL AND INDUSTRIAL ENGINEERING

MSc in Electromechanical Engineering


Research proposal on:

Designing and developing advanced vehicle collision avoidance systems using sensor
fusion methods

By:

MULUGETA MULATU

A research proposal submitted in partial fulfillment of the requirements for the award of the degree of Master of Science in Electromechanical Engineering

Advisor:
GETNET AYELE (PhD)
April 2024 G.C. (2016 E.C.)

Bahir Dar, Ethiopia


TABLE OF CONTENTS

ABSTRACT
INTRODUCTION
    BACKGROUND OF THE STUDY
SENSORS USED IN ADCAS
CURRENT ADAS SOLUTIONS
SENSOR FUSION
ADVANTAGES OF MULTI-SENSOR FUSION
POSSIBLE PROBLEMS AND ISSUES
SENSOR FUSION ALGORITHMS
CHARACTERISTICS AND TYPES OF SENSORS USED FOR ADCAS
    MILLIMETER WAVE RADAR
    CAMERA
    LIDAR
STATEMENT OF THE PROBLEM
OBJECTIVE OF THE STUDY
    GENERAL OBJECTIVE
    SPECIFIC OBJECTIVE
LITERATURE REVIEW
METHODOLOGY
    CONCEPTUAL MODEL OF ADVANCED DRIVER COLLISION AVOIDANCE ASSISTANCE SYSTEMS USING SENSOR FUSION METHOD BY ARDUINO
    CONCEPTUAL AVCAS SYSTEM ARCHITECTURE
    ARDUINO IDE (INTEGRATED DEVELOPMENT ENVIRONMENT) STEPS
WORK PLAN
BUDGET PLAN
REFERENCES
Bahir Dar Institute of Technology-Bahir Dar University School of Research and Graduate
Studies

FACULTY OF MECHANICAL AND INDUSTRIAL ENGINEERING

THESIS PROPOSAL
Student:

MULUGETA MULATU 12 April 2024


Name Signature Date

The following graduate faculty members certify that this student has successfully presented the necessary written thesis proposal and an oral presentation of this proposal in partial fulfillment of the thesis-option requirements for the Degree of Master of Science in Electromechanical Engineering.

Approved:

Advisor:
________________________________________________________________________
Name Signature Date

Chair Holder:
________________________________________________________________________
Name Signature Date

Faculty Dean:
________________________________________________________________________
Name Signature Date

Abstract

To reduce the occurrence of traffic accidents, and to address the shortcomings of using a single sensor for target recognition in smart-vehicle collision avoidance, such as limited perception range and high recognition error, this proposal studies collision avoidance technology for smart vehicles based on multi-sensor fusion. Sensor fusion is one of the key technologies enabling advanced collision avoidance systems (ADCAS). It is implemented as a software solution that combines data provided by a disparate range of sensors into a single, consistent picture of the environment around the vehicle. Fusing information from more than one sensor offers numerous advantages, as it helps to provide highly reliable data with fewer errors. An automotive-ready ADCAS uses various sensors, such as ultrasonic sensors, cameras, radar, and LIDAR (light detection and ranging), to monitor the surroundings and to support functions such as adaptive cruise control (ACC), lane departure warning (LDWS), blind spot detection, automatic braking (ABD), and rearview cameras. This proposal aims to develop a collision avoidance system that uses sensor fusion methods controlled by an Arduino microcontroller. The system will be tested on a kid's toy car, and the collision avoidance system is expected to prevent the model car from colliding with a barrier by stopping at a distance of 15 cm from the barrier.

KEYWORDS
 ADCAS (Advanced Collision Avoidance System)
 LIDAR (Light Detection and Ranging)
 ACC (Adaptive Cruise Control)
 LDWS (Lane Departure Warning System)
 ABD (Automatic Braking Dynamics)

INTRODUCTION

Background of the Study


Collision avoidance systems are important for protecting people's lives and preventing property damage. The number of road traffic accidents is one of the major societal problems in the world today. According to estimated data from the WHO, 1.2 million people are killed and as many as 50 million are injured each year [1]. Road traffic accidents in Ethiopia are a major problem with various causes and a lack of management and policy on road safety. Traffic accidents are increasing over time, while there is no structural national government policy addressing the infrastructural and legal issues. Even though the government has drafted strategies to improve traffic efficiency and reduce road traffic accidents, major problems are escalated by pedestrians as well as drivers.

Current work in the automotive industry focuses on the development of active safety applications and advanced driver assistance systems (ADAS) rather than passive safety systems; active safety systems have been introduced to help the driver avoid collisions in the first place. Nowadays, systems such as lane departure warning and rear-end collision avoidance have been introduced. These active safety systems are required to interact much more with the driver than passive safety systems, creating a closed loop between driver, vehicle, and environment. Examples of such systems can be found in the Laboratory for Intelligent and Safe Automobiles (LISA). An ADCAS shall support the driver in his or her task of driving the vehicle, providing additional information or a warning when encountering any dangerous situation. The system should be equipped with various types of sensors to observe the environment around the vehicle. For example, radar and laser scanners are used to measure the distance and velocity of objects, and video cameras are used to detect the road surface and lane markings or to provide additional visual information. These systems aspire to a reduction, or at least an alleviation, of traffic accidents by means of collision mitigation procedures, lane departure warning, lateral control, safe speed, and safe following measures [2].

Figure 1: Simplified block diagram of the driver, vehicle, environment, and driver assistance system.

Sensors Used in ADCAS

Advanced Vehicle Collision Avoidance Systems (AVCAS) rely on a variety of sensors to detect and respond to changes in the environment. These sensors include:

1. Ultrasonic Sensors

Ultrasonic sensors are a type of sensor that uses sound waves to detect objects. They emit high-
frequency sound waves that bounce off objects and return to the sensor. By measuring the time it
takes for the sound wave to return, the sensor can determine the distance between itself and the
object. Ultrasonic sensors are commonly used in ADAS for parking assist systems.
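The time-of-flight computation described above can be sketched as a small function. This is a minimal illustration; the speed-of-sound constant and the example echo time are assumptions typical of modules like the HC-SR04, not values taken from this proposal.

```python
# Round-trip time-of-flight to distance for an ultrasonic sensor.
# The constant and the example values are illustrative assumptions.
SPEED_OF_SOUND_CM_PER_US = 0.0343  # ~343 m/s in air at about 20 degrees C

def echo_time_to_distance_cm(echo_us: float) -> float:
    """Convert a round-trip echo time in microseconds to distance in cm.

    The pulse travels to the object and back, so the one-way distance
    is half of speed multiplied by time.
    """
    return (echo_us * SPEED_OF_SOUND_CM_PER_US) / 2.0

# An echo of 1000 microseconds corresponds to roughly 17 cm.
print(round(echo_time_to_distance_cm(1000), 2))
```

On an Arduino, the echo time would come from timing the sensor's echo pin; the conversion step itself is the same division by two shown here.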

2. Infrared Sensors

Infrared sensors are a type of sensor commonly used in AVCAS. They work by emitting infrared
light and measuring the reflection to detect objects in the vehicle's surroundings. Infrared sensors
are particularly useful in low-light conditions, as they can detect objects that may not be visible
to the naked eye or other types of sensors. They are often used in conjunction with other sensors,
such as cameras and lidar sensors, to provide a more complete picture of the environment.

3. Camera Sensors

Camera sensors are an important component of AVCAS technology, as they allow vehicles to
'see' and interpret their surroundings. These sensors capture images and analyze them using
complex algorithms to identify objects and obstacles in the vehicle's path. They can detect lane
markings, traffic signs, pedestrians, and other vehicles, providing valuable information to the
vehicle's control system.

4. Lidar Sensors

Lidar sensors are a key component in many advanced driver assistance systems (ADAS). They
work by emitting laser pulses and measuring the time it takes for the light to bounce back,
creating a highly accurate 3D map of the surrounding environment. This allows the system to
detect objects and obstacles in real time, making it an essential technology for autonomous
vehicles.

Figure 2 Sensor placement with full coverage, redundancy and sufficient overlapping area[2].

CURRENT ADAS SOLUTIONS

Accident reduction systems have become crucial for automotive companies because consumers pay much more attention to safety. In order to prevent traffic accidents, a roadmap for fully autonomous driving has been created. The development of driver assistance systems began with the Anti-lock Braking System (ABS), introduced into serial production in the late 1970s. A roadmap concept consists of the following sensors [1]:

 proprioceptive sensors, able to detect and respond to dangerous situations by analyzing the behavior of the vehicle;
 exteroceptive sensors (e.g. ultrasonic, radar, lidar, infrared, and vision sensors), able to respond at an earlier stage and to predict possible dangers;
 sensor networks, the application of multisensory platforms and traffic sensor networks.

Sensor Fusion

Sensor fusion is able to provide data for the decision-making process that has low uncertainty owing to the inherent randomness or noise in the sensor signals, includes significant features covering a broader range of operating conditions, and accommodates changes in the operating characteristics of the individual sensors (due to calibration, drift, etc.) because of redundancy. In fact, perhaps the most advantageous aspect of sensor fusion is the richness of information available to the signal processing/feature extraction and decision-making methodology employed as part of the sensor system. Sensor fusion is best defined in terms of the 'intelligent' sensor, since such a sensor system is structured to utilize many of the same elements needed for sensor fusion. The objective of sensor fusion is to increase the reliability of the information so that a decision on the state of the process can be reached. This tends to make fusion techniques closely coupled with feature extraction methodologies and pattern recognition techniques [3].

Advantages of Multi-sensor Fusion

In general, multi-sensor data fusion provides significant advantages compared to using only a single data source. The improvement in performance can be summarized in four general areas.

Representation. Information obtained throughout the fusion process has an abstract level, or granularity, higher than that of each original input data set. This allows for richer semantics and higher resolution on the data compared to each initial source of information.

Certainty. We expect the probability of the data to increase after the fusion process, increasing the confidence in the data in use. The improved signal-to-noise ratio is also part of the reason for better confidence in the fused data. These gains are associated with redundant information from a group of sensors surveying the same environment. The reliability of the system is thus also improved in cases of sensor error or failure.

Accuracy. If the initial data is noisy or has errors, the fusion process should try to reduce or eliminate the noise and errors. Usually, the gain in certainty and the gain in accuracy are correlated. The accuracy gain can be in timing as well, from the parallel processing of different information from multiple sensors.

Completeness. Bringing new information into the current knowledge of the environment allows for a more thorough view. If individual sensors only provide information that is independent of the other sensors, bringing them into a coherent space will give an overarching view of the whole. Usually, if the information is redundant and concordant, the accuracy will improve. The discrimination power of the information is also increased with more comprehensive coverage from multiple sensors. The number of sensors employed is also a factor in the cost analysis of whether a multi-sensor system is better than a single-sensor system [2]. A criterion has to be set up to assess the reliability of the whole system. However, as different applications require different numbers and types of sensors, it is difficult to define an overarching optimal number of sensors for any given system [4].
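The certainty gain from redundant sensors can be illustrated numerically: averaging several independent noisy readings of the same distance shrinks the spread of the estimate. The sensor count, noise level, and trial counts below are illustrative assumptions for the demonstration, not parameters from this proposal.

```python
import random
import statistics

def fused_reading(true_value, n_sensors, noise_std, rng):
    """Average n independent noisy readings of the same quantity."""
    readings = [true_value + rng.gauss(0.0, noise_std) for _ in range(n_sensors)]
    return statistics.mean(readings)

rng = random.Random(42)          # fixed seed so the demo is repeatable
true_dist, noise = 100.0, 5.0    # cm; assumed values for the demo

# Spread of single-sensor vs four-sensor fused estimates over many trials.
single = statistics.stdev(fused_reading(true_dist, 1, noise, rng) for _ in range(2000))
fused = statistics.stdev(fused_reading(true_dist, 4, noise, rng) for _ in range(2000))
print(fused < single)  # averaging four sensors roughly halves the spread
```

This is the statistical core of the "Certainty" advantage: with n independent sensors of equal noise, the standard deviation of the averaged estimate falls roughly as 1/sqrt(n).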

Possible Problems and Issues

Certainly, sensor fusion comes with its own inherent problems. Several key issues have to be considered for sensor fusion techniques [3, 4]:

Registration. Each individual sensor has its own local reference frame from which it provides data. For fusion to occur, the different data sets have to be converted into a common reference frame and aligned together. Calibration errors of individual sensors should be addressed during this stage. This problem is critical in determining whether the fusion process succeeds or not.

Uncertainty in sensor data. Diverse formats of data may create noise and ambiguity in the fusion process. Competitive or conflicting data may thus result from such errors. The redundancy of the data from multiple sensors has to be exploited to reduce uncertainty, and the system must learn to reject outliers when conflicting data is encountered.

Incomplete, inconsistent, and spurious data. Data is considered incomplete if the observed data remains the same regardless of the interpretation. Some methods to make data complete are either collecting more data features or using more sensors. Sensors are inconsistent when two complete data sets have different interpretations; this is the consequence of bad sensor registration or of sensors observing different things. If data contains features that are not related to the observed environment, it is defined as spurious. Just as with uncertainty, redundant data has to be exploited to help in fusing incomplete, inconsistent, and spurious data [5].

Correspondence / data association [6, 7]. One aspect of sensor fusion is establishing whether two tracks from different sensors represent the same object (track-to-track association). This is required to know how the data features from different sensors match each other, and whether any data features are outliers. The other form of the data association problem is measurement-to-track association, which refers to the problem of recognizing from which target each measurement originates [8].

Granularity. The level of detail from different sensors is rarely similar. The data may be sparse or dense relative to other sensors. The level of data may be different, and this has to be addressed in the fusion process.

Time scales. Sensors may be measuring the same environment at different rates. Two identical sensors may also measure at different frequencies due to manufacturing defects. The arrival times at the fusion node may not coincide due to propagation delays in the system. Especially for spatially distributed sensors with varying data rates, real-time sensor fusion has to be based on a precise time-scale setting to ensure all data are synchronized properly. In cases where the fusion algorithm requires a history of data, how fast a sensor is able to provide data is directly related to the validity of the results.
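One common way to handle the time-scales issue is to resample the slower stream onto the faster stream's timestamps by linear interpolation. The sample rates and distance values below are made-up illustrations, not measurements from this work.

```python
def interpolate_at(timestamps, values, t):
    """Linearly interpolate a sampled sensor stream at time t.

    timestamps must be sorted ascending, and t must lie inside the
    recorded interval.
    """
    for i in range(len(timestamps) - 1):
        t0, t1 = timestamps[i], timestamps[i + 1]
        if t0 <= t <= t1:
            frac = (t - t0) / (t1 - t0)
            return values[i] + frac * (values[i + 1] - values[i])
    raise ValueError("t lies outside the recorded interval")

# Radar distance samples every 50 ms; align one onto a camera frame at t = 80 ms.
radar_t = [0, 50, 100, 150]          # ms
radar_d = [40.0, 39.0, 38.0, 37.0]   # cm
aligned = interpolate_at(radar_t, radar_d, 80)
```

After this alignment step, both streams refer to the same instant and can be fed into the fusion stage together.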

Sensor Fusion Algorithms

Sensor fusion algorithms combine sensory data that, when properly synthesized, help reduce uncertainty in machine perception. They take on the task of combining data from multiple sensors, each with unique pros and cons, to determine the most accurate positions of objects. The accuracy of sensor fusion promotes safety by allowing other systems to respond in a timely and situation-appropriate manner. In real-world systems, noise means that using just one sensor to identify the surrounding environment is not sufficiently reliable. Sensors have various strengths and weaknesses, and a good algorithm takes multiple types of sensors into consideration. The data from these different sensors can be complementary. Because different types of sensors have their own pros and cons, a strong algorithm will also give preference to some data points over others. For example, the speed sensor may be more precise than the parking sensors, so you will want to give it more weight. Or perhaps the speed sensor is not very precise, so you will want to rely more on the proximity sensors. The variations are nearly infinite and depend on the specific use case [5]. Due to the various natures of the fusion process, different algorithms are engaged for different levels of fusion. These are usually based on probability theory, classification methods, and artificial intelligence.
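The weighting idea described above (trusting the more precise sensor more) has a classical closed form: weight each measurement by the inverse of its variance. Below is a minimal sketch with made-up variances; it is an illustration of the principle, not the algorithm this proposal will necessarily use.

```python
def inverse_variance_fuse(m1, var1, m2, var2):
    """Fuse two measurements of the same quantity.

    Each measurement is weighted by 1/variance, so the more precise
    sensor dominates, and the fused variance is smaller than either input.
    """
    w1, w2 = 1.0 / var1, 1.0 / var2
    fused = (w1 * m1 + w2 * m2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)
    return fused, fused_var

# A precise sensor (variance 0.1) reading 20.0 and a coarse one
# (variance 1.0) reading 23.0: the fused result stays close to 20.
estimate, variance = inverse_variance_fuse(20.0, 0.1, 23.0, 1.0)
```

Note that the fused variance 1/(w1 + w2) is always below the smaller of the two input variances, which is the "certainty" gain discussed earlier in numeric form.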

Kalman Filtering

The Kalman filter is a statistical recursive data-processing algorithm which continuously calculates an estimate of a continuous-valued state based on periodic observations of that state. It uses an explicit statistical model of how the state x(t) changes over time and an explicit statistical model of how the observations z(t) are related to the state. This explicit description of process and observations lets many sensor models be easily incorporated into the algorithm. Moreover, we can constantly assess the role of each sensor in the system. As every iteration requires almost the same effort, the Kalman filter is well adapted for real-time usage. We first define a model for the states to be estimated in the standard state-space form:

    x(t) = A(t) x(t-1) + B(t) u(t) + n(t)

where x(t) is the state vector of interest, A(t) is the transition matrix, B(t) is the control matrix, and u(t) is a known control input. The observation equation, in the same state-space form, is

    z(t) = H(t) x(t) + v(t)

where z(t) is the observation vector and H(t) is the measurement matrix. n(t) and v(t) are random zero-mean Gaussian variables describing the uncertainty as the state evolves, with covariance matrices Q(t) and R(t) respectively. From this, the Kalman filter proceeds recursively in two stages, prediction and update.

A prediction of the state at time k, given observations up to time k-1, is given as

    x̂(k|k-1) = A(k) x̂(k-1|k-1) + B(k) u(k)

and the covariance P(k|k-1) is also computed as

    P(k|k-1) = A(k) P(k-1|k-1) A(k)^T + Q(k)
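A scalar version of this predict/update cycle can be sketched as follows. All numeric values here (process noise q, measurement noise r, the simulated distance readings) are illustrative assumptions, not parameters from this proposal.

```python
def kalman_step(x_est, p_est, z, a=1.0, b=0.0, u=0.0, h=1.0, q=0.01, r=1.0):
    """One scalar Kalman predict/update cycle.

    Predict: x(k|k-1) = a*x + b*u,   P(k|k-1) = a*P*a + q
    Update:  K = P*h / (h*P*h + r),  x += K*(z - h*x),  P = (1 - K*h)*P
    """
    # Prediction stage
    x_pred = a * x_est + b * u
    p_pred = a * p_est * a + q
    # Update stage with measurement z
    k_gain = p_pred * h / (h * p_pred * h + r)
    x_new = x_pred + k_gain * (z - h * x_pred)
    p_new = (1.0 - k_gain * h) * p_pred
    return x_new, p_new

# Track a roughly constant 50 cm distance through noisy readings,
# starting from an uninformative prior (large initial covariance).
x, p = 0.0, 100.0
for z in [52.1, 49.3, 50.8, 48.9, 50.2]:
    x, p = kalman_step(x, p, z)
# x settles near 50 and p shrinks with every measurement.
```

The same structure extends to the matrix case in the equations above by replacing the scalar products with matrix products and transposes.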

Figure 3: Classical approaches for sensor fusion algorithms [6].

Characteristics and Types of Sensors Used for ADCAS

Most sensors do not directly generate a signal from an external phenomenon, but via several conversion steps. Thus, the output that is read by the user may deviate from the actual input, and performance-related parameters, or specifications, provide information about the deviation from the ideal behavior. There are static characteristics such as accuracy, precision, resolution, and sensitivity; typically, these can be easily managed before the fusion process. Dynamic characteristics, however, vary between changes of input. The speed and frequency of response, settling time, and lag of sensors are all inevitable, and these lead to several of the inherent errors faced by sensor fusion. Most sensors are not ideal, and deviations may enter the information acquired. Some can be assumed to be caused by random noise, which requires signal processing to reduce the error. The other case is a systematic error that is correlated with time, which can be corrected through a defined filter if the error is known.

In the fusion sensing process, the dominant adopted sensors are MMW-Radar, LiDAR, camera, ultrasonic, GPS/IMU, and V2X sensors. Consequently, the rest of this section discusses the characteristics, advantages, and disadvantages of these sensors.

MILLIMETER WAVE RADAR


After radiating electromagnetic waves, the radar gathers the waves scattered by targets with its receiving antenna; a series of signal processing steps is then performed to acquire information about the targets. At present, the mainstream frequency bands of MMW-Radars include 24 GHz, 60 GHz, and 77 GHz, and the most prevalent one is 77 GHz, while 60 GHz is a frequency band only adopted in Japan and the 24 GHz band will gradually be abolished in the future. The 79 GHz band radar has a higher resolution of range, speed, and angle, is extensively approved, and will become the mainstream frequency band of vehicle radar in the future. Compared to cameras and LiDAR, MMW-Radar has a longer wavelength and a certain anti-blocking and anti-pollution ability, which allow it to cope with rain, snow, fog, and dark environments. A drawback of MMW-Radar is that relatively stationary targets are hard to distinguish. In addition to being disturbed by noise, AD vehicles often suffer from false alarms produced by metal objects such as road signs or guardrails [7].

CAMERA
The camera is one of the earliest sensors for the AD system and is also the primary choice for manufacturers and researchers at present. The camera is principally applied to tasks such as target recognition, environment map building, lane detection, and target tracking. In recent years, deep learning (DL) has achieved excellent performance in target recognition and tracking tasks; it can obtain powerful expressive ability from massive data and replace the traditional manual feature design of machine learning methods. After the AD system accurately completes target recognition and tracking, further decision tasks can be implemented. At present, there are two types of cameras: the charge-coupled device (CCD) and the complementary metal-oxide-semiconductor (CMOS). CCD has a complicated manufacturing process, higher quantization efficiency, lower noise, a high dynamic range, and high image quality in low-light conditions. Compared to CCD sensors, CMOS sacrifices some performance to reduce cost. The difference between them is expected to narrow, and CMOS is expected to replace CCD. It is therefore reasonable to conclude that a single camera sensor will be extremely unreliable in severe weather conditions. The cameras of AD vehicles also have poor reliability in situations of sudden changes in light, such as exiting a tunnel. By combining the camera with GPS, HPM, and even V2X, some prior information can be introduced to adjust the camera exposure dynamically [7][6].

LiDAR
LiDAR measures the interval between emitted laser pulses and the light scattered back by targets to obtain distance. Based on the scanning structure, it includes 2-D LiDAR and 3-D LiDAR: 2-D LiDAR is a single-layer device, while 3-D LiDAR is a multi-layer one. 3-D LiDAR is more prevalently applied to AD vehicles but is more expensive. With the increasing application and production of LiDAR, manufacturing costs will gradually decline and predictably reach a level that most automobile manufacturers can accept. LiDAR provides practical and precise 3-D perception competence day and night. According to the presence or absence of motion units [68], LiDAR can be divided into three types: time-of-flight (TOF), triangulating, and phase-ranging LiDAR; the mainstream in AD systems is TOF LiDAR. In the latest research, LiDAR has proven fully capable of recognizing and sensing pedestrians' multiple motion patterns and spatial states [69]. Multi-line LiDAR continuously emits a laser beam through a transmitter, and the receiver collects the light scattered by targets as a point-cloud image, which helps in perceiving and recognizing pedestrians and vehicles. Although LiDAR is superior to MMW-Radar in measurement accuracy and 3-D perception competence, its performance is still incompetent under severe weather conditions such as fog, snow, and rain. The fusion of camera, MMW-Radar, and LiDAR data will remove a portion of the information redundancy and provide a reliable and efficient perception ability, but such a system is expensive.

Figure 4: Sensor features and fusion results [7].

STATEMENT OF THE PROBLEM

Traffic accidents happen on Ethiopian highway roads almost every day. The objective of this proposal is to tackle this problem through the development of next-generation automotive safety systems and to contribute to reducing the frequency and severity of traffic accidents. To implement this advanced collision avoidance system, an Arduino UNO, an ultrasonic sensor, and a kid's toy car will be used, and the test is expected to give a positive result.
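As a rough sketch of the intended behavior (stopping 15 cm before the barrier), the control decision can be reduced to a threshold rule on the measured distance. The 15 cm threshold comes from this proposal; the intermediate "slow down" band is an illustrative addition of this sketch, not something the proposal specifies.

```python
STOP_DISTANCE_CM = 15.0  # stopping margin stated in the proposal

def motor_command(distance_cm):
    """Map a (fused/filtered) distance reading to a drive command."""
    if distance_cm <= STOP_DISTANCE_CM:
        return "STOP"   # barrier inside the safety margin: cut the motors
    if distance_cm <= 2 * STOP_DISTANCE_CM:
        return "SLOW"   # closing in on the barrier: reduce speed
    return "DRIVE"

# Simulated approach toward a barrier:
for d in [120.0, 60.0, 28.0, 14.0]:
    print(d, motor_command(d))
```

On the actual prototype, the same rule would run inside the Arduino loop, with the distance coming from the ultrasonic sensor (ideally after the filtering/fusion stages described earlier) and the returned command driving the motor controller.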

OBJECTIVE OF THE STUDY

General objective
The research aims to advance the state of the art in vehicle collision avoidance technology by leveraging sensor fusion methods to enhance the detection, prediction, and mitigation of collision risks. The findings of this research can inform the development of next-generation automotive safety systems and contribute to reducing the frequency and severity of traffic accidents.

Specific objective
1 Analyze the strengths, limitations, and complementary characteristics of each sensor type
in terms of detection range, accuracy, resolution, cost, and environmental robustness.

2 Design algorithms for sensor data preprocessing, fusion, filtering, and object
recognition/classification to generate accurate and timely collision threat assessments.

3 Optimize the fusion algorithms to improve detection sensitivity, reduce false alarms, and
enhance robustness to sensor noise, occlusions, and environmental disturbances.

4 Design and implement a prototype collision avoidance system that integrates the selected
sensors and sensor fusion algorithms into a cohesive hardware and software platform

5 Compare the performance of the sensor fusion-based system against conventional single-
sensor and non-fusion approaches to demonstrate its superiority in terms of detection
accuracy, reaction time, false alarm rate, and collision prevention capability.

LITERATURE REVIEW

Advanced Vehicle Collision Avoidance Systems (AVCAS) are a set of technologies designed to avoid collisions. These systems use a combination of sensors, cameras, and other technologies to provide drivers with information about their environment and to help them avoid collisions, stay in their lanes, and maintain a safe following distance.

AVCAS technologies have been the subject of numerous studies that examined the effectiveness of various technologies in reducing crashes and improving driver safety. Reviews found that many AVCAS technologies, including forward collision warning, lane departure warning, and blind spot detection, were effective in reducing crashes and improving driver safety. Work in Accident Analysis & Prevention also examined the effects of AVCAS on driver behavior and found that AVCAS technologies had a positive effect, including reducing the number of crashes and improving driver awareness and reaction times.

REVIEW OF RELATED RESEARCH WORKS

1. ADAPTIVE INTERVENTION ALGORITHMS FOR ADVANCED DRIVER ASSISTANCE SYSTEMS (Kui Yang, Christelle Al Haddad, Rakibul Alam, Tom Brijs and Constantinos Antoniou) [8].

This paper proposed adaptive algorithms that can be automatically updated for headway monitoring, illegal overtaking, over-speeding, and fatigue based on real-time traffic environments and driver status, filling the gap left by existing deterministic and fixed-threshold algorithms. The findings of this work are essential as they provide the exploratory simulation work needed to evaluate the behavior of different algorithmic possibilities and threshold values for the definition of a Safety Tolerance Zone for different in-vehicle real-time warnings, in preparation for final choices made in real-world driving trips. The results of this paper can also serve as a potential framework for ADAS or in-vehicle real-time warning algorithms in industrial applications.

2. EXPLORING PERCEPTIONS OF ADVANCED DRIVER ASSISTANCE SYSTEMS (ADAS) IN OLDER DRIVERS WITH AGE-RELATED DECLINES (Joanne M. Wood, Emily Henry, Sherrie-Anne Kaye, Alex A. Black, Sebastien Glaser, Kaarin J. Anstey, Andry Rakotonirainy) [9].

This study explored older drivers’ perceptions regarding the benefits and challenges
associated with driving ADAS-equipped vehicles, with a focus on the visual, auditory,
physical, and cognitive factors associated with using ADAS. The themes identified highlight
the need to increase the flexibility of visual and auditory warning alerts to reduce complexity
and to involve older drivers in the design process of ADAS-equipped vehicles to ensure these

vehicles suit their needs. Customizing vehicles to better suit the individual needs of older
drivers, beyond the limited options available on certain models, has the potential to improve
driver safety by reducing the number of road crashes involving older drivers, as well as
having positive benefits for increasing their mobility, independence, and quality of life.

3. REVIEW OF LANE DETECTION AND TRACKING ALGORITHMS IN ADVANCED DRIVER ASSISTANCE SYSTEM (Ammu M Kumar and Philomina Simon) [10].

Lane detection and tracking is one of the key features of advanced driver assistance systems. Lane detection means finding the white markings on a dark road; lane tracking uses the previously detected lane markers and adjusts itself according to the motion model. This paper reviews lane detection and tracking algorithms developed in the last decade. Several modalities are considered for lane detection, including vision, LIDAR, vehicle odometry information, information from global positioning systems, and digital maps. Lane detection and tracking is one of the challenging problems in computer vision. Different vision-based lane detection techniques are explained in the paper, and the performance of different lane detection and tracking algorithms is also compared and studied [11].

4. AUTOMOTIVE EMBEDDED SYSTEMS: KEY TECHNOLOGIES AND INNOVATIONS [12].

Road traffic crashes cause an estimated 1.25 million deaths each year, and millions more
people are injured [1]. To reduce such road traffic fatalities and injuries, the World Health
Organization has formulated various evidence-based measures. Safe vehicles play a critical
role in averting crashes and minimizing the probability of serious injury. Safety standards
for automobiles are set by the United Nations World Forum for Harmonization of Vehicle
Regulations, along with a legal framework that member states may voluntarily apply. Vehicles
that abide by these safety standards are less prone to accidents, and when road traffic
crashes do occur, the injuries are less likely to be life-threatening. Though there are
various contributing factors to accidents, the single largest one is the driver, as human
error contributes to a major percentage of accidents. Modern vehicles therefore include
various safety features that assist the driver in various possible ways, and modern vehicle
design focuses on better safety for the driver and the passengers [13].

5. ADVANCED DRIVER ASSISTANCE SYSTEM FOR ROAD ENVIRONMENTS TO
IMPROVE SAFETY AND EFFICIENCY (Felipe Jiménez, José Eugenio Naranjo, José
Javier Anaya, Fernando García, Aurelio Ponz, José María Armingol)

Advances in information technologies have led to more complex road safety applications.
These systems provide multiple possibilities for improving road transport. The integrated
system that this paper presents deals with two aspects that have been identified as key
topics: safety and efficiency. To this end, the development and implementation of an
integrated advanced driver assistance system (ADAS) for rural and intercity environments is
proposed. The system focuses mainly on single-carriageway roads, given the complexity of
these environments compared to motorways and the high number of severe and fatal accidents
on them. The proposed system is based on advanced perception techniques, vehicle automation,
and communications between vehicles (V2V) and with the infrastructure (V2I). A sensor fusion
architecture based on computer vision and laser scanner technologies is developed. It allows
real-time detection and classification of obstacles, and the identification of potential
risks [1].

6. HOLISTIC ASSESSMENT OF DRIVER ASSISTANCE SYSTEMS: HOW CAN
SYSTEMS BE ASSESSED WITH RESPECT TO HOW THEY IMPACT GLANCE
BEHAVIOR AND COLLISION AVOIDANCE?

This study demonstrates the need for a holistic safety-impact assessment of an advanced
driver assistance system (ADAS) and its effect on eye-glance behavior. It implements a
substantial incremental development of the what-if (counterfactual) simulation methodology
applied to rear-end crashes from the SHRP2 naturalistic driving data. This assessment
combines (i) the impact of the change in drivers’ off-road glance behavior due to the
presence of the ADAS, and (ii) the safety impact of the ADAS alone. The results illustrate
how the safety benefit of forward collision warning and autonomous emergency braking, in
combination with adaptive cruise control (ACC) and driver assist (DA) systems, may almost
completely dominate the safety impact of the longer off-road glances that activated ACC and
DA systems may induce. Further, this effect is shown to be robust to induced system failures.
The accuracy of these results is tempered by outlined limitations, which future estimations
will benefit from addressing. On the whole, this study is a further step towards a successively
more accurate holistic risk assessment which includes driver behavioral responses, such as
off-road glances, together with the safety effects provided by the ADAS [14].

7. HOW TO CONSIDER EMOTIONAL REACTIONS OF THE DRIVER WITHIN THE
DEVELOPMENT OF ADVANCED DRIVER ASSISTANCE SYSTEMS (ADAS) (Maik
Auricht, Rainer Stark)

The presented approach shows a possible integration of a sub-process into existing
development processes without a re-definition of the whole development process. Methods must
be developed for the individual steps. This concerns in particular the integration of
parameters relating to the measurement of user-experience factors in the development tool.
Blocks and parameters shall be integrated into the model so that subjective factors can be
measured, which can be supplemented with objective methods (e.g., questionnaires). Here,
further research is necessary [4].

8. A REVIEW OF MODEL PREDICTIVE CONTROLS APPLIED TO ADVANCED
DRIVER-ASSISTANCE SYSTEMS (Alessia Musa, Michele Pipicelli, Matteo Spano,
Francesco Tufano, Francesco De Nola, Gabriele Di Blasio, Alfredo Gimelli, Daniela Anna
Misul and Gianluca Toscano)

Model predictive control-based strategies represent interesting solutions when problems
related to advanced driver-assistance systems have to be addressed. Being a mix of classical
feedback and optimal controllers, an MPC can exhaustively solve these problems by minimizing
a user-defined objective function while predicting future states using several appropriate
methods. However, its implementation is far from trivial, especially when optimality is
sought [10].

Overall, the literature suggests that ADCAS technologies have the potential to greatly
improve driver safety and reduce the number of crashes on the road. However, these
technologies have the following characteristics and drawbacks:

 Advanced Driver Collision Avoidance Systems (ADCAS) are designed to mitigate road
accidents caused by human error, but the systems also pose potential risks. They are
made up of various human-machine interface safety features that help with driving or
parking.

 The main purpose of ADCAS is to improve vehicle safety by utilizing the latest
technology. ADCAS features supply the driver with essential information to make
certain adjustments or maneuvers.

 Common ADCAS features include adaptive cruise control, Autonomous Emergency
Braking (AEB) with a collision avoidance system, and lane assist.

 Drawbacks of ADCAS include some drivers not being fully aware of what the system
does, and drivers becoming complacent and over-relying on it. ADCAS is only a
secondary safety feature; the party primarily in charge of overall safety is still
the driver. Driverless prototypes are running, but their development is ongoing, and
these vehicles differ from the ADCAS-equipped cars in the marketplace. The
technology may miss potential road hazards around the vehicle. ADCAS is not a
substitute for safe driving practices: drivers should always remain alert and
attentive while behind the wheel.

METHODOLOGY

 Requirements Analysis: Clearly defining and understanding the specific requirements and
goals of the ADCAS system is crucial. This includes identifying the desired features and
performance criteria.

 Material Selection: Choosing the appropriate Arduino UNO and sensors, such as ultrasonic
sensors or other sensor types, based on the system requirements. Each sensor has its
strengths and limitations, so selecting the right combination is essential.

 Algorithm Development and Processing: Writing the algorithms for the Arduino and the
sensors that implement the ADCAS functionalities. This involves capturing data in different
driving scenarios, including normal, adverse-weather, and emergency conditions.

 System Integration: Combining the components into a coherent ADCAS system. Integration
includes optimizing for real-time processing and ensuring their seamless operation within
the vehicle's architecture.

 Testing and Validation: Conducting rigorous testing to evaluate the performance,
reliability, and safety of the ADCAS system through prototype test scenarios.

CONCEPTUAL MODEL OF ADVANCED DRIVER COLLISION AVOIDANCE
ASSISTANCE SYSTEMS USING THE SENSOR FUSION METHOD WITH ARDUINO

ARDUINO

THE ULTRASONIC SENSOR

The ultrasonic sensor uses sonar to determine the distance to an object. Here is what happens:

 The ultrasound transmitter (trig pin) emits a high-frequency sound (40 kHz).

 The sound travels through the air. If it finds an object, it bounces back to the module.

 The ultrasound receiver (echo pin) receives the reflected sound (echo).
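Since sound travels at roughly 343 m/s (about 0.0343 cm/µs) in air at room temperature, the distance follows directly from the measured echo time. A small sketch of that conversion in plain C++, kept separate from the Arduino pulseIn() call so the arithmetic itself is clear (the 0.0343 figure assumes air at about 20 °C):

```cpp
// Convert an echo pulse duration (microseconds) to a distance in cm.
// Speed of sound is ~0.0343 cm/us at ~20 C; the result is halved
// because the pulse travels to the object and back.
double echoToDistanceCm(double echoMicros) {
    const double speedCmPerUs = 0.0343;
    return (echoMicros * speedCmPerUs) / 2.0;
}
// e.g. echoToDistanceCm(1166.0) is roughly 20 cm
```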

L298N MOTOR DRIVER

The L298N is a dual H-bridge motor driver that allows speed and direction control of two
DC motors at the same time. The module can drive DC motors with supply voltages between 5
and 35 V, with a peak current of up to 2 A.
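One convenient way to drive the L298N from control code is to map a signed speed command onto its two direction pins and the PWM duty for the enable pin. The helper below is a hypothetical sketch (the pin naming follows the common IN1/IN2/EN breakout; adapt it to the actual wiring), written as a pure function so the logic can be tested off-target:

```cpp
#include <algorithm>
#include <cmath>

// Control signals for one L298N channel: two direction pins and
// an 8-bit PWM duty for the enable (EN) pin.
struct MotorCommand {
    bool in1;  // direction pin 1 (forward when high)
    bool in2;  // direction pin 2 (reverse when high)
    int pwm;   // 0-255 duty cycle for the EN pin
};

// Map a signed speed in [-1, 1] to L298N signals; values outside
// the range are clamped. speed = 0 coasts (both pins low, zero duty).
MotorCommand speedToL298N(double speed) {
    speed = std::clamp(speed, -1.0, 1.0);
    MotorCommand cmd;
    cmd.in1 = speed > 0.0;
    cmd.in2 = speed < 0.0;
    cmd.pwm = static_cast<int>(std::lround(std::fabs(speed) * 255.0));
    return cmd;
}
```

On the Arduino, the result would be applied with digitalWrite() on the IN1/IN2 pins and analogWrite() on the EN pin.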

CONCEPTUAL AVCAS SYSTEM ARCHITECTURE

ARDUINO IDE (INTEGRATED DEVELOPMENT ENVIRONMENT) STEPS

The Arduino Integrated Development Environment - or Arduino Software (IDE) - connects
to the Arduino boards to upload programs and communicate with them. Programs written
using the Arduino Software (IDE) are called sketches. These sketches are written in the
text editor and are saved with the file extension .ino.

WORK PLAN

The following tasks are scheduled across the months of January to July:

1. Title formulation
2. Literature review
3. Data collection
4. Data analysis (design)
5. Prototype simulation
6. Documentation
7. Draft thesis submission
8. Final thesis submission
9. Final defense

BUDGET PLAN

Task                                Material                      Unit      Total cost
Documentation                       Paper, pen, print, binding    Packet    2,000
Miscellaneous expenses                                                      5,000
Arduino, sensors and accessories                                            23,000
Total                                                                       30,000

REFERENCES

[1] A. Ziebinski, R. Cupek, D. Grzechca, and L. Chruszczyk, “Review of advanced driver
assistance systems (ADAS),” AIP Conf. Proc., vol. 1906, 2017, doi: 10.1063/1.5012394.
[2] “Multisensor Data Fusion Strategies for Advanced Driver Assistance Systems,” pp. 141–
166, 2009, doi: 10.5772/6575.
[3] J. Rigelsford, Sensors Update 12, vol. 24, no. 2, 2004, doi:
10.1108/sr.2004.08724bae.004.
[4] C. Stiller, “Driver assistance systems,” IT - Inf. Technol., vol. 49, no. 1, pp. 3–4, 2007,
doi: 10.1524/itit.2007.49.1.3.
[5] J. Fayyad, M. A. Jaradat, D. Gruyer, and H. Najjaran, “Deep learning sensor fusion for
autonomous vehicle perception and localization: A review,” Sensors (Switzerland), vol.
20, no. 15, pp. 1–34, 2020, doi: 10.3390/s20154220.
[6] Z. Wang, Y. Wu, and Q. Niu, “Multi-Sensor Fusion in Automated Driving: A Survey,”
IEEE Access, 2020, doi: 10.1109/ACCESS.2019.2962554.
[7] J. M. Wood et al., “Exploring perceptions of Advanced Driver Assistance Systems
(ADAS) in older drivers with age-related declines,” Transp. Res. Part F Traffic Psychol.
Behav., vol. 100, pp. 419–430, 2024, doi: 10.1016/j.trf.2023.12.006.
[8] K. Yang, C. Al Haddad, R. Alam, T. Brijs, and C. Antoniou, “Adaptive Intervention
Algorithms for Advanced Driver Assistance Systems,” Safety, vol. 10, no. 1, p. 10, 2024,
doi: 10.3390/safety10010010.
[9] A. M. Kumar and P. Simon, “Review of Lane Detection and Tracking Algorithms in
Advanced Driver Assistance System,” Int. J. Comput. Sci. Inf. Technol., vol. 7, no. 4, pp.
65–78, 2015, doi: 10.5121/ijcsit.2015.7406.
[10] K. Lee, S. Jeon, H. Kim, and D. Kum, “Optimal Path Tracking Control of Autonomous
Vehicle: Adaptive Full-State Linear Quadratic Gaussian (LQG) Control,” vol. XX,
2019, doi: 10.1109/ACCESS.2019.2933895.
[11] D. Prakash, “Embedded System for Automatic Vehicle Speed Control and Data Analysis
Using Python Software,” vol. 5, no. May, pp. 14–20, 2019.
[12] M. Mulatu, “Advanced Driver Collision Avoidance Assistance Systems: The Benefits of
Advanced Driver Assistance Systems.”
[13] J. Bärgman and T. Victor, “Holistic assessment of driver assistance systems: How can
systems be assessed with respect to how they impact glance behaviour and collision
avoidance?,” IET Intell. Transp. Syst., vol. 14, no. 9, pp. 1058–1067, 2020.
