
ADDIS ABABA SCIENCE AND TECHNOLOGY UNIVERSITY

TRAFFIC CONDITION DETECTION AND


RECOGNITION USING DEEP LEARNING
APPROACH

MSc thesis research proposal

By

GEMECHU GELETA TADESE

Advisor: ASHENAFI YADESSA GEMECHU (PhD)

DEPARTMENT OF ELECTRICAL AND COMPUTER


ENGINEERING (COMPUTER ENGINEERING STREAM)

COLLEGE OF ELECTRICAL AND MECHANICAL ENGINEERING

JANUARY, 2023
Declaration
I declare that this thesis proposal entitled "Traffic condition detection and recognition
using deep learning approach" is my own work and has not been submitted to any
university for a similar purpose. The references used in this proposal are acknowledged
by proper citations.

Gemechu Geleta Tadese _________________ ___________________

Name of student signature Date

Approval Page

Title: Traffic condition detection and recognition using deep learning approach

Student Name: Gemechu Geleta Tadese Signature: _______ Date: _____________

Approved by the examining committee members:

Name Academic Rank Signature Date


Advisor: Dr. Ashenafi Yadesa Assistant Professor __________ _______

Co-Advisor: _________________ _______________ ___________ ________

Examiner: _________________ ________________ ____________ ________

Examiner: ________________ _________________ _____________ _______

Chairperson:
Name Signature Date
DGC Chairperson: _________________ ___________ _________

Dean/Associate Dean for Graduate


Programs:

Table of Contents
Declaration ........................................................................................................................................ i
Approval Page.................................................................................................................................. ii
LIST OF FIGURES ......................................................................................................................... v
LIST OF TABLES .......................................................................................................................... vi
List of Abbreviations .................................................................................................................... vii
Summary ...................................................................................................................................... viii
1. Introduction .............................................................................................................................. 1
1.1 Statement of problem ............................................................................................................. 3
1.2 Research Questions ................................................................................................................ 4
1.3 Scope...................................................................................................................................... 5
1.4 Objective of the thesis ............................................................................................................ 5
1.4.1 General Objective ........................................................................................................... 5
1.4.2 Specific Objective ........................................................................................................... 5
1.5 Significance of the thesis ....................................................................................................... 5
2. Literature Review......................................................................................................................... 6
2.1 Object Detection .................................................................................................................... 6
2.2 Traffic Accident and Traffic Condition ................................................................................. 7
2.3 Traffic Sign Detection and Recognition ................................................................................ 7
2.4 Traffic sign Detection Under Challenging Weather condition .............................................. 8
2.5 Traffic Sign Detection by Color Segmentation...................................................................... 8
2.6 Deep Learning in the context of traffic sign detection and recognition ................................. 8
2.7 Related work on traffic condition detection and recognition ................................ 10
3.Research Methodology ............................................................................................................... 13
3.1 Data Understanding .............................................................................................. 14
3.2 Data preprocessing ............................................................................................................... 15
3.3 Model Building .................................................................................................................... 15
3.4 Model Evaluation ................................................................................................................. 16
3.5 Development Tools .............................................................................................. 17
3.5.1 Hardware Tools ............................................................................................ 17
3.5.2 Software Tools .............................................................................................. 17
3.6 Expected Outcome ........................................................................................ 18

4. Work and Budget plan ........................................................................................................... 19
4.1 Work plan ............................................................................................................................ 19
4.2 Budget plan ................................................................................................................ 20
Reference ....................................................................................................................................... 21

LIST OF FIGURES
Figure 1: A pedestrian thinks she is a car [3] ................................................................................... 3
Figure 2: A car parked near a pedestrian crossing [3] ..................................................................... 3
Figure 3: Color space threshold segmentation ................................................................................. 8
Figure 4: Traffic sign detection steps in the Faster R-CNN algorithm .......................................... 10
Figure 5: Example of calculating the distance and side of a traffic sign ....................................... 13
Figure 6: Methodology of the study ............................................................................................... 14

LIST OF TABLES
Table 1: Summary of related works ...................................................................................12
Table 2: Hardware tools ......................................................................................................17
Table 3: Software tools ......................................................................................................17
Table 4: Work plan ............................................................................................................19
Table 5: Cost breakdown ....................................................................................................20

List of Abbreviations
AI ............................................ Artificial Intelligence

GNP .......................................... Gross National Product

WHO ......................................... World Health Organization

TCDR ........................................ Traffic Condition Detection and Recognition

TSD .......................................... Traffic Sign Detection

TSDR ........................................ Traffic Sign Detection and Recognition

HSI ........................................... Hue, Saturation, Intensity

HSV .......................................... Hue, Saturation, Value

RGB .......................................... Red, Green, Blue

RPN .......................................... Region Proposal Network

R-CNN ....................................... Region-based Convolutional Neural Network

ROI ........................................... Region of Interest

CNN .......................................... Convolutional Neural Network

DCNN ........................................ Deep Convolutional Neural Network

YOLO ........................................ You Only Look Once

VGGNET .................................... Visual Geometry Group Network

Summary
Traffic condition refers to the state of the roads and the volume of vehicles traveling
on them at a given time. It can be used to describe the flow of traffic, the presence of any
delays or congestion, and the overall efficiency of the transportation system. Traffic
conditions can vary widely depending on the time of day, location, and other factors. In
general, a good traffic condition means that the road is clear and there are few delays, while
a poor traffic condition may involve heavy traffic, accidents, or other disruptions that cause
delays and frustration for travelers. The proposed study aims to investigate the use of deep
learning models for the detection and recognition of traffic conditions. This research will
address the following questions: What are the mandatory features needed to prepare a traffic
condition detection and recognition dataset for a deep learning algorithm? Which deep
learning model and feature selection techniques should be used to automate traffic condition
detection and recognition classification with optimal accuracy? How can the performance of
the deep learning model used in traffic condition detection and recognition be optimized?
The experimental study will be conducted on a public dataset using a deep learning model
for the development of the traffic condition model.

Keywords: Deep learning, traffic condition, occlusion, object detection

1. Introduction
Human beings are spread across different areas of the world, while resources are distributed
unevenly among those areas. As a result of this unequal distribution of resources, people move
from one location to another in order to meet their resource needs. This movement calls for
transportation services, which are essential to human existence and are used to convey both
people and commodities as well as to provide for leisure time [1]. In emerging nations, the most
prevalent form of transportation is by road [2].

A traffic accident, also known as a traffic collision or crash, occurs when a vehicle collides
with another vehicle, a pedestrian (for example, a person crossing the road without checking
the traffic), an animal, a road barrier, or any stationary obstruction such as a tree or a utility
pole [3].

The ever-increasing number of injuries and deaths caused by road traffic accidents
motivates a wide range of studies conducted with the general goal of improving the safety
of road users. A commonly accepted classification of the causes of accidents divides them
into three categories: human behavior, vehicle characteristics, and external conditions
(road, traffic, weather) [4].

The location of the accident (whether it happened on a highway, by the side of the road,
near an intersection, at a pedestrian crossing, at a stop sign, or at a traffic signal) had a
more substantial impact on how long the accident lasted than any other road factor [5].

The economic cost of road traffic crashes and injuries is estimated to be 1 percent of Gross
National Product (GNP) in low-income countries and 1.5 percent in middle-income
countries. Together, these costs amount to US $65 billion for low- and middle-income
countries, more than they receive in development assistance [6]. The victims of road traffic
accidents (RTAs) tend to be poor, young, and male, and the accidents cost on average
between 1 and 3 percent of Gross Domestic Product (GDP) in low- and middle-income
countries. The costs associated with these deaths are a "poverty-inducing problem" [7].

Different measures might be used to determine the victims' financial loss or
compensation [8]. A person who has been injured in an accident may be entitled to financial
compensation for pain and suffering, as well as for future financial losses due to a shortened
life expectancy and lost wages [9]. For a complete assessment of financial loss, past lost
income, medical expenditures, vehicle repairs, vehicle rentals, and travel expenses must also
be taken into account [10].

With respect to the risk factors for road crashes in Ethiopia, research findings also show
that most crashes are associated with drivers' errors and demographic characteristics [11].
Most pedestrian fatalities and physical injuries were due to drivers not yielding the right of
way to pedestrians. In line with this finding, studies conducted on road traffic crashes in
Ethiopia showed that over 81% of road crashes were a result of driver error, such as failure
to give priority to pedestrians [12].

The main goals of traffic condition detection and recognition are to show what needs to be
observed on current road segments, to alert drivers ahead of time to threats and
environmental problems on the road, to remind drivers to drive at the recommended speed,
and to offer a helpful assurance of safe driving. Therefore, identifying and detecting traffic
indicators is a crucial research direction for preventing harm to passengers and ensuring the
personal safety of motorists [6].

A commonly accepted classification of the causes of accidents divides them into three
categories: human behavior, vehicle characteristics, and external conditions (road, traffic,
weather). Figures 1 and 2 show two examples of dangerous behavior. In order to mitigate
the causes of accidents and to achieve a general reduction in their number and gravity,
actions should be taken in all three categories. A crucial aspect in the definition of a plan
of intervention is the selection of the sites where the danger is high [4].

Figure 1: A pedestrian thinks she is a car [3].

Figure 2: A car parked near a pedestrian crossing [3]

The latest WHO data show that road traffic accidents accounted for 31,564 deaths in
Ethiopia in 2020, or 5.60% of total deaths. Ethiopia is ranked 19th in the world, with an
age-adjusted death rate of 42.41 per 100,000 people [6].

1.1 Statement of problem


Worldwide, there are about 1.25 million deaths due to road traffic accidents and 20–50
million cases of non-fatal injuries every year. This is partly due to the lack of traffic sign
recognition under challenging environments, road traffic conditions, and pedestrian
movement [10].

To reduce the number of deaths, casualties, and injuries, it is important to have an automatic
system that assists drivers in detecting and recognizing the traffic condition.

The problem of accurately and efficiently detecting and recognizing traffic conditions
using deep learning models is of high importance, as it can aid in the development of
improved traffic control systems. Despite recent advancements in deep learning models,
accurate and effective detection and recognition of different traffic conditions still remains
a challenge. Another issue is the lack of robustness of models to new scenarios in
challenging environments, such as varying road conditions and traffic light systems.
Previous research has mainly focused on the movement of cars; however, there is a lack of
research dedicated to cars driving in close proximity, such as neighboring cars, as well as a
lack of research that considers pedestrian movement. The goal of this research is to develop
an effective deep learning-based model to accurately and efficiently detect and recognize
traffic conditions that generalizes across various traffic environments, in order to help
improve existing traffic control systems. To this end, an approach such as deep learning
methods can be used to realize an automatic traffic condition detection and recognition
system.

1.2 Research Questions


This study is intended to answer the following three questions:

RQ1: What are the mandatory features needed to prepare a traffic condition detection and
recognition dataset for a deep learning algorithm?

RQ2: Which deep learning model and feature selection techniques should be used to
automate traffic condition detection and recognition classification with optimal accuracy?

RQ3: How can the performance of the deep learning model used in traffic condition
detection and recognition be optimized?

1.3 Scope
The scope of this thesis work is to develop a deep learning-based system for traffic
condition detection and recognition. The system will recognize traffic signs, including the
traffic light system, and will detect and predict the movement of pedestrians and the
movement of vehicles.

1.4 Objective of the thesis

1.4.1 General Objective


The main objective of this thesis work is to study and develop a traffic condition detection
and recognition system using a deep learning approach.

1.4.2 Specific Objective


To meet the above general objective, the following specific objectives are set:

1. To collect and organize a dataset.

2. To analyze the problem of TCDR.
3. To study deep learning-based methods to solve the problem.
4. To propose a deep learning-based solution for detection and recognition.
5. To evaluate and measure the performance of the proposed solution.

1.5 Significance of the thesis


The goal of traffic condition detection and recognition systems is to support the operation
of vehicles and traffic control, to provide drivers with safety and other information, and to
provide convenience applications for passengers and road safety. This work also reviews
several studies that are based on TCDR and how it is used. Other benefits include decreasing
traffic accidents, supporting intelligent transportation systems, eliminating pauses and delays
at intersections, improving travel times, and managing capacity.

2. Literature Review
A literature review surveys scholarly articles, journals, and other sources related to traffic
in general, and to traffic condition assessment for detection and recognition with a deep
learning approach in particular. It helps to gain a better understanding of the area, to provide
a clear description, and to summarize, compare, and critically evaluate other related work
concerning the research problem being investigated.

2.1 Object Detection


Object detection is a computer vision process that identifies, locates, and tracks objects in
static images or videos [13]. There are many practical uses for object detection, some of
which are very significant. For example, after items are located and recognized in an image,
they can be counted, monitored, and appropriately labeled as necessary [8]. In contrast to
image recognition, which labels images according to their class, object detection draws a
boundary around each object identified in a picture. For instance, image recognition in TSD
could identify a stop sign and label it "STOP" [14]. The same image recognition technique
would again produce "STOP" if two stop signs were seen in the same scene, whereas object
detection would surround each stop sign with its own boundary, so two different boundaries
would be drawn around them [15]. For simplicity, object detectors typically draw their
boundaries as rectangles or circles [9]. The whole process of object detection is usually
addressed in steps.

In early methods, object detection from static images was obtained according to the
following steps [9]:

✓ Image segmentation: where images are divided into small segments.

✓ Classification: where a classifier is fed the segments from the previous step. The
classifier decides whether an object of interest is present in the image or not. If the
classification is positive, the segments that make up the identified object are marked.

A method that uses sliding windows can segment images. The algorithm moves a
rectangular sliding window across the target image. The sliding process produces small
grids (segments) that are employed in the subsequent steps [16]. The main issue with this
algorithm is the involvement of segments from various sources, which may vary in size,
color, perspective, and shape. Once these windows are fed to a convolutional network for
classification, the process becomes very complex and its feasibility becomes
questionable [17].
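To make the sliding-window idea above concrete, the following minimal Python sketch generates fixed-size crops from an image; the window size and stride are illustrative values, not parameters taken from this proposal, and each crop would then be passed to a classifier as described.

import numpy as np

def sliding_windows(image, win_size=(64, 64), stride=32):
    """Yield (x, y, crop) tuples by sliding a fixed-size window over the image."""
    win_h, win_w = win_size
    h, w = image.shape[:2]
    for y in range(0, h - win_h + 1, stride):
        for x in range(0, w - win_w + 1, stride):
            yield x, y, image[y:y + win_h, x:x + win_w]

# Example: a blank 480x640 test image; each crop would be fed to a classifier.
dummy = np.zeros((480, 640, 3), dtype=np.uint8)
print(sum(1 for _ in sliding_windows(dummy)), "candidate windows generated")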

2.2 Traffic Accident and Traffic Condition


Traffic accidents are the main cause of death for young people and the eighth leading cause
of all deaths globally, with an estimated 1.24 million deaths every year. About 85% of these
annual deaths occur in developing countries [18]. Males, especially those between 15 and
44 years old, are the group most affected by traffic accidents. Traffic accidents cost
countries 1 to 2% of their total national products. Even though only 52% of the vehicles in
the world are registered in developing countries, 80% of road traffic deaths take place in
these countries [3].

2.3 Traffic Sign Detection and Recognition


The detection and recognition of traffic signs, or TSDR, has received a lot of attention in
the literature. Since the beginning of the 1990s, when the concept of intelligent
transportation first emerged, there has been considerable interest in the idea, although
autonomous TSDR systems were reportedly first developed in the 1980s. Preprocessing,
detection, tracking, and recognition are the common stages required to ensure a robust and
accurate TSDR. The traditional view of TSDR has been formed based on the technologies
now in use [19].

A denoising process in TSD is always taken into consideration because the visibility of
traffic signs depends on factors such as weather, lighting, and illumination, as well as the
color quality and cleanliness of the signs [20].

The aspects of concern at this stage are often the color and shape of the traffic signs. This
step is typically part of the preprocessing phase, where images are enhanced in terms of
their visibility and the impacts of environmental noise or related conditions are
minimized [21]. Numerous strategies have been devised and employed to address the
problems arising from these concerns. The TSD task is also made more difficult by dealing
with colored images containing numerous objects and backgrounds [19].

2.4 Traffic sign Detection Under Challenging Weather condition


The majority of traditional traffic sign detection approaches rely on manually extracted
features based on multiple attributes, including geometrical shape, edge information, and
color. Most color-based strategies apply threshold-based segmentation of the traffic sign
region in a certain color space, such as hue, saturation, and intensity (HSI). However, a
significant disadvantage of these color-based techniques is that they are extremely
vulnerable to changes in lighting, which happen often in real-world circumstances [20].

2.5 Traffic Sign Detection by Color Segmentation


Traffic signs are mainly based on shapes and colors; thus, the color presentation of traffic
signs has to be dealt with during the image processing stages. Generally, color segmentation
methods are used for TSDR purposes. Here, the HSV color space offers better segmentation
ability, faster processing, and smaller illumination influence compared to the HSI and RGB
color spaces. The mechanism of HSV color space requires a threshold setting for
segmentation purposes, and this is done after the conversion of RGB space to HSV space
takes place, as shown in Figure 3 [22].

Figure 3: Color space threshold segmentation [13].
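As an illustration of the HSV thresholding described above, the sketch below uses OpenCV to convert an image from RGB (BGR as loaded by OpenCV) to HSV and keep only pixels inside a hue/saturation/value range. The file names and the red-hue bounds are assumptions for illustration; real thresholds would have to be tuned on the project dataset and its lighting conditions.

import cv2
import numpy as np

bgr = cv2.imread("scene.jpg")          # placeholder path to a road-scene image
hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)

# Illustrative bounds for red sign regions; red wraps around the hue axis,
# so two ranges are combined (OpenCV hue is 0-179 for 8-bit images).
mask = cv2.inRange(hsv, np.array([0, 100, 80]), np.array([10, 255, 255])) | \
       cv2.inRange(hsv, np.array([170, 100, 80]), np.array([180, 255, 255]))

# Keep only candidate sign pixels for the later shape-based analysis.
candidates = cv2.bitwise_and(bgr, bgr, mask=mask)
cv2.imwrite("candidates.png", candidates)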

2.6 Deep Learning in the context of traffic sign detection and recognition
When deep learning is applied to TSDR, two components are expected to exist: an encoder
and a decoder. The encoder acts as a series of blocks of layers that carry out the conventional
learning process and extract the statistical features needed for object localization and
labeling. In such networks, a decoder is then needed to predict each object's bounding boxes
and labels [23].

Conceptually, the decoder in such a deep learning network is represented as a regressor; in
practice, the regressor is attached to the deep learning encoder. Through this regressor, a
prediction of both the location and the size of each bounding box is achieved. The boxes are
described in Cartesian coordinates by an (X, Y) pair that is provided for each object in the
image, together with its extent [24].

Despite its simplicity, this model is very limited, as it has to be provided with a specific
number of boxes in advance. In such cases, applying it to TCDR becomes a problem, since
the model could be set to detect only one sign while two appear in the same image. This
problem can be overcome with some prior knowledge of the objects to be detected: if the
number of objects in an image is known, then a pure regressor can be a good choice.
However, the idea of pure regression in deep learning is not always applicable; thus, an
extension to the regressor has been adopted, referred to as a region proposal network (RPN).
In this case, the model works as a decoder by proposing image regions in which an object is
expected to exist.

A sample of this deep learning model, namely Faster R-CNN, is shown in Figure 4. This
model is more flexible in terms of the bounding box and more accurate. However, it
includes more processing steps, which means a higher computational cost [25].

Figure 4: Traffic sign detection steps in the Faster R-CNN algorithm.

R-CNN stands for region-based convolutional neural network, often described as a CNN
with regions. R-CNNs represent one of the fundamental deep learning approaches used in
object detection models. Following the deep learning paradigm, an R-CNN examines the
image to select a number of proposed regions. These regions must indicate particular object
features, such as anchor boxes, and are assigned offsets, a tagging technique that labels each
region with the category and bounding box to which it belongs. The features of the
identified proposed regions are then extracted using the CNN's standard feed-forward
operation. The last step is to accurately classify the extracted features from the proposed
regions using a prediction model [16].
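The short Python sketch below shows how a pre-trained Faster R-CNN (RPN plus region classifier, as described above) can be run for inference using torchvision. This is only an illustration of the region-proposal pipeline: the proposal lists TensorFlow as the main framework, the image path is a placeholder, and for traffic signs the COCO-trained head would need to be replaced and fine-tuned on a sign dataset.

import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Faster R-CNN with a ResNet-50 FPN backbone, pre-trained on COCO.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = Image.open("road_scene.jpg").convert("RGB")   # placeholder path
with torch.no_grad():
    prediction = model([to_tensor(image)])[0]

# Each detection has a bounding box, a class label, and a confidence score.
for box, label, score in zip(prediction["boxes"], prediction["labels"], prediction["scores"]):
    if score > 0.5:
        print(int(label), float(score), [round(v, 1) for v in box.tolist()])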

2.7 Related work Traffic condition detection and recognition


As stated in the introduction, although automated traffic sign detection and recognition
systems have been in use for more than two decades, very mature models are yet to be
developed. This may be because the topic has not received as much attention as similar
visual applications, such as face and retina detection. In addition, commercially available
TCDR systems are costly, and if they are fitted in vehicles they dramatically alter the price.
This issue can easily be noticed by comparing simple biometric devices, which are available
at an affordable price and deployed nearly everywhere they are needed, with very effective
and life-saving systems such as TCDR [26]. Further, the incorporation of TCDR systems
into standard industrial applications imposes several limitations, such as the applicability
of traffic signs [13].

The prominent method applied for TCDR is sequential processing. As explained in the
previous sections, a scene is captured and segmented, and then ROIs are determined from it
and used for the detection step. Accordingly, the main features to look for in this case are
color and shape. Color can be used for preliminary image segmentation since, under usual
circumstances, traffic signs have unique and bright colors. However, illumination may alter
this possibility and hinder its use. In addition, color analysis does not usually account for
image features such as edges, which makes this analysis inadequate on its own. Further,
online processing of traffic signs creates another line of constraints, such as processing
time and efficiency. Therefore, until a computationally superior method is available,
methods that sequentially address feature extraction may remain the best option.

Table 1: Summary of related works

1. Roshani Raut et al. (2022), "Traffic Recognition and Detection using Deep Convolutional Neural Network for Autonomous Driving". Method and finding: DCNN with an accuracy of 95%. Gap: does not consider neighboring cars that may be the cause of an accident.

2. W. H. D. Fernando et al. (2021), "Automatic road traffic sign detection and recognition". Method and finding: YOLOv4 with an accuracy of 84.7%. Gaps: the manually prepared dataset may be noisy, and no method is mentioned to tackle problems such as different lighting conditions, deformed signs, and variation of illumination.

3. Sabbir Ahmed et al. (2021), "A Deep Learning based Framework for Robust Traffic Sign Detection under Challenging Weather Conditions". Method and finding: CNN. Gaps: no traffic light detection under each challenge, and the type of measurement taken when detecting a sign is not specified.

4. Jing Yu et al. (2022), "Traffic Sign Detection and Recognition in Multi-images using a Fusion Model with YOLO and VGG Network". Method and finding: YOLOv4 and VGG network with an accuracy of 90.36%. Gap: these models need more time during training and testing.

5. Yieng Song et al. (2022), "A Study on the Driver-Vehicle Interaction System in Autonomous Vehicles Considering Driver's Attention Status". Method and finding: CNN with an accuracy of 90.65%. Gap: does not cover a complete driver-vehicle interaction system.

6. David Mijic et al. (2021), "Autonomous Driving Solution Based on Traffic Sign Detection". Method and finding: CNN with an accuracy of 95.83%. Gaps: the paper does not define the subset of traffic signs of interest, nor detect those signs at greater distances.

7. Jong Bae Kim et al. (2022), "Efficient Driver Attention Monitoring using a Deep Convolutional Neural Network Model". Method and finding: DCNN with an accuracy of 94.5%. Gap: limited by the difficulty of collecting a large number of training images of vehicle drivers.
3. Research Methodology
An approach for TCDR based on an appropriate deep learning model will be proposed
after the problem is analyzed. The model's learning mechanism will be used for the
detection and recognition stages. The choice was made to use fixed learnable layers to
reduce the complexity of the ROIs needed for detection, which in turn simplifies the
computation. Moreover, this assists in producing a better border presentation by allowing
the cropping to take place right next to the traffic signs.

As shown in Figure 5, the initial goal is to achieve a general TCDR in an improved way.
The performance will be tested based on four main metrics, namely precision, recall,
F-score, and accuracy. Upon the successful completion of the general detection, the
developed approach will determine whether the detected traffic signs are on the left or the
right. Finally, another detection scenario is used to estimate the distance of detected traffic
signs from the vehicle.

Figure 5: Example of calculating the distance and side of a traffic sign [6].

This section describes the methodology for traffic condition detection and recognition
using a deep learning-based model, including the data collection and the software and
hardware tools that will be used for the successful accomplishment of the research.

Figure 6: Methodology of the study (Data Acquisition and Data Understanding → Data
Preprocessing → Model Building → Model Evaluation).

3.1 Data Understanding


In the process flow above, Data Understanding is broken down into four tasks, together
with the expected outcome or output of each.

Simply put, the goals of the Data Understanding phase are to:

Collect initial data: acquire the data and access to the data listed in the researcher's
resources. Collecting initial data also means keeping a checklist of the datasets acquired,
their locations, the methods used to acquire them, and any problems encountered along
with their solutions, so that other users or project members are aware of them.

Describe data: examine the properties of the acquired data and provide a description
report on the format of the data, the quantity of data, and the records and fields in each
table or dataset.

Explore data: use data science questions that can be quickly answered through querying,
visualization, and reporting or a summary report. In this stage, the first or initial hypotheses
and their impact on the project can be identified.

Verify data quality: examine whether the data is complete, whether it has errors or missing
values and, if so, what percentage of the data is missing versus the overall data obtained.
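A small sketch of the "verify data quality" task is given below: it counts images without an annotation file and reports the missing percentage. The directory layout and the YOLO-style one-label-file-per-image convention are assumptions for illustration, not the structure of the actual dataset.

from pathlib import Path

# Hypothetical layout: dataset/images/*.jpg with matching dataset/labels/*.txt files.
images = sorted(Path("dataset/images").glob("*.jpg"))
labels_dir = Path("dataset/labels")

missing = [img.name for img in images if not (labels_dir / f"{img.stem}.txt").exists()]

print(f"total images: {len(images)}")
print(f"images without labels: {len(missing)} "
      f"({100 * len(missing) / max(len(images), 1):.1f}% of the dataset)")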

3.2 Data preprocessing


Preprocessing is the removal of artifacts from the data to increase the accuracy of the model
and to improve the input data used for different tasks.

A. Image enhancement techniques: modify attributes of the image to make it more suitable
and to improve its quality by reducing noise, increasing contrast, and providing more detail.
The aim is to process an image so that the result is more suitable than the original image
and provides better input for automated image processing techniques.

B. Noise removal: additive noise of different types can contaminate images, so noise must
be removed to improve the quality of the image.

C. Normalization: a process in image processing that changes the range of pixel intensity
values. Its common purpose is to convert an input image into a range of pixel values that is
more familiar to the senses. It includes size normalization, color normalization, and shape
normalization.
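The sketch below strings the three preprocessing steps above (enhancement, noise removal, and normalization) together with OpenCV. The kernel size, CLAHE parameters, and target resolution are illustrative assumptions rather than values fixed by this proposal.

import cv2
import numpy as np

def preprocess(path, size=(224, 224)):
    """Denoise, enhance contrast, resize, and scale one image to the [0, 1] range."""
    bgr = cv2.imread(path)

    # B. Noise removal: light Gaussian smoothing against additive noise.
    bgr = cv2.GaussianBlur(bgr, (3, 3), 0)

    # A. Image enhancement: CLAHE on the lightness channel to raise contrast.
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    l = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(l)
    bgr = cv2.cvtColor(cv2.merge((l, a, b)), cv2.COLOR_LAB2BGR)

    # C. Normalization: fixed input size and pixel intensities scaled to [0, 1].
    return cv2.resize(bgr, size).astype(np.float32) / 255.0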

3.3 Model Building


Here, different deep learning models will be implemented, and the appropriate model will
be proposed after the problem is analyzed.
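As one possible starting point for the model building step, the minimal Keras sketch below defines a small CNN that classifies a preprocessed road-scene image as "normal" or "abnormal" traffic (the binary case mentioned under the expected outcomes). The architecture and input size are assumptions used only to illustrate the workflow; the actual model will be selected after the problem analysis.

import tensorflow as tf

# A deliberately small baseline CNN; the real solution will likely use a detector
# (e.g., Faster R-CNN or YOLO) rather than a plain image classifier.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # normal vs. abnormal traffic
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()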

3.4 Model Evaluation
Evaluation metrics are used to assess model quality and to guarantee that the model is
working properly and to its full potential.

Accuracy is the ratio of correct predictions to total predictions:

Accuracy = (TP + TN) / (TP + FP + TN + FN)

where TP = True Positive, FP = False Positive, TN = True Negative, and FN = False
Negative.

Specificity is the ratio of true negatives to total negatives in the data:

Specificity = TN / (TN + FP)

Sensitivity (Recall) is the ratio of true positives to total positives in the data:

Sensitivity/Recall = TP / (TP + FN)

Precision is the ratio of true positives to total predicted positives:

Precision = TP / (TP + FP)

F1-score depends on the precision and recall values:

F1-score = 2 × (Precision × Recall) / (Precision + Recall)
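For reference, the sketch below computes the five metrics above from raw confusion-matrix counts; the counts in the example call are illustrative numbers only, not results of this study.

def evaluation_metrics(tp, fp, tn, fn):
    """Compute the evaluation metrics listed above from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    specificity = tn / (tn + fp)
    recall = tp / (tp + fn)                 # sensitivity
    precision = tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return {"accuracy": accuracy, "specificity": specificity,
            "recall": recall, "precision": precision, "f1": f1}

# Illustrative counts only.
print(evaluation_metrics(tp=80, fp=10, tn=95, fn=15))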

3.5 Development Tools
For this research, several types of development tools will be used to design and implement
the proposed thesis work.

3.5.1 Hardware Tools


The following hardware tools will be used for this research.

Table 2: Hardware tools

No  Hardware tool                            Description
1   GPU-enabled high-performance computer    To increase the computation speed

3.5.2 Software Tools


Software or programming tools are sets of computer programs used to create, maintain,
debug, or support other applications and programs. The following software tools will be
used for the implementation phase of the study.

Table 3: Software tools

No  Software tool      Description
1   Python             A high-level, general-purpose programming language with an
                       abundance of libraries and frameworks that facilitate coding and
                       save development time.
2   Anaconda           An enterprise data science platform that distributes Python and
                       machine learning packages.
3   Jupyter Notebook   A browser-based code editor usually used for research purposes.
4   TensorFlow         An open-source library for fast numerical computing.
3.6 Expected Outcome
The goal of traffic condition detection and recognition using a deep learning-based model
is to automatically detect and recognize various traffic conditions from image data.
Examples of traffic conditions that might be detected and recognized include the number
and types of vehicles (e.g., cars, trucks, buses), pedestrian activity, and lighting conditions.
The output of a traffic condition detection and recognition system will depend on the
specific goals and design of the system. Possible outcomes include classifying traffic
conditions as "normal" or "abnormal" (e.g., identifying traffic jams or accidents),
generating statistical reports about traffic patterns and trends, and enhancing the accuracy
and reliability of vehicle navigation. To achieve these outcomes, the deep learning-based
model will be trained on a large dataset of images and will use deep learning techniques to
analyze the visual data and make predictions.

4. Work and Budget plan

4.1 Work plan


This is the agenda or timetable for the activities taking place throughout the research's
operational period. The table below displays the thesis work plan and timetable.

Table 4: Work plan (November 2022 to June 2023)

Id  Task
1   Literature review
2   Data collection
3   Pre-processing of the dataset
4   Implementing different algorithms and model testing
5   Preparing thesis documentation
6   Final thesis submission and defense

4.2 Budget plan
The following table lists the cost of the study and the budget allocated for it.

Table 5: Cost breakdown

Expense estimation for the research

No  Expense                             Unit     Quantity  Cost per unit (Birr)  Total price (Birr)  Subtotal (Birr)
1   Research materials
    Paper                               Package  1         500                   500
    Writing materials                   Package  1         500                   500                 1,000
2   Data
    Dataset collection                  -        -         20,000                20,000              20,000
3   Publication and dissemination
    Printing and binding of reports     Number   5         200                   1,000
    Printing and binding of thesis      Number   5         600                   3,000               4,000

Total cost (ETB): 25,000

References
[1] W. Min, R. Liu, D. He, Q. Han, Q. Wei, and Q. Wang, "Traffic Sign Recognition Based on Semantic Scene Understanding and Structural Traffic Sign Location," vol. 23, no. 9, pp. 15794–15807, 2022.

[2] E. K. Mekonen, "The Economic Effect of Road Traffic Accidents in Ethiopia: Evidences from Addis Ababa City," ITIHAS - J. Indian Manag., vol. 6, no. 2, pp. 11–21, 2016. [Online]. Available: http://libproxy.wustl.edu/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=bth&AN=116673488&site=ehost-live&scope=site

[3] A. A. Mohammed, K. Ambak, A. M. Mosa, and D. Syamsunur, "A Review of the Traffic Accidents and Related Practices Worldwide," Open Transp. J., vol. 13, no. 1, pp. 65–83, 2019, doi: 10.2174/1874447801913010065.

[4] S. Messelodi and C. M. Modena, "A computer vision system for traffic accident risk measurement: A case study," Adv. Transp. Stud., no. 7, pp. 51–66, 2005.

[5] X. Sun, H. Hu, S. Ma, K. Lin, and J. Wang, "Study on the Impact of Road Traffic Accident Duration Based on Statistical Analysis and Spatial Distribution Characteristics: An Empirical Analysis of Houston," 2022.

[6] R. Raut, K. Desarda, S. Gulve, A. Bodas, and P. Jadhav, "Traffic Signs Recognition and Detection using Deep Convolution Neural Networks for Autonomous Driving," pp. 207–214, 2022, doi: 10.1109/csnt.2022.38.

[7] J. Yu, X. Ye, and Q. Tu, "Traffic Sign Detection and Recognition in Multi-images Using a Fusion Model With YOLO and VGG Network," vol. 23, no. 9, pp. 16632–16642, 2022.

[8] S. Y. Kim, "A Study on the Driver-Vehicle Interaction System in Autonomous Vehicles Considering Driver's Attention Status," 2022.

[9] K. Zhou, A. Chu, and G. Wang, "An Improved Light-Weight Traffic Sign Recognition Algorithm Based on YOLOv4-Tiny," IEEE Access, vol. 9, pp. 124963–124971, 2021, doi: 10.1109/ACCESS.2021.3109798.

[10] R. K. Gorea, "Financial impact of road traffic accidents on the society," Int. J. Ethics, Trauma Vict., vol. 2, no. 1, pp. 6–9, 2016, doi: 10.18099/ijetv.v2i1.11129.

[11] B. C. Vinaykarthik, "Deep Learning based Object Detection Model for Autonomous Driving Research using CARLA Simulator," pp. 1251–1258, 2021.

[12] T. A. Abdi, B. H. Hailu, and M. P. Hagenzieker, "Road Crashes in Addis Ababa, Ethiopia: Empirical Findings between the Years 2010 and 2014," 2017, doi: 10.4314/afrrev.v11i2.1.

[13] A. B. Member, "Robust Traffic Sign Recognition Against Camera Failures," IEEE Open J. Intell. Transp. Syst., vol. 3, pp. 709–722, 2022, doi: 10.1109/OJITS.2022.3213183.

[14] J. Fang, D. Yan, J. Qiao, J. Xue, and H. Yu, "DADA: Driver Attention Prediction in Driving Accident Scenarios," pp. 1–13, 2020.

[15] J. Kim, "Efficient Driver Attention Monitoring Using Pre-Trained Deep Convolution Neural Network Models," vol. 14, no. 2, pp. 119–128, 2022.

[16] P. Viola and M. Jones, "Robust Real-time Object Detection," pp. 1–25, 2001.

[17] M. Swathi and K. V. Suresh, "Automatic traffic sign detection and recognition: A review," 2017 Int. Conf. Algorithms, Methodol. Model. Appl. Emerg. Technol. (ICAMMAET 2017), pp. 1–6, 2017, doi: 10.1109/ICAMMAET.2017.8186650.

[18] H. Wan, L. Gao, M. Su, Q. You, H. Qu, and Q. Sun, "A Novel Neural Network Model for Traffic Sign Detection and Recognition under Extreme Conditions," vol. 2021, 2021.

[19] O. S. K. Al-noori, "Deep Learning Measurement of Traffic Sign Plates With Deep," 2021.

[20] S. Ahmed, U. Kamal, and M. K. Hasan, "DFR-TSD: A Deep Learning Based Framework for Robust Traffic Sign Detection Under Challenging Weather Conditions," IEEE Trans. Intell. Transp. Syst., vol. 23, no. 6, pp. 5150–5162, 2022, doi: 10.1109/TITS.2020.3048878.

[21] Z. Hu, C. Lv, P. Hang, C. Huang, and Y. Xing, "Data-driven Estimation of Driver Attention using Calibration-free Eye Gaze and Scene Features," 2021, doi: 10.1109/TIE.2021.3057033.

[22] C. Bahlmann, Y. Zhu, V. Ramesh, M. Pellkofer, and T. Koehler, "A system for traffic sign detection, tracking, and recognition using color, shape, and motion information," IEEE Intell. Veh. Symp. Proc., pp. 255–260, 2005, doi: 10.1109/IVS.2005.1505111.

[23] A. G. Howard, "Some improvements on deep convolutional neural network based image classification," 2nd Int. Conf. Learn. Represent. (ICLR 2014) - Conf. Track Proc., 2014.

[24] "No Title".

[25] S. Ren, K. He, R. Girshick, and J. Sun, "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks," IEEE Trans. Pattern Anal. Mach. Intell., vol. 39, no. 6, pp. 1137–1149, 2017, doi: 10.1109/TPAMI.2016.2577031.

[26] C. Szegedy, S. Ioffe, V. Vanhoucke, and A. A. Alemi, "Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning," pp. 4278–4284.