Opening Report
1. Introduction
The development of fully autonomous vehicles is moving forward at a rapid pace, and every major automaker, ride-sharing service, and tech company from Apple to Baidu has invested in the driverless car market. New advances in driverless technology appear almost daily. Although many of today's advanced cars can drive themselves, many researchers argue that they are not fully autonomous: in their view, a truly self-driving car should need no steering wheel, brake pedal, or accelerator.
Autonomous vehicles are already beginning to appear on our roadways. It won't be long before the technological obstacles to full AV deployment are overcome, and the legal, social, and transportation issues surrounding autonomous cars are already being openly discussed. Autonomous cars also have the potential to be a significant accelerator of urban transformation [1]. The majority of automakers are working on autonomous driving, but only a few have released self-driving vehicles on the market (with varying degrees of sophistication), with more than 10 million of them anticipated to be on the road by 2020 [2]. Their alluring perks create a desire to have them on the streets as soon as feasible. One study evaluated the a priori acceptability of fully autonomous vehicles, along with attitudes, personality attributes, and the intention to use one.
In that survey, 421 drivers (153 men; mean age 40.2 years, range 19-73) responded online, and 68.1% of the sample accepted fully autonomous vehicles a priori [3]. The consequences of ceding control to the automobile are extensive. Accidents will inevitably occur, so a self-driving car will have to make decisions that could save or end human lives [4]. In the USA, 94% of car crashes are attributed to human error. Self-driving cars have the potential to reduce accidents drastically by eliminating the human element, while continuously monitoring the environment to identify and respond swiftly to potentially hazardous events and driving behaviors [2]. Driving long distances is exhausting, and sharing the road with dozens of other drivers demands constant care and caution. After a certain amount of time, a human driver tires and their ability to sense the environment naturally slows. The systems of a self-driving car, by contrast, do not get tired or sleepy and can consistently make decisions faster than a human. Self-driving technology also promises other advantages, including improved traffic flow, decreased pollution, and a reduction in accidents caused by human error. Although the general public may hold a favorable attitude toward autonomous vehicles, that attitude may shift depending on the degree of automation and how it is delivered. Future cars are expected to make better decisions because multiple sensors will give them detailed situational awareness, which, combined with artificial intelligence, may allow self-driving cars to anticipate and respond to the environment better than humans [4].
2. Current State of Research
According to recent World Health Organization statistics, 1.25 million people die in traffic accidents each year. In addition, the cost of these accidents has reached US$518 billion annually in recent years, equivalent to 1-2% of global GDP [5]. Autonomous vehicles have the potential to be significantly safer than the manually operated vehicles we currently use. This is among the factors that make people enthusiastic about the creation and adoption of self-driving vehicles. However, self-driving automobiles cannot guarantee complete safety, because they will be traveling at high speed while avoiding unexpected pedestrians, cyclists, and human drivers [6].
The Society of Automotive Engineers (SAE) standard outlines six levels of driving automation (levels zero through five). A vehicle is classified as level zero if no ADASs (advanced driver-assistance systems) help the driver with steering, acceleration, or deceleration, and everything is done manually. In level one vehicles, ADASs help the driver handle either steering or acceleration/deceleration in some circumstances, with human input. In level two vehicles, ADASs control both steering and acceleration/deceleration under certain conditions, with driver input. In lower-level vehicles (levels zero to two), the driver typically keeps an eye on the road environment; in higher-level vehicles (levels three to five), the ADAS analyzes the road environment. Level three vehicles, like the 2016 Tesla Model S, have the most advanced ADASs currently on the market and handle several safety systems, but the driver can still take over when necessary. Level four vehicles can operate in a wider variety of situations and manage numerous safety systems. The ultimate aim of autonomous driving is level five automation, in which all of the vehicle's systems are controlled by the ADAS in all conditions (such as snow-covered highways and unlabeled dirt roads) with no human intervention required [5].
In daily life, self-driving cars do not seem to have much to do with philosophy, but recent developments show that work on them raises problems that directly involve it. These are known as trolley problems. Here is one of them: "The presumption is that a startling event occurs, after which there are two possible courses of action. If no active choice is taken, some people will die, and if a choice is made, more people will live but fewer will die. The name of the conundrum comes from the image of a trolley barreling toward a fork in the track yet unable to brake. You have time to reach a lever that will allow you to cause the trolley to change tracks because you are next to the track. If you don't do anything, five people will perish. One person will perish if the trolley is turned by pulling the lever" [7]. This is just one question from the philosophical front.
Haar Features
First, the values of all pixels of the grayscale image lying in the black regions are accumulated. They are then subtracted from the total over the white boxes. Finally, the result is compared to a defined threshold, and if the criterion is met, the feature counts as a hit.
In general, each Haar feature computation would need to read every single pixel inside the feature's regions. This step can be bypassed by applying an integral image, in which the value of each pixel equals the sum of the gray values above and to the left of it in the image. Each rectangle sum then requires only four lookups into the integral image.
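The thresholding step described above can be sketched as follows. This is a minimal illustration, not the original implementation: the toy image, the rectangle layout (a white rectangle above a black one, i.e. a horizontal edge feature), and the threshold value are all assumptions introduced here.

```python
import numpy as np

def haar_two_rect_feature(gray, top, left, h, w):
    """Two-rectangle (edge) Haar feature: a white rectangle of size h x w
    sitting directly above a black rectangle of the same size.
    Returns sum(white region) - sum(black region)."""
    white = gray[top:top + h, left:left + w].sum()
    black = gray[top + h:top + 2 * h, left:left + w].sum()
    return float(white - black)

# Toy 4x4 "image": bright top half, dark bottom half -> strong edge response.
gray = np.array([[200, 200, 200, 200],
                 [200, 200, 200, 200],
                 [10, 10, 10, 10],
                 [10, 10, 10, 10]], dtype=np.float64)

THRESHOLD = 500.0  # illustrative threshold, not a value from the text
value = haar_two_rect_feature(gray, top=0, left=0, h=2, w=4)
hit = value > THRESHOLD  # the feature "hits" when the criterion is met
```

Here the white sum is 1600 and the black sum is 80, so the feature value 1520 clears the threshold and the feature registers a hit.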
As a first step, our algorithm converts the image taken from the frame into a grayscale image. Even though color is a great tool for classifying a picture, it is not useful for our Haar cascade algorithm; color channels make the feature calculations computationally expensive. Objects are easier to detect in a grayscale image: grayscale preserves the consistent patterns of objects, which is what matters for detection, so for this purpose grayscale images beat RGB images. In OpenCV, "COLOR_BGR2GRAY" is the conversion code used to convert an image to grayscale.
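The conversion can be sketched in plain NumPy using the standard BT.601 luminance weights, which are the same weights OpenCV applies for `cv2.COLOR_BGR2GRAY`; writing it out makes explicit that OpenCV frames store channels in B, G, R order. The test frame below is an assumption for illustration.

```python
import numpy as np

def bgr_to_gray(frame):
    """Convert a BGR frame to grayscale with the luminance weights
    used by cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY):
    Y = 0.299 R + 0.587 G + 0.114 B (channel order in OpenCV is B, G, R)."""
    b = frame[..., 0].astype(np.float64)
    g = frame[..., 1].astype(np.float64)
    r = frame[..., 2].astype(np.float64)
    return np.clip(0.114 * b + 0.587 * g + 0.299 * r, 0, 255).astype(np.uint8)

# A 1x2 test frame: one pure-white pixel and one pure-blue pixel (BGR order).
frame = np.array([[[255, 255, 255], [255, 0, 0]]], dtype=np.uint8)
gray = bgr_to_gray(frame)  # white stays bright; pure blue becomes dark
```

In production code one would simply call `cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)`; the sketch exists only to show what that call computes.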
Integral Image
The Haar cascade image classifier is a supervised learning algorithm, which is why it is trained on both positive and negative images. A Haar cascade uses white and black rectangles to compare every part of the picture against the positive examples. It is necessary to define the locations of the Haar features, with their dark and white regions determined, before finding the value of the integral image over the features.
A feature consists of rectangular regions over part of the image in two forms: black (dark) and white (bright). The Haar-like feature value is calculated from those rectangles. An image window is then evaluated against not just a few but hundreds of such Haar features; using the feature formula, the sum over the black area is subtracted from the sum over the white part of the Haar feature.
As an example, consider a 10x10 input image. The integral-image value at the second row and second column is obtained by adding the pixel values at the first row of the first column, the first row of the second column, the second row of the first column, and the second row of the second column. With pixel values of 0.1, 0.2, 0.3, and 0.1, the resulting integral value is 0.1 + 0.2 + 0.3 + 0.1 = 0.7.
To calculate the value of a pixel in the integral image, the following recurrence is used:

ii(D) = i(D) + ii(B) + ii(C) - ii(A)

where D is the pixel at the bottom right, i(D) is its gray value, B is the pixel above D, C is the pixel to the left of D, and A is the pixel diagonally above and to the left of D. [Figure: the integral image and the positions of pixels A, B, C, and D used to compute the value at D.]
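The recurrence above, and the four-lookup rectangle sum it enables, can be sketched as follows; the variable names and the 4x4 test image are assumptions for illustration.

```python
import numpy as np

def integral_image(img):
    """Build ii where ii[y, x] is the sum of img over all pixels above
    and to the left of (and including) position (y-1, x-1), using the
    recurrence ii(D) = i(D) + ii(B) + ii(C) - ii(A)."""
    h, w = img.shape
    ii = np.zeros((h + 1, w + 1), dtype=np.float64)  # zero-padded border
    for y in range(h):
        for x in range(w):
            ii[y + 1, x + 1] = (img[y, x]        # i(D): the pixel itself
                                + ii[y, x + 1]   # ii(B): above D
                                + ii[y + 1, x]   # ii(C): left of D
                                - ii[y, x])      # ii(A): above-left of D
    return ii

def rect_sum(ii, top, left, h, w):
    """Sum over any h x w rectangle with just four lookups: D - B - C + A."""
    return (ii[top + h, left + w] - ii[top, left + w]
            - ii[top + h, left] + ii[top, left])

img = np.arange(1, 17, dtype=np.float64).reshape(4, 4)  # values 1..16
s = rect_sum(integral_image(img), top=1, left=1, h=2, w=2)  # 6+7+10+11 = 34
```

The key point is that `rect_sum` costs the same four lookups regardless of how large the rectangle is, which is what makes evaluating thousands of Haar features per window feasible.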
After obtaining all the feature values from the Haar features and the integral image, the next step is AdaBoost, which selects the discriminative features. Each sub-image is processed to determine whether its features evaluate as true or false. If a true feature matches a feature already stored in the database, the feature is identified and the sub-image is a face that matches it. Otherwise the feature is discarded, meaning the sub-image is not a face that matches the database features.
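The accept/discard logic of the boosted cascade can be sketched as below. This is a simplified model, not OpenCV's implementation: the weak classifiers, their thresholds, votes, and the precomputed feature values are all illustrative assumptions.

```python
def run_cascade(feature_values, stages):
    """Each stage is (weak_classifiers, stage_threshold), where a weak
    classifier is (feature_index, threshold, vote). A sub-window must
    pass every stage to count as a detection; failing any stage discards
    it immediately, which is what keeps cascade detection fast."""
    for weak_classifiers, stage_threshold in stages:
        score = sum(vote for idx, thr, vote in weak_classifiers
                    if feature_values[idx] > thr)
        if score < stage_threshold:
            return False  # rejected early: no match with trained features
    return True  # passed all stages: candidate detection

# Illustrative two-stage cascade over three precomputed feature values.
stages = [([(0, 100.0, 1.0), (1, 50.0, 1.0)], 1.5),  # stage 1: two weak votes
          ([(2, 10.0, 2.0)], 2.0)]                   # stage 2: one strong vote
accepted = run_cascade([120.0, 60.0, 15.0], stages)  # passes both stages
rejected = run_cascade([120.0, 40.0, 15.0], stages)  # fails stage one
```

Most sub-windows in a real frame contain no target and are rejected by the first cheap stages, so the expensive later stages run only on a few promising candidates.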
For cars, the extracted traits must be distinct, different for each vehicle, and able to completely
characterize the vehicle without being impacted by a change in the vehicle's position [12].
Black and white squares of the same size and placement serve as the foundation for the Haar features [10].
Calculating the difference between the sums of pixels in the black and white rectangle allows us
to first identify a rectangular Haar-like feature[13].
The computer initially produces a first classifier using the positive photos, evaluates it using the
negative images, and then builds a second classifier with greater detection rates [14].
It is made up of edge and line features. The white bar in the grayscale image represents the pixels closest to the light source [15].
After receiving video frames that need to be converted into grayscale images, we perform the
necessary image processing to turn the colored frame images into grayscale [9].
The integral image can compute the features of various targets at various positions in the image, regardless of the image's size, in the same constant time, considerably reducing the detection time [12].
References
[1] F. Duarte, C. Ratti, "The impact of autonomous vehicles on cities: A review," Journal of Urban Technology, vol. 25, no. 4, pp. 3-18, 2018.
[2] S. Karnouskos, "Self-driving car acceptance and the role of ethics," IEEE Transactions on Engineering Management, vol. 67, no. 2, pp. 252-265, 2018.
[3] W. Payre, J. Cestac, P. Delhomme, "Intention to use a fully automated car: Attitudes and a priori acceptability," Transportation Research Part F: Traffic Psychology and Behaviour, vol. 27, pp. 252-263, 2014.
[4] S. Karnouskos, "The role of utilitarianism, self-safety, and technology in the acceptance of self-driving cars," Cognition, Technology & Work, vol. 23, no. 4, pp. 659-667, 2021.
[5] V. K. Kukkala, J. Tunnell, S. Pasricha et al., "Advanced driver-assistance systems: A path toward autonomous vehicles," IEEE Consumer Electronics Magazine, vol. 7, no. 5, pp. 18-25, 2018.
[6] S. Nyholm, J. Smids, "The ethics of accident-algorithms for self-driving cars: An applied trolley problem?," Ethical Theory and Moral Practice, vol. 19, no. 5, pp. 1275-1289, 2016.
[7] R. Johansson, J. Nilsson, "Disarming the trolley problem: why self-driving cars do not need to choose whom to kill," in Workshop CARS 2016 - Critical Automotive Applications: Robustness & Safety, 2016.
[8] M. Daily, S. Medasani, R. Behringer et al., "Self-driving cars," Computer, vol. 50, no. 12, pp. 18-23, 2017.
[9] K. Pavani, P. Sriramya, "Novel vehicle detection in real time road traffic density using Haar cascade comparing with KNN algorithm based on accuracy and time mean speed," Revista Geintec-Gestao Inovacao e Tecnologias, vol. 11, no. 2, pp. 897-910, 2021.
[10] R. A. Harahap, E. P. Wibowo, R. K. Harahap, "Detection and simulation of vacant parking lot space using EAST algorithm and Haar cascade," in 2020 Fifth International Conference on Informatics and Computing (ICIC), 2020: IEEE, pp. 1-5.
[11] I. M. Hakim, D. Christover, A. M. J. Marindra, "Implementation of an image processing based smart parking system using Haar-cascade method," in 2019 IEEE 9th Symposium on Computer Applications & Industrial Electronics (ISCAIE), 2019: IEEE, pp. 222-227.
[12] L. Zhang, J. Wang, Z. An, "Vehicle recognition algorithm based on Haar-like features and improved Adaboost classifier," Journal of Ambient Intelligence and Humanized Computing, pp. 1-9, 2021.
[13] P. Pankajavalli, V. Vignesh, G. Karthick, "Implementation of Haar cascade classifier for vehicle security system based on face authentication using wireless networks," in International Conference on Computer Networks and Communication Technologies, 2019: Springer, pp. 639-648.
[14] L. T. H. Phuc, H. Jeon, N. T. N. Truong et al., "Applying the Haar-cascade algorithm for detecting safety equipment in safety management systems for multiple working environments," Electronics, vol. 8, no. 10, p. 1079, 2019.
[15] A. B. Shetty, J. Rebeiro, "Facial recognition using Haar cascade and LBP classifiers," Global Transitions Proceedings, vol. 2, no. 2, pp. 330-335, 2021.
[16] G. Guido, V. Gallelli, D. Rogano et al., "Evaluating the accuracy of vehicle tracking data obtained from Unmanned Aerial Vehicles," International Journal of Transportation Science and Technology, vol. 5, no. 3, pp. 136-151, 2016.