1. INTRODUCTION
As the performance of self-driving cars improves, the number of sensors used to recognize the surroundings keeps increasing, and these additional sensors can overload the in-vehicle computer. Self-driving cars use in-vehicle computers to process the data collected by the sensors; as the amount of data to be processed grows, the resulting overload can slow down judgment and control and thus threaten the safety of the vehicle. To prevent this overload, some studies have developed hardware that can perform deep-learning operations inside the vehicle, while others use the cloud to process the vehicle's sensor data, analyzing the data collected from vehicles to determine how the vehicle is driving. This paper proposes a Driving Decision Strategy (DDS) based on machine learning for an autonomous vehicle, which reduces in-vehicle computation by generating big data on vehicle driving within the cloud and determines an optimal driving strategy by taking the historical data in the cloud into account. The proposed DDS analyzes the data stored in the cloud with a genetic algorithm to determine the best driving strategy.

1.1 PROJECT SCOPE

Currently, global companies are developing technologies for advanced self-driving cars, corresponding to the 4th stage of autonomy. Self-driving cars are built on various ICT technologies, and their principle of operation can be divided into three stages: recognition, judgment and control. The recognition stage recognizes and collects information about the surrounding situation using the various sensors in the vehicle, such as GPS, cameras and radar. The judgment stage determines the driving strategy based on the recognized information: it identifies and analyzes the conditions in which the vehicle is placed and determines driving plans appropriate to the driving environment and the objectives. The control stage determines the speed, direction and other driving parameters, and the vehicle then drives on its own. An autonomous vehicle performs the various actions needed to arrive at its destination by repeating the steps of recognition, judgment and control on its own.

1.2 PURPOSE OF THE SYSTEM

The DDS trains a genetic algorithm on the sensor data from vehicles stored in the cloud and determines the optimal driving strategy of an autonomous vehicle. This paper compares the DDS with MLP and RF models to validate it. In the experiments, the DDS had a loss rate approximately 5% lower than existing vehicle gateways, and it determined RPM, speed, steering angle and lane changes 40% faster than the MLP and 22% faster than the RF.

1.3 PROJECT FEATURES

The DDS executes the genetic algorithm on accumulated data to determine the vehicle's optimal driving strategy according to the slope and curvature of the road on which the vehicle is driving, and visualizes the driving and consumables conditions of an autonomous vehicle for the driver. To verify the validity of the DDS, experiments were conducted in which the DDS selected an optimal driving strategy by analyzing data from an autonomous vehicle. Though the DDS has an accuracy similar to that of the MLP, it determines the optimal driving strategy 40% faster. It also has a 22% higher accuracy than the RF and determines the optimal driving strategy 20% faster than it.

2. LITERATURE SURVEY

Y.N. Jeong, S.R. Son, E.H. Jeong and B.K. Lee, "An Integrated Self-Diagnosis System for an Autonomous Vehicle Based on an IoT Gateway and Deep Learning," Applied Sciences, vol. 8, no. 7, July 2018.
This paper proposes “An Integrated Self-diagnosis System (ISS) for an Autonomous
Vehicle based on an Internet of Things (IoT) Gateway and Deep Learning” that collects
information from the sensors of an autonomous vehicle, diagnoses the vehicle itself and the influence between its parts by using Deep Learning, and informs the driver of the result.
The ISS consists of three modules. The first In-Vehicle Gateway Module (In-VGM)
collects the data from the in-vehicle sensors, consisting of media data like a black box,
driving radar, and the control messages of the vehicle, and transfers each of the data
collected through each Controller Area Network (CAN), FlexRay, and Media Oriented
Systems Transport (MOST) protocols to the on-board diagnostics (OBD) or the actuators.
The data collected from the in-vehicle sensors is transferred to the CAN or FlexRay
protocol and the media data collected while driving is transferred to the MOST protocol.
Various types of messages transferred are transformed into a destination protocol message
type. The second Optimized Deep Learning Module (ODLM) creates the Training Dataset
on the basis of the data collected from the in-vehicle sensors and reasons the risk of the
vehicle parts and consumables and the risk of the other parts influenced by a defective
part. It diagnoses the vehicle’s total condition risk. The third Data Processing Module
(DPM) is based on Edge Computing and has an Edge Computing based Self-diagnosis
Service (ECSS) to improve the self-diagnosis speed and reduce the system overhead,
while a V2X based Accident Notification Service (VANS) informs the adjacent vehicles
and infrastructures of the self-diagnosis result analyzed by the OBD. This paper improves
upon the simultaneous message transmission efficiency through the In-VGM by 15.25%
and diminishes the learning error rate of a Neural Network algorithm through the ODLM
by about 5.5%. In addition, by transferring the self-diagnosis information and by managing the time to safely replace the parts of an autonomous vehicle, the ISS reduces loss of life and overall cost.

Yukiko Kenmochi, Lilian Buzer, Akihiro Sugimoto, Ikuko Shimizu, "Discrete plane segmentation and estimation from a point cloud using local geometric patterns," International Journal of Automation and Computing, Vol. 5, No. 3, pp. 246-256, 2008.

This paper presents a method for segmenting a 3D point cloud into planar surfaces using
recently obtained discrete-geometry results. In discrete geometry, a discrete plane is
defined as a set of grid points lying between two parallel planes with a small distance,
called thickness. In contrast to the continuous case, there exist a finite number of local
geometric patterns (LGPs) appearing on discrete planes. Moreover, such an LGP does not possess a unique normal vector but a set of normal vectors. By using these LGP properties, we first reject non-linear points from a point cloud, and then classify the non-rejected points whose LGPs have common normal vectors into a planar-surface-point set.
From each segmented point set, we also estimate the values of parameters of a discrete
plane by minimizing its thickness.
Ning Ye, Yingya Zhang, Ruchuan Wang, Reza Malekian, “Vehicle trajectory
prediction based on Hidden Markov Model, ” The KSII Transactions on Internet
and Information Systems, Vol. 10, No. 7, 2017.

In Intelligent Transportation Systems (ITS), logistics distribution and mobile e-commerce,


the real-time, accurate and reliable vehicle trajectory prediction has significant application
value. Vehicle trajectory prediction can not only provide accurate location-based services,
but also can monitor and predict traffic situation in advance, and then further recommend
the optimal route for users. In this paper, firstly, we mine the double layers of hidden
states of vehicle historical trajectories, and then determine the parameters of HMM
(hidden Markov model) by historical data. Secondly, we adopt Viterbi algorithm to seek
the double layers hidden states sequences corresponding to the just driven trajectory.
Finally, we propose a new algorithm (DHMTP) for vehicle trajectory prediction based on
the hidden Markov model of double layers hidden states, and predict the nearest neighbor
unit of location information for the next k stages. The experimental results demonstrate that the prediction accuracy of the proposed algorithm is increased by 18.3% compared with the TPMO algorithm and by 23.1% compared with the Naive algorithm in predicting the next k phases' trajectories, especially when traffic flow is heavier, such as during weekday mornings and evenings. Moreover, the time performance of the DHMTP algorithm is also clearly improved compared with the TPMO algorithm.
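
As a rough illustration of the Viterbi-based decoding described above, the sketch below fits a plain single-layer Gaussian HMM to a synthetic trajectory with the hmmlearn library and recovers the most likely hidden-state sequence; the paper's double-layer structure and k-step prediction are not reproduced here, and the data is a placeholder.

import numpy as np
from hmmlearn.hmm import GaussianHMM

# Synthetic (x, y) trajectory standing in for a real driven trajectory.
trajectory = np.cumsum(np.random.randn(200, 2), axis=0)

# Fit a single-layer HMM (the paper uses a double-layer model).
model = GaussianHMM(n_components=4, covariance_type="diag", n_iter=50)
model.fit(trajectory)

# Viterbi decoding: most likely hidden-state sequence for the trajectory just driven.
log_prob, hidden_states = model.decode(trajectory, algorithm="viterbi")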

Li-Jie Zhao, Tian-You Chai, De-Cheng Yuan, "Selective ensemble extreme learning machine modeling of effluent quality in wastewater treatment plants," International Journal of Automation and Computing, Vol. 9, No. 6, 2012.

Real-time and reliable measurements of the effluent quality are essential to improve
operating efficiency and reduce energy consumption for the wastewater treatment process.
Due to the low accuracy and unstable performance of the traditional effluent quality
measurements, we propose a selective ensemble extreme learning machine modeling
method to enhance the effluent quality predictions. The extreme learning machine algorithm is inserted into a selective ensemble frame as the component model since it runs much faster and provides better generalization performance than other popular learning algorithms. Ensemble extreme learning machine models overcome the variations across different simulation trials of a single model. A selective ensemble based on a genetic algorithm is
used to further exclude some bad components from all the available ensembles in order to
reduce the computation complexity and improve the generalization performance. The
proposed method is verified with the data from an industrial wastewater treatment plant,
located in Shenyang, China. Experimental results show that the proposed method has
relatively stronger generalization and higher accuracy than partial least square, neural
network partial least square, single extreme learning machine and ensemble extreme
learning machine model.

D. S. Lee, C. O. Jeon, J. M. Park, K. S. Chang, "Hybrid neural network modeling of a full-scale industrial wastewater treatment process," Biotechnology and Bioengineering, vol. 78, no. 6, pp. 670–682, 2002.

In recent years, hybrid neural network approaches, which combine mechanistic and neural
network models, have received considerable attention. These approaches are potentially
very efficient for obtaining more accurate predictions of process dynamics by combining
mechanistic and neural network models in such a way that the neural network model
properly accounts for unknown and nonlinear parts of the mechanistic model. In this
work, a full-scale coke-plant wastewater treatment process was chosen as a model system.
Initially, a process data analysis was performed on the actual operational data by using
principal component analysis. Next, a simplified mechanistic model and a neural network
model were developed based on the specific process knowledge and the operational data

of the coke-plant wastewater treatment process, respectively. Finally, the neural network
was incorporated into the mechanistic model in both parallel and serial configurations.
Simulation results showed that the parallel hybrid modeling approach achieved much
more accurate predictions with good extrapolation properties as compared with the other
modeling approaches even in the case of process upset caused by, for example, shock
loading of toxic compounds. These results indicate that the parallel hybrid neural
modeling approach is a useful tool for accurate and cost-effective modeling of
biochemical processes, in the absence of other reasonably accurate process models.

Lee DS, Vanrolleghem PA, Park JM, "Parallel hybrid modeling methods for a full-scale cokes wastewater treatment plant," J Biotechnol, 2005 Feb 9;115(3):317-28. doi: 10.1016/j.jbiotec.2004.09.001. PMID: 15639094.

Hybrid modelling methods are applied to model a full-scale cokes wastewater treatment
plant. Within the hybrid model structure, a mechanistic model specifies the basic
dynamics of the relevant process and non-parametric models compensate for the
inaccuracy of the mechanistic model. Five different non-parametric models - feed-forward
neural network, radial basis function network, linear partial least squares (PLS), quadratic
PLS and neural network PLS - are incorporated in the hybrid model structure. For this
application the hybrid model with neural network PLS as nonparametric model gives a
better performance than the other non-parametric models in terms of the prediction
accuracy, simplicity of use and interpretability.

Mjalli FS, Al-Asheh S, Alfadala HE, "Use of artificial neural network black-box modeling for the prediction of wastewater treatment plants performance," J Environ Manage, 2007 May;83(3):329-38. doi: 10.1016/j.jenvman.2006.03.004. Epub 2006 Jun 27. PMID: 16806660.

A reliable model for any wastewater treatment plant is essential in order to provide a tool
for predicting its performance and to form a basis for controlling the operation of the
process. This would minimize the operation costs and assess the stability of
environmental balance. This process is complex and attains a high degree of nonlinearity
due to the presence of bio-organic constituents that are difficult to model using

mechanistic approaches. Predicting the plant operational parameters using conventional
experimental techniques is also a time consuming step and is an obstacle in the way of
efficient control of such processes. In this work, an artificial neural network (ANN) black-
box modeling approach was used to acquire the knowledge base of a real wastewater
plant and then used as a process model. The study signifies that the ANNs are capable of
capturing the plant operation characteristics with a good degree of accuracy. A computer
model is developed that incorporates the trained ANN plant model. The developed
program is implemented and validated using plant-scale data obtained from a local
wastewater treatment plant, namely the Doha West wastewater treatment plant (WWTP).
It is used as a valuable performance assessment tool for plant operators and decision
makers. The ANN model provided accurate predictions of the effluent stream, in terms of
biological oxygen demand (BOD), chemical oxygen demand (COD) and total suspended
solids (TSS) when using COD as an input in the crude supply stream. The ANN predictions based on the three crude supply inputs together, namely BOD, COD and TSS, were better than those based on only one crude supply input.
Graphical user interface representation of the ANN for the Doha West WWTP data is
performed and presented.

Singh KP, Basant N, Malik A, Jain G, "Modeling the performance of 'up-flow anaerobic sludge blanket' reactor based wastewater treatment plant using linear and nonlinear approaches -- a case study," Anal Chim Acta, 2010 Jan 18;658(1):1-11. doi: 10.1016/j.aca.2009.11.001. Epub 2009 Nov 10. PMID: 20082768.

The paper describes linear and nonlinear modeling of the wastewater data for the
performance evaluation of an up-flow anaerobic sludge blanket (UASB) reactor based
wastewater treatment plant (WWTP). Partial least squares regression (PLSR), multivariate
polynomial regression (MPR) and artificial neural networks (ANNs) modeling methods
were applied to predict the levels of biochemical oxygen demand (BOD) and chemical
oxygen demand (COD) in the UASB reactor effluents using four input variables measured
weekly in the influent wastewater during the peak (morning and evening) and non-peak
(noon) hours over a period of 48 weeks. The performance of the models was assessed
through the root mean squared error (RMSE), relative error of prediction in percentage
(REP), the bias, the standard error of prediction (SEP), the coefficient of determination
(R(2)), the Nash-Sutcliffe coefficient of efficiency (E(f)), and the accuracy factor (A(f)),
computed from the measured and model predicted values of the dependent variables

(BOD, COD) in the WWTP effluents. Goodness of the model fit to the data was also
evaluated through the relationship between the residuals and the model predicted values
of BOD and COD. Although the model-predicted values of BOD and COD from all three modeling approaches (PLSR, MPR, ANN) were in good agreement with their respective measured values in the WWTP effluents, the nonlinear models (MPR, ANN) performed relatively better than the linear one. These models can be used as a tool for the
performance evaluation of the WWTPs.

Oktem YA, Ince O, Sallis P, Donnelly T, Ince BK, "Anaerobic treatment of a chemical synthesis-based pharmaceutical wastewater in a hybrid upflow anaerobic sludge blanket reactor," Bioresour Technol, 2008 Mar;99(5):1089-96. doi: 10.1016/j.biortech.2007.02.036. Epub 2007 Apr 20. PMID: 17449241.

In this study, performance of a lab-scale hybrid up-flow anaerobic sludge blanket (UASB)
reactor, treating a chemical synthesis-based pharmaceutical wastewater, was evaluated
under different operating conditions. This study consisted of two experimental stages:
first, acclimation to the pharmaceutical wastewater and second, determination of
maximum loading capacity of the hybrid UASB reactor. Initially, the carbon source in the reactor feed came entirely from glucose, applied at an organic loading rate (OLR) of 1 kg COD/m(3) d. The OLR was gradually increased in steps to 3 kg COD/m(3) d, at which point the feed to the hybrid UASB reactor was progressively modified by introducing the
pharmaceutical wastewater in blends with glucose, so that the wastewater contributed
approximately 10%, 30%, 70%, and ultimately, 100% of the carbon (COD) to be treated.
At the acclimation OLR of 3 kg COD/m(3) d the hydraulic retention time (HRT) was 2
days. During this period of feed modification, the COD removal efficiencies of the
anaerobic reactor were 99%, 96%, 91% and 85%, and specific methanogenic activities
(SMA) were measured as 240, 230, 205 and 231 ml CH(4)/g TVS d, respectively.

3. SYSTEM ANALYSIS


System analysis is an important phase in the system development process. The system is studied in minute detail and analyzed. The system analyst plays the role of an interrogator and delves deep into the working of the present system. In analysis, a detailed study of the operations performed by the system and their relationships within and outside the system is carried out. A key question considered here is, "What must be done to solve the problem?" The system is viewed as a whole and the inputs to the system are identified. Once analysis is completed, the analyst has a firm understanding of what is to be done.

3.1 EXISTING SYSTEM

k-NN, RF, SVM and Bayes models are the existing methods. Although studies have been done in the medical field with advanced data exploration using machine learning algorithms, orthopedic disease prediction is still a relatively new area and must be explored further for accurate prevention and cure. The existing system mines the double layers of hidden states of vehicle historical trajectories, and then selects the parameters of a Hidden Markov Model (HMM) from the historical data. In addition, it uses the Viterbi algorithm to find the double-layer hidden-state sequences corresponding to the trajectory just driven. Finally, it proposes a new algorithm for vehicle trajectory prediction based on the hidden Markov model with double layers of hidden states, and predicts the nearest neighbor unit of location information for the next k stages.

3.1.1 DISADVANTAGES OF EXISTING SYSTEM

The disadvantages of the existing system are:

 Lower efficiency.

 More datasets need to be explored for prevention.

 Higher risk of accidents.

3.2 PROPOSED SYSTEM

Here we propose "A Driving Decision Strategy (DDS) Based on Machine Learning for an Autonomous Vehicle", which determines the optimal strategy of an autonomous vehicle by analyzing not only the external factors but also the internal factors of the vehicle (consumable conditions, RPM levels, etc.). The DDS trains a genetic algorithm on the sensor data from vehicles stored in the cloud and determines the optimal driving strategy of an autonomous vehicle. This paper compared the DDS with MLP and RF models to validate the DDS. In the experiments, the DDS had a loss rate approximately 5% lower than existing vehicle gateways, and it determined RPM, speed, steering angle and lane changes 40% faster than the MLP and 22% faster than the RF.
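
To make the role of the genetic algorithm concrete, the following minimal sketch evolves a two-parameter driving strategy (target speed and steering angle) against a hypothetical fitness function that penalizes high speed on steep or curved roads; the parameter ranges, fitness definition and GA settings are illustrative assumptions, not the authors' implementation.

import random

def fitness(strategy, slope, curvature):
    speed, steering = strategy
    # Hypothetical cost: penalize fast driving on steep/curvy roads and large steering.
    return -(abs(speed * curvature) + abs(speed * slope) + 0.1 * abs(steering))

def evolve(slope, curvature, pop_size=20, generations=50):
    population = [(random.uniform(0, 120), random.uniform(-30, 30))
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda s: fitness(s, slope, curvature), reverse=True)
        parents = population[:pop_size // 2]                  # selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)    # crossover
            if random.random() < 0.1:                         # mutation
                child = (child[0] + random.gauss(0, 5),
                         child[1] + random.gauss(0, 2))
            children.append(child)
        population = parents + children
    return max(population, key=lambda s: fitness(s, slope, curvature))

best_speed, best_steering = evolve(slope=0.05, curvature=0.02)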

3.2.1 ADVANTAGES OF PROPOSED SYSTEM

 Improves the system's ability to control the vehicle based on sensor data.

 Lane changing will be faster.

 Provides access for disabled people and people with reduced mobility.

3.3 HARDWARE AND SOFTWARE REQUIREMENTS

3.3.1 HARDWARE REQUIREMENTS:

Minimum hardware requirements depend heavily on the particular software being developed by a given Enthought Python / Canopy / VS Code user. Applications that need to store large arrays/objects in memory will require more RAM, whereas applications that need to perform numerous calculations or tasks more quickly will require a faster processor.

 Operating system : Windows, Linux

 Processor : Minimum Intel i3

 RAM : Minimum 4 GB

 Hard disk : Minimum 250 GB

3.3.2 SOFTWARE REQUIREMENTS:

The functional requirements or the overall description documents include the product perspective and features, operating system and operating environment, graphics requirements, design constraints and user documentation. The appropriation of requirements and implementation constraints gives a general overview of the project in regards to what the areas of strength and deficit are and how to tackle them.
 Operating system : Windows XP/7/10

 Development framework : Flask framework (see the sketch below)

 Programming language : Python IDLE 3.7 / Anaconda 3.7 / Jupyter / Google Colab

 Front end : Python console

 IDE : Anaconda prompt
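
Since Flask is listed as the development framework, the sketch below shows one plausible way the trained model could be exposed through a small Flask endpoint; the route name, input fields and pickled model file are hypothetical and only illustrate the idea.

import pickle
from flask import Flask, request, jsonify

app = Flask(__name__)
with open("dds_model.pkl", "rb") as f:      # hypothetical saved model
    model = pickle.load(f)

@app.route("/predict", methods=["POST"])
def predict():
    features = request.get_json()           # e.g. {"speed": ..., "rpm": ..., "slope": ..., "curvature": ...}
    row = [[features["speed"], features["rpm"],
            features["slope"], features["curvature"]]]
    decision = model.predict(row)[0]
    return jsonify({"decision": str(decision)})

if __name__ == "__main__":
    app.run(debug=True)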

3.4 ALGORITHMS

3.4.1 RANDOM FOREST

Random Forest works in two phases: the first is to create the random forest by combining N decision trees, and the second is to make predictions for each tree created in the first phase.

The Working process can be explained in the below steps and diagram:

Step-1: Select random K data points from the training set.

Step-2: Build the decision trees associated with the selected data points (Subsets).

Step-3: Choose the number N for decision trees that you want to build.

Step-4: Repeat Steps 1 & 2.

Step-5: For new data points, find the predictions of each decision tree, and assign the new data points to the category that wins the majority votes, as in the sketch below.
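
The steps above map directly onto scikit-learn's RandomForestClassifier, as in the hedged sketch below; the CSV file and the "decision" label column are hypothetical placeholders for the trajectory dataset used in this project.

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = pd.read_csv("trajectory.csv")                       # hypothetical dataset
X, y = data.drop(columns=["decision"]), data["decision"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

# n_estimators is the number N of decision trees chosen in Step 3.
model = RandomForestClassifier(n_estimators=100)
model.fit(X_train, y_train)        # Steps 1, 2 and 4 happen internally per tree
print("accuracy:", model.score(X_test, y_test))            # Step 5: majority vote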

3.4.2 MULTI LAYER PERCEPTRON (MLP)

A multilayer perceptron (MLP) is a class of feedforward artificial neural network (ANN). The term MLP is used ambiguously: sometimes loosely to mean any feedforward ANN, and sometimes strictly to refer to networks composed of multiple layers of perceptrons (with threshold activation). Multilayer perceptrons are sometimes colloquially referred to as "vanilla" neural networks, especially when they have a single hidden layer. An MLP consists of at least three layers of nodes: an input layer, a hidden layer and an output layer. Except for the input nodes, each node is a neuron that uses a nonlinear activation function. An MLP utilizes a supervised learning technique called backpropagation for training. Its multiple layers and nonlinear activation distinguish the MLP from a linear perceptron; it can distinguish data that is not linearly separable.
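
As a minimal sketch of the MLP just described (one hidden layer, nonlinear activation, trained by backpropagation), the example below uses scikit-learn's MLPClassifier on synthetic placeholder data; the feature and class choices are assumptions.

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

X = np.random.rand(500, 6)             # placeholder features, e.g. speed, RPM, slope, ...
y = np.random.randint(0, 3, size=500)  # placeholder classes, e.g. keep / change left / change right

X = StandardScaler().fit_transform(X)
# One hidden layer of 32 neurons with a nonlinear (ReLU) activation,
# trained with backpropagation.
mlp = MLPClassifier(hidden_layer_sizes=(32,), activation="relu", max_iter=500)
mlp.fit(X, y)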

3.5 PROCESS MODULES USED WITH JUSTIFICATION

3.5.1 SYSTEM OVERVIEW

We focus on the decision-making module and its connection to the downstream planning and control modules. The state estimator (usually a perception module) constructs a local map by processing the sensory data and combining them with prior knowledge from the map and rules. The decision module then receives the observation and decides on a legal local driving task that steers the car towards the destination. To complete this task, the planning module searches for an optimal trajectory under the enforced physical constraints. The control module then reactively corrects any errors in the execution of the planned motion. We adopt minimal settings to stay focused on the learning framework; these encapsulated modules could be replaced by arbitrary alternatives.

3.5.2 DECISION MAKING MODULE

We assume that a local map centered at the ego vehicle could be constructed from
either the state estimator or an offline global HD map. This local map contains critical
driving information such as the routing guidance to the destination, legal lanes, lane lines,
other vehicle coordinates and velocity in the ego vehicle’s surroundings, as well as ego
vehicle’s speed and acceleration.

To make the decision-making policy more generalizable to different driving scenarios, the output of the state estimator, which is also the input to the neural decision module, is the local map and the traffic state.

Figure 3.4.2 : Illustration of input and output of the decision module

We define the decision as three independent categorical variables: one for lateral movement, one for longitudinal movement and one for velocity. Combined with the local map, each of them is assigned a specific semantic meaning. The lateral decision variable, for example, has three classes: changing to the left lane, keeping the current lane and changing to the right lane.
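
A small sketch of how the three categorical decision variables could be represented is given below; only the lateral classes are taken from the text, while the longitudinal and velocity classes are assumed for illustration.

from dataclasses import dataclass

LATERAL = ["change_left", "keep_lane", "change_right"]       # classes from the text
LONGITUDINAL = ["decelerate", "keep", "accelerate"]          # assumed classes
VELOCITY = ["low", "medium", "high"]                         # assumed classes

@dataclass
class Decision:
    lateral: str
    longitudinal: str
    velocity: str

decision = Decision(lateral="keep_lane", longitudinal="accelerate", velocity="medium")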

3.5.3 PLANNING & CONTROL MODULE

We introduce a minimal design of the planning module and the control module for completeness; this is not the main contribution of this work. In a classical autonomous driving system, the planning module calculates trajectories under constraints such as collision avoidance. Its optimization objective is to reach the goal state specified by the decision module with minimal cost. In our minimal setting, the planning module processes path and velocity separately. Paths are planned with cubic Bezier curves, while velocity is planned with Quadratic Programming (QP) to minimize the acceleration, jerk and discontinuity in curvature. The control module drives the vehicle to trace the planned trajectories, and we implement it minimally with a PID controller. Details are provided in Appx. A.
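
A minimal PID sketch in the spirit of the control module described above is shown below; the gains, time step and target values are illustrative assumptions rather than tuned values from this work.

class PID:
    def __init__(self, kp, ki, kd, dt=0.05):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, target, measured):
        error = target - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: correct the throttle command towards a target speed of 20 m/s.
speed_controller = PID(kp=0.8, ki=0.1, kd=0.05)
throttle = speed_controller.step(target=20.0, measured=17.5)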

Each module of the proposed framework can be extended further, as long as the alternatives are deterministic or stationary. Note that while the interface between decision and planning is the location of a goal point and the speed the ego vehicle is expected to maintain when reaching it, the planning module could be replaced by a more sophisticated search-based or optimization-based trajectory planner to fulfill more practical requirements in execution time and constraint enforcement. Similarly, the model-free controller could be replaced with alternatives like Model Predictive Control (MPC). Other practical constraints and concerns over the controller, such as robustness and stability, could also be taken into consideration if necessary.

3.6 SOFTWARE REQUIREMENT SPECIFICATIONS

3.6.1 FUNCTIONAL REQUIREMENTS

1. Data Collection

2. Data Pre-processing

3. Training and Testing

4. Modeling

5. Predicting

3.6.2 NON FUNCTIONALREQUIREMENTS

A non-functional requirement (NFR) specifies a quality attribute of a software system. NFRs judge the software system based on responsiveness, usability, security, portability and other non-functional standards that are critical to the success of the software system. An example of a non-functional requirement is "How fast does the website load?" Failing to meet non-functional requirements can result in systems that fail to satisfy user needs. Non-functional requirements allow you to impose constraints or restrictions on the design of the system across the various agile backlogs, for example, "the site should load within 3 seconds when the number of simultaneous users is greater than 10,000." The description of non-functional requirements is just as critical as that of functional requirements.

 Usability requirement

 Serviceability requirement

 Manageability requirement

 Recoverability requirement

 Security requirement

 Data Integrity requirement

 Capacity requirement

 Availability requirement

 Scalability requirement

 Interoperability requirement

 Reliability requirement

 Maintainability requirement

 Regulatory requirement

 Environmental requirement

4. ARCHITECTURE

4.1 PROJECT ARCHITECTURE


Figure 4.1.1: System architecture

The DDS searches the relevant information and matches the accurate driving decision with the learning experiences, and then transmits the decision order to the control system. These learning experiences refer to the driving decision-making rules in the DDS that are obtained by learning a large amount of real driving experience. The control system then controls the actuators (including the steering system, pedals and automatic gearshift) to carry out the corresponding operation. In the whole process of information collection, transmission and execution, the DDS plays the key role, as it is the central system controlling the autonomous vehicle. The types of driving decision the DDS outputs include free driving, car following and lane changing. Its input variables are obtained through preliminary data processing that extracts traffic scenario characteristics as reference indexes, followed by further data fusion. The method of data fusion adopted in this paper is Principal Component Analysis (PCA).
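
To illustrate the PCA-based data fusion step, the sketch below reduces a matrix of synthetic traffic-scenario reference indexes to a few fused input variables with scikit-learn; the number of indexes and of retained components are placeholders.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

raw_indexes = np.random.rand(1000, 8)        # placeholder reference indexes (gaps, relative speeds, ...)
scaled = StandardScaler().fit_transform(raw_indexes)

pca = PCA(n_components=3)                    # fuse correlated indexes into 3 inputs
fused_inputs = pca.fit_transform(scaled)
print(pca.explained_variance_ratio_)         # share of variance kept by each component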


Figure 4.1.2: Driving decision-making process of an autonomous vehicle.

Figure 4.1.3: The detailed data processing of the data processing program.

The schematic diagram of vehicle states on a road is shown in Figure 4.1.4. All of the reference indexes obtained above are shown in Figure 4.1.3.

Figure 4.1.4: Diagrammatic sketch of vehicle states.

4.2 UML DIAGRAMS

The System Design Document describes the system requirements, operating environment, system and subsystem architecture, files and database design, input formats, output layouts, human-machine interfaces, detailed design, processing logic and external interfaces.

4.2.1 Global Use Case Diagrams:

Identification of actors:
Actor: Actor represents the role a user plays with respect to the system. An actor interacts
with, but has no control over the use cases.

Graphical representation: an actor is drawn as a stick figure labeled with the actor name (<<Actor name>>).

An actor is someone or something that:

 Interacts with or uses the system.

 Provides input to and receives information from the system.

 Is external to the system and has no control over the use cases.

Actors are discovered by examining:

 Who directly uses the system?

 Who is responsible for maintaining the system?

 External hardware used by the system.

 Other systems that need to interact with the system.

Questions to identify actors:

 Who is using the system? Or, who is affected by the system? Or,
which groups need help from the system to perform a task?

 Who affects the system? Or, which user groups are needed by the system to perform its functions? These functions can be both main functions and secondary functions such as administration.

 Which external hardware or systems (if any) use the system to perform
tasks?

 What problems does this application solve (that is, for whom)?

 And, finally, how do users use the system (use case)? What are they
doing with the system?

4.2.2 Flow of Events

A flow of events is a sequence of transactions (or events) performed by the system. They typically contain very detailed information, written in terms of what the system should do, not how the system accomplishes the task. Flows of events are created as separate files or documents in your favorite text editor and then attached or linked to a use case using the Files tab of a model element.

A flow of events should include:


 When and how the use case starts and ends

 Use case/actor interactions

 Data needed by the use case

 Normal sequence of events for the use case

 Alternate or exceptional flows

Construction of use case diagrams

4.3 USE CASE DIAGRAM

A use case diagram in the Unified Modeling Language (UML) is a type of behavioral diagram defined by and created from a use-case analysis. Its purpose is to present a graphical overview of the functionality provided by a system in terms of actors, their goals (represented as use cases), and any dependencies between those use cases. The main purpose of a use case diagram is to show what system functions are performed for which actor. The roles of the actors in the system can be depicted.

In this system, the User actor performs the following use cases: Upload Historical Trajectory Dataset, Generate Train & Test Model, Run Random Forest Algorithm, Run MLP Algorithm, Run DDS with Genetic Algorithm, Accuracy Graph, and Predict DDS Type.

Figure 4.3.1 : Use Case Diagram

4.4 CLASS DIAGRAM

In software engineering, a class diagram in the Unified Modeling Language (UML) is a type of static structure diagram that describes the structure of a system by showing the system's classes, their attributes, operations (or methods), and the relationships among the classes. It explains which class contains which information.

Figure 4.4.1 : Class Diagram

4.5 SEQUENCE DIAGRAM


A sequence diagram in Unified Modeling Language (UML) is a kind of
interaction diagram that shows how processes operate with one another and in what order.
It is a construct of a Message Sequence Chart. Sequence diagrams are sometimes called
event diagrams, event scenarios, and timing diagrams.

The sequence diagram shows the User interacting with the System through the following messages: Upload Historical Trajectory Dataset, Generate Train & Test Model, Run Random Forest Algorithm, Run MLP Algorithm, Run DDS with Genetic Algorithm, Accuracy Graph, and Predict DDS Type.

Figure 4.5.1 : Sequence Diagram

4.6 ACTIVITY DIAGRAM

Activity diagrams are graphical representations of workflows of stepwise activities and actions with support for choice, iteration and concurrency. In the Unified Modeling Language, activity diagrams can be used to describe the business and operational step-by-step workflows of components in a system. An activity diagram shows the overall flow of control.

Figure 4.6.1 : Activity Diagram

4.7 DATA FLOW DIAGRAM

 The DFD is also called a bubble chart. It is a simple graphical formalism that can be used to represent a system in terms of the input data to the system, the various processing carried out on this data, and the output data generated by the system.

 The data flow diagram (DFD) is one of the most important modeling tools. It is used to model the system components. These components are the system processes and the data used by the processes.

 DFD shows how the information moves through the system and how it is modified
by a series of transformations. It is a graphical technique that depicts information
flow and the transformations that are applied as data moves from input to output.

 A DFD may be used to represent a system at any level of abstraction. DFDs may be partitioned into levels that represent increasing information flow and functional detail.

Figure 4.7.1 : Data flow Diagram

4.8 FLOW CHART

Figure 4.8.1 : Flow Chart
