
A Method for Processing Images Collected from a Self-Driving Car's Control Camera


Abstract
An autonomous guided vehicle (AGV) is an intelligent machine capable of
determining its own state of motion from environmental conditions. A
typical capability of a self-driving car or robot is to operate and steer
itself without human intervention under constantly changing conditions
while traveling in city lanes. In this paper, we present algorithms and
processing methods for self-driving cars, focusing mainly on lane
recognition and the handling of unexpected situations so that passengers
can be carried from point A to point B. We combine a simple lane-vector
image processing algorithm based on Non-uniform B-Spline (NUBS) theory
(for lane determination), a 2-point minimal solution (for motion
estimation), and an Uncertainty Resolving System (URS) (for obstacle
detection and handling). Experimental results show that the system can
process more than 12 frames per second, which meets the requirements for
precise motion control of autonomous vehicles without human intervention.
I. Introduction
- Self-driving cars are widely used in many practical fields and play an
important role in traffic environments with a high risk of accidents, such
as ramps, slippery roads, and intelligent traffic systems. Intelligent
automatic motion control in the vehicle can help reduce accidents and
traffic jams.
- In principle, a simple AGV system consists of two main components: a
preprocessor and a controller. The preprocessor uses sensors, radar, GPS,
or a computer vision system with an attached camera to acquire information
from the environment, such as the limits, direction, width, curvature, and
flatness of the lanes as well as the appearance of obstacles. The
controller then makes the car or robot move automatically along its lane
limits and avoid obstacles in the road when necessary.
- In this study, we propose simple image processing algorithms and methods
that help autonomous vehicles move and handle obstacles within a given
lane limit. We use the concept of lane vectors based on Non-uniform
B-Spline (NUBS) theory to construct the limit lines of the left and right
lanes. We then describe the Uncertainty Resolving System (URS) with a
Visual Grounding (VG) model to detect the object mentioned in a command
within the visual scene. Finally, we use a 2-point minimal solution motion
estimation algorithm to derive the most suitable motion for the
autonomous vehicle.
II. Methods
a) Lane vectors based on Non-uniform B-Spline (NUBS) theory
b) Modeling a lane using a B-Spline curve
The NUBS curve can be represented by a set of control points on
polynomials; the higher the degree of the polynomials, the more accurate
the curve description. If the distances between the knots are equal, the
curve is called a uniform B-Spline curve; otherwise, it is a non-uniform
B-Spline curve.
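The distinction above can be sketched in code. The following is a minimal example, not the paper's implementation: the control points are hypothetical lane-boundary pixel coordinates, and a clamped knot vector with unevenly spaced interior knots (0.2, 0.6) makes the spline non-uniform.

```python
import numpy as np
from scipy.interpolate import BSpline

degree = 3  # cubic; higher-degree polynomials describe the curve more accurately
# Hypothetical control points for one lane boundary (pixel coordinates).
ctrl = np.array([[320, 470], [312, 380], [300, 260],
                 [285, 150], [268, 80], [250, 20]], dtype=float)

# Clamped knot vector: end knots repeated degree+1 times. The interior
# knots (0.2, 0.6) are unevenly spaced, so this is a non-uniform B-Spline.
knots = np.array([0, 0, 0, 0, 0.2, 0.6, 1, 1, 1, 1], dtype=float)

spline = BSpline(knots, ctrl, degree)
t = np.linspace(0, 1, 50)
curve = spline(t)            # 50 sampled (x, y) points along the lane boundary
print(curve[0], curve[-1])   # endpoints match the first and last control points
```

Because the knot vector is clamped, the sampled curve starts at the first control point and ends at the last one, which is convenient for anchoring the lane boundary at the scan lines.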
c) Determine the boundary.

- First, we consider two horizontal sweep lines in the lane map image
obtained from the edge detection image and define the control points Al
and Bl for the left lane and Ar and Br for the right lane, then construct
the lane vectors for the left and right lanes. Next, we estimate the
curvature of the lane boundary using the formula:
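The formula itself is not reproduced in this excerpt. As a stand-in, the following sketch uses the standard curvature expression for a parametric plane curve, kappa = |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2), which is an assumption and not necessarily the paper's exact formula:

```python
import numpy as np

def curvature(x, y):
    """Curvature of a sampled plane curve via finite differences, using
    kappa = |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2).
    Generic formula, assumed here in place of the paper's own expression."""
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return np.abs(dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5

# Sanity check: a circle of radius 50 has constant curvature 1/50 = 0.02.
t = np.linspace(0, np.pi, 200)
k = curvature(50 * np.cos(t), 50 * np.sin(t))
print(k[100])  # ≈ 0.02
```

This expression is invariant to how the boundary is parameterized, so it can be applied directly to points sampled along the fitted lane spline.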

- The detailed content of this algorithm is described as follows:

Algorithm for finding control points
+ Step 1: Set up 2 horizontal scan lines in the empty area at the bottom of
the image and find the first 2 control points on each scan line: a left
control point and a right control point. The scan start position for
finding control points is chosen as the midpoint of each scan line.
+ Step 2: Build the 2 lane vectors Ll and Lr, then calculate the tilt angle
of these 2 vectors according to the formula.
+ Step 3: Divide the remaining space of the image into 4 parts (based on
the length of the lane map image) using 3 horizontal scan lines.
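Step 1 above can be sketched as follows. This is a minimal illustration under assumed inputs (a binary edge map and the rows of the two scan lines), not the paper's implementation:

```python
import numpy as np

def find_control_points(edge_img, rows):
    """For each scan-line row, scan outward from the row's midpoint and take
    the first edge pixel on each side as the left/right control point.
    edge_img is assumed to be a binary edge map (True = edge)."""
    h, w = edge_img.shape
    mid = w // 2
    points = []
    for r in rows:
        left = next((c for c in range(mid, -1, -1) if edge_img[r, c]), None)
        right = next((c for c in range(mid, w) if edge_img[r, c]), None)
        points.append(((left, r), (right, r)))
    return points

# Toy 8x8 edge map with straight lane edges at columns 1 and 6;
# the two scan lines sit in the bottom rows of the image.
img = np.zeros((8, 8), dtype=bool)
img[:, 1] = True
img[:, 6] = True
pts = find_control_points(img, rows=[6, 7])
print(pts)  # [((1, 6), (6, 6)), ((1, 7), (6, 7))]
```

The two left points (and likewise the two right points) then define the lane vectors Ll and Lr of Step 2, whose tilt angles can be computed with `atan2`.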

d) Lane curvature estimation algorithm


e) Uncertainty Resolving System (URS)
a) Visual Grounding (VG) model
- Before using the URS, we ask a VG model to find the target object o*
referred to by the command c in a given image I. We can formulate the
model as follows:

- We create a set of independently initialized VG models. To calculate the
adjusted probability distribution of the ensemble, we use the following
equation:

where p_e is the probability output of the e-th VG model and p_E is the
adjusted distribution of the full ensemble.
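The adjustment equation is not reproduced in this excerpt. A common choice for combining an ensemble's outputs, assumed here purely for illustration, is the mean of the member distributions:

```python
import numpy as np

def ensemble_distribution(member_probs):
    """Combine the probability outputs p_e of E independently initialized
    VG models into an adjusted ensemble distribution p_E.
    Simple mean ensembling is assumed; the paper's exact equation
    is not reproduced in this excerpt."""
    p = np.mean(member_probs, axis=0)
    return p / p.sum()  # renormalize to a valid probability distribution

# Three hypothetical VG models scoring four candidate objects for one command.
p1 = np.array([0.70, 0.10, 0.10, 0.10])
p2 = np.array([0.40, 0.30, 0.20, 0.10])
p3 = np.array([0.60, 0.20, 0.10, 0.10])
pE = ensemble_distribution([p1, p2, p3])
print(pE)  # most probability mass remains on the first candidate
```

When the members disagree, the combined distribution p_E flattens, which is exactly the signal the URS can exploit to detect uncertain objects in the next subsection.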
b) Jointly detecting uncertainty and uncertain objects
c) 2-Point Minimal Solution
- Specifically, the car performs a circular motion about the Instantaneous
Center of Rotation (ICR) under the Ackermann model. The radius of the
uniform circular motion goes to infinity when the car moves in a straight
line. The main goal of motion estimation is to compute the relative motion
between Vk and Vk+1.

- Here θ is the relative rotation angle and ρ is the scale of the relative
translation, and the z-axis of Vk points out of the page. It can further be
observed from the figure that the angle between ρ and the line
perpendicular to the circle at Vk satisfies φv = θ/2. We immediately see
that the relative motion between the views Vk and Vk+1 depends on only 2
parameters: the scale ρ and the rotation angle θ.
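The two-parameter motion model above can be sketched as follows. This is an illustrative reconstruction from the relation φv = θ/2 stated in the text, not the paper's code; the frame conventions are assumed:

```python
import math

def relative_motion(theta, rho):
    """Relative pose of view V_{k+1} with respect to V_k under circular
    (Ackermann/ICR) motion. Per the text, the translation direction makes
    the angle phi_v = theta / 2 with the current heading, so the planar
    translation is t = rho * (cos(theta/2), sin(theta/2)) and the rotation
    is by theta. Frame conventions are assumptions of this sketch."""
    phi_v = theta / 2.0
    tx = rho * math.cos(phi_v)
    ty = rho * math.sin(phi_v)
    return theta, (tx, ty)

# Straight-line motion: theta -> 0 (ICR radius -> infinity),
# so the translation is purely forward.
th, (tx, ty) = relative_motion(0.0, 1.0)
print(th, tx, ty)  # 0.0 1.0 0.0
```

Because only ρ and θ appear, two point correspondences between Vk and Vk+1 suffice to constrain the motion, which is what makes a 2-point minimal solution possible.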

III. Results
IV. Conclusions
V. REFERENCES
