
Research and Design of Trackless AGV System Based on Global Vision

Qingqing Wang (1), Jiahai Liang (2)*, Zhiqiang Rao (3)
1. College of Information Science and Engineering, Guilin University of Technology, Guilin, Guangxi, China
2. College of Electronics and Information Engineering, Beibu Gulf University, Qinzhou, Guangxi, China
3. Urban Rail Transit and Logistics College, Beijing Union University, Beijing, China, rao_hual@163.com
*Corresponding author, 75276089@qq.com

Abstract—To solve the problems of poor flexibility and relatively high cost of railed AGVs, a trackless AGV system based on global vision is proposed. The system consists of an algorithm for AGV detection and recognition in the global view and an operation control method. A global map is generated through projective transformation of real-time video. To ensure real-time identification and positioning of the AGV, an algorithm combining the inter-frame difference method and local dynamic tracking is proposed. The method is verified on an omni-directional AGV with three Mecanum wheels. Experimental results show that the AGV can complete real-time positioning and guidance in the trackless map, and the average positioning error is less than 5 cm.

Keywords-global vision; projective transformation; trackless AGV; dynamic identification

I. INTRODUCTION

AGV vision guidance technology is an essential focus of current AGV manufacturing and application. Whether an AGV can meet the requirements of logistics transportation depends greatly on the performance of the guidance technology [1-3]. According to the positioning method, trackless visual guidance includes global guidance systems and local guidance systems. In local guidance, a map is constructed by collecting images of the surrounding environment with an onboard camera. Li et al. proposed an improved visual simultaneous localization and mapping (VSLAM) algorithm to meet the high positioning accuracy required of automated guided vehicles in industrial scenes, but the map creation was time-consuming [4]. For global guidance, feature detection and anti-interference become particularly important for composition and AGV recognition in complex operating environments. To solve the problem of low reliability of AGV visual guidance under complex lighting, Wu et al. proposed an AGV path tracking algorithm based on a lighting color model to reduce the impact of lighting on path extraction [5]. Later, Lee et al. used a monocular camera to identify color scales indoors to complete global positioning of a mobile robot, but the color scales were influenced by the lighting conditions [6]. Xu et al. used image stitching technology to build a global map and proposed a target detection method based on a deep neural network [7]. The positioning accuracy was effectively improved, but the image stitching process caused other real-time issues in guidance. He et al. used Camshift and Kalman filtering algorithms to locate indoor mobile robots and improve real-time positioning [8]. A global positioning scheme based on a binocular-vision calibration board was proposed by Li et al., which provided a new technical reference for global visual positioning, but the positioning accuracy could be affected by corner extraction [9]. For AGV control, Luo et al. proposed using the position deviation as the control-system input of a PID correction algorithm to obtain differential control of the left and right wheels [10]. This system achieved good stability; however, the two-wheel differential control was limited by the range of deviation.

Considering the problems of current AGV applications in the market, a trackless AGV system based on global visual guidance is proposed to improve flexibility and scalability. The system uses global visual guidance to locate and track the AGV mark, and three Mecanum wheels are used for real-time positioning and guidance of the AGV.

II. AGV POSITIONING AND TRACKING

A. Global map construction

In positioning and guidance, the most important factors are the current position of the AGV, the destination and the path to reach the destination. Therefore, perception of the operating environment is particularly important. Through projective transformation, a two-dimensional image map of the 3D operating environment is obtained, and pixel coordinates are used for positioning and guidance.

In order to use pixels instead of physical coordinates for positioning, four vertices of the running site are chosen as shown in Fig. 1. According to the physical length-to-width ratio of the selected site, p = l/d, the pixel values (px, py) are calculated, and then the four sets of coordinates are substituted into the projective transformation formula to calculate the homography matrix A.

*Supported by Qinzhou Scientific Research and Development Project (20177406)

978-1-7281-6313-0/20/$31.00 ©2020 IEEE

Authorized licensed use limited to: Technische Universitaet Muenchen. Downloaded on April 18,2022 at 18:53:19 UTC from IEEE Xplore. Restrictions apply.
[x' y' w'] = [u v 1] · A,  where A = [a11 a12 a13; a21 a22 a23; a31 a32 a33]        (1)

a11·u + a12·v + a13 - a31·u·x - a32·v·x = x
a21·u + a22·v + a23 - a31·u·y - a32·v·y = y        (2)

where a33 = 1, (u, v) are the original image pixel coordinates, and (x = x'/w', y = y'/w') are the transformed image pixel coordinates.
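Eq. (2) is linear in the eight unknowns a11 through a32 (with a33 = 1), so the homography can be recovered directly from the four site vertices. A minimal NumPy sketch, using hypothetical corner coordinates rather than the paper's actual site measurements:

```python
import numpy as np

def estimate_homography(src_pts, dst_pts):
    """Solve Eq. (2) for a11..a32 (a33 = 1) from four point pairs (u,v) -> (x,y)."""
    A, b = [], []
    for (u, v), (x, y) in zip(src_pts, dst_pts):
        A.append([u, v, 1, 0, 0, 0, -u * x, -v * x]); b.append(x)
        A.append([0, 0, 0, u, v, 1, -u * y, -v * y]); b.append(y)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def project(H, u, v):
    """Apply the transform and dehomogenize: (x, y) = (x'/w', y'/w')."""
    xp, yp, wp = H @ np.array([u, v, 1.0])
    return xp / wp, yp / wp

# Hypothetical example: map the four image corners of the site to a
# rectangle whose aspect ratio matches the physical ratio p = l/d.
src = [(102, 80), (930, 95), (980, 700), (60, 690)]   # image pixels (assumed)
dst = [(0, 0), (300, 0), (300, 240), (0, 240)]        # rectified map pixels
H = estimate_homography(src, dst)
x, y = project(H, 102, 80)   # a source vertex lands on its map vertex
```

In practice OpenCV's cv2.getPerspectiveTransform computes the same matrix; the explicit linear system above simply mirrors the per-coordinate form of Eq. (2).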
Figure 1. Real-time global map

According to Fig. 1, the actual physical coordinates of 30 calibration points (60 cm apart) in the global map are measured with (0,0) as the calibration starting point. The errors in the X and Y directions are shown in Table I. After statistical calculation, the average error between the global positioning points and the actual measured values is 1.83 cm in the X direction and 2.26 cm in the Y direction. The calibration-point error is small, which meets the positioning requirements of the AGV in practical applications.

TABLE I. THE POSITIONING ERROR

Calibration (LX,LY) | Measured (lx,ly) | X-error (cm) | Y-error (cm)
(0,0)      | (0,0)      | 0     | 0
(60,0)     | (58,0)     | 2.15  | 0
(120,0)    | (117,0)    | 2.78  | 0
(180,0)    | (178,0)    | 1.89  | 0
(240,0)    | (237,0)    | 2.52  | 0
(0,60)     | (0,58)     | 0     | 2.14
(60,60)    | (58,58)    | 1.64  | 1.64
(120,60)   | (119,60)   | 0.75  | -0.34
(180,60)   | (182,60)   | -1.65 | 0.15
(240,60)   | (242,60)   | -1.52 | 1.15
(0,120)    | (0,117)    | 0     | 2.79
(60,120)   | (59,119)   | 1.13  | 1.30
(120,120)  | (121,121)  | -0.76 | -1.19
(180,120)  | (184,121)  | -3.68 | -0.69
(240,120)  | (244,119)  | -3.55 | 1.30
(0,180)    | (0,177)    | 0     | 2.95
(60,180)   | (58,181)   | 1.64  | -1.03
(120,180)  | (121,184)  | -1.27 | -3.53
(180,180)  | (184,183)  | -4.19 | -2.53
(240,180)  | (245,180)  | -5.08 | 0.45
(0,240)    | (0,238)    | 0     | 2.10
(60,240)   | (59,243)   | 0.63  | -2.88
(120,240)  | (121,245)  | -1.27 | -5.37
(180,240)  | (184,244)  | -4.19 | -4.37
(240,240)  | (245,241)  | -4.57 | -0.88
(0,300)    | (0,297)    | 0     | 2.75
(60,300)   | (59,303)   | 1.13  | -3.22
(120,300)  | (120,307)  | -0.25 | -6.71
(180,300)  | (182,305)  | -2.16 | -4.72
(240,300)  | (243,300)  | -2.54 | -0.23

B. Method of positioning and tracking

During the operation of the AGV, each frame of the recorded images needs to be identified and positioned. The detection and tracking of the mark are closely related, and the detection and tracking methods must meet high real-time requirements. Considering that occlusion and varying lighting in the global map may cause tracking loss, a target detection algorithm is used to obtain the search box of the AGV mark. This search box is used as an initialization window for local tracking based on the RGB model to achieve fully automatic AGV tracking. The three most commonly used detection methods are the inter-frame difference method, the optical flow method and the background difference method. The inter-frame difference method detects and extracts image differences between consecutive frames of the video sequence with relatively low computational complexity, meeting the real-time requirement. The local search box is used instead of global traversal, and the RGB recognition range is continuously updated during tracking to improve anti-interference.

The detailed algorithm implementation is as follows:

① Input the generated global map video sequence.

② The continuous inter-frame difference method [11-13] is used to obtain the area of the AGV contour from the frame differences before and after the movement. Record consecutive frames f(k-1)(x,y), f(k)(x,y), f(k+1)(x,y) and calculate the difference between adjacent frames:

d(k+1,k) = |f(k+1)(x,y) - f(k)(x,y)|
d(k,k-1) = |f(k)(x,y) - f(k-1)(x,y)|        (3)
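The adjacent-frame differences of Eq. (3) map directly onto array operations. A small NumPy sketch with synthetic frames, where the moving "mark" is a bright square (an assumption for illustration):

```python
import numpy as np

def frame_diffs(f_prev, f_cur, f_next):
    """Return d(k+1,k) and d(k,k-1) of Eq. (3) as absolute grayscale differences."""
    d_next = np.abs(f_next.astype(int) - f_cur.astype(int))
    d_prev = np.abs(f_cur.astype(int) - f_prev.astype(int))
    return d_next, d_prev

# Synthetic 20x20 frames: a 4x4 bright mark moving one pixel per frame.
frames = []
for k in range(3):
    f = np.zeros((20, 20), dtype=np.uint8)
    f[8:12, 5 + k:9 + k] = 255
    frames.append(f)

d_next, d_prev = frame_diffs(*frames)
# The differences are nonzero only along the leading and trailing edges
# of the moving mark, which is what isolates the AGV contour area.
```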

The sum and average of the differences are:

ΣD = d(k+1,k) + d(k,k-1)        (4)

avr = ΣD / 2        (5)

After setting the threshold, the detection result of the AGV area based on the inter-frame difference method is shown in Fig. 2(a). After the morphological operations of erosion and dilation, the results are shown in Fig. 2(b) and (c). Further, all pixels in Fig. 2(c) are scanned by rows and columns to obtain the initial search box formed by the white area with a pixel value of 255. The result is shown in Fig. 2(d).
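The thresholding of the averaged difference and the row/column scan for the initial search box can be sketched as follows (NumPy only; the morphological cleanup of Fig. 2(b) and (c) is omitted, and the threshold value is an assumption):

```python
import numpy as np

def initial_search_box(d_next, d_prev, thresh=50):
    """Average the two differences (Eqs. (4)-(5)), binarize to a 0/255 mask,
    and scan rows and columns of the white area for the initial search box."""
    avr = (d_next.astype(int) + d_prev.astype(int)) / 2   # avr = sum(D) / 2
    mask = np.where(avr > thresh, 255, 0).astype(np.uint8)
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None                                  # no motion detected
    return xs.min(), ys.min(), xs.max(), ys.max()    # (left, top, right, bottom)

# Illustrative difference image: a changed region at rows 8-11, columns 5-10.
d = np.zeros((20, 20), dtype=np.uint8)
d[8:12, 5:11] = 255
box = initial_search_box(d, d)   # -> (5, 8, 10, 11)
```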

Figure 3. Real-time location tracking renderings
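Steps ③ to ⑥ of the tracking loop described in this section (color matching inside the search box, then the centroid of Eq. (6)) can be sketched roughly as below. The color range is an illustrative assumption, and the adjacency clustering by radius RA is simplified to a single color block:

```python
import numpy as np

def track_mark(frame, box, rgb_lo, rgb_hi):
    """Inside the search box, collect pixels matching the RGB range and
    return their centroid P(x, y) per Eq. (6), or None if nothing matches."""
    left, top, right, bottom = box
    roi = frame[top:bottom + 1, left:right + 1]
    match = np.all((roi >= rgb_lo) & (roi <= rgb_hi), axis=2)
    ys, xs = np.nonzero(match)
    if len(xs) == 0:
        return None      # tracking lost: fall back to re-detection (step ②)
    return left + xs.mean(), top + ys.mean()

# Illustrative frame: a red 3x3 mark on black inside a 10x10 RGB image.
frame = np.zeros((10, 10, 3), dtype=np.uint8)
frame[4:7, 4:7] = (200, 30, 30)
P = track_mark(frame, (2, 2, 9, 9), np.array([150, 0, 0]), np.array([255, 80, 80]))
# P is the mark center; the next search box and color range are re-centered on it.
```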

Figure 2. (a) Inter-frame difference method; (b) and (c) processing results based on morphology; (d) initial search box

③ The rectangle parameters are used as the initial search box of the subsequent target tracking algorithm [14]. Set the initial wide RGB range of the mark, Rmin~Rmax, Gmin~Gmax, Bmin~Bmax, and the adjustable radius RA of the mark.

④ Traverse the RGB values of the entire search box and record the first feature point CP(x,y) that matches the color range. If the distance between the next feature point and the previous feature point is less than RA, the point is added to the same mark color block; otherwise, it is added to another mark color block.

⑤ Calculate the coordinates P(x,y) of the center point of the mark, which are the real-time positioning coordinates of the AGV:

P(x,y) = (CP1(x,y) + CP2(x,y) + ... + CPn(x,y)) / n        (6)

⑥ With P(x,y) as the center, output a search box larger than the radius RA and a new color range centered on the RGB value of P(x,y), then return to ④ to locate the next frame; if no center point coordinate is found, return to ② to reacquire the AGV motion area.

The positioning recognition time of the AGV mark is shown in Table II, and the positioning effect is shown in Fig. 3.

TABLE II. LOCATION RECOGNITION TIME

Frame    | 20      | 40      | 60      | 80
Run time | 0.048s  | 0.0512s | 0.0491s | 0.0731s
Frame    | 100     | 120     | 140     | 160
Run time | 0.0501s | 0.0632s | 0.0481s | 0.0491s

III. AGV GUIDANCE CONTROL

The polyline path of the trackless map is formed by using the pixel coordinate point set P1, P2, ..., Pn as the running end points in turn.

A. AGV kinematics model

The system uses an AGV with three Mecanum wheels [15], and the kinematic model is shown in Fig. 4. In this model, the current real-time positioning coordinates are C(X,Y), the target point coordinates are Target(tx,ty), the AGV coordinates from the previous moment are O(x,y), Vc is the vehicle running direction, and Target is the target direction.

Figure 4. AGV kinematic model diagram

In this way, the distance d between the AGV and the target point and the angular deviation θ between the current running direction and the target direction can be calculated as:

d = √((tx - X)² + (ty - Y)²)        (7)

θ = tan⁻¹((ty - Y)/(tx - X)) - tan⁻¹((Y - y)/(X - x))        (8)

By calculating the range of θ, the operating state of the AGV can be determined as:

θ > RA and θ < 360° - RA : rotation state
θ < RA or θ > 360° - RA : differential control
θ unknown : go straight

According to this analysis of the AGV kinematics model, Fig. 5 shows the flow chart of the AGV operation control.

Figure 5. Flow chart of AGV operation control

B. AGV control method

1) Dynamic rotation

In order to reduce the turning radius and the line-finding range, the rotation speed and time are controlled by the angular deviation θ. Here, Vx, Vy and Vz are the speeds of the three Mecanum wheels of the AGV. It is defined that when Vx and Vy have the same value with opposite directions, their resultant gives the running direction of the AGV. The uniform forward speed is denoted as Vc, and when the speed and direction of the three wheels are the same, rotation in place can be achieved.

The relationship between the angular deviation Δθ and the speed is given as:

Δθ = (V/R)·ΔT = ((Vx + Vy + Vz)/(3R))·ΔT        (9)

After a straight-going mode, Vx and Vy take the value of Vc; then the AGV enters the rotation mode with Vz = Vc. When the differential control enters the rotation mode, Δθ is relatively small, so a smaller speed value is taken as the rotation speed.

2) Differential control

Due to the influence of factors such as friction and accumulated positioning error, differential control is used to dynamically adjust the state of linear travel. Two wheels are used for the differential control. The center of motion is not fixed, so only the clockwise and counterclockwise angle adjustments are calculated:

θ < RA : Vx > Vy : clockwise deflection
θ = 0 : Vx = Vy : go straight
θ > 360° - RA : Vx < Vy : counterclockwise deflection

When the AGV enters the hunt mode, it recognizes whether the included angle belongs to the positive or negative direction, and the left wheel speed Vx and the right wheel speed Vy are increased or decreased accordingly. To avoid a large speed difference between Vx and Vy causing a looped S-curve path, the Euclidean distances d and dr at the current time and the previous time are calculated respectively. When d = dr - Vc·t, the AGV moves in a straight line. By setting the maximum value Vcmax of the two-wheel speed and the maximum value Vmax of a single wheel, the speed and direction of the AGV can be limited.

IV. SYSTEM DESIGN AND TEST

The basic parameters of the experimental platform, a three-wheel omni-directional AGV, are shown in Table III:

TABLE III. AGV BASIC PARAMETERS

Basic parameter | Corresponding value
Weight | 5 kg
Size | 30 cm × 30 cm × 15 cm
Vmax | 10 cm/s
Vcmax | 12 cm/s

A polygonal path is used to test the tracking performance of the AGV on straight segments with turns of different angles. In the generated global map, the test path is P1, P2, ..., Pn. The preset path and running trajectory are shown in Fig. 6, the global positioning error is shown in Table IV, and the global path deviations are shown in Fig. 7 and Fig. 8 respectively.

Figure 6. The path of a random preset path
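The guidance quantities of Section III, the distance and angular deviation of Eqs. (7) and (8) and the rotation relation of Eq. (9), can be sketched as follows. atan2 replaces the paper's arctan to keep the correct quadrant, and the numeric inputs are illustrative assumptions:

```python
import math

def guidance(C, O, target):
    """Distance d (Eq. (7)) and angular deviation theta (Eq. (8)) in degrees,
    normalized to [0, 360)."""
    X, Y = C
    x, y = O
    tx, ty = target
    d = math.hypot(tx - X, ty - Y)
    theta = math.degrees(math.atan2(ty - Y, tx - X)
                         - math.atan2(Y - y, X - x)) % 360.0
    return d, theta

def rotation_time(d_theta_deg, vx, vy, vz, R):
    """Invert Eq. (9): time needed to rotate in place by d_theta when the
    three wheel speeds are Vx, Vy, Vz and the wheel-center radius is R."""
    omega = (vx + vy + vz) / (3.0 * R)        # angular speed, rad/s
    return math.radians(d_theta_deg) / omega

# AGV at the origin heading along +x (previous position (-10, 0)),
# target at (30, 40): d = 50, bearing about 53.13 degrees.
d, theta = guidance(C=(0.0, 0.0), O=(-10.0, 0.0), target=(30.0, 40.0))
t = rotation_time(90.0, 5.0, 5.0, 5.0, R=10.0)
```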

TABLE IV. GLOBAL PATH-POINT POSITIONING ERROR

Target point | (px,py) | (LX,LY) | (lx,ly) | X-error (cm) | Y-error (cm)
P1 | (656,657) | (169,31)  | (165,26)  | 4.90  | 5.53
P2 | (754,507) | (216,101) | (212,97)  | 4.68  | 4.11
P3 | (829,324) | (252,185) | (247,182) | 5.47  | 3.99
P4 | (729,175) | (204,255) | (200,249) | 4.74  | 6.09
P5 | (480,134) | (85,274)  | (85,268)  | 0.90  | 6.11
P6 | (343,304) | (20,195)  | (23,192)  | -2.47 | 3.26
P7 | (365,499) | (31,104)  | (38,101)  | -6.97 | 3.82

Table IV records the positioning pixel values (px, py), the corresponding physical coordinates (lx, ly) and the actual AGV physical coordinates (LX, LY) of the target endpoints P1-P7. The average errors in the X and Y directions are 4.31 cm and 4.70 cm respectively, including the effect of the AGV's height on the pixel error.

Figure 7. X-deviation of the global path

Figure 8. Y-deviation of the global path

In this experiment, 40 randomly selected path marking points are compared with the actual running trajectory of the AGV in the X and Y directions. The maximum errors are 10.65 cm and 8.97 cm respectively, the minimum error is 0 cm, and the average errors are 3.61 cm and 2.90 cm respectively. Because the trackless map only considers the destination, there can be a large deviation between the AGV and the destination.

According to the above experimental results, with the inter-frame difference method and RGB-based local tracking, the response time of positioning recognition is about 0.05 s, satisfying the real-time requirements of the guidance process. The positioning accuracy and path deviation of the AGV are within 5 cm, meeting the accuracy requirements of basic application scenarios.

V. CONCLUSION

In this work, a trackless AGV system based on global view was developed by recording a global map for AGV positioning and guidance control. The system supports adjustable running routes, improving on the performance of traditional tracked AGVs. Vision-guided positioning has the advantages of low cost, easy development and strong scalability, and the local tracking method can also greatly improve the accuracy and anti-interference of positioning recognition. The real-time problem in visual guidance systems can be effectively solved, and the positioning accuracy of the AGV can be improved.

REFERENCES

[1] Chun C. Lai, Kuo L. Su, "Development of an intelligent mobile robot localization system using Kinect RGB-D mapping and neural network," Computers and Electrical Engineering, 2018, 67.
[2] Chen Q, "Research on AGV vision guidance algorithm," Jiangnan University, 2019.
[3] Zheng S H, "Research on AGV positioning and path planning technology for visual navigation," South China University of Technology, 2016.
[4] Li Y H, Zhu S Q, Yu Y Q, "Improved visual SLAM algorithm in factory environment," Robot, 2019, 41(01): 95-103.
[5] Wu X, Zhang Y, Li L H, Lou P H, He Z, "Path extraction method of vision-guided AGV under complex illumination conditions," Journal of Agricultural Machinery, 2017, 48(10): 15-24.
[6] Lee S, Tewolde G S, Lim J, Kwon J, "Vision based localization for multiple mobile robots using low-cost vision sensor," 2015 IEEE International Conference on Electro/Information Technology (EIT), IEEE, 2015.
[7] Xu H, "Design of target detection and tracking system based on global field of vision," Harbin Institute of Technology, 2018.
[8] He J, "Positioning and distributed control technology of indoor multi-robot system," Nanjing University of Science and Technology, 2017.
[9] Li P, Zhang Y Y, "Global localization for indoor mobile robot based on binocular vision," Progress in Laser and Optoelectronics, 2020, 57(04): 254-261.
[10] Luo Z, Liu H P, Hu X F, Xu W, "Research of vision-guided AGV deviation-rectifying algorithm," Computer Simulation, 2016, 33(01): 373-377.
[11] Xu Y L, Zuo J Z, Zhang X, Bi D Y, "A video segmentation algorithm based on accumulated frame differences," Optoelectronic Engineering, 2004(07): 69-72.
[12] Zuo F Y, Gao S F, Han J Y, "Moving object detection and tracking based on weighted accumulative difference," Computer Engineering, 2009, 35(22): 159-161.
[13] Qu J J, Xin Y H, "Combined continuous frame difference with background difference method for moving object detection," Acta Photonica Sinica, 2014, 43(07): 219-226.
[14] Zhao W Q, Kuang X J, Li M F, "Moving object tracking based on improved Camshift algorithm," Information Technology, 2012, 36(07): 165-169.
[15] Wang X S, "Introduction of Mecanum-wheels based omni-directional mobile robots with applications," Machinery Manufacturing and Automation, 2014, 43(03): 1-6.

