
Research on Collaborative Object Detection and Recognition of Autonomous Underwater Vehicle Based on YOLO Algorithm

TANG Leisheng1, XU Hongli2, WU Han3, TAN Dongxu4, GAO Lei5
1. Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China; Shenyang Jianzhu University, Faculty of Information & Control Engineering, Shenyang 110168, China
E-mail: tangleisheng@sia.cn
2. Northeastern University, Shenyang 110000, China
E-mail: xuhongli@mail.neu.edu.cn
3. Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China
E-mail: wulk1n@sia.cn
4. Shenyang Ligong University, School of Automation and Electrical Engineering, Shenyang 110000, China
E-mail: tandongxu@sia.cn
5. Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China
E-mail: gaolei@sia.cn
Abstract: The autonomous underwater vehicle (AUV) is an important tool for humans to explore and research marine resources. Due to the influence of the underwater environment, rapid detection and recognition of an AUV, and hence recognition of cooperative objects, cannot be realized directly. To solve this problem, underwater light vision and underwater image enhancement technology are proposed to realize rapid detection and recognition of underwater vehicles, so as to realize the recognition of cooperative objects. In this paper, the YOLO algorithm, which performs well in both target detection precision and recognition speed, is used to recognize the underwater vehicle. The original data set is enhanced with histogram equalization and the CLAHE algorithm; the resulting data sets are loaded into the same training model and trained on the YOLOv2 and YOLOv3 networks respectively. Analysis of the experimental results shows that underwater image enhancement based on the YOLOv3 network and the CLAHE algorithm can meet the requirements of rapid recognition for underwater vehicle detection and recognition.
Key Words: Underwater Vehicle, YOLO, Neural Network, Real-time Detection, Target Recognition

2021 33rd Chinese Control and Decision Conference (CCDC) | 978-1-6654-4089-9/21/$31.00 ©2021 IEEE | DOI: 10.1109/CCDC52312.2021.9601610

1 INTRODUCTION
Since Autonomous Underwater Vehicles (AUVs) can adapt to the harsh ocean environment, they can replace human beings in completing a variety of underwater tasks, so they have gradually become an important tool for humans to explore marine resources. AUVs have been widely used in the fields of marine environment observation, marine resources investigation, and marine security and defense [1][2]. Computer vision is widely used in the tasks of underwater vehicles: it can assist the vehicle in completing various tasks, such as underwater map drawing, 3D scene reconstruction, underwater target recognition and tracking, and underwater pipeline inspection and positioning [3].
When light travels under water, it is scattered by particles and impurities in the water and absorbed by the water medium, resulting in strong attenuation [4-6]. As a result, the acquired underwater images cannot directly meet the needs of human observers, which to some extent affects the accuracy and efficiency of target recognition [7]. Moreover, the underwater vehicle blends into the underwater environment, making it difficult to distinguish the vehicle from its surroundings. Due to the influence of the environment, the underwater vehicle cannot be detected and recognized in real time, which leads to the problem that the cooperative object cannot be identified. In this paper, the recognition and detection of an underwater vehicle in the underwater environment are realized by using machine vision and image enhancement technology.
1.1 Experimental Platform - Underwater Vehicle
Shkurti et al. [8] and Karim Koreitem et al. [9] from McGill University in Canada designed the Aqua small underwater vehicle and proposed a visual recognition and tracking system for underwater vehicles, in which the rear vehicle can recognize and track the front vehicle through vision. According to the experimental requirements, this paper designs a small underwater vehicle, the "Dolphin I" AUV, based on real-time computer vision. This vehicle has the functions of fixed-depth navigation, directional navigation, and image acquisition and processing. It can realize autonomous and remote-controlled movement under water.
The AUV structural design of Dolphin I is shown in Fig 1. An OpenMV camera and a Sony FDR-X3000 main camera are installed on it, as well as an image processing board integrated with a GPU (Graphics Processing Unit) computing module (NVIDIA Jetson TX2). Three tasks can be performed on this platform: 1) forward environment monitoring; 2) target identification; 3) positioning.

This work is supported by the Independent Project of the State Key Laboratory of Robotics, 2016-Z08V.

978-1-6654-4089-9/21/$31.00 ©2021 IEEE 1664

Fig 1. Overall structure of the "Dolphin I" AUV (showing the NVIDIA Jetson TX2, Sony FDR-X3000 camera, depth gauge, vertical thruster and counter weight)
The Dolphin I AUV (shown in Fig 2) measures 520 mm by 430 mm by 160 mm, weighs 13 kg, and uses two horizontal and one vertical thrusters. It has six degrees of freedom and operates at a depth of 30 meters with a maximum speed of 2.5 knots.
Fig 2. Underwater motion diagram of the "Dolphin I" AUV
2 Relevant Work
According to the process of underwater target recognition and detection, the key technologies of optical-image-based underwater target recognition and detection can be divided into two parts: underwater image preprocessing and underwater target recognition and detection.
2.1 Image Preprocessing
Scholars at home and abroad have done in-depth research on underwater image preprocessing technology. Because of light absorption and scattering in water, the underwater optical image presents a blue-green color. Moreover, due to light scattering from impurity particles in the water, the underwater optical image appears blurred and atomized. In order to see the underwater target of interest more clearly, it is necessary to enhance the underwater image, so as to improve the accuracy of underwater target recognition and detection. Color constancy theory (Retinex) was proposed by Land [10] and gradually developed into MSR, MSRCR and other image enhancement methods. Carlevaris-Bianco et al. [11] used attenuation differences between the RGB color channels to eliminate image blurring. Histogram equalization and the Contrast Limited Adaptive Histogram Equalization (CLAHE) algorithm [12] can better improve the contrast of underwater images and make the underwater targets of interest more clearly visible. In this paper, histogram equalization and the CLAHE algorithm are used to enhance the collected underwater images.
1) Histogram Equalization
The process of image enhancement by histogram equalization is as follows: (1) Determine the gray levels of the image and convert the color image into a gray-level image with gray levels 0-255. (2) Compute the probability of the original histogram, i.e. the proportion of each gray level among the total pixels of the original image, denoted p_i. (3) Compute the cumulative probability histogram S(i), which reaches 1 at the last gray level:
s_k = T(r_k) = Σ_{j=0}^{k} P_r(r_j) = Σ_{j=0}^{k} n_j / n
(4) Calculate the pixel mapping according to the formula
SS(i) = int((max(pix) − min(pix)) × S(i) + 0.5)
where pix refers to the grayscale values; that is, SS(i) is (maximum grayscale − minimum grayscale) × cumulative probability, rounded by adding 0.5.
The histogram-equalization enhancement was carried out on the collected image data, and the effect is shown in Fig 3. By observing the histogram changes, it can be seen that the gray levels in the original image are concentrated in the middle, with basically no dark or bright pixels. After histogram equalization, the pixel distribution is more uniform, and the darker and brighter pixels become prominent. The image is brighter as a whole, but the features of interest in the image stand out poorly.
2) CLAHE Algorithm
The implementation process of the CLAHE algorithm for image enhancement is as follows: (1) Divide the image into several N×N small blocks. (2) Clip the gray histogram of each small block at the clipping value ClipLimit, find the portion above this value and the total excess, and redistribute the excess to complete the balancing. (3) Perform linear interpolation between blocks, traversing and operating on each image block to obtain the transformed value of each pixel. (4) Perform a layer color-filter mixing operation with the original image to get the enhanced color image.
As shown in Figure 3, the collected image data was also enhanced by the CLAHE algorithm. By observing the histogram changes, it can be seen that CLAHE moves the pixel distribution to the left, and the image is dark on the whole. However, it has an obvious defogging effect on the underwater image, and the features of interest in the image are more obvious.
2.2 Target Recognition and Detection
To realize vision-based recognition of the cooperative object of the underwater vehicle, the first step is to complete the target detection and recognition of the underwater


vehicle. At present, convolutional neural networks are mainly used in the field of target detection. The more mature networks mainly include:
(a) Original image; (b) histogram of the original image; (c) image after histogram equalization; (d) histogram after histogram equalization; (e) image after the CLAHE algorithm; (f) histogram after the CLAHE algorithm
Fig 3. Underwater image enhancement effects of histogram equalization and the CLAHE algorithm
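The histogram-equalization mapping compared in Fig 3 can be sketched in a few lines of NumPy. This is an illustrative sketch of steps (1)-(4) from Section 2.1, not the authors' code:

```python
import numpy as np

def equalize_hist(gray: np.ndarray) -> np.ndarray:
    """Histogram equalization following the paper's four steps."""
    # Steps (1)-(2): probability p_i of each of the 0-255 gray levels
    hist = np.bincount(gray.ravel(), minlength=256)
    p = hist / gray.size
    # Step (3): cumulative probability S(i); its last entry is 1
    s = np.cumsum(p)
    # Step (4): SS(i) = int((max(pix) - min(pix)) * S(i) + 0.5)
    lo, hi = int(gray.min()), int(gray.max())
    mapping = ((hi - lo) * s + 0.5).astype(np.uint8)
    return mapping[gray]

# A low-contrast image with gray levels concentrated in the middle,
# as described for the original underwater images
img = np.arange(100, 164, dtype=np.uint8).reshape(8, 8)
out = equalize_hist(img)
print(img.min(), img.max(), out.min(), out.max())  # → 100 163 1 63
```

Note that the SS(i) formula as printed maps into the range [0, max(pix) − min(pix)]; the more common variant scales by 255 to span the full gray range. The CLAHE procedure differs in step (2) only: each block's histogram is clipped at ClipLimit and the excess is redistributed before the same cumulative mapping is applied per block.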

The R-CNN network proposed by Girshick et al. [13]; the Fast R-CNN network model proposed by Girshick [14]; the Faster R-CNN model proposed by Ren et al. [15]. Redmon J et al. [16] proposed the YOLO model, an efficient target detection model: instead of a separate candidate-region generation stage, the network simultaneously detects and predicts multiple categories of targets and can be trained directly end-to-end. Liu W et al. [17] proposed the SSD network model.
In the underwater environment, due to the influence of the environment, it is impossible to detect and recognize the underwater vehicle in real time, which leads to the problem that the cooperative object cannot be identified. This paper proposes to use machine vision, image enhancement technology and the YOLO algorithm to realize the recognition and detection of the underwater vehicle in the underwater environment. The YOLO network was mainly used to perform the experiments: the recognition effects of YOLOv2 [18] and YOLOv3 [19] were compared on the original image data set, the histogram-equalized data set and the data set processed by the CLAHE algorithm, and the experimental results were analyzed. The flow chart of YOLO target detection is shown in Fig 4.
Fig 4. Flow chart of YOLO target detection (class probability map)
YOLOv3 uses feature fusion and multi-scale detection methods to effectively improve the accuracy and speed of target detection. The YOLOv3 network structure has evolv-

ed from YOLOv2's Darknet-19 to Darknet-53. The YOLOv3 network draws on the residual network: each residual component has two convolution layers and a shortcut link. Darknet-53 (2 + 1*2 + 1 + 2*2 + 1 + 8*2 + 1 + 8*2 + 1 + 4*2 + 1 = 53) was adopted in the network structure adjustment. The last connected layer is the fully connected layer, which is also counted as a convolutional layer, giving 53 layers in total. The network structure of the algorithm is shown in Fig 5.
Fig 5. Network structure of the YOLO algorithm
The YOLOv3 algorithm unifies candidate box extraction, feature extraction, target classification and other functions in one deep neural network. In YOLO, the input image is first divided into S×S grids of equal size, and each grid is called a grid cell. Subsequent detection processes are related to the grid information. The prediction box of the target object is given based on the center point of the grid, and the size of the prediction box can be customized. Each grid predicts B bounding boxes; each bounding box has four coordinates and one confidence, and the final prediction result is S×S×(B*5+C) vectors. YOLOv3 sets 3 candidate boxes for each grid cell. Each candidate box has 5 basic parameters (x, y, w, h, confidence), where (x, y) is the offset of the center of the bounding box relative to the cell, and (w, h) is the size of the bounding box relative to the whole photo. All parameters are normalized. The confidence, which reflects whether the box includes an object and the accuracy of the position when an object is included, is defined as:
Pr(Object) × IOU^truth_pred, Pr(Object) ∈ [0, 1]
where Pr(Object) represents the probability that the detection bounding box contains the target object, and IOU^truth_pred is the overlap degree of the two bounding boxes. Finally, the class probability C_i = P(Class_i | Object) of each grid prediction is calculated.
The loss function of YOLOv3 was modified on the basis of YOLOv2. YOLOv3 eliminated Softmax and replaced it with logistic regression, replacing the classification loss with binary cross entropy. The smaller the cross entropy, the closer the two probability distributions are:
H(p, q) = −Σ_i p(x_i) log q(x_i)
where p is the distribution of correct answers, q is the distribution of predictions, and the logarithm is base e.
The loss function of YOLOv3 is:
Loss = λ_coord Σ_{i=0}^{S²} Σ_{j=0}^{B} 1_{ij}^{obj} [(x_i − x̂_i)² + (y_i − ŷ_i)²]
       − Σ_{i=0}^{S²} Σ_{j=0}^{B} 1_{ij}^{obj} [ĉ_i log(c_i) + (1 − ĉ_i) log(1 − c_i)]
       − λ_noobj Σ_{i=0}^{S²} Σ_{j=0}^{B} 1_{ij}^{noobj} [ĉ_i log(c_i) + (1 − ĉ_i) log(1 − c_i)]
       − Σ_{i=0}^{S²} 1_{i}^{obj} Σ_{c∈classes} [p̂_i log(p_i) + (1 − p̂_i) log(1 − p_i)]
where 1_i^{obj} means that a target appears in the i-th grid cell, and 1_{ij}^{obj} indicates the j-th bounding box predictor in the i-th grid cell.
2.3 Weight Training of the Neural Network
In the experiment, a Garmin VIRB XE underwater motion camera was used to shoot the underwater motion of the AUV, and 2274 effective motion photos of the AUV were extracted. LabelImg was used to calibrate the data sets for training and testing; the training sets consisted of the original training set, the training set processed by histogram equalization, and the training set processed by the CLAHE algorithm. Sample images of the vehicle's underwater motion are shown in Fig 6. Table 1 shows the numbers of training and test samples.
Table 1. Calibration of the data set
Label | Training | Testing
robot | 1800 | 474
The computer used for training and testing has a Core i7 processor, an NVIDIA GeForce GTX 1660 Ti, and CUDA 10.1.
In this paper, a mini-batch gradient descent method is used to update the gradient during training. To accelerate network training, the batch is set to 16 and the subdivisions to 4, with a learning rate of 0.001. During training, the learning rate decays by a factor of 10 after 1000 and 8000 iterations. The training weights are saved every 1000 iterations, and the weight file from the end of training is used to test the network training results.
3 Analysis of Experimental Results
The YOLO algorithm is used to complete the detection and recognition of the underwater vehicle, and the change curve of the loss function is drawn; the final expectation is 0. The loss change curve of the YOLOv2 algorithm is shown in Figure 7. Because of the large loss values in the first 700 iterations, the loss curve is drawn from 700 iterations onward. From the curve it can be seen that after 1000 training iterations the loss value dropped to 2.0; after 2000 iterations it dropped to 1.0; and after 7000 iterations it basically stabilized at 0.3, no longer declining.
The loss change curve of the YOLOv3 algorithm is shown in Figure 8, also plotted from 700 iterations. When the iteration reaches 1000, the loss value drops to 0.2; after 2000 iterations, the loss value decreases to 0.05; and after 7000 iterations, the loss is basically stable at 0.01.
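The S×S×(B*5+C) prediction layout described in Section 2.2 can be made concrete with a short decoding sketch. This is an illustration only: the grid size S = 13, B = 3 boxes per cell, and C = 1 class ("robot", as in Table 1) are assumed values, and random numbers stand in for real network outputs:

```python
import numpy as np

S, B, C = 13, 3, 1            # grid size, boxes per cell, number of classes
rng = np.random.default_rng(0)

# One S x S x (B*5 + C) prediction tensor: per cell, B boxes of
# (x, y, w, h, confidence) plus C class probabilities P(Class_i | Object).
pred = rng.random((S, S, B * 5 + C))

boxes = pred[..., :B * 5].reshape(S, S, B, 5)   # (x, y, w, h, conf) per box
class_probs = pred[..., B * 5:]                 # shared per-cell class terms

# Class-specific score = confidence (the Pr(Object) x IOU term)
# multiplied by the conditional class probability.
conf = boxes[..., 4]
scores = conf[..., None] * class_probs[:, :, None, :]

assert pred.size == S * S * (B * 5 + C)         # 13*13*16 = 2704 values
print(scores.shape)                             # → (13, 13, 3, 1)
```

Each of the S·S cells thus carries B (x, y, w, h, confidence) tuples plus the class probabilities, which matches the S×S×(B*5+C) vector count given in the text.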
Fig 7. Loss curve of the YOLOv2 algorithm
Fig 8. Loss change curve of the YOLOv3 algorithm
The IOU value represents the ratio of the intersection and union between the candidate box and the real marked border, and the IOU value finally approaches 1. The Avg IOU change curve of the YOLOv2 algorithm is shown in Figure 9. It can be seen from the curve that the IOU value keeps rising as the number of iterations increases. When the iterations reach 10,000, the IOU value is basically close to 0.9, and when the iterations reach 15,000, the IOU value is basically stable at 0.9.
Fig 9. Avg IOU variation curve of the YOLOv2 algorithm
The Avg IOU change curve of the YOLOv3 algorithm is shown in Figure 10. The IOU value keeps rising as the number of iterations increases; when the iteration reaches 5000, the IOU value is basically close to 1 and basically stable.
After the completion of YOLO network training, the final training weights were used to verify the test set. The YOLOv2 weights were applied first, and the test results are shown in Fig 11. According to the verification results, the YOLOv2 algorithm has a poor recognition effect on a vehicle that is far from the fixed camera and relatively blurred; it is basically unable to detect and identify such a vehicle.
Fig 10. Avg IOU variation curve of the YOLOv3 algorithm
The test set was then validated with the YOLOv3 weights, and the test results are shown in Fig 12. According to the test results, the YOLOv3 algorithm has a better recognition effect on vehicles that are distant and relatively blurred, and achieves the expected result.
Fig 11. YOLOv2 algorithm test recognition result diagram
Fig 12. YOLOv3 algorithm recognition result diagram
By observation of the recognition results, it can be seen that the YOLOv3 algorithm has a high recognition accuracy for the AUV, while the YOLOv2 algorithm misses detections in images where the target is distant and its pixels are blurred. The recognition results of YOLOv2, histogram equalization+YOLOv2, CLAHE+YOLOv2, YOLOv3, histogram equalization+YOLOv3, and CLAHE+YOLOv3 on the test set were counted, and the accuracy, recall rate and


average accuracy of the statistical results were observed, as shown in Table 2.
Table 2. Statistical table of identification results
Detection Algorithm | Accuracy Rate/% | Recall Rate/% | Average Accuracy/%
YOLOv2 | 86.49 | 84.35 | 82.12
Histogram Equalization + YOLOv2 | 86.86 | 85.42 | 84.35
CLAHE + YOLOv2 | 87.21 | 86.75 | 85.55
YOLOv3 | 93.32 | 92.16 | 92.76
Histogram Equalization + YOLOv3 | 94.23 | 93.84 | 93.33
CLAHE + YOLOv3 | 95.53 | 95.74 | 94.68
From the data in the table above, it can be seen that the CLAHE+YOLOv3 algorithm has high detection accuracy and an excellent recognition effect for the AUV, and the average detection speed reaches 0.032 s, achieving the expected real-time recognition effect.
4 Conclusion
This paper uses machine vision and underwater image enhancement technology, based on the YOLOv2 and YOLOv3 algorithms, to realize the detection and recognition of an underwater vehicle. The accuracies of YOLOv2, histogram equalization+YOLOv2 and CLAHE+YOLOv2 were 86.49%, 86.86% and 87.21% respectively. However, the accuracy of the YOLOv3 algorithm reached 93.32%, histogram equalization+YOLOv3 reached 94.23%, and CLAHE+YOLOv3 reached 95.53%. Thus, with the CLAHE algorithm used for underwater image enhancement, the YOLOv3 algorithm can realize the detection and recognition of the AUV in the weak-light underwater environment and achieve the desired effect. The next step will be to use the CLAHE+YOLOv3 algorithm to realize detection and recognition of the front moving vehicle by the rear vehicle during the dynamic movement of two underwater vehicles, laying a foundation for future research on vision-based cooperative operation and clustering of underwater vehicles.
REFERENCES
[1] HUANG Yan, LI Yan, YU Jiancheng, LI Shuo, FENG Xisheng. State-of-the-Art and Development Trends of AUV Intelligence [J]. ROBOT, 2020, 42(02): 215-231.
[2] FENG Xisheng, LI Yiping, XU Hongli. The Next Generation of Unmanned Marine Vehicles: Dedicated to the 50th Anniversary of the Human World Record Diving of 10912 m [J]. ROBOT, 2011, 33(01): 113-118.
[3] LU H M, LI Y J, ZHANG Y D, et al. Underwater optical image processing: a comprehensive review [J]. Mobile Networks and Applications, 2017: 1-8. DOI: 10.1007/s11036-017-0863-4.
[4] ZHAO Xinwei. The Research on Underwater Imaging, Underwater Image Enhancement and Relevant Applications [D]. Hangzhou: Zhejiang University, 2015.
[5] GUO Q W, XUE L L, TANG R C, et al. Underwater image enhancement based on the dark channel prior and attenuation compensation [J]. Journal of Ocean University of China, 2017, 16(5): 757-765.
[6] LIN M X, DAI C G, DONG X, et al. Survey of Underwater Image Processing Technology [J]. Measurement & Control Technology, 2020, 39(8): 7-20.
[7] HOU Guojia. Research on Underwater Image Enhancement and Object Recognition Algorithms [D]. Ocean University of China, 2015.
[8] Shkurti F, Chang W D, Henderson P, et al. Underwater multi-robot convoying using visual tracking by detection [C]. 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE: 4189-4196.
[9] Koreitem K, Li J, et al. Synthetically Trained 3D Visual Tracker of Underwater Vehicles [C]. OCEANS 2018 MTS/IEEE Charleston, 2019. DOI: 10.1109/OCEANS.2018.8604597.
[10] Land E H, McCann J J. Lightness and retinex theory [J]. JOSA, 1971, 61(1): 1-11.
[11] Carlevaris-Bianco N, Mohan A, Eustice R M. Initial Results in Underwater Single Image Dehazing [C]. OCEANS 2010, Seattle, WA, USA.
[12] Reza A M. Realization of the contrast limited adaptive histogram equalization (CLAHE) for real-time image enhancement [J]. The Journal of VLSI Signal Processing Systems for Signal, Image, and Video Technology, 2004, 38(1): 35-44.
[13] Girshick R, Donahue J, Darrell T, et al. Region-based convolutional networks for accurate object detection and segmentation [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016, 38(1): 142-158.
[14] Girshick R. Fast R-CNN [C]. Proceedings of the IEEE International Conference on Computer Vision, 2015: 1440-1448.
[15] Ren S, He K, Girshick R, et al. Faster R-CNN: Towards real-time object detection with region proposal networks [C]. Advances in Neural Information Processing Systems, 2015.
[16] Redmon J, Divvala S, Girshick R, et al. You only look once: Unified, real-time object detection [C]. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016: 779-788.
[17] Liu W, Anguelov D, Erhan D, et al. SSD: Single shot multibox detector [C]. European Conference on Computer Vision. Springer, Cham, 2016: 21-37.
[18] Redmon J, Farhadi A. YOLO9000: Better, faster, stronger [C]. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017: 7263-7271.
[19] Redmon J, Farhadi A. YOLOv3: An Incremental Improvement. https://arxiv.org/abs/1804.02767.

