Detection of Flood Damage on Roads using Deep Learning and Elevation Data

Jun Sakamoto1*
1 Kochi University

Abstract. Identifying an inundation area after a flood event is essential for planning emergency rescue operations. Aerial photographs can provide a broader range of damage information from a smaller sample than land photographs. Computer vision (CV) has been widely used to investigate disaster extent. In this study, we propose a method to automatically determine road segments inundated by floods using image recognition technology, a deep learning model, and elevation data. The algorithm trained in this study is You Only Look Once version 3 (YOLOv3). First, we develop a training model using aerial photographs captured during a flood event. Then, the model is applied to aerial photographs captured during another flood event. The model visualizes the inundation status of roads on a 100-m mesh-by-mesh basis using aerial photographs and integrates information on whether each mesh includes targeted road segments. Our results show that the F-score was higher, 89%–91%, when we targeted only road segments at an elevation of 15 m or less. Moreover, visualization in GIS facilitated the classification of inundated roads, even within the same 100-m mesh, which is a relevant finding that complements deep learning object detection.

Keywords. Deep learning, YOLOv3, Disrupted Section, Aerial photograph, GIS, Fundamental Geospatial Data
* Corresponding author. E-mail address: jsak@kochi-u.ac.jp
1 Introduction

In the initial response to a heavy rainfall disaster, the extent of flooding is essential information for planning rescue operations. Aerial photographs can provide a broader range of damage information from a smaller sample than land photographs. Previous studies have considered many methods to identify the extent of damage from remote sensing images; the existing methods can be divided into multitemporal and single-temporal evaluation methods. Computer vision (CV) has been widely used to investigate disaster extent (Demir et al., 2018). There are two categories of CV: traditional methods and deep learning (DL) methods. Advances in DL have brought it into the spotlight of damage detection. Convolutional neural networks (CNNs) can facilitate disaster damage detection because they can process low-level characteristics through deep structures to obtain high-level semantic information.
Although there have been several efforts to detect disasters automatically, challenges remain. One example is that methods using aerial photographs alone cannot recognize a road disruption when the road is covered by water, because the photograph does not reveal whether a road exists at that location. Another example is that because road segment data do not generally include elevation data, even if a DL model can determine flood damage on a mesh-by-mesh basis, it cannot decide which roads within a mesh are undamaged.
In this study, we propose a novel method using aerial photographs and elevation data to
determine roads damaged by flooding. The proposed method applies a DL model
developed using aerial photographs of the flooded area to those of other sites and verifies the
model’s fit. Then, the flood damage is visualized on a mesh-by-mesh basis. Finally,
undamaged roads are shown by selecting target road segments considering elevation data.

2 Methodology

Figure 1 shows an overview of the proposed method. We assume the following scenario.
Analyzers such as road administrators have prepared road segments and an inundation status
discrimination model in advance. When a heavy rain disaster occurs, they apply the model to
aerial photographs captured immediately after the disaster to automatically identify and
visualize damaged road segments in the disaster area.
This study demonstrates the above situation using past events. The aerial photographs used for model training are from the Kuji River basin in Ibaraki Prefecture and the Toki River basin in Saitama Prefecture, both affected by Typhoon Hagibis in 2019. The photographs used to verify the accuracy of the inundation discrimination model are from the Takahashi River basin in Okayama Prefecture (damaged by the heavy rain event in July 2018) and the Kashima River basin in Chiba Prefecture (damaged by Typhoon Hagibis in 2019). The Geospatial Information Authority of Japan (GSI) published these aerial photographs immediately after each disaster.
We downloaded data by selecting ranges that include both inundated and non-inundated areas, taking care to obtain sufficient data for training and validation. Then, the aerial photographs, divided into 100-m mesh units, were manually classified into “inundated mesh” and “non-inundated mesh” by referring to the inundation estimation map published by GSI. Figure 2 shows an example of this classification. Before settling on the 100-m mesh size, we experimented with different sizes; the results confirmed that a larger mesh unit tends to shorten the computation time required for automatic discrimination but increases misclassification.
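The mesh preparation above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' code: the tile size in pixels (i.e., how many pixels correspond to 100 m on the ground) and all file names are assumptions.

```python
# Minimal sketch: divide an aerial photograph into 100-m mesh tiles.
# Assumption (not from the paper): the photograph is georeferenced so that
# MESH_PX pixels span 100 m on the ground. Labels ("inundated" /
# "non-inundated") would be assigned manually against the GSI map.
from pathlib import Path
from PIL import Image

MESH_PX = 200  # assumed pixel size of one 100-m mesh cell

def split_into_mesh(photo_path: str, out_dir: str) -> None:
    img = Image.open(photo_path)
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    cols, rows = img.width // MESH_PX, img.height // MESH_PX
    for r in range(rows):
        for c in range(cols):
            box = (c * MESH_PX, r * MESH_PX, (c + 1) * MESH_PX, (r + 1) * MESH_PX)
            img.crop(box).save(Path(out_dir) / f"mesh_r{r:03d}_c{c:03d}.png")

split_into_mesh("kuji_river_aerial.png", "tiles/")  # hypothetical file name
```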
The road segment data are vector data of road sections and are divided into 100-m mesh units using the GIS union function. In this study, the road segment vector data are from the Conservation GIS Consortium Japan. In addition, because the original road segment data do not include elevation data, we appended elevation values to each segment. The elevation data are from the Fundamental Geospatial Data published by GSI.
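A possible GeoPandas sketch of this step is shown below. The paper refers to the GIS union function; here an intersection overlay is used to split road segments at the 100-m mesh boundaries, which achieves the same mesh-wise division. File names, layer contents, and the `elevation` column are illustrative assumptions, not the consortium's actual schema.

```python
# Sketch: split road vectors by a 100-m mesh grid and attach elevation values.
import geopandas as gpd

roads = gpd.read_file("road_segments.shp")  # Conservation GIS Consortium data
mesh = gpd.read_file("mesh_100m.shp")       # 100-m mesh polygons
elev = gpd.read_file("fgd_elevation.shp")   # GSI Fundamental Geospatial Data

# Cut each road section at mesh boundaries and inherit the mesh attributes.
roads_by_mesh = gpd.overlay(roads, mesh, how="intersection", keep_geom_type=True)

# The original road data carry no elevation attribute, so join the nearest
# elevation point to every mesh-wise road segment.
roads_by_mesh = gpd.sjoin_nearest(roads_by_mesh, elev[["geometry", "elevation"]])
roads_by_mesh.to_file("roads_100m_with_elevation.shp")
```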
The algorithm trained in this study is You Only Look Once version 3 (YOLOv3), developed in 2018. We developed our training model using Google Colaboratory, which enables high-speed model building on a GPU.
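As a hedged sketch of how such a trained Darknet-format YOLOv3 model could be applied to a single mesh image using OpenCV's DNN module (the cfg/weights file names, input size, and confidence threshold are assumptions for illustration, not the authors' actual setup):

```python
# Sketch: run a trained YOLOv3 (Darknet format) model on one mesh tile.
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov3_flood.cfg", "yolov3_flood.weights")
out_layers = net.getUnconnectedOutLayersNames()

def detect_inundation(tile_path: str, conf_thresh: float = 0.5) -> bool:
    """Return True if the tile contains at least one confident detection."""
    img = cv2.imread(tile_path)
    blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    for out in net.forward(out_layers):
        for det in out:              # det = [cx, cy, w, h, objectness, scores...]
            if det[4] * np.max(det[5:]) > conf_thresh:
                return True
    return False
```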
We evaluate models using the following indicators: Precision, Recall, Specificity, and F1-score (Karacı, 2022). True Positives (TP) and True Negatives (TN) are the numbers of objects correctly detected and correctly rejected by a model, respectively. False Positives (FP) and False Negatives (FN) are the numbers of objects wrongly detected and wrongly missed by the model, respectively.
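Expressed in code, the four indicators follow directly from these counts; this is a plain restatement of the standard definitions:

```python
# Standard evaluation metrics from confusion-matrix counts.
def precision(tp, fp):
    return tp / (tp + fp)

def recall(tp, fn):
    return tp / (tp + fn)

def specificity(tn, fp):
    return tn / (tn + fp)

def f1_score(tp, fp, fn):
    p, r = precision(tp, fp), recall(tp, fn)
    return 2 * p * r / (p + r)
```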

3 Results and Discussion

Table 1 shows the discrimination and actual results by target area and targeted road segment. Overall, the accuracy for the Takahashi River basin is higher than that for the Kashima River basin. For all road segments, the F1-scores were 85% and 91% for the Takahashi and Kashima River basins, respectively. A comparison by targeted road segment shows that the F1-score of the Takahashi River basin tends to be higher when targeting only road segments at an elevation of 15 m or less. By contrast, there is no change in the F1-score of the Kashima River basin.
Figure 3 shows a GIS visualization of the discrimination results in the “ALL” case in Table 1. The total number of meshes in the photograph of the Takahashi River basin is 2,108 (= 34 × 62), of which 1,494 meshes contain road segments. Of the 1,494 meshes, the numbers of TP, TN, FP, and FN are 739, 485, 183, and 97, respectively. Compared with the photograph of the Kashima River basin, that of the Takahashi River basin contains a wide river and forest, so many meshes fall outside the target of the analysis. The white dashed circles in the figure show where FP increases in the “ALL” case. When we target only meshes with road sections at an elevation of 15 m or less, most of the FP meshes at this location are excluded from discrimination.
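As a quick sanity check, plugging the Takahashi River basin counts reported above into the metric functions sketched in Section 2 reproduces an F1-score close to the 85% in Table 1 (the small difference is presumably rounding):

```python
# Worked check with the reported counts: TP=739, TN=485, FP=183, FN=97.
tp, tn, fp, fn = 739, 485, 183, 97
print(f"Precision:   {precision(tp, fp):.3f}")     # 0.802
print(f"Recall:      {recall(tp, fn):.3f}")        # 0.884
print(f"Specificity: {specificity(tn, fp):.3f}")   # 0.726
print(f"F1-score:    {f1_score(tp, fp, fn):.3f}")  # 0.841
```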
Figure 4 shows the results for only road segments with an elevation of 15 m or less. It indicates that suburban residential complexes on higher ground are excluded from the analysis. Even in the central city area, a distinction is made between inundated roads in low-lying areas and non-inundated embankment roads.

[Figure 4. Visualization of verification results by road segment (target: elevation less than 15 m). Left: Takahashi River basin (Okayama Pref.); right: Kashima River basin (Chiba Pref.). Legend: prediction (TP, FP, TN, FN) and out-of-target elevation bands (15–20 m, 20–25 m, 25 m and above).]

4 Conclusion

In this study, using YOLOv3, a DL technique, we proposed a method to rapidly and automatically detect inundated road sections after flood events. Our experimental results confirmed that the proposed method can automatically identify inundated and non-inundated areas with high accuracy. In addition, targeting road segments while considering elevation data yielded more accurate results. Therefore, if analyzers prepare training models and road sections in 100-m mesh units in advance, all that is required after a disaster is to upload aerial photographs to Google Colaboratory for identification.

Acknowledgments

JDC Foundation Inc. supported this work.

References

Conservation GIS-consortium Japan. http://cgisj.jp

Demir, I., Koperski, K., Lindenbaum, D., Pang, G., Huang, J., Basu, S., Raskar, R. (2018). DeepGlobe 2018: A challenge to parse the earth through satellite images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 172–181.

Karacı, A. (2022). VGGCOV19-NET: automatic detection of COVID-19 cases from X-ray images using modified VGG19 CNN architecture and YOLO algorithm. Neural Computing and Applications, 34, 8253–8274.
