BATCH 7 FINAL REPORT Updated
AI MAPPING FOR RAPID DISASTER ASSESSMENT
PROJECT REPORT
Submitted by
VIJAYARAGAVAN KTS (20BIT003)
DEEPAK A (20BIT031)
ROSHAN KARTHICK T (20BIT039)
Bachelor of Technology
in
Information Technology
APRIL 2024
Dr. Mahalingam College of Engineering and Technology
Pollachi - 642003
An Autonomous Institution
BONAFIDE CERTIFICATE
Certified that this project report, “AI MAPPING FOR RAPID DISASTER ASSESSMENT”,
is the bonafide work of VIJAYARAGAVAN KTS (20BIT003), DEEPAK A (20BIT031),
and ROSHAN KARTHICK T (20BIT039).
Dr. L. Meenachi
Project Guide & HOD
Associate Professor
Information Technology
Dr. Mahalingam College of Engineering
and Technology, NPTC-MCET Campus
Pollachi – 642003 India
ABSTRACT
Our research addresses the significant risks posed to infrastructure, human life, and
social stability by disasters, proposing a novel strategy that leverages deep learning-based
image analysis techniques to enhance catastrophe management. The methodology
comprises three key components: detecting the occurrence of disasters, identifying the type
of disaster, and assessing the extent of building damage. Our approach harnesses
Convolutional Neural Networks (CNNs) to accurately identify catastrophic events from
images, enabling swift mitigation and response measures. Moreover, our methodology
incorporates MobileNet V2 for precise classification of various disaster types, facilitating
customized response plans and resource allocation strategies. Additionally, we employ the
U-Net architecture for building damage level assessment, aiding in prioritizing rescue
efforts and infrastructure restoration. By integrating these models into a cohesive system,
we provide a comprehensive approach to catastrophe management, empowering
stakeholders with valuable insights for efficient response coordination.
Each component is tailored to address a crucial aspect of disaster management. Firstly, we
focus on the identification and classification of various types of disasters, leveraging the
power of CNNs and the precision of MobileNet V2 to analyze images and accurately
discern catastrophic events. Secondly, we employ the U-Net architecture to conduct
comprehensive assessments of building damage, allowing for nuanced evaluations of
structural integrity and informed decision-making regarding resource allocation and
prioritization of rescue efforts. By providing detailed insights into the extent and severity of
damage, this component aids in optimizing response strategies and enhancing the efficiency
of post-disaster recovery. Lastly, our methodology encompasses the determination of
disaster occurrence, wherein a combination of CNNs and advanced image analysis
techniques is utilized to detect and confirm the presence of disasters in real time.
ACKNOWLEDGEMENT
Apart from our own efforts, the success of this project depended largely on the
encouragement and guidance of many others. We take this opportunity to praise the Almighty
and express our gratitude to the people who have been instrumental in the successful completion
of our project.
We wish to acknowledge with thanks for the excellent encouragement given by the
management of our college and we thank Dr. C. Ramaswamy, M.E., Ph.D., FIV, Secretary,
NIA Educational Institutions for providing us with a plethora of facilities in the campus to
complete our project successfully.
We wish to express our hearty thanks to Dr. P. Govindasamy, M.E., Ph.D., Principal
of our college, for his constant motivation regarding our project work.
We extend our heartfelt gratitude to Dr. L. Meenachi, M.E., Ph.D., Associate Professor &
HoD – IT, for her tremendous support and assistance in the completion of our project. We felt
motivated and encouraged every time we attended her meetings, and her guidance broadened
our minds to pursue the project with interest and enhanced knowledge.
It is our primary duty to thank our Project Coordinator, Ms. S. Soundariya, M.E.,
Assistant Professor (SS), Information Technology who is the backbone of all our project
activities, for her consistent guidance and encouragement, which kept us fast and proactive in our
work. Her enthusiasm and patience guided us along the right path.
Finally, we extend our heartfelt thanks to the enriched motivation and encouragement of
our parents, friends, and faculty members. The facilities received from our institutions made
our work easier. We are grateful to everyone who has constantly helped and supported us to
complete the project enthusiastically and successfully.
VIJAYARAGAVAN KTS
DEEPAK A
ROSHAN KARTHICK T
TABLE OF CONTENTS
Abstract
Acknowledgement
List of Figures
List of Abbreviations
1 INTRODUCTION
1.1 Problem Definition
1.2 Project Overview / Specifications
1.3 Hardware Specification
1.4 Software Specification
2 EXISTING SYSTEM
2.1 Existing System
2.2 Limitations in Existing System
3 PROPOSED SYSTEM
3.1 Proposed Methodology
3.2 Process Flow Diagram
4 RESULTS
4.1 Experimental Results
5 CONCLUSION
5.1 Future Work
6 REFERENCES
6.1 References
7 APPENDICES
7.1 Appendix-1
7.2 Appendix-2
LIST OF FIGURES
LIST OF ABBREVIATIONS
AI - Artificial Intelligence
ML - Machine Learning
CHAPTER 1
INTRODUCTION
1. INTRODUCTION
The overarching goal of this project is threefold: to develop robust models for disaster
occurrence detection, disaster type identification, and building damage level assessment
using satellite and aerial imagery datasets. These models will serve as integral components
of a unified disaster management system, providing decision-makers, responders, and
stakeholders with actionable insights and real-time information to guide effective response
strategies [4]. The first objective focuses on leveraging CNNs for disaster occurrence
detection. CNNs are renowned for their ability to extract intricate features from images and
detect patterns that are indicative of specific phenomena. In the context of disaster
management, CNNs can analyze satellite and aerial imagery to identify visual cues
associated with disasters such as wildfires, floods, earthquakes, and storms [7]. By detecting
these occurrences early, we can trigger timely alerts, activate response protocols, and
mobilize resources proactively, thereby minimizing the impact on lives and infrastructure.
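To illustrate how a convolutional filter can respond to the kind of visual cues described above, the following minimal NumPy sketch applies a single hand-crafted vertical-edge filter to a toy image with a sharp intensity boundary (for instance, land against flood water). The image, filter, and values are purely illustrative, not the project's trained model:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation) with a single filter."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy 6x6 "image" with a sharp vertical boundary between two regions
image = np.array([[0, 0, 0, 9, 9, 9]] * 6, dtype=float)

# Vertical-edge filter: responds strongly where intensity changes left to right
kernel = np.array([[-1, 0, 1],
                   [-1, 0, 1],
                   [-1, 0, 1]], dtype=float)

response = conv2d(image, kernel)
print(response)  # large values (27) mark the boundary columns, zeros elsewhere
```

A trained CNN learns many such filters automatically, stacking them in layers so that deeper layers respond to increasingly complex disaster-related patterns.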
The second objective revolves around employing MobileNet V2 for disaster type
identification. MobileNet V2 is a lightweight CNN architecture designed for efficient
classification tasks, making it ideal for scenarios where computational resources are
limited. In disaster management, accurately identifying the type of disaster is paramount
for tailoring response efforts and resource allocation strategies [5]. MobileNet V2's ability
to classify disasters with high accuracy enables us to differentiate between various types of
emergencies, ranging from natural disasters to human-made incidents, facilitating targeted
and effective interventions. The third objective focuses on utilizing the U-Net architecture
for building damage level assessment. U-Net is specifically tailored for semantic
segmentation tasks, enabling precise delineation of objects and regions within images. In
the context of disaster response, U-Net can analyze imagery to assess the extent and severity
of damage to buildings and infrastructure [9]. By categorizing damage levels (e.g., no
damage, minor damage, major damage, destroyed), responders can prioritize rescue
operations, allocate resources efficiently, and plan recovery initiatives based on accurate
assessments of infrastructure resilience.
The integration of these deep learning models into a unified disaster management system
promises to revolutionize how we approach disaster preparedness, response, and recovery.
By enhancing situational awareness, improving response coordination, and expediting
recovery efforts, we aim to bolster resilience, mitigate impacts, and ultimately save lives in
the face of increasingly complex and dynamic disaster scenarios. Through collaborative
efforts, innovative solutions, and a commitment to leveraging technology for societal
benefit, this project represents a significant step towards building more resilient and
adaptive communities in the midst of adversity.
For disaster type identification, a MobileNet V2 model is employed. This model
categorizes the type of disaster depicted in the imagery, providing valuable insights for
tailored response strategies and resource allocation [5]. By accurately identifying disaster
types, responders can prioritize actions based on the specific needs and challenges posed
by each type of disaster. The third component of the project focuses on building damage
level assessment using the U-Net architecture. This model is trained on annotated imagery
depicting buildings within disaster-affected areas, along with corresponding ground truth
labels indicating the extent of building damage. The U-Net architecture enables precise
segmentation of building structures and classification of damage levels, allowing
responders to prioritize rescue efforts and direct resources towards areas with the most
significant structural damage [7].
The integration of these models into a unified disaster management system offers several
advantages. Firstly, it enhances early warning capabilities through rapid and accurate
detection of disaster occurrences. Secondly, it improves response coordination by providing
detailed insights into the type and severity of disasters. Lastly, it facilitates efficient
resource allocation by prioritizing actions based on the level of building damage and the
specific needs of affected areas. Overall, the project represents a significant step forward in
leveraging advanced technologies to address complex challenges in disaster management.
By combining state-of-the-art deep learning models with real-time data analysis, the project
aims to empower stakeholders with actionable insights and enhance resilience in the face
of disasters.
1.1 Problem Definition
The occurrence of disasters, ranging from earthquakes and floods to wildfires and industrial
accidents, can result in devastating consequences, including loss of lives, displacement of
communities, destruction of property, and disruption of essential services. In addition to the
immediate impact on individuals and communities, disasters often have far-reaching
consequences, exacerbating existing vulnerabilities, straining resources, and impeding
long-term recovery efforts. The objective of this project is to create a complete catastrophe
management system by utilizing deep learning techniques.
Through the integration of these various deep learning models into a cohesive system, we
aim to enhance situational awareness, optimize resource allocation, and improve the
efficacy of disaster response and recovery activities.
1.3 Hardware Specification
1.4 Software Specification
Library: TensorFlow
CHAPTER 2
EXISTING SYSTEM
2. EXISTING SYSTEM
2.2 Limitations in Existing System
CHAPTER 3
PROPOSED SYSTEM
3. PROPOSED SYSTEM
3.1 Proposed Methodology
The proposed disaster management system aims to overcome the limitations of existing
frameworks by leveraging advanced technologies, particularly deep learning
methodologies and computer vision techniques. Comprising three main components—
disaster occurrence determination, disaster type identification, and building damage level
assessment—the system integrates cutting-edge algorithms to enhance situational
awareness, optimize resource allocation, and improve the efficacy of response and recovery
efforts.
The training dataset for the CNN model encompasses a diverse collection of annotated images
illustrating various types of disasters, ranging from fires and floods to earthquakes and storms.
Supervised learning techniques are utilized for model training, wherein input images are
associated with corresponding labels signifying the presence or absence of disasters. To bolster
the model's robustness and generalization capability, data augmentation methods like rotation,
scaling, and flipping are integrated into the training process. Furthermore, transfer learning is
leveraged by initializing the CNN model with weights pre-trained on extensive image datasets,
allowing the model to harness knowledge acquired from generic image features.
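The augmentation step can be illustrated with plain NumPy (a training pipeline would more likely use Keras preprocessing utilities; this is a simplified sketch of the transformations named above):

```python
import numpy as np

def augment(image):
    """Yield simple augmented variants of an (H, W, C) image array."""
    yield np.flip(image, axis=1)   # horizontal flip
    yield np.flip(image, axis=0)   # vertical flip
    for k in (1, 2, 3):            # 90, 180, 270 degree rotations
        yield np.rot90(image, k)

# Tiny stand-in image: 2x2 pixels, 3 channels
image = np.arange(2 * 2 * 3, dtype=np.uint8).reshape(2, 2, 3)
variants = list(augment(image))
print(len(variants))  # 5 augmented copies per input image
```

Each variant preserves the image's label, so the effective training set grows severalfold without collecting new imagery.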
An appropriate loss function and optimizer are employed for model training, with careful
adjustment of learning rate scheduling to ensure stable convergence. Performance evaluation
during training involves validation data, with hyperparameters fine-tuned to optimize key
metrics including F1 score, accuracy, precision, and recall. Once training concludes, the CNN
model is equipped to efficiently analyze new aerial and satellite imagery in real time,
accurately pinpointing regions potentially affected by disasters. This capability empowers
emergency responders and disaster management authorities to swiftly prioritize and allocate
resources for effective disaster response and mitigation efforts.
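The evaluation metrics named above can all be derived from binary confusion-matrix counts. The sketch below uses hypothetical counts, not the project's measured results:

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall and F1 from binary confusion counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Hypothetical counts for a disaster / no-disaster classifier
acc, prec, rec, f1 = classification_metrics(tp=80, fp=10, fn=20, tn=90)
print(round(acc, 3), round(prec, 3), round(rec, 3), round(f1, 3))
```

F1 is the harmonic mean of precision and recall, which is why it is the preferred single metric when the classes (disaster vs. no disaster) are imbalanced.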
To enhance generalization and robustness, we employ data augmentation methods such as
flipping, rotation, and scaling. Moreover, transfer learning is applied by fine-tuning the
pre-trained MobileNet V2 model on a localized dataset of disaster images, tailored
to the specific region or context. Once trained, the MobileNet V2 model proficiently classifies
unseen images depicting different disaster types. This classification capability empowers
emergency responders and policymakers to prioritize response efforts and allocate resources
effectively, contingent upon the nature and severity of the disaster. In summary, the integration
of MobileNet V2 augments the situational awareness of our disaster management system,
facilitating informed decision-making and proactive response measures to mitigate the adverse
impacts of disasters on affected communities.
The encoder part of the U-Net model extracts features from input images, gradually reducing
spatial resolution through a series of convolutional and pooling layers. These extracted features
are then propagated to the decoder part of the network through skip connections, allowing for
precise localization of damage within buildings. During training, the model learns to map input
images to corresponding damage level labels using techniques such as stochastic gradient
descent with backpropagation. In training, the loss function often incorporates metrics like
categorical cross-entropy or dice coefficient, which gauge the likeness between predicted and
actual segmentation masks. Once trained, the U-Net model can accurately segment building
structures and classify the severity of damage within them, providing valuable insights for
disaster response and recovery efforts. This enables responders to prioritize areas with
significant structural damage for immediate attention, facilitating efficient allocation of
resources and aiding in the timely restoration of infrastructure.
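The Dice coefficient mentioned above measures the overlap between a predicted segmentation mask and the ground truth; the corresponding Dice loss is simply one minus this value. A minimal NumPy sketch on toy binary masks:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks; Dice loss is 1 - Dice."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy 2x3 predicted and ground-truth masks (1 = damaged pixel)
pred = np.array([[1, 1, 0],
                 [0, 1, 0]])
target = np.array([[1, 0, 0],
                   [0, 1, 1]])
dice = dice_coefficient(pred, target)
print(round(float(dice), 3))  # 2*2 / (3 + 3) ≈ 0.667
```

Unlike plain pixel accuracy, Dice is insensitive to the large number of true-negative background pixels, which is why it suits sparse damage masks.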
Dataset:
The xBD dataset, comprising a comprehensive collection of satellite photos capturing
various natural disasters, serves as the training and evaluation dataset for the proposed
system. With detailed annotations for building structures and damage levels, the xBD
dataset facilitates the development of robust models for disaster management tasks.
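The appendix code reads the disaster type for each image from the xBD label JSON via `data['metadata']['disaster_type']`. A minimal sketch of that lookup on a synthetic record (only the `metadata.disaster_type` field mirrors the real files; the other field is illustrative):

```python
import json

# Synthetic label record in the shape the appendix code expects
label_json = '{"metadata": {"disaster_type": "flooding", "disaster": "midwest-flooding"}}'

data = json.loads(label_json)
disaster_type = data['metadata']['disaster_type']
print(disaster_type)  # flooding
```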
CHAPTER 4
RESULTS
4. Experimental Results
The results obtained from the implementation of deep learning models, including CNN for
disaster occurrence detection, MobileNet V2 for disaster type identification, and U-Net for
building damage assessment, have shown promising outcomes in enhancing disaster
management capabilities. Firstly, the CNN model demonstrated high accuracy in detecting
occurrences of disasters from satellite and aerial imagery. Through extensive training on
diverse datasets, the model achieved robustness in identifying visual cues indicative of
various disaster types, including wildfires, floods, earthquakes, and storms. The real-time
analysis provided by the CNN contributed to early warning systems, enabling timely
response actions and resource allocation during disaster events.
The evaluation of the U-Net model for building damage level assessment was conducted
using an independent test dataset comprised of aerial and satellite imagery depicting
disaster-affected regions. The findings demonstrate the efficacy of the model in accurately
segmenting building structures and classifying the extent of damage. Quantitative measures
such as Intersection over Union and Dice coefficient were utilized to gauge the
segmentation accuracy achieved by the U-Net model. Additionally, qualitative analysis of
the model's predictions showcased precise localization of damage, distinguishing between
undamaged, partially damaged, and severely damaged regions within buildings. Visual
inspection of the segmentation masks corroborated these findings, with the model's
predictions closely aligning with ground truth annotations.
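Intersection over Union, the other segmentation metric used above, can be sketched in a few lines of NumPy on toy binary masks:

```python
import numpy as np

def iou(pred, target):
    """Intersection over Union for two binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(pred, target).sum() / union

pred = np.array([[1, 1], [0, 0]])
target = np.array([[1, 0], [1, 0]])
print(iou(pred, target))  # 1 overlapping pixel over 3 in the union
```

IoU penalizes both missed damage (false negatives) and spurious damage (false positives), making it a stricter score than pixel accuracy for this task.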
The evaluation measures demonstrate the models' effectiveness in determining degrees of
building damage across several categories. The precision score of 0.89 for "No Damage"
indicates a strong ability to recognize intact buildings. However, lower precision, recall, and
F1 scores show that the models have difficulty detecting the minor, major, and destroyed
damage categories. Further work is therefore needed to improve the models' capacity to
distinguish between different degrees of building damage, for example by expanding the
dataset or adjusting the models' parameters. If successful, this would increase
the models' usefulness in disaster response and recovery operations. The success of the U-Net
model underscores the potential of deep learning techniques in building damage assessment
for disaster management applications. By leveraging the U-Net architecture's ability to
preserve spatial information and capture fine-grained details, the model demonstrates
exceptional accuracy in quantifying the severity of building damage. The high level of
agreement between the model's predictions and ground truth annotations highlights its
reliability and utility in informing decision-making processes during disaster response efforts.
By providing timely and accurate insights into the spatial distribution and severity of building
damage, the U-Net model facilitates effective resource allocation and response coordination,
ultimately aiding in the mitigation of disaster impacts and the restoration of affected
communities.
Metrics such as Intersection over Union (IoU) and Dice coefficient confirmed the model's
effectiveness in accurately delineating damaged areas within buildings.
Figure 4: Accuracy Measure
Comparing the performance metrics of our models (CNN, MobileNet, U-Net) with AlexNet,
it is clear that our models outperform AlexNet in all dimensions. In terms of accuracy, our
models exhibit significantly higher accuracy than AlexNet, indicating that they correctly
classify damaged and undamaged buildings in satellite images, leading to accurate damage
assessments overall. Similarly, our models exhibit higher precision, recall, and F1 scores
compared to AlexNet. This means that our models achieve a
good balance between the true positive rate (recall) and the false positive rate (precision),
enabling reliable and consistent identification of damaged buildings. Overall, the average
performance of our models across all metrics is markedly better than AlexNet, indicating the
effectiveness of our CNN, MobileNet, and U-Net algorithms in identifying damaged and
destroyed buildings in satellite imagery for disaster recovery efforts.
Overall, the results and outcomes of integrating deep learning models in the disaster
management project include improved early detection of disasters, accurate classification
of disaster types, and precise assessment of building damages. These advancements
contribute to more proactive, data-driven, and effective disaster response and recovery
strategies, ultimately enhancing resilience and mitigating the impacts of disasters on
communities and infrastructure. Ongoing research, collaboration among stakeholders, and
continuous refinement of models are essential for further advancing the field of disaster
management and improving overall preparedness and response capabilities.
CHAPTER 5
CONCLUSION
5. CONCLUSION
CNN models excel in deciphering patterns and features within satellite and aerial imagery,
boasting exceptional accuracy in detecting various types of disasters. By analyzing imagery data
in real-time, CNN models enable the development of early warning systems, empowering
authorities to issue timely evacuation orders, mobilize emergency personnel, and allocate
resources effectively. This swift identification of disaster occurrences significantly reduces
response time and mitigates the impact on communities and infrastructure.
MobileNet V2, with its lightweight architecture, plays a crucial role in classifying disaster types
based on visual cues extracted from imagery data. Its ability to perform rapid inference on edge
devices facilitates on-the-ground decision-making and resource allocation. By accurately
identifying the type of disaster, such as wildfires, floods, or earthquakes, MobileNet V2 enables
responders to tailor their efforts accordingly, optimizing the utilization of available resources and
personnel.
Meanwhile, the U-Net architecture facilitates precise segmentation of building damages and
accurate classification of damage levels. By segmenting satellite imagery into individual building
structures and analyzing pixel-level information, the U-Net model aids in prioritizing rescue
efforts and streamlining recovery processes. Emergency responders can swiftly identify severely
damaged structures, facilitating targeted search and rescue operations and expediting the
restoration of critical infrastructure post-disaster.
5.1 Future Work
Future work in the realm of disaster management using deep learning models presents exciting
opportunities for advancing the field and addressing emerging challenges. Several areas of focus
can be identified for future research and development to enhance the effectiveness, efficiency, and
resilience of disaster management systems. Firstly, the integration of multi-modal data sources
and sensor networks presents a promising avenue for enhancing disaster detection and response
capabilities. Incorporating data from various sources such as remote sensing satellites, drones, IoT
devices, and social media platforms can provide a comprehensive and real-time understanding
of disaster events. Through such advances, we can build a future where communities are better
prepared, more resilient, and better equipped to withstand and recover from the myriad
challenges posed by disasters.
CHAPTER 6
REFERENCES
6. REFERENCES
[1] Xia, Haobin, Jianjun Wu, Jiaqi Yao, Hong Zhu, Adu Gong, Jianhua Yang, Liuru
Hu, and Fan Mo. "A Deep Learning Application for Building Damage Assessment
Using Ultra-High-Resolution Remote Sensing Imagery in Turkey
Earthquake." International Journal of Disaster Risk Science 14, no. 6 (2023): 947-
962.
[2] Xu, Joseph Z., Wenhan Lu, Zebo Li, Pranav Khaitan, and Valeriya Zaytseva.
"Building damage detection in satellite imagery using convolutional neural
networks." arXiv preprint arXiv:1910.06444 (2019).
[3] Kim, Danu, Jeongkyung Won, Eunji Lee, Kyung Ryul Park, Jihee Kim, Sangyoon
Park, Hyunjoo Yang, Sungwon Park, Donghyun Ahn, and Meeyoung Cha.
"Disaster assessment using computer vision and satellite imagery: Applications in
water-related building damage detection." (2023).
[4] Wang, Ying, Alvin Wei Ze Chew, and Limao Zhang. "Building damage detection
from satellite images after natural disasters on extremely imbalanced
datasets." Automation in Construction 140 (2022): 104328.
[5] Bande, Swapnil, and Virendra V. Shete. "Smart flood disaster prediction system
using IoT & neural networks." In 2017 International Conference On Smart
Technologies For Smart Nation (SmartTechCon), pp. 189-194. IEEE, 2017.
[6] Huang, Di, Shuaian Wang, and Zhiyuan Liu. "A systematic review of prediction
methods for emergency management." International Journal of Disaster Risk
Reduction 62 (2021): 102412.
[7] Gupta, Ritwik, Bryce Goodman, Nirav Patel, Ricky Hosfelt, Sandra Sajeev, Eric
Heim, Jigar Doshi, Keane Lucas, Howie Choset, and Matthew Gaston. "Creating
xBD: A dataset for assessing building damage from satellite imagery."
In Proceedings of the IEEE/CVF conference on computer vision and pattern
recognition workshops, pp. 10-17. 2019.
[8] Gupta, Ritwik, Richard Hosfelt, Sandra Sajeev, Nirav Patel, Bryce Goodman, Jigar
Doshi, Eric Heim, Howie Choset, and Matthew Gaston. "xbd: A dataset for
assessing building damage from satellite imagery." arXiv preprint
arXiv:1911.09296 (2019).
[9] Bai, Yanbing, Junjie Hu, Jinhua Su, Xing Liu, Haoyu Liu, Xianwen He, Shengwang
Meng, Erick Mas, and Shunichi Koshimura. "Pyramid pooling module-based semi-
siamese network: A benchmark model for assessing building damage from xBD
satellite imagery datasets." Remote Sensing 12, no. 24 (2020): 4055.
[10] Wajid, Mohd Anas, Aasim Zafar, Hugo Terashima-Marín, and Mohammad Saif
Wajid. "Neutrosophic-CNN-based image and text fusion for multimodal
classification." Journal of Intelligent & Fuzzy Systems 45, no. 1 (2023): 1039-1055.
[11] Song, Huaxiang, and Yong Zhou. "Simple is best: A single-CNN method for
classifying remote sensing images." Networks & Heterogeneous Media 18, no. 4
(2023).
[12] Sambandam, Shreeram Gopal, Raja Purushothaman, Rahmath Ulla Baig, Syed
Javed, Vinh Truong Hoang, and Kiet Tran-Trung. "Intelligent surface defect
detection for submersible pump impeller using MobileNet V2 architecture." The
International Journal of Advanced Manufacturing Technology 124, no. 10 (2023):
3519-3532.
[13] Li, Yanyu, Ju Hu, Yang Wen, Georgios Evangelidis, Kamyar Salahi, Yanzhi Wang,
Sergey Tulyakov, and Jian Ren. "Rethinking vision transformers for mobilenet size
and speed." In Proceedings of the IEEE/CVF International Conference on
Computer Vision, pp. 16889-16900. 2023.
[14] Jin, Ge, Yanghe Liu, Peiliang Qin, Rongjing Hong, Tingting Xu, and Guoyu Lu.
"An End-to-End Steel Surface Classification Approach Based on EDCGAN and
MobileNet V2." Sensors 23, no. 4 (2023): 1953.
[15] Sun, Haixia, Shujuan Zhang, Rui Ren, and Liyang Su. "Maturity classification of
“Hupingzao” jujubes with an imbalanced dataset based on improved mobileNet
V2." Agriculture 12, no. 9 (2022): 1305.
[16] Sutaji, Deni, and Oktay Yıldız. "LEMOXINET: Lite ensemble MobileNetV2 and
Xception models to predict plant disease." Ecological Informatics 70 (2022):
101698.
[18] Anand, Vatsala, Sheifali Gupta, Deepika Koundal, and Karamjeet Singh. "Fusion
of U-Net and CNN model for segmentation and classification of skin lesion from
dermoscopy images." Expert Systems with Applications 213 (2023): 119230.
[19] Singla, Danush, Furkan Cimen, and Chandrakala Aluganti Narasimhulu. "Novel
artificial intelligent transformer U-NET for better identification and management
of prostate cancer." Molecular and cellular biochemistry 478, no. 7 (2023): 1439-
1445.
[20] Wang, Xiuhua, Guangcai Feng, Lijia He, Qi An, Zhiqiang Xiong, Hao Lu, Wenxin
Wang et al. "Evaluating urban building damage of 2023 Kahramanmaras, Turkey
earthquake sequence using SAR change detection." Sensors 23, no. 14 (2023):
6342.
[21] Hong, Zhonghua, Hongyang Zhang, Xiaohua Tong, Shijie Liu, Ruyan Zhou,
Haiyan Pan, Yun Zhang, Yanling Han, Jing Wang, and Shuhu Yang. "Rapid fine-
grained Damage Assessment of Buildings on a Large Scale: A Case Study of the
February 2023 Earthquake in Turkey." IEEE Journal of Selected Topics in Applied
Earth Observations and Remote Sensing (2024)
7. APPENDICES
7.1 APPENDIX-1
SCREENSHOTS
Figure 7: Sample Output 2
7.2 APPENDIX-2
CODING
# Imports used throughout the appendix code
import os
import glob
import json
import random
import numpy as np
import PIL.Image
import matplotlib.pyplot as plt

# Class names for disaster occurrence detection (reconstructed from the printed output)
class_names = np.sort(np.array(['disaster happened', 'no disaster happened']))
print(class_names)
# Output: ['disaster happened' 'no disaster happened']

def get_label(file_path, type):
    parts = file_path.split(os.path.sep)
    if "pre" in parts[7]:
        damage = 'no disaster happened'
    else:  # "post"
        damage = 'disaster happened'
    label = damage == class_names
    one_hot = np.zeros(len(class_names), dtype=np.uint8)
    one_hot[label] = 1
    return one_hot
def get_label_from_one_hot(array):
    return class_names[np.where(array == 1)]

train_X = np.zeros((train_datasize, img_height, img_width, 3), dtype=np.uint8)
train_Y = np.zeros((train_datasize, len(class_names)), dtype=np.uint8)
for i in range(len(train_files)):
    img = PIL.Image.open(train_files[i])
    train_X[i] = np.array(img)
    train_Y[i] = get_label(train_files[i], "train")
print("train")
print(train_X.shape)
print(train_Y.shape)

test_X = np.zeros((test_datasize, img_height, img_width, 3), dtype=np.uint8)
test_Y = np.zeros((test_datasize, len(class_names)), dtype=np.uint8)
for i in range(len(test_files)):
    img = PIL.Image.open(test_files[i])
    test_X[i] = np.array(img)
    test_Y[i] = get_label(test_files[i], "test")
print("test")
print(test_X.shape)
print(test_Y.shape)

# Output:
# train
# (1500, 1024, 1024, 3)
# (1500, 2)
# test
# (500, 1024, 1024, 3)
# (500, 2)
plt.figure(figsize=(10, 10))
for i in range(25):
    plt.subplot(5, 5, i + 1)
    choice = random.randint(0, train_datasize - 1)
    plt.title(get_label_from_one_hot(train_Y[choice]))
    plt.imshow(train_X[choice])
plt.tight_layout()
plt.show()
import gradio as gr
from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True
import pathlib

dataset_path = "./xBD-dataset"
train_data_dir = pathlib.Path(dataset_path + "/train/images")
test_data_dir = pathlib.Path(dataset_path + "/test/images")
train_files = glob.glob(r"" + dataset_path + "/train/images/*.png")
train_files = list(filter(lambda x: "post" in x, train_files))
train_files = random.sample(train_files, 1500)
train_datasize = len(train_files)
print("training data:", len(train_files))
test_files = glob.glob(r"" + dataset_path + "/test/images/*.png")
test_files = list(filter(lambda x: "post" in x, test_files))
test_files = random.sample(test_files, 500)
test_datasize = len(test_files)
print("test data:", len(test_files))

# Output:
# training data: 1500
# test data: 500
images = list(train_data_dir.glob('*'))
random_image = random.choice(images)
im = PIL.Image.open(str(random_image))
width, height = im.size
print(width)   # 1024
print(height)  # 1024
im.resize((300, 300)).show()

img_height = 1024
img_width = 1024

# Class names for disaster type identification (reconstructed from the printed output)
class_names = np.array(['earthquake', 'fire', 'flooding', 'tsunami', 'volcano', 'wind'])
print(class_names)
# Output: ['earthquake' 'fire' 'flooding' 'tsunami' 'volcano' 'wind']
def get_label(file_path, type):
    parts = file_path.split(os.path.sep)
    path = dataset_path + '/train/labels/'
    if type == "test":
        path = dataset_path + '/test/labels/'
    f = open(path + parts[7].split('.')[0] + '.json')
    data = json.load(f)
    disaster_type = data['metadata']['disaster_type']
    f.close()
    label = disaster_type == class_names
    one_hot = np.zeros(len(class_names), dtype=np.uint8)
    one_hot[label] = 1
    return one_hot

def get_label_from_one_hot(array):
    return class_names[np.where(array == 1)]
train_X = np.zeros((train_datasize, img_height, img_width, 3), dtype=np.uint8)
train_Y = np.zeros((train_datasize, len(class_names)), dtype=np.uint8)
for i in range(len(train_files)):
    img = PIL.Image.open(train_files[i])
    train_X[i] = np.array(img)
    train_Y[i] = get_label(train_files[i], "train")
print("train")
print(train_X.shape)
print(train_Y.shape)

test_X = np.zeros((test_datasize, img_height, img_width, 3), dtype=np.uint8)
test_Y = np.zeros((test_datasize, len(class_names)), dtype=np.uint8)
for i in range(len(test_files)):
    img = PIL.Image.open(test_files[i])
    test_X[i] = np.array(img)
    test_Y[i] = get_label(test_files[i], "test")
print("test")
print(test_X.shape)
print(test_Y.shape)

# Output:
# train
# (1500, 1024, 1024, 3)
# (1500, 6)
# test
# (500, 1024, 1024, 3)
# (500, 6)
choice = random.randint(0, train_datasize - 1)
plt.title(get_label_from_one_hot(train_Y[choice]))
plt.imshow(train_X[choice])
plt.tight_layout()
plt.show()