
AI MAPPING FOR RAPID DISASTER ASSESSMENT

PROJECT REPORT

Submitted by

VIJAYARAGAVAN KTS(20BIT003)
DEEPAK A(20BIT031)
ROSHAN KARTHICK T(20BIT039)

in partial fulfillment for the award of the degree of

Bachelor of Technology

in

Information Technology

Dr. Mahalingam College of Engineering and Technology
Pollachi - 642003
An Autonomous Institution

Affiliated to Anna University, Chennai - 600 025

APRIL 2024
Dr. Mahalingam College of Engineering and Technology
Pollachi - 642003
An Autonomous Institution

Affiliated to Anna University, Chennai - 600 025

BONAFIDE CERTIFICATE
Certified that this project report, “AI MAPPING
FOR RAPID DISASTER ASSESSMENT”
is the bonafide work of

VIJAYARAGAVAN KTS (20BIT003)


DEEPAK A (20BIT031)
ROSHAN KARTHICK T (20BIT039)

who carried out the project work under my supervision.

Dr. L. Meenachi
Project Guide & HOD
Associate Professor
Information Technology
Dr. Mahalingam College of Engineering
and Technology, NPTC-MCET Campus
Pollachi – 642003 India

Submitted for the Autonomous End Semester Examination


Project Viva-voce held on

INTERNAL EXAMINER EXTERNAL EXAMINER


Dr. Mahalingam College of Engineering and Technology
Pollachi - 642003

Technology Readiness Level (TRL) Certificate

Project Title: AI MAPPING FOR RAPID DISASTER ASSESSMENT

Course Code: 19ITPN6801 PROJECT

Students Names and Roll Numbers:

VIJAYARAGAVAN KTS (20BIT003)


DEEPAK A (20BIT031)
ROSHAN KARTHICK T (20BIT039)

Guide Name: Dr. L. Meenachi, Associate Professor & HOD / IT

Technology Readiness Level* (TRL) of this Project: ___________________

Signature of the Guide HoD

Internal Examiner External Examiner


AI MAPPING FOR RAPID DISASTER ASSESSMENT

ABSTRACT

Our research addresses the significant risks posed to infrastructure, human life, and
social stability by disasters, proposing a novel strategy that leverages deep learning-based
image analysis techniques to enhance catastrophe management. The methodology
comprises three key components: identifying the type of disaster, assessing the extent of
building damage, and detecting the occurrence of disasters. Our approach harnesses
Convolutional Neural Networks (CNNs) to accurately identify catastrophic events from
images, enabling swift mitigation and response measures. Moreover, our methodology
incorporates MobileNet V2 for precise classification of various disaster types, facilitating
customized response plans and resource allocation strategies. Additionally, we employ the
U-Net architecture for building damage level assessment, aiding in prioritizing rescue
efforts and infrastructure restoration endeavors. By integrating these models into a
cohesive system, we provide a comprehensive approach to catastrophe management,
empowering stakeholders with valuable insights for efficient response coordination. Each
of the three components addresses a crucial aspect of disaster management. Firstly, we
focus on the identification and classification of various types of disasters, leveraging the
power of CNNs and the precision of MobileNet V2 to analyze images and accurately
discern catastrophic events. Secondly, we employ the U-Net architecture to conduct
comprehensive assessments of building damage, allowing for nuanced evaluations of
structural integrity and informed decision-making regarding resource allocation and
prioritization of rescue efforts. By providing detailed insights into the extent and severity of
damage, this component aids in optimizing response strategies and enhancing the efficiency
of post-disaster recovery efforts. Lastly, our methodology encompasses the determination
of disaster occurrence, wherein a combination of CNNs and advanced image analysis
techniques is utilized to detect and confirm the presence of disasters in real time.
ACKNOWLEDGEMENT

Apart from our own efforts, the success of this project depends largely on the
encouragement and guidance of many others. We take this opportunity to praise the Almighty
and express our gratitude to the people who have been instrumental in the successful completion
of our project.

We wish to acknowledge with thanks the excellent encouragement given by the
management of our college, and we thank Dr. C. Ramaswamy, M.E., Ph.D., FIV, Secretary,
NIA Educational Institutions, for providing us with a plethora of facilities on the campus to
complete our project successfully.

We wish to express our hearty thanks to Dr. P. Govindasamy, M.E., Ph.D., Principal
of our college, for his constant motivation regarding our project work.

We heartily express our extreme gratefulness to Dr. S. Ramakrishnan, M.E., Ph.D.,
Senior Professor-IT & Dean-RI, for being instrumental in providing such wonderful support to us.

We extend our heartfelt gratitude to Dr. L. Meenachi, M.E., Ph.D., Associate Professor &
HoD - IT, for her tremendous support and assistance in the completion of our project. We felt
motivated and encouraged every time we attended her meetings, and her guidance broadened
our minds to pursue the project with interest and enhanced knowledge.

It is our primary duty to thank our Project Coordinator, Ms. S. Soundariya, M.E.,
Assistant Professor (SS), Information Technology, who is the backbone of all our project
activities, for her consistent guidance and encouragement, which kept us fast and proactive in our
work. It is her enthusiasm and patience that guided us along the right path.

Finally, we extend our heartfelt thanks for the enriched motivation and encouragement of
our parents, friends, and faculty members. The facilities received from our institution made
our work easier. We are grateful to everyone who has constantly helped and supported us in
completing the project enthusiastically and successfully.

VIJAYARAGAVAN KTS
DEEPAK A
ROSHAN KARTHICK T
TABLE OF CONTENTS

Chapter No.  Title

             Abstract
             Acknowledgement
             List of Figures
             List of Abbreviations
1            INTRODUCTION
             1.1 Problem Definition
             1.2 Project Overview/Specifications
             1.3 Hardware Specification
             1.4 Software Specification
2            EXISTING SYSTEM
             2.1 Existing System
             2.2 Limitations in Existing System
3            PROPOSED SYSTEM
             3.1 Proposed Methodology
             3.2 Process Flow Diagram
4            RESULTS
             4.1 Experimental Results
5            CONCLUSIONS
             5.1 Future Works
6            REFERENCES
             6.1 References
7            APPENDICES
             7.1 Appendix-1
             7.2 Appendix-2
LIST OF FIGURES

Figure No.  Title

Fig 1       Architecture Diagram
Fig 2       Precision, Recall, F1 Score Bar Chart
Fig 3       Accuracy of Damage Assessment
Fig 4       Accuracy Measure
Fig 5       Pre- and Post-Disaster Satellite Images
Fig 6       Sample Output 1
Fig 7       Sample Output 2

LIST OF ABBREVIATIONS

AI  - Artificial Intelligence
CNN - Convolutional Neural Network
IoT - Internet of Things
IoU - Intersection over Union
ML  - Machine Learning
SGD - Stochastic Gradient Descent
xBD - Building Damage Dataset

CHAPTER 1
INTRODUCTION
1. INTRODUCTION

Disasters, whether natural or human-made, represent some of the most formidable
challenges that societies face globally. These catastrophic events, ranging from earthquakes
and hurricanes to industrial accidents and pandemics, not only endanger lives but also inflict
severe damage on infrastructure, economies, and ecosystems [1]. The rapid onset and
unpredictable nature of disasters necessitate effective and efficient disaster management
strategies to mitigate their impacts and facilitate swift recovery. However, traditional
approaches to disaster management often encounter limitations in terms of early detection,
accurate classification, and optimized resource allocation, leading to delays in response and
inadequate support for affected communities [2]. In response to these challenges, this project
embarks on a journey to revolutionize disaster management through the application of
cutting-edge deep learning techniques. Deep learning, a subset of artificial intelligence (AI),
has emerged as a powerful tool for processing and analyzing large volumes of complex data,
particularly in the realm of image analysis [3]. By leveraging advanced deep learning
models, specifically Convolutional Neural Networks (CNNs), MobileNet V2, and U-Net
architectures, we aim to address critical gaps in disaster management and enhance
preparedness, response, and recovery efforts.

The overarching goal of this project is threefold: to develop robust models for disaster
occurrence detection, disaster type identification, and building damage level assessment
using satellite and aerial imagery datasets. These models will serve as integral components
of a unified disaster management system, providing decision-makers, responders, and
stakeholders with actionable insights and real-time information to guide effective response
strategies [4]. The first objective focuses on leveraging CNNs for disaster occurrence
detection. CNNs are renowned for their ability to extract intricate features from images and
detect patterns that are indicative of specific phenomena. In the context of disaster
management, CNNs can analyze satellite and aerial imagery to identify visual cues
associated with disasters such as wildfires, floods, earthquakes, and storms [7]. By detecting
these occurrences early, we can trigger timely alerts, activate response protocols, and
mobilize resources proactively, thereby minimizing the impact on lives and infrastructure.
The second objective revolves around employing MobileNet V2 for disaster type
identification. MobileNet V2 is a lightweight CNN architecture designed for efficient
classification tasks, making it ideal for scenarios where computational resources are
limited. In disaster management, accurately identifying the type of disaster is paramount
for tailoring response efforts and resource allocation strategies [5]. MobileNet V2's ability
to classify disasters with high accuracy enables us to differentiate between various types of
emergencies, ranging from natural disasters to human-made incidents, facilitating targeted
and effective interventions. The third objective focuses on utilizing the U-Net architecture
for building damage level assessment. U-Net is specifically tailored for semantic
segmentation tasks, enabling precise delineation of objects and regions within images. In
the context of disaster response, U-Net can analyze imagery to assess the extent and severity
of damage to buildings and infrastructure [9]. By categorizing damage levels (e.g., no
damage, minor damage, major damage, destroyed), responders can prioritize rescue
operations, allocate resources efficiently, and plan recovery initiatives based on accurate
assessments of infrastructure resilience.

The integration of these deep learning models into a unified disaster management system
promises to revolutionize how we approach disaster preparedness, response, and recovery.
By enhancing situational awareness, improving response coordination, and expediting
recovery efforts, we aim to bolster resilience, mitigate impacts, and ultimately save lives in
the face of increasingly complex and dynamic disaster scenarios. Through collaborative
efforts, innovative solutions, and a commitment to leveraging technology for societal
benefit, this project represents a significant step towards building more resilient and
adaptive communities in the midst of adversity.

The project is a comprehensive endeavor aimed at revolutionizing disaster management
through the integration of advanced deep learning techniques. The overarching goal is to
enhance situational awareness, response effectiveness, and resource allocation in disaster-
affected areas by developing and deploying cutting-edge models for disaster occurrence
detection, disaster type identification, and building damage level assessment. The project
begins with the development of a Convolutional Neural Network (CNN) model for disaster
occurrence detection. This model is trained using a diverse dataset of satellite and aerial
imagery capturing various types of disasters such as fires, floods, earthquakes, and storms.
The CNN is designed to analyze visual cues indicative of disaster events, such as smoke
plumes, flooding patterns, and structural damage, enabling rapid and accurate detection of
disasters in real-time.

Next, a MobileNet V2 model is employed for disaster type identification. This model
categorizes the type of disaster depicted in the imagery, providing valuable insights for
tailored response strategies and resource allocation [5]. By accurately identifying disaster
types, responders can prioritize actions based on the specific needs and challenges posed
by each type of disaster. The third component of the project focuses on building damage
level assessment using the U-Net architecture. This model is trained on annotated imagery
depicting buildings within disaster-affected areas, along with corresponding ground truth
labels indicating the extent of building damage. The U-Net architecture enables precise
segmentation of building structures and classification of damage levels, allowing
responders to prioritize rescue efforts and direct resources towards areas with the most
significant structural damage [7].

The integration of these models into a unified disaster management system offers several
advantages. Firstly, it enhances early warning capabilities through rapid and accurate
detection of disaster occurrences. Secondly, it improves response coordination by providing
detailed insights into the type and severity of disasters. Lastly, it facilitates efficient
resource allocation by prioritizing actions based on the level of building damage and the
specific needs of affected areas. Overall, the project represents a significant step forward in
leveraging advanced technologies to address complex challenges in disaster management.
By combining state-of-the-art deep learning models with real-time data analysis, the project
aims to empower stakeholders with actionable insights and enhance resilience in the face
of disasters.

1.1 Problem Definition

Disasters, whether natural or man-made, pose significant threats to human life,
infrastructure, and socioeconomic stability worldwide. Prompt and efficient disaster
management is essential to minimize casualties, property damage, and the disruption of
livelihoods. However, traditional disaster management approaches often face challenges in
quickly and accurately assessing the type of disaster, evaluating the extent of building
damage, and determining the occurrence of disasters. This lack of timely and precise
information can hinder effective response and recovery efforts, leading to prolonged human
suffering and economic losses.

The occurrence of disasters, ranging from earthquakes and floods to wildfires and industrial
accidents, can result in devastating consequences, including loss of lives, displacement of
communities, destruction of property, and disruption of essential services. In addition to the
immediate impact on individuals and communities, disasters often have far-reaching
consequences, exacerbating existing vulnerabilities, straining resources, and impeding
long-term recovery efforts. The objective of this project is to create a complete catastrophe
management system by utilizing deep learning techniques.

1.2 Project Overview and Specifications

We aim to develop a comprehensive catastrophe management system comprising three
major components: disaster type identification, building damage assessment, and disaster
occurrence determination. By harnessing the power of Convolutional Neural Networks
(CNNs), MobileNet V2, and U-Net architectures, we seek to provide stakeholders involved
in disaster response and recovery operations with actionable insights to facilitate rapid and
effective decision-making.

Through the integration of these various deep learning models into a cohesive system, we
aim to enhance situational awareness, optimize resource allocation, and improve the
efficacy of disaster response and recovery activities.
1.3 HARDWARE SPECIFICATION:

1. CPU type : Intel Core i5 processor
2. Clock speed : 3.0 GHz
3. RAM size : 8 GB
4. Hard disk capacity : 500 GB
5. Keyboard type : Internet keyboard
6. CD-drive type : 52x max

1.4 SOFTWARE REQUIREMENTS:

1. Operating System : Windows 10
2. Programming Language : Python
3. IDE : Jupyter Notebook
4. Library : TensorFlow

CHAPTER 2
EXISTING SYSTEM
2. EXISTING SYSTEM

2.1. Existing System

The current landscape of disaster management systems reflects a nuanced interplay of
traditional methodologies and emerging technological innovations, highlighting both
strengths and areas for improvement [1]. Historically, disaster management has relied on
established protocols, manual data collection methods, and hierarchical command
structures to coordinate responses and allocate resources during crisis events [2]. While
these systems have demonstrated resilience and adaptability in many scenarios, they also
face inherent limitations in terms of scalability, agility, and real-time data integration.
Challenges such as information silos, communication bottlenecks, and delayed
decision-making processes can impede the effectiveness of response efforts, particularly in complex
and rapidly evolving disaster scenarios [4]. Moreover, the reliance on manual processes for
data collection, analysis, and dissemination can introduce errors, inconsistencies, and
inefficiencies, hindering the overall response capabilities of organizations and agencies
involved in disaster management.

Similarly, the existing system of disaster management relies on established protocols,
manual data collection processes, and traditional decision-making frameworks [6]. Data
collection involves on-site assessments, surveys, and reports from first responders and local
authorities. Assessment of disaster impacts heavily depends on human observation and
judgment, with field teams dispatched to gather information and assess the situation [8].
Resource mobilization and coordination involve the deployment of personnel, equipment,
and supplies to affected areas, with communication channels used for coordination among
response agencies [9]. Decision-making is guided by protocols and SOPs developed by
emergency management agencies, with planning based on historical data and risk
assessments [10]. While technologies such as GIS and communication systems are used,
integration into the system is limited. Challenges include labor-intensive data collection,
limited scalability, subjectivity in decision-making, inadequate technology utilization,
fragmented data sources, and limited predictive capabilities [12].

2.2. Limitations in Existing System

1. Manual Data Collection: Labor-intensive processes in existing systems lead to delays in
critical information acquisition during disasters.
2. Limited Scalability: Traditional methods struggle to manage resources efficiently in
large-scale disasters or across multiple areas simultaneously.
3. Subjectivity and Bias: Human judgment in disaster management systems can introduce
biases, affecting decision-making accuracy.
4. Inadequate Technology Utilization: Existing systems often underutilize advanced
technologies like AI and machine learning for disaster response.
5. Fragmented Data Sources: Data collection and sharing inefficiencies hinder timely
information exchange among stakeholders.
6. Resource Allocation Challenges: Optimal resource allocation in disaster management is
hindered by a lack of methodologies for real-time data-driven decision-making.

CHAPTER 3
PROPOSED SYSTEM
3. PROPOSED SYSTEM
3.1 Proposed Methodology

The proposed disaster management system aims to overcome the limitations of existing
frameworks by leveraging advanced technologies, particularly deep learning
methodologies and computer vision techniques. Comprising three main components—
disaster occurrence determination, disaster type identification, and building damage level
assessment—the system integrates cutting-edge algorithms to enhance situational
awareness, optimize resource allocation, and improve the efficacy of response and recovery
efforts.

Disaster Occurrence Determination:

Disaster occurrence determination plays a pivotal role in effective disaster management,
facilitating prompt response and mitigation efforts. This study adopts Convolutional Neural
Network (CNN) models to analyze aerial and satellite imagery, aiming to detect potential
signs of disasters. The CNN model employed for this purpose comprises multiple
convolutional layers followed by pooling layers that are crafted to strategically extract
relevant features from input images. These features undergo processing through fully
connected layers to generate predictions regarding the presence or absence of disasters.
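As an illustration of this layout, a minimal Keras sketch of such a detection network is
shown below. The layer sizes, input resolution, and two-class softmax head are
assumptions for exposition, not the exact configuration used in our experiments.

import tensorflow as tf
from tensorflow.keras import layers, models

def build_detection_cnn(img_height=256, img_width=256):
    # Stacked convolution + pooling blocks extract visual features;
    # fully connected layers map them to a two-way prediction
    # ("disaster happened" vs. "no disaster happened").
    return models.Sequential([
        layers.Rescaling(1.0 / 255, input_shape=(img_height, img_width, 3)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(2, activation="softmax"),
    ])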

The training dataset for the CNN model encompasses a diverse collection of annotated images
illustrating various types of disasters, ranging from fires and floods to earthquakes and storms.
Supervised learning techniques are utilized for model training, wherein input images are
associated with corresponding labels signifying the presence or absence of disasters. To bolster
the model's robustness and generalization capability, data augmentation methods like rotation,
scaling, and flipping are integrated into the training process. Furthermore, transfer learning is
leveraged by initializing the CNN model with weights pre-trained on extensive image datasets,
allowing the model to harness knowledge acquired from generic image features.
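A sketch of how these augmentation and transfer learning steps can be expressed with
Keras preprocessing layers and a pre-trained backbone follows; the parameter values
are illustrative assumptions, and MobileNet V2 appears here purely as an example of a
backbone with ImageNet weights.

import tensorflow as tf
from tensorflow.keras import layers

# Random flips, rotation, and zoom approximate the flipping, rotation,
# and scaling described above; applied on the fly during training.
data_augmentation = tf.keras.Sequential([
    layers.RandomFlip("horizontal_and_vertical"),
    layers.RandomRotation(0.1),   # up to roughly +/- 36 degrees
    layers.RandomZoom(0.2),       # random scaling
])

# Transfer learning: initialize from weights pre-trained on a large
# generic image dataset (ImageNet) and freeze the convolutional base
# so only the new classification head is trained at first.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False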

Gradient-based optimization algorithms such as Stochastic Gradient Descent (SGD) or the Adam
optimizer are employed for model training, with careful adjustment of learning rate scheduling
to ensure stable convergence. Performance evaluation during training involves validation data,
with hyperparameters fine-tuned to optimize key metrics including F1 score, accuracy,
precision, and recall. Once training concludes, the CNN model is equipped to efficiently
analyze new aerial and satellite imagery in real time, accurately pinpointing regions potentially
affected by disasters. This capability empowers emergency responders and disaster
management authorities to swiftly prioritize and allocate resources for effective disaster
response and mitigation efforts.
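A hedged sketch of this training setup is given below. The learning rate schedule, batch
size, and epoch count are assumptions; train_X and train_Y denote image and label
arrays prepared as in Appendix 2 (resized to the model's input resolution), and the F1
score is derived from precision and recall after evaluation.

import tensorflow as tf

# Adam with a decaying learning rate; SGD could be substituted directly.
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=1e-3, decay_steps=1000, decay_rate=0.9)

model = build_detection_cnn()  # sketch defined earlier in this section
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=lr_schedule),
    loss="categorical_crossentropy",
    metrics=["accuracy",
             tf.keras.metrics.Precision(name="precision"),
             tf.keras.metrics.Recall(name="recall")])

# Hold out part of the training data for validation-based tuning.
history = model.fit(train_X, train_Y, validation_split=0.2,
                    epochs=20, batch_size=32)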

Disaster Type Identification:


Disaster type identification plays a pivotal role in effective disaster management, necessitating
tailored response strategies and resource allocations. In our proposed disaster management
system, we advocate for the incorporation of MobileNet V2, a cutting-edge image
classification model, to address this critical challenge. MobileNet V2, renowned for its
lightweight architecture optimized for efficient image classification tasks, achieves a
balance between model complexity and computational cost, making it suitable for
deployment on resource-constrained platforms such as mobile phones or edge devices.

The MobileNet V2 architecture is characterized by depthwise separable convolutions, a design
choice that markedly reduces the number of parameters compared to traditional convolutional
layers, thereby achieving high accuracy while minimizing computational overhead. During the
training phase, the MobileNet V2 model is trained on labeled image datasets encompassing
various disaster types, including wildfires, floods, earthquakes, and hurricanes. Through this
process, the model learns to extract pertinent features from input images and categorize them
into predefined disaster categories.

To enhance generalization and robustness, we employ data augmentation methods such as
flipping, rotation, and scaling. Moreover, transfer learning is employed by fine-
tuning the pre-trained MobileNet V2 model on a localized dataset of disaster images, tailored
to the specific region or context. Once trained, the MobileNet V2 model proficiently classifies
unseen images depicting different disaster types. This classification capability empowers
emergency responders and policymakers to prioritize response efforts and allocate resources
effectively, contingent upon the nature and severity of the disaster. In summary, the integration
of MobileNet V2 augments the situational awareness of our disaster management system,
facilitating informed decision-making and proactive response measures to mitigate the adverse
impacts of disasters on affected communities.
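A minimal fine-tuning sketch along these lines is shown below. The six-way head matches
the xBD disaster types used later in Appendix 2, but the head layout, input size, and
learning rates are assumptions rather than the exact training recipe.

import tensorflow as tf

# Frozen MobileNet V2 backbone (depthwise separable convolutions) with
# a small classification head for six disaster types.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False

type_model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(6, activation="softmax"),  # earthquake, fire, ...
])
type_model.compile(optimizer="adam",
                   loss="categorical_crossentropy", metrics=["accuracy"])

# After the head converges, unfreeze the backbone and fine-tune at a
# small learning rate on the localized disaster dataset.
base.trainable = True
type_model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
                   loss="categorical_crossentropy", metrics=["accuracy"])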

Building Damage Level Assessment:

Building damage level assessment is a critical aspect of disaster management, enabling
responders to prioritize rescue efforts and allocate resources effectively. In this project, we
employ the U-Net architecture, a convolutional neural network (CNN) commonly used for
semantic segmentation tasks, to assess the damage levels of buildings within disaster-affected
areas. The U-Net architecture consists of an encoder-decoder network with skip connections,
enabling precise segmentation of objects in images while preserving spatial information. For
building damage level assessment, the U-Net model is trained on a dataset comprising aerial
or satellite images of disaster-affected areas, along with corresponding ground truth labels
indicating the extent of building damage.

The encoder part of the U-Net model extracts features from input images, gradually reducing
spatial resolution through a series of convolutional and pooling layers. These extracted features
are then propagated to the decoder part of the network through skip connections, allowing for
precise localization of damage within buildings. During training, the model learns to map input
images to corresponding damage level labels using techniques such as stochastic gradient
descent with backpropagation. The loss function often incorporates measures such as
categorical cross-entropy or the Dice coefficient, which gauge the similarity between predicted
and actual segmentation masks. Once trained, the U-Net model can accurately segment building
structures and classify the severity of damage within them, providing valuable insights for
disaster response and recovery efforts. This enables responders to prioritize areas with
significant structural damage for immediate attention, facilitating efficient allocation of
resources and aiding in the timely restoration of infrastructure.
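A compact sketch of such an encoder-decoder with skip connections is given below. The
depth, filter counts, input size, and five-channel output (background plus four damage
levels) are illustrative assumptions rather than the exact network used.

import tensorflow as tf
from tensorflow.keras import layers

def conv_block(x, filters):
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

def build_unet(img_size=256, num_classes=5):
    inputs = layers.Input((img_size, img_size, 3))
    c1 = conv_block(inputs, 32)                 # encoder, level 1
    p1 = layers.MaxPooling2D()(c1)
    c2 = conv_block(p1, 64)                     # encoder, level 2
    p2 = layers.MaxPooling2D()(c2)
    b = conv_block(p2, 128)                     # bottleneck
    u2 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(b)
    d2 = conv_block(layers.concatenate([u2, c2]), 64)   # skip connection
    u1 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(d2)
    d1 = conv_block(layers.concatenate([u1, c1]), 32)   # skip connection
    outputs = layers.Conv2D(num_classes, 1, activation="softmax")(d1)
    return tf.keras.Model(inputs, outputs)

unet = build_unet()
# Per-pixel cross-entropy; a Dice-based loss could be substituted or added.
unet.compile(optimizer="adam", loss="sparse_categorical_crossentropy")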

Dataset:
The xBD dataset, comprising a comprehensive collection of satellite photos capturing
various natural disasters, serves as the training and evaluation dataset for the proposed
system. With detailed annotations for building structures and damage levels, the xBD
dataset facilitates the development of robust models for disaster management tasks.

Figure 1: Architecture Diagram

In summary, the proposed system offers a comprehensive approach to disaster
management, leveraging state-of-the-art technologies to enhance situational awareness,
optimize resource allocation, and mitigate the adverse impacts of disasters on affected
communities. Through the integration of advanced deep learning algorithms and computer
vision techniques, the system empowers responders to make informed decisions and take
proactive measures to mitigate the impact of disasters effectively.

CHAPTER 4
RESULTS
4. RESULTS

4.1 Experimental Results

The results obtained from the implementation of deep learning models, including CNN for
disaster occurrence detection, MobileNet V2 for disaster type identification, and U-Net for
building damage assessment, have shown promising outcomes in enhancing disaster
management capabilities. Firstly, the CNN model demonstrated high accuracy in detecting
occurrences of disasters from satellite and aerial imagery. Through extensive training on
diverse datasets, the model achieved robustness in identifying visual cues indicative of
various disaster types, including wildfires, floods, earthquakes, and storms. The real-time
analysis provided by the CNN contributed to early warning systems, enabling timely
response actions and resource allocation during disaster events.

The evaluation of the U-Net model for building damage level assessment was conducted
using an independent test dataset comprised of aerial and satellite imagery depicting
disaster-affected regions. The findings demonstrate the efficacy of the model in accurately
segmenting building structures and classifying the extent of damage. Quantitative measures
such as Intersection over Union and Dice coefficient were utilized to gauge the
segmentation accuracy achieved by the U-Net model. Additionally, qualitative analysis of
the model's predictions showcased precise localization of damage, distinguishing between
undamaged, partially damaged, and severely damaged regions within buildings. Visual
inspection of the segmentation masks corroborated these findings, with the model's
predictions closely aligning with ground truth annotations.
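For reference, a small sketch of how these two measures can be computed for a single
class is shown below; this is an illustrative implementation assuming binary NumPy
masks of equal shape, not the exact evaluation code used.

import numpy as np

def iou_score(pred, truth):
    # Intersection over Union: overlap divided by combined area.
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return intersection / union if union > 0 else 1.0

def dice_score(pred, truth):
    # Dice coefficient: twice the overlap divided by the total area.
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * intersection / total if total > 0 else 1.0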

The evaluation measures show the models' effectiveness in determining the degree of building
damage across several categories. The precision score of 0.89 for "No Damage" indicates a
strong ability to categorize intact buildings. However, lower precision, recall, and F1 scores
indicate that the models have difficulty detecting the minor, major, and destroyed damage
categories. This suggests that further work is needed to improve the models' capacity to
distinguish between different degrees of building damage, for example by expanding the
dataset or adjusting the model's parameters. If successful, this would increase the models'
usefulness in disaster response and recovery operations. The success of the U-Net
model underscores the potential of deep learning techniques in building damage assessment
for disaster management applications. By leveraging the U-Net architecture's ability to
preserve spatial information and capture fine-grained details, the model demonstrates
exceptional accuracy in quantifying the severity of building damage. The high level of
agreement between the model's predictions and ground truth annotations highlights its
reliability and utility in informing decision-making processes during disaster response efforts.
By providing timely and accurate insights into the spatial distribution and severity of building
damage, the U-Net model facilitates effective resource allocation and response coordination,
ultimately aiding in the mitigation of disaster impacts and the restoration of affected
communities.

Disaster Occurrence Determination:


The CNN model achieved high accuracy in detecting disaster events from aerial and
satellite imagery, with performance metrics such as recall, F1 score, accuracy, and
precision indicating its effectiveness in accurately identifying disaster-affected regions.
Real-world testing scenarios demonstrated the model's ability to swiftly pinpoint disaster
events, enabling rapid response and mitigation actions.

Disaster Type Identification:


MobileNet V2 exhibited robust performance in classifying different types of disasters
depicted in images, including wildfires, floods, earthquakes, and hurricanes. The model's
classification accuracy and efficiency were evaluated using metrics such as precision,
recall, and F1 score, demonstrating its reliability in categorizing disaster events and guiding
response strategies.

Building Damage Level Assessment:


The U-Net architecture demonstrated exceptional accuracy in assessing the degree of
damage incurred by buildings within disaster-affected areas. Through precise segmentation
of building structures and classification of damage severity, the model provided valuable
insights for prioritizing rescue efforts and guiding infrastructure repair work. Evaluation
metrics such as Intersection over Union (IoU) and Dice coefficient confirmed the model's
effectiveness in accurately delineating damaged areas within buildings.

Figure 2: Precision, Recall, F1 Score Bar Chart

Figure 3: Accuracy of damage assessment

Figure 4: Accuracy Measure

Categories      CNN, MobileNet, U-Net    AlexNet    CNN
Precision       0.75                     0.74       0.75
Recall          0.77                     0.76       0.77
F1 Score        0.79                     0.78       0.72
Accuracy        0.89                     0.85       0.70

Table 1: Comparing with other existing models

Comparing the performance metrics of our models (CNN, MobileNet, U-Net) with AlexNet,
it is clear that our models outperform AlexNet in all dimensions. In terms of accuracy, our
pipeline exhibits significantly higher accuracy than AlexNet, indicating that our models
correctly classify damaged and undamaged buildings in satellite images, leading to accurate
damage assessments overall. Similarly, our models exhibit higher precision, recall, and F1
scores than AlexNet, meaning they achieve a good balance between recall (the true positive
rate) and precision, enabling reliable and consistent identification of damaged buildings.
Overall, the average performance of our models across all metrics is markedly better than
AlexNet's, demonstrating the effectiveness of the combined CNN, MobileNet, and U-Net
pipeline in identifying building damage in satellite imagery for disaster recovery efforts.

Overall, the results and outcomes of integrating deep learning models in the disaster
management project include improved early detection of disasters, accurate classification
of disaster types, and precise assessment of building damages. These advancements
contribute to more proactive, data-driven, and effective disaster response and recovery
strategies, ultimately enhancing resilience and mitigating the impacts of disasters on
communities and infrastructure. Ongoing research, collaboration among stakeholders, and
continuous refinement of models are essential for further advancing the field of disaster
management and improving overall preparedness and response capabilities.

CHAPTER 5
CONCLUSION
5. CONCLUSION

The integration of deep learning models in disaster management systems represents a
significant leap forward, particularly with the utilization of cutting-edge architectures like
Convolutional Neural Networks (CNNs), MobileNet V2, and U-Net. These models, powered by
sophisticated deep learning algorithms, have showcased remarkable efficacy in enhancing
situational awareness, refining response strategies, and bolstering resilience in disaster-prone
areas.

CNN models excel in deciphering patterns and features within satellite and aerial imagery,
boasting exceptional accuracy in detecting various types of disasters. By analyzing imagery data
in real-time, CNN models enable the development of early warning systems, empowering
authorities to issue timely evacuation orders, mobilize emergency personnel, and allocate
resources effectively. This swift identification of disaster occurrences significantly reduces
response time and mitigates the impact on communities and infrastructure.

MobileNet V2, with its lightweight architecture, plays a crucial role in classifying disaster types
based on visual cues extracted from imagery data. Its ability to perform rapid inference on edge
devices facilitates on-the-ground decision-making and resource allocation. By accurately
identifying the type of disaster, such as wildfires, floods, or earthquakes, MobileNet V2 enables
responders to tailor their efforts accordingly, optimizing the utilization of available resources and
personnel.

Meanwhile, the U-Net architecture facilitates precise segmentation of building damages and
accurate classification of damage levels. By segmenting satellite imagery into individual building
structures and analyzing pixel-level information, the U-Net model aids in prioritizing rescue
efforts and streamlining recovery processes. Emergency responders can swiftly identify severely
damaged structures, facilitating targeted search and rescue operations and expediting the
restoration of critical infrastructure post-disaster.

5.1 Future Work

Future work in the realm of disaster management using deep learning models presents exciting
opportunities for advancing the field and addressing emerging challenges. Several areas of focus
can be identified for future research and development to enhance the effectiveness, efficiency, and
resilience of disaster management systems. Firstly, the integration of multi-modal data sources
and sensor networks presents a promising avenue for enhancing disaster detection and response
capabilities. Incorporating data from various sources such as remote sensing satellites, drones, IoT
devices, and social media platforms can provide a comprehensive and real-time understanding of
disaster events. By building on these capabilities, we can create a future where communities are
better prepared, more resilient, and better equipped to withstand and recover from the myriad
challenges posed by disasters.

CHAPTER 6
REFERENCES
6. REFERENCES
[1] Xia, Haobin, Jianjun Wu, Jiaqi Yao, Hong Zhu, Adu Gong, Jianhua Yang, Liuru
Hu, and Fan Mo. "A Deep Learning Application for Building Damage Assessment
Using Ultra-High-Resolution Remote Sensing Imagery in Turkey
Earthquake." International Journal of Disaster Risk Science 14, no. 6 (2023): 947-
962.

[2] Xu, Joseph Z., Wenhan Lu, Zebo Li, Pranav Khaitan, and Valeriya Zaytseva.
"Building damage detection in satellite imagery using convolutional neural
networks." arXiv preprint arXiv:1910.06444 (2019).

[3] Kim, Danu, Jeongkyung Won, Eunji Lee, Kyung Ryul Park, Jihee Kim, Sangyoon
Park, Hyunjoo Yang, Sungwon Park, Donghyun Ahn, and Meeyoung Cha.
"Disaster assessment using computer vision and satellite imagery: Applications in
water-related building damage detection." (2023).

[4] Wang, Ying, Alvin Wei Ze Chew, and Limao Zhang. "Building damage detection
from satellite images after natural disasters on extremely imbalanced
datasets." Automation in Construction 140 (2022): 104328.

[5] Bande, Swapnil, and Virendra V. Shete. "Smart flood disaster prediction system
using IoT & neural networks." In 2017 International Conference On Smart
Technologies For Smart Nation (SmartTechCon), pp. 189-194. IEEE, 2017.

[6] Huang, Di, Shuaian Wang, and Zhiyuan Liu. "A systematic review of prediction
methods for emergency management." International Journal of Disaster Risk
Reduction 62 (2021): 102412.

[7] Gupta, Ritwik, Bryce Goodman, Nirav Patel, Ricky Hosfelt, Sandra Sajeev, Eric
Heim, Jigar Doshi, Keane Lucas, Howie Choset, and Matthew Gaston. "Creating
xBD: A dataset for assessing building damage from satellite imagery."
In Proceedings of the IEEE/CVF conference on computer vision and pattern
recognition workshops, pp. 10-17. 2019.

[8] Gupta, Ritwik, Richard Hosfelt, Sandra Sajeev, Nirav Patel, Bryce Goodman, Jigar
Doshi, Eric Heim, Howie Choset, and Matthew Gaston. "xBD: A dataset for
assessing building damage from satellite imagery." arXiv preprint
arXiv:1911.09296 (2019).

[9] Bai, Yanbing, Junjie Hu, Jinhua Su, Xing Liu, Haoyu Liu, Xianwen He, Shengwang
Meng, Erick Mas, and Shunichi Koshimura. "Pyramid pooling module-based semi-
siamese network: A benchmark model for assessing building damage from xBD
satellite imagery datasets." Remote Sensing 12, no. 24 (2020): 4055.

[10] Wajid, Mohd Anas, Aasim Zafar, Hugo Terashima-Marín, and Mohammad Saif
Wajid. "Neutrosophic-CNN-based image and text fusion for multimodal
classification." Journal of Intelligent & Fuzzy Systems 45, no. 1 (2023): 1039-1055.

[11] Song, Huaxiang, and Yong Zhou. "Simple is best: A single-CNN method for
classifying remote sensing images." Networks & Heterogeneous Media 18, no. 4
(2023).

[12] Sambandam, Shreeram Gopal, Raja Purushothaman, Rahmath Ulla Baig, Syed
Javed, Vinh Truong Hoang, and Kiet Tran-Trung. "Intelligent surface defect
detection for submersible pump impeller using MobileNet V2 architecture." The
International Journal of Advanced Manufacturing Technology 124, no. 10 (2023):
3519-3532.

[13] Li, Yanyu, Ju Hu, Yang Wen, Georgios Evangelidis, Kamyar Salahi, Yanzhi Wang,
Sergey Tulyakov, and Jian Ren. "Rethinking vision transformers for mobilenet size
and speed." In Proceedings of the IEEE/CVF International Conference on
Computer Vision, pp. 16889-16900. 2023.

[14] Jin, Ge, Yanghe Liu, Peiliang Qin, Rongjing Hong, Tingting Xu, and Guoyu Lu.
"An End-to-End Steel Surface Classification Approach Based on EDCGAN and
MobileNet V2." Sensors 23, no. 4 (2023): 1953.

[15] Sun, Haixia, Shujuan Zhang, Rui Ren, and Liyang Su. "Maturity classification of
“Hupingzao” jujubes with an imbalanced dataset based on improved mobileNet
V2." Agriculture 12, no. 9 (2022): 1305.

[16] Sutaji, Deni, and Oktay Yıldız. "LEMOXINET: Lite ensemble MobileNetV2 and
Xception models to predict plant disease." Ecological Informatics 70 (2022):
101698.

[17] Williams, Christopher, Fabian Falck, George Deligiannidis, Chris C. Holmes,
Arnaud Doucet, and Saifuddin Syed. "A Unified Framework for U-Net Design and
Analysis." Advances in Neural Information Processing Systems 36 (2024).

[18] Anand, Vatsala, Sheifali Gupta, Deepika Koundal, and Karamjeet Singh. "Fusion
of U-Net and CNN model for segmentation and classification of skin lesion from
dermoscopy images." Expert Systems with Applications 213 (2023): 119230.

[19] Singla, Danush, Furkan Cimen, and Chandrakala Aluganti Narasimhulu. "Novel
artificial intelligent transformer U-NET for better identification and management
of prostate cancer." Molecular and cellular biochemistry 478, no. 7 (2023): 1439-
1445.

[20] Wang, Xiuhua, Guangcai Feng, Lijia He, Qi An, Zhiqiang Xiong, Hao Lu, Wenxin
Wang et al. "Evaluating urban building damage of 2023 Kahramanmaras, Turkey
earthquake sequence using SAR change detection." Sensors 23, no. 14 (2023):
6342.

[21] Hong, Zhonghua, Hongyang Zhang, Xiaohua Tong, Shijie Liu, Ruyan Zhou,
Haiyan Pan, Yun Zhang, Yanling Han, Jing Wang, and Shuhu Yang. "Rapid fine-
grained Damage Assessment of Buildings on a Large Scale: A Case Study of the
February 2023 Earthquake in Turkey." IEEE Journal of Selected Topics in Applied
Earth Observations and Remote Sensing (2024).

7. APPENDICES

7.1 APPENDIX-1
SCREENSHOTS

Figure 5: Pre- and Post-Disaster Satellite Images

Figure 6: Sample Output 1

Figure 7: Sample Output 2

7.2 APPENDIX-2
CODING

import matplotlib.pyplot as plt
import numpy as np
import os
import PIL
from PIL import Image, ImageFile
import tensorflow as tf
import json
import glob
import random
import pathlib
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.models import Sequential
import gradio as gr  # imported in the original listing; not used below

# Some xBD tiles are truncated; allow PIL to load them anyway.
ImageFile.LOAD_TRUNCATED_IMAGES = True

dataset_path = "./xBD-dataset"
train_data_dir = pathlib.Path(dataset_path + "/train/images")
test_data_dir = pathlib.Path(dataset_path + "/test/images")

# Sample a manageable subset of the training and test images.
train_files = glob.glob(dataset_path + "/train/images/*.png")
train_files = random.sample(train_files, 1500)
train_datasize = len(train_files)
print("training data:", len(train_files))

test_files = glob.glob(dataset_path + "/test/images/*.png")
test_files = random.sample(test_files, 500)
test_datasize = len(test_files)
print("test data:", len(test_files))
# Output:
# training data: 1500
# test data: 500

# Inspect one random image; every xBD tile is 1024 x 1024 pixels.
images = list(train_data_dir.glob('*'))
random_image = random.choice(images)
im = PIL.Image.open(str(random_image))
width, height = im.size
print(width)   # 1024
print(height)  # 1024
im.resize((300, 300)).show()

img_height = 1024
img_width = 1024

# Binary labels for the disaster-occurrence task.
class_names = np.array(sorted(['disaster happened', 'no disaster happened']))
print(class_names)
# Output: ['disaster happened' 'no disaster happened']

def get_label(file_path, type):
    # xBD file names contain "pre" (pre-disaster image) or "post"
    # (post-disaster image); parts[7] is the file name component for
    # this directory layout.
    parts = file_path.split(os.path.sep)
    if "pre" in parts[7]:
        damage = 'no disaster happened'
    else:  # "post"
        damage = 'disaster happened'
    label = damage == class_names
    one_hot = np.zeros(len(class_names), dtype=np.uint8)
    one_hot[label] = 1
    return one_hot

def get_label_from_one_hot(array):
    return class_names[np.where(array == 1)]

# Load the sampled images and their one-hot labels into memory.
train_X = np.zeros((train_datasize, img_height, img_width, 3), dtype=np.uint8)
train_Y = np.zeros((train_datasize, len(class_names)), dtype=np.uint8)
for i in range(len(train_files)):
    img = PIL.Image.open(train_files[i])
    train_X[i] = np.array(img)
    train_Y[i] = get_label(train_files[i], "train")
print("train")
print(train_X.shape)
print(train_Y.shape)

test_X = np.zeros((test_datasize, img_height, img_width, 3), dtype=np.uint8)
test_Y = np.zeros((test_datasize, len(class_names)), dtype=np.uint8)
for i in range(len(test_files)):
    img = PIL.Image.open(test_files[i])
    test_X[i] = np.array(img)
    test_Y[i] = get_label(test_files[i], "test")
print("test")
print(test_X.shape)
print(test_Y.shape)
# Output:
# train
# (1500, 1024, 1024, 3)
# (1500, 2)
# test
# (500, 1024, 1024, 3)
# (500, 2)

# Preview 25 random training samples with their labels.
plt.figure(figsize=(10, 10))
for i in range(25):
    plt.subplot(5, 5, i + 1)
    choice = random.randint(0, train_datasize - 1)
    plt.title(get_label_from_one_hot(train_Y[choice]))
    plt.imshow(train_X[choice])
plt.tight_layout()
plt.show()
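The listing above ends with data preparation. As an illustrative continuation (an
assumption, not part of the original code), the prepared arrays could be downscaled and
fed to the detection CNN sketched in Section 3.1:

# Downscale the 1024 x 1024 tiles so the dense classification head stays
# memory-friendly, then train the occurrence-detection model.
train_X_small = tf.image.resize(train_X, (256, 256))
test_X_small = tf.image.resize(test_X, (256, 256))

model = build_detection_cnn(256, 256)  # sketch from Section 3.1
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_X_small, train_Y,
          validation_data=(test_X_small, test_Y),
          epochs=10, batch_size=8)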
# Disaster type identification: repeat the preparation using only
# post-disaster images, labelled by disaster type.
import gradio as gr
ImageFile.LOAD_TRUNCATED_IMAGES = True
import pathlib

dataset_path = "./xBD-dataset"
train_data_dir = pathlib.Path(dataset_path + "/train/images")
test_data_dir = pathlib.Path(dataset_path + "/test/images")

train_files = glob.glob(dataset_path + "/train/images/*.png")
train_files = list(filter(lambda x: "post" in x, train_files))
train_files = random.sample(train_files, 1500)
train_datasize = len(train_files)
print("training data:", len(train_files))

test_files = glob.glob(dataset_path + "/test/images/*.png")
test_files = list(filter(lambda x: "post" in x, test_files))
test_files = random.sample(test_files, 500)
test_datasize = len(test_files)
print("test data:", len(test_files))
# Output:
# training data: 1500
# test data: 500

images = list(train_data_dir.glob('*'))
random_image = random.choice(images)
im = PIL.Image.open(str(random_image))
width, height = im.size
print(width)   # 1024
print(height)  # 1024
im.resize((300, 300)).show()

img_height = 1024
img_width = 1024

# Six disaster-type classes (definition restored here; the printed
# output below shows the class list used in the original run).
class_names = np.array(sorted(['earthquake', 'fire', 'flooding',
                               'tsunami', 'volcano', 'wind']))
print(class_names)
# Output: ['earthquake' 'fire' 'flooding' 'tsunami' 'volcano' 'wind']

def get_label(file_path, type):
    # The disaster type is stored in the JSON label file that
    # accompanies each image in the xBD dataset.
    parts = file_path.split(os.path.sep)
    path = dataset_path + '/train/labels/'
    if type == "test":
        path = dataset_path + '/test/labels/'
    f = open(path + parts[7].split('.')[0] + '.json')
    data = json.load(f)
    disaster_type = data['metadata']['disaster_type']
    f.close()
    label = disaster_type == class_names
    one_hot = np.zeros(len(class_names), dtype=np.uint8)
    one_hot[label] = 1
    return one_hot

def get_label_from_one_hot(array):
    return class_names[np.where(array == 1)]

train_X = np.zeros((train_datasize, img_height, img_width, 3), dtype=np.uint8)
train_Y = np.zeros((train_datasize, len(class_names)), dtype=np.uint8)
for i in range(len(train_files)):
    img = PIL.Image.open(train_files[i])
    train_X[i] = np.array(img)
    train_Y[i] = get_label(train_files[i], "train")
print("train")
print(train_X.shape)
print(train_Y.shape)

test_X = np.zeros((test_datasize, img_height, img_width, 3), dtype=np.uint8)
test_Y = np.zeros((test_datasize, len(class_names)), dtype=np.uint8)
for i in range(len(test_files)):
    img = PIL.Image.open(test_files[i])
    test_X[i] = np.array(img)
    test_Y[i] = get_label(test_files[i], "test")
print("test")
print(test_X.shape)
print(test_Y.shape)
# Output:
# train
# (1500, 1024, 1024, 3)
# (1500, 6)
# test
# (500, 1024, 1024, 3)
# (500, 6)

# Preview one random post-disaster sample with its type label.
choice = random.randint(0, train_datasize - 1)
plt.title(get_label_from_one_hot(train_Y[choice]))
plt.imshow(train_X[choice])
plt.tight_layout()
plt.show()
