
DISASTER PREDICTION:

I. Background:
- Predicting Natural Disasters
Disaster prediction for natural events isn't an exact science. Despite advances in technology, no one can tell with complete accuracy when a volcano will erupt or how powerful a hurricane will be at landfall.

However, observation and data are powerful tools, so prediction is getting better and faster.

Throughout history, people have known about the potential for natural disasters. Storytelling
cultures handed down the “data” from previous events to keep communities aware. For
example, Pacific Northwest oral traditions tell of a great earthquake and resulting tsunami to
warn people about what to do in future events. Recently, Simeulue islanders escaped harm during the tsunami following the 2004 Indian Ocean earthquake because of similar stories handed down through the generations.
Today, instrumentation and digital data supplement visual observation and oral histories.
Analysts use many sources to gather data.

a. Earthquakes and Tsunamis

Seismic instruments measure shaking in the Earth's crust as geological fault lines slip. In the deep ocean, sensors monitor volume displacements and seabed deformation. Increasing activity can signal an impending earthquake.

Along one of the world's most famous fault lines, the San Andreas Fault in California, the U.S. Geological Survey (USGS) collects data from tilt meters and creep meters that precisely measure earth movement, while strain meters and pressure sensors embedded in the rock warn of pressure building up before a slip.
The National Oceanic and Atmospheric Administration (NOAA) collects data from a chain of early-warning oceanic devices, including wave-height sensors and Deep-ocean Assessment and Reporting of Tsunamis (DART) buoys. These seabed sensors monitor seismic events beneath the waves, then transmit warnings back to land via surface buoys so that coastal tsunami alerts can be issued.
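
The physical principle behind these seabed pressure sensors is simple hydrostatics: a passing tsunami raises the height of the water column above the sensor, and the resulting pressure change reveals the wave. A minimal sketch of that relation (my own illustration; the constants are typical values, not NOAA's):

```python
# Hydrostatic relation behind deep-ocean pressure sensors: a change in
# water-column height dh produces a pressure change dp = rho * g * dh.
RHO_SEAWATER = 1025.0  # density of seawater, kg/m^3 (typical value)
G = 9.81               # gravitational acceleration, m/s^2

def column_height_change(dp_pascals: float) -> float:
    """Water-column height change (m) implied by a seabed pressure change (Pa)."""
    return dp_pascals / (RHO_SEAWATER * G)

# A tsunami only ~10 cm high in the deep ocean is still detectable:
print(column_height_change(1005.0))  # ~0.1 m
```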
b. Storm Watching

Meteorologists recognize patterns in weather data collected through remote sensing and on-the-
ground observation that show hurricanes or tornadoes developing. Satellites give weather
scientists a “big picture” overview of weather development, which is useful in storm tracking.

Tornadoes, however, are trickier to predict since they are much faster-moving events; meteorologists use Doppler radar to measure moving objects such as hail or rain within developing supercell clouds. Improvements in radar have increased advance warning times from around three minutes in the late 1980s to 14 minutes by 2012.
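
The core measurement is the Doppler shift: precipitation moving toward or away from the radar shifts the frequency of the returned pulse, and the shift maps directly to radial velocity. A minimal sketch of that relation (the wavelength is an assumed typical S-band value, not from the source):

```python
# Doppler relation used by weather radar: radial velocity v = f_shift * wavelength / 2
# (the factor of 2 accounts for the two-way path of the reflected pulse).
WAVELENGTH_M = 0.107  # assumed ~10.7 cm S-band wavelength, typical of WSR-88D radars

def radial_velocity(doppler_shift_hz: float) -> float:
    """Speed (m/s) of scatterers such as rain or hail along the radar beam."""
    return doppler_shift_hz * WAVELENGTH_M / 2.0

print(radial_velocity(500.0))  # a 500 Hz shift -> ~26.8 m/s radial motion
```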

Evolution and Future


Instrument upgrades and new technology are helping to improve natural-disaster prediction.

- NOAA is deploying new, robust, and easily launched DART buoys for more widespread tsunami monitoring.
- Meteorologists are exploring phased-array radar for storm watching; its multiple beams reduce the scanning time needed to gather data.
- NASA geostationary and orbital satellites gather information on storms and other weather systems. They are also useful in conjunction with GPS and topographical scanning to show horizontal and vertical landmass shifts due to fault line activity.
- Back on Earth, tornado scientists gather data within the storm itself by planting protected radar arrays in tornado country.
- Volcanologists listen to volcanoes to predict eruptions. Infrasound waves at frequencies lower than what human ears can detect show when a volcano is "grumbling" from magma expansion and lava-cone collapse.
- Satellite imagery and seismic data play a role in future landslide prediction; identifying and exploring events not usually detected in remote areas helps scientists predict risk in similar geographies.
Data and More Data
Prediction science is only as good as the data collected, but it is often difficult to gather it safely;
for this reason, remote sensing is a valuable tool.

Unmanned aircraft can capture data right at the heart of a disaster. By flying such aircraft into developing storms, scientists gain valuable information on hurricane formation, for example, that predicts behavior and damage at landfall.

RQ-4

Developed for remote eye-in-the-sky missions, the Northrop Grumman unmanned aerial
reconnaissance vehicle RQ-4 Global Hawk is currently involved in storm-sensing missions.
Flying the aircraft from the U.S. East Coast as part of a NASA project gave scientists more flight time over Atlantic storms. Since the Global Hawk operates over long ranges without refueling, it reaches remote weather systems faster and gathers continuous data over longer periods than manned flights can. This type of data collection is highly valuable.
Unmanned vehicles such as the Global Hawk use different sensors to measure the temperature, pressure, relative humidity, amount of water vapor and liquid water, and wind speed and direction in a hurricane. They also collect valuable data after a disaster, since they are often the only way to reach remote or isolated areas.

“This information is transmitted near-real time from the aircraft back to the Global Hawk
Operations Center and then disseminated to the NOAA, National Hurricane Center (NHC) and
the scientific community,” says Mick Jaggers, Northrop Grumman’s vice president and program
manager for the Global Hawk program.

Both the Global Hawk and the MQ-4C Triton have engaged in disaster surveillance. Global
Hawk took part in Operation Tomodachi to relay vital visuals on earthquake and tsunami damage
to towns and the Fukushima nuclear plant in 2011. Information like this is also valuable for
predicting future disasters; building damage recorded after the Haitian earthquake in 2010 could
help efforts to rebuild to a safer code.

Cathedral in Haiti: Before and After the 2010 Earthquake (British Geological Survey)

Jaggers offers further detail on the program’s success in acquiring crucial information: “During
the 2016 campaign, the Global Hawk flew nine flights over hurricanes Gaston, Hermine,
Matthew and Nicole and Tropical Storm Karl,” says Jaggers. “Based on Global Hawk data from
then-Tropical Storm Gaston, the NHC upgraded the storm to hurricane status, the first time that
data from an unmanned aircraft had been used to change the category of a storm.

“NOAA scientists determined that the Global Hawk data provided statistically significant
improvements in their hurricane prediction models; in addition, Global Hawk data over the
Atlantic Ocean were shown to improve weather predictions for the Pacific Ocean,” he adds. “The
long endurance, long range and high altitude of the aircraft, coupled with the team’s flexibility,
make the Global Hawk an important part of future airborne weather science missions.”
Researchers also seek better ways to analyze and present data for faster and more accurate predictions. Combined systems like Northrop Grumman's Environmental Decision Support System (EDSS) aggregate data sets, then present them in an easily accessible manner to answer key questions: where? when? why? what?
Improving analysis also boosts predictive power; USGS researchers are examining fractal mathematics for improved data analysis. Compared with traditional statistical methods, fractal techniques extracted more information from past hurricane events, and they promise more precise predictions for future ones.
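
The USGS work is not described in detail here, but one widely used fractal measure for time series is the Hurst exponent, which quantifies long-range persistence. A rough sketch via rescaled-range (R/S) analysis, offered only as an illustration of the general technique:

```python
# Illustrative rescaled-range (R/S) estimate of the Hurst exponent H of a
# 1D series; H > 0.5 suggests persistence (trends tend to continue).
import numpy as np

def hurst_rs(series, window_sizes=(8, 16, 32, 64, 128)):
    series = np.asarray(series, dtype=float)
    log_n, log_rs = [], []
    for n in (w for w in window_sizes if w <= len(series)):
        rs_vals = []
        for start in range(0, len(series) - n + 1, n):
            window = series[start:start + n]
            deviations = np.cumsum(window - window.mean())
            r = deviations.max() - deviations.min()   # range of cumulative deviations
            s = window.std()                          # standard deviation
            if s > 0:
                rs_vals.append(r / s)
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(rs_vals)))
    return np.polyfit(log_n, log_rs, 1)[0]            # slope of log-log fit ~ H
```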

II. FLOOD PREDICTION:

An Inside Look at Flood Forecasting


Sella Nevo, Senior Software Engineer, Google Research, Tel Aviv

Several years ago, we identified flood forecasts as a unique opportunity to improve people’s
lives, and began looking into how Google’s infrastructure and machine learning expertise can
help in this field. Last year, we started our flood forecasting pilot in the Patna region, and since
then we have expanded our flood forecasting coverage, as part of our larger AI for Social
Good efforts. In this post, we discuss some of the technology and methodology behind this
effort.

The Inundation Model


A critical step in developing an accurate flood forecasting system is to develop inundation
models, which use either a measurement or a forecast of the water level in a river as an input,
and simulate the water behavior across the floodplain.
A 3D visualization of a hydraulic model simulating various river conditions.

This allows us to translate current or future river conditions into highly spatially accurate risk maps, which tell us what areas will be flooded and what areas will be safe. Inundation models depend on four major components, each with its own challenges and innovations:

Real-time Water Level Measurements


To run these models operationally, we need to know what is happening on the ground in real-
time, and thus we rely on partnerships with the relevant government agencies to receive timely
and accurate information. Our first governmental partner is the Indian Central Water
Commission (CWC), which measures water levels hourly in over a thousand stream
gauges across all of India, aggregates this data, and produces forecasts based on upstream
measurements. The CWC provides these real-time river measurements and forecasts, which are
then used as inputs for our models.
CWC employees measuring water level and discharge near Lucknow, India.

Elevation Map Creation


Once we know how much water is in a river, it is critical that the models have a good map of the
terrain. High-resolution digital elevation models (DEMs) are incredibly useful for a wide range
of applications in the earth sciences, but are still difficult to acquire in most of the world,
especially for flood forecasting. This is because meter-wide features of the ground conditions can
create a critical difference in the resulting flooding (embankments are one exceptionally
important example), but publicly accessible global DEMs have resolutions of tens of meters. To
help address this challenge, we’ve developed a novel methodology to produce high resolution
DEMs based on completely standard optical imagery.

We start with the large and varied collection of satellite images used in Google Maps.
Correlating and aligning the images in large batches, we simultaneously optimize for satellite
camera model corrections (for orientation errors, etc.) and for coarse terrain elevation. We then
use the corrected camera models to create a depth map for each image. To make the elevation
map, we optimally fuse the depth maps together at each location. Finally, we remove objects
such as trees and bridges so that they don’t block water flow in our simulations. This can be done
manually or by training convolutional neural networks that can identify where the terrain
elevations need to be interpolated. The result is a roughly 1 meter DEM, which can be used to
run hydraulic models.

A 30m SRTM-based DEM of the Yamuna river compared to a Google-generated 1m DEM of the same area.
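
The fusion step described above can be pictured as a robust per-pixel vote among the per-image elevation estimates. The post does not give the exact algorithm, so the following is only a simplified sketch using a per-pixel median:

```python
# Simplified depth-map fusion: given several per-image elevation estimates
# on a common grid (NaN where an image has no data), take the per-pixel
# median of the valid estimates to form the fused DEM.
import numpy as np

def fuse_depth_maps(depth_maps):
    """depth_maps: list of HxW arrays of elevation estimates. Returns HxW DEM."""
    stack = np.stack(depth_maps)        # shape (N, H, W)
    return np.nanmedian(stack, axis=0)  # median over images, ignoring NaNs
```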

Hydraulic Modeling
Once we have both these inputs - the riverine measurements and forecasts, and the elevation map
- we can begin the modeling itself, which can be divided into two main components. The first
and most substantial component is the physics-based hydraulic model, which updates the
location and velocity of the water through time based on (an approximated) computation of the
laws of physics. Specifically, we’ve implemented a solver for the 2D form of the shallow-
water Saint-Venant equations. These models are suitably accurate when given accurate inputs
and run at high resolutions, but their computational complexity creates challenges - it is
proportional to the cube of the resolution desired. That is, if you double the resolution, you’ll
need roughly 8 times as much processing time. Since we’re committed to the high-resolution
required for highly accurate forecasts, this can lead to unscalable computational costs, even for
Google!
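
For reference, the 2D shallow-water (Saint-Venant) equations mentioned above are commonly written in the following conservative form, where h is the water depth, (u, v) the depth-averaged velocity, z the bed elevation, g gravity, and S_f the friction terms; the post does not spell out the exact formulation or friction model used:

```latex
\begin{aligned}
\partial_t h + \partial_x (hu) + \partial_y (hv) &= 0,\\
\partial_t (hu) + \partial_x\!\big(hu^2 + \tfrac{1}{2}gh^2\big) + \partial_y (huv) &= -gh\,\partial_x z - S_{f,x},\\
\partial_t (hv) + \partial_x (huv) + \partial_y\!\big(hv^2 + \tfrac{1}{2}gh^2\big) &= -gh\,\partial_y z - S_{f,y}.
\end{aligned}
```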

To help address this problem, we’ve created a unique implementation of our hydraulic model,
optimized for Tensor Processing Units (TPUs). While TPUs were optimized for neural networks
(rather than differential equation solvers like our hydraulic model), their highly parallelized
nature leads to the performance per TPU core being 85 times faster than the performance per
CPU core. For additional efficiency improvements, we’re also looking at using machine learning
to replace some of the physics-based algorithmics, extending data-driven discretization to two-
dimensional hydraulic models, so we can support even larger grids and cover even more people.

A snapshot of a TPU-based simulation of flooding in Goalpara, mid-event.
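
To make the parallelism concrete, here is a deliberately naive sketch of one explicit update step for these equations, written with JAX so the same array code can be JIT-compiled for accelerators such as TPUs. It is my illustration of the general approach, not Google's solver, and it omits boundary conditions, friction, and stability safeguards:

```python
# Naive explicit step for the 2D shallow-water equations on a periodic grid,
# written with JAX so it JIT-compiles for accelerators (e.g., TPUs).
import jax
import jax.numpy as jnp

G = 9.81  # gravitational acceleration, m/s^2

def ddx(f, dx, axis):
    # central difference with periodic wrap-around (for illustration only)
    return (jnp.roll(f, -1, axis) - jnp.roll(f, 1, axis)) / (2.0 * dx)

@jax.jit
def step(h, uh, vh, dx, dt):
    """Advance depth h and momenta (uh, vh) by one explicit time step."""
    u = uh / jnp.maximum(h, 1e-6)   # guard against division by near-dry cells
    v = vh / jnp.maximum(h, 1e-6)
    # mass conservation
    h_new = h - dt * (ddx(uh, dx, 0) + ddx(vh, dx, 1))
    # momentum: advection plus hydrostatic pressure term 0.5 * g * h^2
    uh_new = uh - dt * (ddx(uh * u + 0.5 * G * h**2, dx, 0) + ddx(uh * v, dx, 1))
    vh_new = vh - dt * (ddx(vh * u, dx, 0) + ddx(vh * v + 0.5 * G * h**2, dx, 1))
    return h_new, uh_new, vh_new
```

Because each cell update touches only its neighbors, the whole grid can be updated in parallel, which is what makes hardware like TPUs attractive. It also shows where the cubic cost comes from: halving the cell size quadruples the number of cells and roughly halves the stable time step.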

As mentioned earlier, the hydraulic model is only one component of our inundation forecasts.
We’ve repeatedly found locations where our hydraulic models are not sufficiently accurate -
whether that’s due to inaccuracies in the DEM, breaches in embankments, or unexpected water
sources. Our goal is to find effective ways to reduce these errors. For this purpose, we added a
predictive inundation model, based on historical measurements. Since 2014, the European Space
Agency has been operating a satellite constellation named Sentinel-1 with C-band Synthetic-
Aperture Radar (SAR) instruments. SAR imagery is great at identifying inundation, and can do
so regardless of weather conditions and clouds. Based on this valuable data set, we correlate
historical water level measurements with historical inundations, allowing us to identify
consistent corrections to our hydraulic model. Based on the outputs of both components, we can
estimate which disagreements are due to genuine ground condition changes, and which are due
to modeling inaccuracies.
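
One simple way to picture the historical correlation step (the post does not give the actual algorithm, so this is only an assumed illustration): for each pixel, find the lowest gauge level at which Sentinel-1 imagery has ever shown it inundated, then compare that empirical threshold with what the hydraulic model predicts.

```python
# Per-pixel empirical flooding threshold from historical SAR observations.
import numpy as np

def flooding_threshold(gauge_levels, wet_masks):
    """gauge_levels: (T,) water levels at observation times.
    wet_masks: (T, H, W) booleans, True where SAR showed inundation.
    Returns (H, W): minimum level at which each pixel was observed wet
    (inf for pixels never observed wet)."""
    levels = np.asarray(gauge_levels, dtype=float)[:, None, None]
    wet_levels = np.where(wet_masks, levels, np.inf)
    return wet_levels.min(axis=0)
```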

Flood warnings across Google’s interfaces.

Looking Forward
We still have a lot to do to fully realize the benefits of our inundation models. First and foremost,
we’re working hard to expand the coverage of our operational systems, both within India and to
new countries. There’s also a lot more information we want to be able to provide in real time,
including forecasted flood depth, temporal information and more. Additionally, we’re
researching how to best convey this information to individuals to maximize clarity and
encourage them to take the necessary protective actions.
Computationally, while the inundation model is a good tool for improving the spatial resolution
(and therefore the accuracy and reliability) of existing flood forecasts, multiple governmental
agencies and international organizations we’ve spoken to are concerned about areas that do not
have access to effective flood forecasts at all, or whose forecasts don’t provide enough lead time
for effective response. In parallel to our work on the inundation model, we're working on some basic research into improved hydrologic models, which we hope will allow governments not only to produce more spatially accurate forecasts, but also to achieve longer preparation times.

Hydrologic models accept as inputs things like precipitation, solar radiation, soil moisture and
the like, and produce a forecast for the river discharge (among other things), days into the future.
These models are traditionally implemented using a combination of conceptual models
approximating different core processes such as snowmelt, surface runoff, evapotranspiration and
more.
The core processes of a hydrologic model. Designed by Daniel Klotz, JKU Institute for Machine
Learning.

These models also traditionally require a large amount of manual calibration and tend to underperform in data-scarce regions. We are exploring how multi-task learning can be used to address both of these problems, making hydrologic models both more scalable and more accurate. In a research collaboration with the JKU Institute for Machine Learning group led by Sepp Hochreiter on developing ML-based hydrologic models, Kratzert et al. show how LSTMs perform better than all benchmarked classic hydrologic models.
The distribution of NSE scores on basins across the United States for various models, showing
the proposed EA-LSTM consistently outperforming a wide range of commonly used models.
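
As a rough illustration of the kind of model involved (the sizes and input variables here are placeholders of mine, not those of Kratzert et al.), a rainfall-runoff LSTM maps a sequence of daily meteorological forcings to a discharge prediction:

```python
# Minimal sketch of an LSTM rainfall-runoff model.
import torch
import torch.nn as nn

class RunoffLSTM(nn.Module):
    def __init__(self, n_inputs=5, hidden=64):
        super().__init__()
        # hypothetical daily inputs: precipitation, solar radiation,
        # min/max temperature, a soil-moisture proxy
        self.lstm = nn.LSTM(n_inputs, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # predicted river discharge

    def forward(self, x):                  # x: (batch, days, n_inputs)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])       # discharge after the last day

model = RunoffLSTM()
forcings = torch.randn(8, 365, 5)          # one year of daily forcings
print(model(forcings).shape)               # torch.Size([8, 1])
```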

Though this work is still in the basic research stage and not yet operational, we think it is an
important first step, and hope it can already be useful for other researchers and hydrologists. It’s
an incredible privilege to take part in the large ecosystem of researchers, governments, and
NGOs working to reduce the harms of flooding. We’re excited about the potential impact this
type of research can provide, and look forward to where research in this field will go.
III. DROUGHT PREDICTION

Studies over the past century have shown that meteorological drought is never the result of a single cause. It is the consequence of many complex, interacting factors. Some of the following may be components of a drought event:

Global Weather Patterns

A great deal of research has been conducted in recent years on the role of interacting systems, or
teleconnections, in explaining regional and even global patterns of climatic variability. These
patterns tend to recur periodically with enough frequency and with similar characteristics over a
sufficient length of time that they offer opportunities to improve our ability for long-range
climate prediction, particularly in the tropics. One such teleconnection is the El Niño/Southern
Oscillation (ENSO).
High Pressure

The immediate cause of drought is the predominant sinking motion of air (subsidence) that
results in compressional warming or high pressure, which inhibits cloud formation and results in
lower relative humidity and less precipitation. Regions under the influence of semipermanent
high pressure during all or a major portion of the year are usually deserts, such as the Sahara and
Kalahari deserts of Africa and the Gobi Desert of Asia. Most climatic regions experience varying
degrees of dominance by high pressure, often depending on the season. Prolonged droughts
occur when large-scale anomalies in atmospheric circulation patterns persist for months or
seasons (or longer). The extreme drought that affected the United States and Canada during 1988
resulted from the persistence of a large-scale atmospheric circulation anomaly.

Too Many Variables

Scientists don’t know how to predict drought a month or more in advance for most locations.
Predicting drought depends on the ability to forecast two fundamental meteorological surface
parameters, precipitation and temperature. From the historical record we know that climate is
inherently variable. We also know that anomalies of precipitation and temperature may last from
several months to several decades. How long they last depends on air–sea interactions, soil
moisture and land surface processes, topography, internal dynamics, and the accumulated
influence of dynamically unstable synoptic weather systems at the global scale.
The potential for improved drought predictions in the near future differs by region, season, and
climatic regime.

The Tropical Outlook

In the tropics, for example, meteorologists have made significant advances in understanding the
climate system. Specifically, it is now known that a major portion of the atmospheric variability
that occurs on time scales of months to several years is associated with variations in tropical sea
surface temperatures. The Tropical Ocean Global Atmosphere (TOGA) project has produced
results that suggest that it may now be possible to predict certain climatic conditions associated
with ENSO events more than a year in advance. For those regions whose climate is greatly
influenced by ENSO events, TOGA project results may help produce more reliable
meteorological forecasts that can reduce risks in those economic sectors most sensitive to climate
variability and, particularly, extreme events such as drought.
The Temperate Zone Outlook

In the extratropical regions, current long-range forecasts are of very limited reliability. The
ability that does exist is primarily the result of empirical and statistical relationships. In the
tropics, empirical relationships have been demonstrated to exist between precipitation and ENSO
events, but few such relationships have been confirmed above 30° north latitude. Meteorologists
do not believe that reliable forecasts are attainable for all regions a season or more in advance.

Global Forecast Drought Tool

https://iridl.ldeo.columbia.edu/maproom/Global/World_Bank/Drought_Monitor/index3.html?gmap=%5B2.749068304578767%2C15.584944338858076%2C3%5D
This tool displays maps of meteorological drought risk using the Standardized Precipitation Index (SPI). It allows the user to choose between maps of either the predicted drought severity for a user-specified likelihood or the risk of a drought of a certain magnitude occurring.

The timescale presented here for demonstration is the 6-month Standardized Precipitation Index
(SPI6). The SPI6 drought forecast combines the prior 3 months of observed precipitation and
forecasted upcoming 3 months of seasonal rainfall. The menu Map Type presents two options of
display: Drought Severity or Drought Risk.
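
For orientation, an SPI value is obtained by fitting a probability distribution (commonly a gamma distribution) to the historical precipitation totals for the chosen accumulation period and transforming each total to a standard normal deviate. A minimal sketch of this computation, as an illustration rather than the tool's actual code:

```python
# Minimal SPI computation from a history of 6-month rainfall totals.
import numpy as np
from scipy import stats

def spi(totals):
    """Map precipitation totals to SPI values (assumes strictly positive totals).

    Fits a gamma distribution to the historical totals, converts each total
    to its cumulative probability, then to a standard normal deviate."""
    totals = np.asarray(totals, dtype=float)
    shape, loc, scale = stats.gamma.fit(totals, floc=0)  # location fixed at 0
    cdf = stats.gamma.cdf(totals, shape, loc=loc, scale=scale)
    return stats.norm.ppf(cdf)  # SPI: standard normal quantile of the CDF
```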

- For example, the Forecasted Drought Severity SPI6 for a six-month period ending in March is based on the observations of rainfall during the months of October to December and on the forecast rainfall totals made at the end of December for the period of January to March. For this type of map, the user can choose a Probability of Drier Conditions (for example, 90%), and the map will represent the SPI6 value forecast: it is 90% likely that the SPI6 observed over that 6-month period will be drier than the value presented in the map. This information can help decision-makers by providing them the probability of rainfall deficit or surplus. It can also be used in conjunction with recent drought observations (Standardized Precipitation Index for multiple monthly accumulation periods) to indicate whether drought conditions are likely to develop, worsen, or improve. This can be particularly valuable information for agricultural and water-resources planning.
- The Drought Risk map shows the probabilities that the forecast SPI6 value will be equal to or lower than a user-selected drought severity level. Probabilities are displayed on a scale between 0% and 100%. The user can select a value of Drought Severity Levels in the dropdown menu. This level of drought corresponds to an SPI threshold as described in the table below. The map will display the likelihood of a drought as severe as or worse than the level selected, according to the SPI threshold chosen.
SPI6 Value    Drought Severity        Frequency
 2.0          Severe Wetness          1 in 43-year event
 1.5          Intermediate Wetness    1 in 23-year event
 1.0          Moderate Wetness        1 in 11-year event
 0.0          Normal                  2 in 3-year event
-1.0          Moderate Dryness        1 in 11-year event
-1.5          Intermediate Dryness    1 in 23-year event
-2.0          Severe Dryness          1 in 43-year event
These two versions of the information are complementary. In one case, the question is what drought severity is indicated at a given level of confidence; in the other, it is how likely it is that drought will reach a given level of severity or worse.
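
The relationship between the two map types can be made concrete with a toy forecast ensemble for a single pixel (the values below are invented for illustration):

```python
# Toy example: derive both map views from an ensemble of forecast SPI6 values.
import numpy as np

ensemble_spi6 = np.array([-0.2, -0.9, -1.6, -0.4, -1.1, -2.3, -0.7, -1.4])

# Drought Severity at a 90% probability of drier conditions: the value that
# 90% of ensemble members fall below (i.e., the 90th percentile of SPI6).
severity_at_90 = np.quantile(ensemble_spi6, 0.90)

# Drought Risk for "Moderate Dryness or worse" (SPI6 <= -1.0): the fraction
# of ensemble members at or below that threshold.
risk_moderate = (ensemble_spi6 <= -1.0).mean()

print(severity_at_90, risk_moderate)  # about -0.34 and 0.5
```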

References:

https://ai.googleblog.com/2019/09/an-inside-look-at-flood-forecasting.html

https://now.northropgrumman.com/tsunamis-hurricanes-eruptions-predicting-a-natural-disaster/

https://drought.unl.edu/Education/DroughtIn-depth/Predicting.aspx
