Disaster Prediction: Predicting Natural Disasters
I. Background
Disaster prediction for natural events isn’t an exact science. Despite advances in technology, no
one can tell with complete accuracy when a volcano will erupt, or how powerful a hurricane will
be on landfall.
However, observation and data are powerful tools, so prediction is getting better and faster.
Throughout history, people have known about the potential for natural disasters. Storytelling
cultures handed down the “data” from previous events to keep communities aware. For
example, Pacific Northwest oral traditions tell of a great earthquake and resulting tsunami to
warn people about what to do in future events. Recently, Simeuluean islanders escaped
harm during the tsunami following the 2004 Indian Ocean earthquake because of similar stories
handed down through the generations.
Today, instrumentation and digital data supplement visual observation and oral histories.
Analysts use many sources to gather data.
a. Earthquakes and Tsunamis
Seismic instruments measure shaking in the Earth’s crust as geological fault lines slip. In the
deep ocean, sensors monitor volume displacements and seabed deformation. A pattern of
increasing activity can signal that an earthquake may be imminent.
Along one of the world’s most famous fault lines, the San Andreas Fault in California, the U.S.
Geological Survey (USGS) collects data from tiltmeters and creepmeters that precisely measure
earth movement, while strain meters and pressure sensors embedded in the rock warn of
pressure building up before a slip.
The National Oceanic and Atmospheric Administration (NOAA) collects data from a chain of
early-warning oceanic devices, including wave-height sensors and Deep-ocean Assessment and
Reporting of Tsunamis (DART) buoys. These seabed sensors monitor seismic events beneath the
waves, then transmit their readings to surface buoys, which relay coastal tsunami alerts back to land.
b. Storm Watching
Meteorologists recognize patterns in weather data collected through remote sensing and on-the-
ground observation that show hurricanes or tornadoes developing. Satellites give weather
scientists a “big picture” overview of weather development, which is useful in storm tracking.
Tornadoes, however, are trickier to predict since they are much faster-moving
events; meteorologists use Doppler radar to measure moving objects such as hail or rain within
developing supercell clouds. Improvements in radar have increased advance warning times from
around three minutes in the late 1980s to 14 minutes in 2012.
Unmanned aircraft can capture data right at the heart of a disaster. By sending such aircraft
into developing storms, scientists gain valuable information on, for example, hurricane formation
that helps predict behavior and damage on landfall.
RQ-4
Developed for remote eye-in-the-sky missions, the Northrop Grumman unmanned aerial
reconnaissance vehicle RQ-4 Global Hawk is currently involved in storm-sensing missions.
Flying the aircraft from the U.S. East Coast as part of a NASA project gave scientists more flight
time over Atlantic storms. Since the Global Hawk operates over long ranges without refueling, it
reaches remote weather systems faster and gathers continuous data over longer periods than
manned flights can, making this type of data collection highly valuable.
Unmanned vehicles such as the Global Hawk carry sensors that measure the temperature,
pressure, relative humidity, amount of water vapor and liquid water, and wind speed and
direction inside a hurricane. They also collect valuable data after a disaster, since they are often
the only way to reach remote or isolated areas.
“This information is transmitted near-real time from the aircraft back to the Global Hawk
Operations Center and then disseminated to the NOAA, National Hurricane Center (NHC) and
the scientific community,” says Mick Jaggers, Northrop Grumman’s vice president and program
manager for the Global Hawk program.
Both the Global Hawk and the MQ-4C Triton have engaged in disaster surveillance. Global
Hawk took part in Operation Tomodachi to relay vital visuals on earthquake and tsunami damage
to towns and the Fukushima nuclear plant in 2011. Information like this is also valuable for
preparing for future disasters; building damage recorded after the Haitian earthquake in 2010
could help efforts to rebuild to a safer building code.
Cathedral in Haiti: Before and After the 2010 Earthquake (British Geological Survey)
Jaggers offers further detail on the program’s success in acquiring crucial information: “During
the 2016 campaign, the Global Hawk flew nine flights over hurricanes Gaston, Hermine,
Matthew and Nicole and Tropical Storm Karl,” says Jaggers. “Based on Global Hawk data from
then-Tropical Storm Gaston, the NHC upgraded the storm to hurricane status, the first time that
data from an unmanned aircraft had been used to change the category of a storm.
“NOAA scientists determined that the Global Hawk data provided statistically significant
improvements in their hurricane prediction models; in addition, Global Hawk data over the
Atlantic Ocean were shown to improve weather predictions for the Pacific Ocean,” he adds. “The
long endurance, long range and high altitude of the aircraft, coupled with the team’s flexibility,
make the Global Hawk an important part of future airborne weather science missions.”
Researchers also seek better ways to analyze and present data for faster and more accurate
predictions. Combined systems like Northrop Grumman’s Environmental Decision Support
System (EDSS) aggregate data sets, then present them in an easily accessible manner to answer
key questions: where? when? why? what?
Improving analysis also boosts predictive power; USGS researchers are examining fractal
mathematics for improved data analysis. In their studies, fractal methods extracted more
information from past hurricane events than traditional statistical methods did, suggesting that
fractal-based forecasts of future hurricanes could be more precise.
II. Flood Forecasting
(This section is adapted from the Google AI Blog; “we” refers to Google’s flood forecasting team.)
Several years ago, we identified flood forecasting as a unique opportunity to improve people’s
lives, and began looking into how Google’s infrastructure and machine learning expertise could
help in this field. We started our flood forecasting pilot in the Patna region of India, and have
since expanded our flood forecasting coverage as part of our larger AI for Social Good efforts.
This section discusses some of the technology and methodology behind the effort.
Inundation Modeling
An inundation model translates current or forecasted river conditions into highly spatially
accurate risk maps, which tell us what areas will be flooded and what areas will be safe.
Inundation models depend on four major components, each with its own challenges and
innovations.
Elevation Map Creation
We start with the large and varied collection of satellite images used in Google Maps.
Correlating and aligning the images in large batches, we simultaneously optimize for satellite
camera model corrections (for orientation errors, etc.) and for coarse terrain elevation. We then
use the corrected camera models to create a depth map for each image. To make the elevation
map, we optimally fuse the depth maps together at each location. Finally, we remove objects
such as trees and bridges so that they don’t block water flow in our simulations. This can be done
manually or by training convolutional neural networks that can identify where the terrain
elevations need to be interpolated. The result is a digital elevation model (DEM) with roughly
1-meter resolution, which can be used to run hydraulic models.
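The fusion-and-cleanup steps above can be sketched in a few lines of NumPy. This is a toy stand-in: the per-pixel median fusion and neighbor-mean infill below are our simplifications of the fusion and CNN-guided interpolation described in the text, and all data here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: nine noisy per-image depth maps of one terrain
# tile (elevations in meters). In production these come from corrected
# satellite camera models; here we synthesize them from a known surface.
true_elev = np.tile(np.linspace(10.0, 20.0, 50), (50, 1))
depth_maps = np.stack([true_elev + rng.normal(0.0, 0.5, true_elev.shape)
                       for _ in range(9)])

# Fuse the per-image estimates at each location. A per-pixel median is one
# simple robust estimator; the production fusion step is not public.
dem = np.median(depth_maps, axis=0)

# Remove surface objects: mask cells flagged as trees or bridges, then fill
# them from nearby unmasked ground cells (a crude stand-in for the
# CNN-guided interpolation described above).
mask = np.zeros(dem.shape, dtype=bool)
mask[20:25, 20:25] = True                      # pretend a clump of trees here
bare = np.where(mask, np.nan, dem)
for i, j in zip(*np.nonzero(mask)):
    window = bare[max(i - 3, 0):i + 4, max(j - 3, 0):j + 4]
    dem[i, j] = np.nanmean(window)             # mean of unmasked neighbors

print(float(np.abs(dem - true_elev).mean()))   # small residual error
```

The median keeps a few grossly wrong depth estimates (e.g., from a mismatched image pair) from corrupting the elevation, which a plain mean would not.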
Hydraulic Modeling
Once we have both these inputs - the riverine measurements and forecasts, and the elevation map
- we can begin the modeling itself, which can be divided into two main components. The first
and most substantial component is the physics-based hydraulic model, which updates the
location and velocity of the water through time based on (an approximated) computation of the
laws of physics. Specifically, we’ve implemented a solver for the 2D form of the shallow-
water Saint-Venant equations. These models are suitably accurate when given accurate inputs
and run at high resolutions, but their computational complexity creates challenges - it is
proportional to the cube of the resolution desired. That is, if you double the resolution, you’ll
need roughly 8 times as much processing time. Since we’re committed to the high resolution
required for highly accurate forecasts, this can lead to unscalable computational costs, even for
Google!
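The text names the equations but not the implementation. As an illustration only, here is a minimal NumPy sketch of explicit time-stepping for the 2D shallow-water (Saint-Venant) equations using a first-order Lax-Friedrichs scheme on a periodic grid; the scheme, grid size, and numbers are our assumptions, not details of Google’s TPU solver. It also shows where the cubic cost comes from: doubling the resolution quadruples the cell count and, via the CFL condition, halves the stable time step, for roughly 8x the work.

```python
import numpy as np

g = 9.81  # gravitational acceleration (m/s^2)

def lax_friedrichs_step(h, hu, hv, dx, dy, dt):
    """One explicit step of the 2D shallow-water (Saint-Venant) equations.

    State: water depth h and momenta hu, hv on a periodic grid.
    First-order Lax-Friedrichs: neighbor average minus centered flux
    differences. Real solvers use higher-order schemes; this is a sketch.
    """
    u, v = hu / h, hv / h
    # Physical fluxes in x (F) and y (G) for each of (h, hu, hv).
    F = (hu, hu * u + 0.5 * g * h ** 2, hu * v)
    G = (hv, hv * u, hv * v + 0.5 * g * h ** 2)
    new = []
    for U, Fu, Gu in zip((h, hu, hv), F, G):
        avg = 0.25 * (np.roll(U, 1, 0) + np.roll(U, -1, 0) +
                      np.roll(U, 1, 1) + np.roll(U, -1, 1))
        dF = (np.roll(Fu, -1, 0) - np.roll(Fu, 1, 0)) / (2 * dx)
        dG = (np.roll(Gu, -1, 1) - np.roll(Gu, 1, 1)) / (2 * dy)
        new.append(avg - dt * (dF + dG))
    return new

# A small mound of water on a 64x64 grid of 10 m cells.
n, dx = 64, 10.0
x = np.arange(n) * dx
X, Y = np.meshgrid(x, x, indexing="ij")
h = 1.0 + 0.3 * np.exp(-((X - 320.0) ** 2 + (Y - 320.0) ** 2) / 5000.0)
hu = np.zeros_like(h)
hv = np.zeros_like(h)

# CFL condition: a step must not outrun the gravity-wave speed sqrt(g*h).
# Halving dx therefore also halves dt - hence the cubic cost in resolution.
dt = 0.4 * dx / np.sqrt(g * h.max())

mass0 = h.sum() * dx * dx
for _ in range(100):
    h, hu, hv = lax_friedrichs_step(h, hu, hv, dx, dx, dt)
```

With periodic boundaries the centered flux differences cancel in the sum, so total water volume is conserved to machine precision, which is a useful sanity check on any solver of this kind.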
To help address this problem, we’ve created a unique implementation of our hydraulic model,
optimized for Tensor Processing Units (TPUs). While TPUs were optimized for neural networks
(rather than differential equation solvers like our hydraulic model), their highly parallelized
nature leads to the performance per TPU core being 85x faster than the performance per
CPU core. For additional efficiency improvements, we’re also looking at using machine learning
to replace some of the physics-based algorithmics, extending data-driven discretization to two-
dimensional hydraulic models, so we can support even larger grids and cover even more people.
As mentioned earlier, the hydraulic model is only one component of our inundation forecasts.
We’ve repeatedly found locations where our hydraulic models are not sufficiently accurate -
whether that’s due to inaccuracies in the DEM, breaches in embankments, or unexpected water
sources. Our goal is to find effective ways to reduce these errors. For this purpose, we added a
predictive inundation model, based on historical measurements. Since 2014, the European Space
Agency has been operating a satellite constellation named Sentinel-1 with C-band Synthetic-
Aperture Radar (SAR) instruments. SAR imagery is great at identifying inundation, and can do
so regardless of weather conditions and clouds. Based on this valuable data set, we correlate
historical water level measurements with historical inundations, allowing us to identify
consistent corrections to our hydraulic model. Based on the outputs of both components, we can
estimate which disagreements are due to genuine ground condition changes, and which are due
to modeling inaccuracies.
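One toy way to picture the correlation step (the actual correction model is not described in detail, and everything below is synthetic) is a per-pixel scheme that estimates, from the historical record, the lowest gauge level at which each pixel has been observed wet in SAR imagery:

```python
import numpy as np

# Synthetic history: a river gauge level (m) for five past events, and a
# SAR-derived wet/dry mask for each event (True = observed inundated).
levels = np.array([2.0, 3.0, 4.0, 5.0, 6.0])
H = W = 8
# Ground truth used only to synthesize the masks: each pixel floods once
# the gauge exceeds its own (unknown) threshold.
thresh_true = np.linspace(2.5, 6.5, H * W).reshape(H, W)
masks = levels[:, None, None] >= thresh_true[None, :, :]

# Per-pixel estimate: the lowest historical level at which the pixel was
# wet (infinity if it was never observed wet).
wet_levels = np.where(masks, levels[:, None, None], np.inf)
thresh_est = wet_levels.min(axis=0)

# Predicted inundation for a forecast gauge level. Pixels where this
# data-driven map disagrees with the hydraulic model's map are candidates
# for a consistent correction (a placeholder model output is used here).
forecast_level = 4.5
sar_pred = forecast_level >= thresh_est
hydraulic_pred = np.zeros((H, W), dtype=bool)   # placeholder model output
candidates = sar_pred & ~hydraulic_pred
```

Because SAR sees through clouds, masks like these are available for past flood peaks even where optical imagery is not, which is what makes the historical correlation usable in practice.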
Looking Forward
We still have a lot to do to fully realize the benefits of our inundation models. First and foremost,
we’re working hard to expand the coverage of our operational systems, both within India and to
new countries. There’s also a lot more information we want to be able to provide in real time,
including forecasted flood depth, temporal information and more. Additionally, we’re
researching how to best convey this information to individuals to maximize clarity and
encourage them to take the necessary protective actions.
Computationally, while the inundation model is a good tool for improving the spatial resolution
(and therefore the accuracy and reliability) of existing flood forecasts, multiple governmental
agencies and international organizations we’ve spoken to are concerned about areas that do not
have access to effective flood forecasts at all, or whose forecasts don’t provide enough lead time
for effective response. In parallel to our work on the inundation model, we’re working on some
basic research into improved hydrologic models, which we hope will allow governments not
only to produce more spatially accurate forecasts, but also achieve longer preparation time.
Hydrologic models accept as inputs things like precipitation, solar radiation, soil moisture and
the like, and produce a forecast for the river discharge (among other things), days into the future.
These models are traditionally implemented using a combination of conceptual models
approximating different core processes such as snowmelt, surface runoff, evapotranspiration and
more.
The core processes of a hydrologic model. Designed by Daniel Klotz, JKU Institute for Machine
Learning.
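The conceptual-model idea can be made concrete with a toy single-bucket model (the structure and parameter values below are illustrative, not taken from any operational model, and snowmelt is omitted): precipitation fills a soil-moisture store, evapotranspiration empties it, and discharge is saturation overflow plus linear-reservoir drainage.

```python
def simulate(precip, pet, capacity=100.0, k=0.1):
    """Toy bucket hydrologic model.

    precip / pet: daily precipitation and potential evapotranspiration (mm).
    Returns daily discharge (mm/day) from a single soil-moisture store with
    the given capacity (mm) and linear-reservoir coefficient k (1/day).
    """
    storage, discharge = capacity / 2, []
    for p, e in zip(precip, pet):
        storage += p
        storage -= min(e, storage)             # actual ET limited by storage
        overflow = max(storage - capacity, 0)  # saturation-excess runoff
        storage -= overflow
        baseflow = k * storage                 # linear-reservoir drainage
        storage -= baseflow
        discharge.append(overflow + baseflow)
    return discharge

# Ten dry days followed by a two-day storm: discharge recedes, then rises.
q = simulate(precip=[0.0] * 10 + [80.0, 40.0, 0.0, 0.0],
             pet=[3.0] * 14)
```

Calibrating even this one bucket means tuning `capacity` and `k` against observed flows, which hints at why full conceptual models, with many such stores and parameters, need the heavy manual calibration the text describes.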
These models also traditionally require a large amount of manual calibration, and tend to
underperform in data-scarce regions. We are exploring how multi-task learning can be used to
address both of these problems, making hydrologic models both more scalable and more
accurate. In a research collaboration with Sepp Hochreiter’s group at the JKU Institute for
Machine Learning on ML-based hydrologic models, Kratzert et al. show how LSTMs outperform
all of the benchmarked classic hydrologic models.
The distribution of NSE scores on basins across the United States for various models, showing
the proposed EA-LSTM consistently outperforming a wide range of commonly used models.
Though this work is still in the basic research stage and not yet operational, we think it is an
important first step, and hope it can already be useful for other researchers and hydrologists. It’s
an incredible privilege to take part in the large ecosystem of researchers, governments, and
NGOs working to reduce the harms of flooding. We’re excited about the potential impact this
type of research can have, and look forward to where research in this field will go.
III. DROUGHT PREDICTION
Studies over the past century have shown that meteorological drought is never the result of a
single cause; it is the consequence of complex interactions among many factors, several of which
are discussed below.
A great deal of research has been conducted in recent years on the role of interacting systems, or
teleconnections, in explaining regional and even global patterns of climatic variability. These
patterns tend to recur periodically with enough frequency and with similar characteristics over a
sufficient length of time that they offer opportunities to improve our ability for long-range
climate prediction, particularly in the tropics. One such teleconnection is the El Niño/Southern
Oscillation (ENSO).
High Pressure
The immediate cause of drought is the predominant sinking motion of air (subsidence) that
results in compressional warming or high pressure, which inhibits cloud formation and results in
lower relative humidity and less precipitation. Regions under the influence of semipermanent
high pressure during all or a major portion of the year are usually deserts, such as the Sahara and
Kalahari deserts of Africa and the Gobi Desert of Asia. Most climatic regions experience varying
degrees of dominance by high pressure, often depending on the season. Prolonged droughts
occur when large-scale anomalies in atmospheric circulation patterns persist for months or
seasons (or longer). The extreme drought that affected the United States and Canada during 1988
resulted from the persistence of a large-scale atmospheric circulation anomaly.
Scientists don’t know how to predict drought a month or more in advance for most locations.
Predicting drought depends on the ability to forecast two fundamental meteorological surface
parameters, precipitation and temperature. From the historical record we know that climate is
inherently variable. We also know that anomalies of precipitation and temperature may last from
several months to several decades. How long they last depends on air–sea interactions, soil
moisture and land surface processes, topography, internal dynamics, and the accumulated
influence of dynamically unstable synoptic weather systems at the global scale.
The potential for improved drought predictions in the near future differs by region, season, and
climatic regime.
In the tropics, for example, meteorologists have made significant advances in understanding the
climate system. Specifically, it is now known that a major portion of the atmospheric variability
that occurs on time scales of months to several years is associated with variations in tropical sea
surface temperatures. The Tropical Ocean Global Atmosphere (TOGA) project has produced
results that suggest that it may now be possible to predict certain climatic conditions associated
with ENSO events more than a year in advance. For those regions whose climate is greatly
influenced by ENSO events, TOGA project results may help produce more reliable
meteorological forecasts that can reduce risks in those economic sectors most sensitive to climate
variability and, particularly, extreme events such as drought.
The Temperate Zone Outlook
In the extratropical regions, current long-range forecasts are of very limited reliability. The
ability that does exist is primarily the result of empirical and statistical relationships. In the
tropics, empirical relationships have been demonstrated to exist between precipitation and ENSO
events, but few such relationships have been confirmed above 30° north latitude. Meteorologists
do not believe that reliable forecasts are attainable for all regions a season or more in advance.
https://iridl.ldeo.columbia.edu/maproom/Global/World_Bank/Drought_Monitor/index3.html?gmap=%5B2.749068304578767%2C15.584944338858076%2C3%5D
This tool displays maps of meteorological drought risk using the Standardized Precipitation
Index (SPI). It allows the user to choose between maps of the predicted drought severity for a
user-specified likelihood, or of the risk that a drought of a certain magnitude will occur.
The timescale presented here for demonstration is the 6-month Standardized Precipitation Index
(SPI6). The SPI6 drought forecast combines the prior 3 months of observed precipitation with
the forecasted 3 months of upcoming seasonal rainfall. The Map Type menu offers two display
options: Drought Severity or Drought Risk.
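To make the index concrete, here is a small standard-library-only sketch of an empirical SPI. Operational SPI fits a gamma distribution to the accumulated precipitation before mapping to a standard normal deviate; the rank-based version below is a simplification, and all numbers are invented.

```python
from statistics import NormalDist

def spi(series, value):
    """Empirical Standardized Precipitation Index.

    Ranks `value` within the historical accumulated-precipitation series,
    then maps that cumulative probability onto a standard normal deviate.
    Negative SPI = drier than usual; positive = wetter than usual.
    """
    n = len(series)
    rank = sum(1 for x in series if x <= value)
    p = rank / (n + 1)               # Weibull plotting position, kept in (0, 1)
    return NormalDist().inv_cdf(p)

# 6-month precipitation totals (mm) for the same season in past years, plus
# an observed+forecast total for the current season (illustrative numbers).
history = [320, 280, 410, 150, 300, 360, 240, 390, 210, 330]
print(round(spi(history, 150), 2))   # driest on record -> strongly negative SPI
```

An SPI6 value near or below −1.5 is conventionally read as severe drought, which is the kind of threshold the Drought Severity and Drought Risk maps are built on.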
References:
https://ai.googleblog.com/2019/09/an-inside-look-at-flood-forecasting.html
https://now.northropgrumman.com/tsunamis-hurricanes-eruptions-predicting-a-natural-disaster/
https://drought.unl.edu/Education/DroughtIn-depth/Predicting.aspx