
Nat Hazards (2012) 63:325–347

DOI 10.1007/s11069-012-0152-2

ORIGINAL PAPER

Social vulnerability indices: a comparative assessment using uncertainty and sensitivity analysis

Eric Tate

Received: 4 January 2012 / Accepted: 17 March 2012 / Published online: 4 April 2012
© Springer Science+Business Media B.V. 2012

Abstract Social vulnerability indices have emerged over the past decade as quantitative
measures of the social dimensions of natural hazards vulnerability. But how reliable are the
index rankings? Validation of indices with external reference data has posed a persistent
challenge in large part because social vulnerability is multidimensional and not directly
observable. This article applies global sensitivity analyses to internally validate the
methods used in the most common social vulnerability index designs: deductive, hierarchical, and inductive. Uncertainty analysis is performed to assess the robustness of index
ranks when reasonable alternative index configurations are modeled. The hierarchical
design was found to be the most accurate, while the inductive model was the most precise.
Sensitivity analysis is employed to understand which decisions in the vulnerability index
construction process have the greatest influence on the stability of output rankings. The
deductive index ranks are found to be the most sensitive to the choice of transformation
method, hierarchical models to the selection of weighting scheme, and inductive indices to
the indicator set and scale of analysis. Specific recommendations for each stage of index
construction are provided so that the next generation of social vulnerability indices can be
developed with a greater degree of transparency, robustness, and reliability.

Keywords Social vulnerability · Uncertainty · Sensitivity analysis · Vulnerability · Index · Indicators

1 Introduction

Models serve an important role in helping us to understand natural hazards and disasters.
The majority of existing hazard models focus on physical environmental processes and
their impacts on the built environment. Physical models apply field measurements from
actual hazard events to calibrate the model and validate outputs. However, disaster vulnerability results not only from the interaction of physical and built environment systems, but also from the social and demographic characteristics of communities (Mileti 1999). Recent major disasters such as Hurricane Katrina and the Haiti earthquake have highlighted how social processes underlying poverty and marginalization can lead to heightened susceptibility to injury, death, dislocation, and difficulties in recovery. Assessment of social vulnerability is now recognized as critical to understanding natural hazards risks and developing effective response capabilities (Wisner et al. 2004).

E. Tate
Department of Geography, University of Iowa, 316 Jessup Hall, Iowa City, IA 52242, USA
e-mail: eric-tate@uiowa.edu
The social analog to the quantitative physical hazard model is the social vulnerability
index. Vulnerability indices typically use demographic data to build algorithms describing
the effect of social, economic, political, and institutional factors on the spatial distribution
of human susceptibility to hazard impacts. But when it comes to validation, modelers have
been stymied in large part because social vulnerability is not a directly observable phe-
nomenon: there exists no device with which to measure it. As a result, index validation
requires the use of proxies (Schneiderbauer and Ehrlich 2006). Validation of social vul-
nerability indices has been attempted with independent proxy data including mortality
(Gall 2007), built environment damage (Burton 2010), economic losses (Schmidtlein et al.
2010), human migration (Myers et al. 2008), residential mail delivery (Finch et al. 2010;
Flanagan et al. 2011), and household survey (Fekete 2009). Collectively, these efforts have
found mixed degrees of success. A relatively unexplored alternative approach is internal
validation of social vulnerability indices (e.g., Damm 2010) through an examination of
how changes in index construction affect modeled results.
Building an index involves several steps, including indicator selection, variable trans-
formation, scaling, weighting, and aggregation. During each stage of construction, index
developers are faced with choices between plausible alternatives, thus introducing sub-
jectivity into the modeling process. As with any model, changes in input data and algo-
rithms have the potential to significantly influence the output. Although there is broad
interest in the need to quantitatively model social vulnerability, there is far less consensus
regarding the ideal set of methods used for the production of indices. Indeed, the vul-
nerability indices developed to date embody a multitude of different methodological
approaches. However, no systematic evaluation has been performed to assess how varia-
tions in methods used for social vulnerability index construction affect the output vul-
nerability rankings. As a result, we know remarkably little about the robustness of social
vulnerability indices. If methodologically fragile indices are applied to hazard mitigation
planning, decisions resulting from their use may be flawed.
The purpose of this article is to systematically evaluate the reliability of the leading social
vulnerability index designs. It employs global sensitivity analysis to quantify the uncertainty
in social vulnerability index ranks and determine which stages of the index construction
process are the most and the least important. The remainder of the article is organized as
follows. Section 2 begins with a discussion of social vulnerability indicators and the design
choices facing index developers. Section 3 examines sensitivity analysis and its capacity to
inform index construction. In Sect. 4, I apply global sensitivity analysis to compare the
robustness of the three most common social vulnerability index configurations: deductive,
hierarchical, and inductive structures. Section 5 presents the findings, while the conclusions
offer recommendations to improve the future development of social vulnerability indices.

2 Social vulnerability indices

Broadly defined, vulnerability is the potential to suffer loss or harm (Cutter 1996). As
applied in social science research, the term ‘vulnerability’ describes a condition of people


as opposed to physical structures, economies, or regions of the earth (Wisner et al. 2004).
Vulnerability assessments are increasingly being utilized to identify the processes that
produce vulnerability and associated variables that can be used to measure differential
hazard susceptibility. Qualitative methods, particularly at the local level, have been
instrumental in identifying the key vulnerability drivers, understanding coping and adap-
tation strategies, and deconstructing vulnerability aspects such as social networks and
institutions (Birkmann 2006b). Quantitative methods have applied this understanding in
the development of indicators used to compare the vulnerability of places and trends over
time.

2.1 Composite indicators

Indicators are quantitative variables intended to represent a characteristic of a system of interest. They have been employed to inform decision-making, improve stakeholder participation, build consensus, explore underlying processes, and support advocacy (Parris and Kates
2003). An indicator may be composed of a single variable (e.g., income) or a combination
of variables (e.g., gross domestic product). Multiple indicators can be combined to con-
struct composite indicators, or indices, which attempt to distill the complexity of an entire
system to a single metric. Social indicators have been used since the 1960s, with sub-
sequent application to the environment (1970s), sustainability (1990s), and more recently
vulnerability (King and MacGregor 2000; Birkmann 2006a) and resilience (Cutter et al.
2010). Recent prominent global and regional scale indices that model various aspects of
vulnerability include the Human Development Index (UNDP 2010), the Disaster Risk
Index (UNDP 2004), the Prevalent Vulnerability Index, and the Environmental Sustain-
ability Index (Esty et al. 2005). The Social Vulnerability Index (Cutter et al. 2003) is the
most well-known index for sub-national scale assessment, with application both in the US
and abroad (de Oliveira Mendes 2009; Finch et al. 2010; Holand et al. 2011).
Advancements in social vulnerability conceptual frameworks and the rising interest in
the development of quantitative metrics have led to a wide array of approaches employed
for constructing indices. Index development involves a multi-stage sequential process,
which includes structural design, indicator selection, choice of analysis scale, assessment
of indicator measurement error, data transformation, scaling, weighting, and aggregation
(Table 1). During each stage, modelers must make choices between multiple legitimate
options. For example, should indicators be weighted equally or not? There is not neces-
sarily a right, wrong, or best answer to this question, as is the case for methodological
choices in other stages of index construction:
Composite indicators are much like mathematical or computational models. As such,
their construction owes more to the craftsmanship of the modeler than to universally
accepted scientific rules for encoding (OECD 2008:14).
Although there are no established rules for social vulnerability index construction, previous
studies have helped define a set of potential options for each stage. It is the differences in
these choices that distinguish different social vulnerability indices.

2.2 Index construction

The process of building an index begins with formulation of the conceptual framework. In
short, what is the index intended to measure? Targeted indices emphasize a particular dimension of social vulnerability, such as disaster evacuation, health, or flood hazards. More common has been the development of generalized vulnerability indices, which focus on broad dynamics applicable to many dimensions of vulnerability, such as access to resources, demographic structure, and institutional capacity.

Table 1 Social vulnerability index construction stages and options

Stage | Description | Example options
Conceptual framework | Vulnerability dimensions to include | Access to resources, demographic structure, evacuation, institutional
Structural design | Organization of indicators within the index | Deductive, hierarchical, inductive
Analysis scale | Geographic aggregation level of indicators | US county, census enumeration unit, neighborhood, raster cell size
Indicator selection | Proxy variables for dimensions | Income, education, age, ethnicity, gender, occupation, disability
Measurement error | Accuracy and precision of the demographic data | Census undercounts, reported margin of error
Transformation | Indicator representation | Counts, proportions, density
Normalization | Standardization to common measurement units | Ordinal, linear scaling (min–max, maximum value), z-scores
Data reduction | Reduction of large correlated indicator set to a smaller set | Factor analysis
Factor retention | How many principal components to retain? | Scree plot, Kaiser criterion, parallel analysis
Weighting | Relative degree of indicator importance | Equal, expert, data envelopment analysis, budget allocation, analytic hierarchy process
Aggregation | Combination of normalized indicators to the final index | Additive, geometric, multi-criteria analysis
Perhaps the most methodologically prominent distinguishing characteristic of vulner-
ability indices is the structural design, which includes deductive, hierarchical, and
inductive arrangements (Fig. 1). Deductive models typically contain fewer than ten indi-
cators, which are normalized and aggregated to the index (Fig. 1a) (Cutter et al. 2000;
Montz and Evans 2001; Wu et al. 2002; Dwyer et al. 2004; Collins et al. 2009; Lein and
Abel 2010). This was the most common structure applied to early social vulnerability
indices. Hierarchical designs have employed roughly ten to twenty indicators, separated
into groups (sub-indices) that share the same underlying dimension of vulnerability
(Vincent 2004; Chakraborty et al. 2005; Hebb and Mortsch 2007; Flanagan et al. 2011;
Mustafa et al. 2011). Individual indicators are aggregated into sub-indices, and the sub-
indices aggregated to the index (Fig. 1b). Inductive approaches begin with a large set of
twenty or more indicators, which are reduced to a smaller set of uncorrelated latent factors
using principal components analysis. The factors are then aggregated to build the index
(Fig. 1c). Inductive methods were popularized by the Social Vulnerability Index (Cutter
et al. 2003) and form the basis for the majority of more recent vulnerability indices (Clark
et al. 1998; Rygel et al. 2006; Borden et al. 2007; Burton and Cutter 2008; Myers et al.
2008; Fekete 2009; Burton 2010; Finch et al. 2010; Schmidtlein et al. 2010; Tate et al.
2010; Wood et al. 2010; Fekete 2011).
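The three structural designs differ chiefly in how indicators are combined. As a rough sketch of the aggregation paths (not the implementations used in any of the cited studies; the normalization choice, group assignments, and factor count are illustrative assumptions):

```python
import numpy as np

def minmax(x):
    # Column-wise min-max normalization to [0, 1]
    return (x - x.min(axis=0)) / (x.max(axis=0) - x.min(axis=0))

def deductive(indicators):
    # Few indicators, normalized and averaged directly into the index
    return minmax(indicators).mean(axis=1)

def hierarchical(indicators, groups):
    # Indicators averaged into sub-indices; sub-indices averaged into the index
    z = minmax(indicators)
    subs = [z[:, g].mean(axis=1) for g in groups]
    return np.column_stack(subs).mean(axis=1)

def inductive(indicators, n_factors):
    # Large indicator set reduced by PCA; retained component scores summed
    z = (indicators - indicators.mean(axis=0)) / indicators.std(axis=0)
    vals, vecs = np.linalg.eigh(np.corrcoef(indicators, rowvar=False))
    top = np.argsort(vals)[::-1][:n_factors]  # highest-eigenvalue components
    return (z @ vecs[:, top]).sum(axis=1)
```

The sketch makes the structural contrast concrete: the deductive path has one aggregation step, the hierarchical path two, and the inductive path interposes a data-reduction step before aggregation.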


Fig. 1 Vulnerability index structural designs

With the conceptual framework and index structure defined, several additional steps are
needed to complete the index. The analysis scale represents the geographic resolution of
the areal units linked to the indicators. The choice of scale is important because statistical
relationships between social indicators often vary across scales, meaning that the same
index produced at different scales may yield distinct patterns of vulnerability. Indicator
selection involves choosing specific variables to represent the vulnerability dimensions in
the conceptual framework. For example, both income and poverty rate are plausible
proxies for material resources. Which should be included in the index, and why? Choices
among indicators are generally guided by factors such as data availability, desired number
of indicators, statistical properties, and most importantly validity—how representative is
the indicator of the underlying vulnerability dimension?
Measurement error is a rarely assessed, yet potentially important consideration. In the
U.S., the Census Bureau has long acknowledged chronic data collection error as a result of
undercounts (Clark and Moul 2003). These errors are most pronounced for many of the
same demographic categories associated with heightened vulnerability, such as ethnic
minorities, children, renters, and undocumented immigrants (Passel 2005). The transfor-
mation step specifies how each indicator is represented in the index. For example, is
vulnerability of the disabled best represented by a count of disabled persons, the proportion
of disabled in the greater population, or the number of disabled per unit area? All choices
are reasonable, but could have very different effects on the computed index. Normalization
is performed to place all indicators into a common and dimensionless measurement scale.
Min–max linear scaling is most often used for deductive and hierarchical designs (Hebb
and Mortsch 2007; Lein and Abel 2010), while inductive models typically apply z-score
standardization (Fekete 2009; Finch et al. 2010).
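The normalization options named above can be sketched as follows (a minimal illustration with made-up income values, not drawn from the study data):

```python
import numpy as np

def min_max(x):
    # Linear scaling to [0, 1]: (x - min) / (max - min)
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

def max_value(x):
    # Linear scaling by the maximum: x / max
    x = np.asarray(x, dtype=float)
    return x / x.max()

def z_score(x):
    # Standardization to zero mean and unit variance
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

income = np.array([12_000, 17_000, 25_000, 48_000])  # hypothetical values
print(min_max(income))   # bounded on [0, 1]
print(z_score(income))   # mean 0, standard deviation 1
```

All three place the indicator on a dimensionless scale, but they imply different effective weights once indicators are aggregated, which is why the choice can matter for output ranks.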


For inductive designs, the index developer must also choose how many factors to retain
from the principal components analysis. The most common option is the Kaiser criterion,
in which all components with an eigenvalue greater than one are retained. However, recent
studies have shown that use of the Kaiser criterion overestimates the number of factors to
retain, lending outsized importance to inconsequential factors (O’Connor 2000; Patil et al.
2008). Parallel analysis is now considered by statisticians to be a superior alternative for
determination of the optimal number of factors (Ledesma and Valero-Mora 2007). In
parallel analysis, a random dataset with a sample size and number of variables that par-
allels the observed dataset is generated and used as input to a principal components
analysis. This sequence is repeated in a Monte Carlo simulation, resulting in a distribution
of eigenvalues for each principal component in the random dataset. The 95 % confidence
level of each simulated eigenvalue distribution is then compared to the observed eigen-
values. The optimal number of factors to retain is the number of eigenvalues in the
observed dataset that exceed the corresponding value in the simulated dataset. This nor-
mally results in a fewer number of factors than what would be retained using the Kaiser
criterion.
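The parallel analysis procedure just described can be sketched as follows (assuming normally distributed random data and the 95th-percentile rule; the function and parameter names are illustrative, not from any cited utility):

```python
import numpy as np

def parallel_analysis(data, n_iter=1000, percentile=95, seed=0):
    """Horn's parallel analysis for PCA factor retention (a sketch).

    Retain components whose observed eigenvalues exceed the chosen
    percentile of eigenvalues from random data of the same shape.
    """
    rng = np.random.default_rng(seed)
    n, k = data.shape
    # Eigenvalues of the observed correlation matrix, largest first
    obs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    sims = np.empty((n_iter, k))
    for i in range(n_iter):
        rand = rng.standard_normal((n, k))
        sims[i] = np.linalg.eigvalsh(np.corrcoef(rand, rowvar=False))[::-1]
    # Percentile of each simulated eigenvalue distribution
    threshold = np.percentile(sims, percentile, axis=0)
    return int(np.sum(obs > threshold))
```

Because random-data eigenvalues near one occur by chance alone, this threshold is stricter than the Kaiser criterion and typically retains fewer components.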
Weighting and aggregation are typically the final two stages of index construction. The
weights assigned to indicators should ideally reflect their relative importance. Various
expert and statistical approaches have been employed to derive differential indicator
weights (OECD 2008), but by far the most common approach is the use of equal weights.
Equal weighting is typically applied as a default option due to a lack of understanding of
the relationships between indicators, as opposed to the notion that all indicators are of
equal importance. In the aggregation stage, the weighted normalized indicators are com-
bined to form the index. The summing of indicators to compute the arithmetic mean
(additive aggregation) is nearly universally applied, with a small minority of indices
aggregated using the geometric mean (multiplicative aggregation). Aggregation techniques
such as Pareto ranking and data envelopment analysis have also been employed (Clark
et al. 1998; Rygel et al. 2006).
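The contrast between additive and geometric aggregation can be sketched with two hypothetical enumeration units (equal weights assumed; the values are fabricated for illustration):

```python
import numpy as np

def additive(z, w):
    # Weighted arithmetic aggregation of normalized indicators
    return z @ w

def geometric(z, w):
    # Weighted geometric aggregation; indicators must be strictly positive
    return np.exp(np.log(z) @ w)

w = np.ones(3) / 3                  # equal weights
z = np.array([[0.9, 0.9, 0.9],
              [0.9, 0.9, 0.1]])     # second unit has one low indicator
print(additive(z, w))    # the low indicator is averaged away: [0.9, 0.633...]
print(geometric(z, w))   # the geometric mean penalizes it more heavily
```

The example shows the design trade-off: additive aggregation allows strength on one indicator to compensate for weakness on another, while geometric aggregation limits that compensability.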
All of the steps of index development are important, but the structural design is par-
ticularly critical as it establishes the framework for all other stages to follow. Unfortu-
nately, the bulk of vulnerability index studies neglect to explain why a particular design
was used. Are there differences in the robustness of different vulnerability index designs?
What is the relative influence of other modeling decisions on output patterns of social
vulnerability? The short answer to these questions is that we don't really know. Sensitivity
analysis provides a tool to address this shortcoming and forms the basis herein for a
comparative analysis of the leading social vulnerability index designs.

3 Sensitivity analysis

Existing studies that have developed social vulnerability indices tend to follow a similar
pattern. They begin with discussion of relevant social vulnerability theory, followed by
description of the study area and source data, basic outline of how the index was con-
structed, and conclude with display and analysis of results using maps and tables.
However, relatively little attention is devoted to the rationale for choices in the index
construction, or more importantly, how these choices affect the output index. The latter is
the realm of sensitivity analysis, which evaluates the influence of input data and
parameters on output models. Sensitivity analyses generally assume one of two forms:
local or global. In local analysis, model sensitivity is assessed one index construction


stage at a time. Global analysis allows for simultaneous evaluation of multiple con-
struction stages, often through the use of Monte Carlo simulation. Both approaches
provide the modeler with quantitative metrics to assess the relative importance of dif-
ferent modeling methods. A key benefit of sensitivity analysis is that it can help differ-
entiate between those index construction stages that greatly influence output patterns of
vulnerability and those that do not. This can enable the modeler to focus data collection
and methodology development on the choices that truly matter, thus improving the
robustness of the model.

3.1 Local sensitivity analysis

Local sensitivity analysis evaluates the response of the output index to variations in a
single construction stage, by changing the options one at a time while other stages are
held constant (Xu and Gertner 2008). The analysis is fairly simple to implement and is
typically performed using statistical methods such as correlation and analysis of variance.
As an example, consider a local sensitivity analysis of the normalization stage. Options
such as min–max linear scaling, z-score standardization, and maximum value linear
scaling might be varied to compute different sets of vulnerability ranks. Meanwhile, other
factors (e.g., variable selection, analysis scale, weighting) are fixed. Correlation coeffi-
cients are computed to compare the output vulnerability ranks. A high correlation coef-
ficient suggests the index is insensitive to changes in the normalization approach, while
low correlation implies that the choice of normalization scheme is highly influential on
the final index.
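A minimal version of this local sensitivity test might look like the sketch below (hypothetical indicator data; Spearman correlation implemented directly as the Pearson correlation of rank vectors):

```python
import numpy as np

def ranks(x):
    # Rank vector (0 = lowest); ties are ignored in this sketch
    return np.argsort(np.argsort(x))

def spearman(a, b):
    # Spearman correlation = Pearson correlation of the rank vectors
    return float(np.corrcoef(ranks(a).astype(float),
                             ranks(b).astype(float))[0, 1])

def build_index(data, normalize):
    return normalize(data).mean(axis=1)

def minmax(d):
    return (d - d.min(axis=0)) / (d.max(axis=0) - d.min(axis=0))

def maxval(d):
    return d / d.max(axis=0)

def zscore(d):
    return (d - d.mean(axis=0)) / d.std(axis=0)

rng = np.random.default_rng(1)
data = rng.lognormal(size=(50, 6))  # 50 tracts, 6 hypothetical indicators

baseline = build_index(data, minmax)
for name, norm in [("max value", maxval), ("z-score", zscore)]:
    rho = spearman(baseline, build_index(data, norm))
    print(f"{name}: Spearman rho = {rho:.2f}")
```

Only the normalization stage varies here; indicator set, scale, and weighting are held fixed, which is precisely the one-at-a-time limitation discussed below.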
The few existing studies using sensitivity analysis to evaluate the importance of dif-
ferent stages of social vulnerability index construction have tended to apply local sensi-
tivity analysis. One project examined the effect of altering the input indicator set for an
estimation of the population vulnerable to flood evacuation (Chakraborty et al. 2005). The
authors found that depending on the number and type of input indicators employed, the
vulnerable population predicted by their hierarchical index ranged between 40,000 and
150,000 people. It demonstrated that changing the input indicators could have a large
impact on the output index. A second study examined how variations in indicator selection,
data transformation, scaling, and weighting affected modeled vulnerability to evacuation
and reconstruction in Vancouver, Canada (Jones and Andrey 2007). Using bivariate
parametric correlation, the authors found that their deductive indices were also highly
sensitive to variable selection. A third analysis focused on the inductive algorithm used
to create the Social Vulnerability Index of the United States (Cutter et al. 2003), also
known as SoVI. The authors examined the influence of scale, location, variable selection,
and weighting on output vulnerability ranks generated from exploratory factor analysis
(Schmidtlein et al. 2008). Using nonparametric correlation coefficients to compare index
rankings, the SoVI algorithm was found to be minimally sensitive to changes in input
variables.
The disparate findings regarding the importance of the indicator set make it difficult to
judge its overall degree of influence, as well as leaving open the possibility that its
importance depends on the index structure. One goal of local sensitivity analyses has been
to provide guidance to other vulnerability index developers by identifying which steps are
the most important. According to Jones and Andrey (2007:282), their sensitivity analysis
‘‘suggests that practitioners can be assured that any of the conventional scaling methods
can be used without concern.’’ However, local sensitivity tests are limited in that they can


only evaluate one construction stage at a time. They are particularly inefficient when the
number of uncertain stages is large, but the number of influential ones is small (Saltelli
et al. 2008). Furthermore, local sensitivity analysis is incapable of detecting interactions
between different stages, meaning that claims such as those made by Jones and Andrey
may be incomplete. The ability to vary choices in multiple stages of index construction and
assess interactions is needed to fully evaluate the importance of variable selection. To
perform this type of analysis, global sensitivity analysis is required.

3.2 Global sensitivity analysis

The goal of global sensitivity analysis is to examine how output variation in a model can be
apportioned to multiple sources of variation in the input assumptions (Saisana and Saltelli
2008). This variation, or uncertainty, arises from subjective decisions when choosing
between options during each stage of index construction. To decompose uncertainty, the
index algorithm should be subjected to a bootstrap analysis (Villa and McLeod 2002), in
which the uncertainty associated with each stage is propagated through the model.
Uncertainty analysis applies this approach to assess how uncertainty in model inputs
translates into uncertainty in model outputs.
Global sensitivity analysis typically requires four sequential steps: sample selection,
Monte Carlo simulation, uncertainty analysis, and sensitivity analysis (Fig. 2). In the first
step, the input distribution of each model parameter is sampled to select a single value. For
indices, the parameters are the stages of construction (e.g., normalization), each with an
equal probability distribution comprised of the set of parameter values (e.g., linear scaling,
z-scores). The index is then computed and rankings saved. This process is repeated
thousands of times in a Monte Carlo simulation. Instead of a discrete output vulnerability
rank for each census unit, with Monte Carlo simulation each rank becomes a frequency
distribution. Descriptive statistics such as the variance, confidence limits, median, and
coefficient of variation can then be evaluated as measures of uncertainty (Tate
forthcoming).
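The four-step sequence can be sketched as a small Monte Carlo over two uncertain construction stages (the options, weights, and data below are illustrative assumptions, not the study's configuration):

```python
import numpy as np

rng = np.random.default_rng(7)
data = rng.lognormal(size=(40, 5))  # 40 enumeration units, 5 indicators

# Equal-probability options for two uncertain construction stages
def norm_minmax(d): return (d - d.min(axis=0)) / (d.max(axis=0) - d.min(axis=0))
def norm_maxval(d): return d / d.max(axis=0)
def norm_zscore(d): return (d - d.mean(axis=0)) / d.std(axis=0)
norms = [norm_minmax, norm_maxval, norm_zscore]
weights = [np.full(5, 0.2), np.array([0.3, 0.3, 0.2, 0.1, 0.1])]

n_runs = 2000
rank_samples = np.empty((n_runs, len(data)))
for r in range(n_runs):
    norm = norms[rng.integers(len(norms))]    # sample one option per stage
    w = weights[rng.integers(len(weights))]
    index = norm(data) @ w
    rank_samples[r] = np.argsort(np.argsort(index))

# Each unit now has a frequency distribution of ranks, not a single rank
median_rank = np.median(rank_samples, axis=0)
ci_width = (np.percentile(rank_samples, 97.5, axis=0)
            - np.percentile(rank_samples, 2.5, axis=0))
```

A wide interval for a unit signals that its vulnerability rank is an artifact of methodological choices rather than a robust result.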
With the distribution of index rankings known, sensitivity analysis is applied to
determine the proportional contribution of each construction stage to the total uncertainty.
One way to achieve this is to decompose the variance of the rank distribution (Saltelli and
Tarantola 2002; Saisana et al. 2005b; Saltelli et al. 2008):

Fig. 2 Global sensitivity analysis framework


V(Y) = \sum_i V_i + \sum_i \sum_{j>i} V_{ij} + \cdots + V_{12\ldots k}    (1)

where Y is the index metric of interest, V(Y) the total variance of Y, V_i the partial variance in Y due to index stage i, and V_{ij} the partial variance in Y due to interaction between stages i and j.
The metric Y is the quantity of interest in the sensitivity analysis. Previous index
studies have investigated (a) the index rank for individual enumeration units (e.g., coun-
tries, counties, census tracts), (b) the difference in rank between a pair of enumeration
units, and (c) the average deviation in rank from a baseline index (Saisana et al. 2005b;
Aguña and Kovacevic 2011). The ability to quantitatively assess the relative importance of
individual index stages, as well as account for interactions between stages distinguishes
global from local sensitivity analysis.
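For discrete construction choices, the first-order terms in Eq. 1 can be estimated by conditioning the Monte Carlo output on each stage's options, since V_i = V(E[Y | X_i]). A toy sketch (the output model here is fabricated purely to illustrate the computation):

```python
import numpy as np

rng = np.random.default_rng(3)
n_runs = 20_000

# Two uncertain stages, sampled as equal-probability discrete choices
norm_choice = rng.integers(3, size=n_runs)     # e.g., three normalization options
weight_choice = rng.integers(2, size=n_runs)   # e.g., two weighting schemes

# Fabricated output: rank deviation driven mostly by the weighting choice
y = 2.0 * weight_choice + 0.3 * norm_choice + rng.normal(0.0, 0.1, n_runs)

def first_order_index(choices, y):
    # S_i = V(E[Y | X_i]) / V(Y): variance of the conditional means
    options = np.unique(choices)
    mean = y.mean()
    v_cond = sum(np.mean(choices == c) * (y[choices == c].mean() - mean) ** 2
                 for c in options)
    return v_cond / y.var()

s_norm = first_order_index(norm_choice, y)
s_weight = first_order_index(weight_choice, y)
```

In this contrived example s_weight comes out far larger than s_norm, mirroring how a single influential stage can dominate the rank variance; any variance not captured by the first-order terms belongs to the interaction terms of Eq. 1.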
The use of variance-based techniques for global sensitivity analysis offers certain
advantages over deterministic approaches. When multiple sources of input uncertainty are
propagated through the model, indices often turn out to be nonlinear and non-additive
(OECD 2008). Variance-based methods are considered to be ‘‘model free’’ in that they can
be applied to models that are nonlinear, non-monotonic, and non-additive (Saltelli et al.
2008). A model is considered non-additive if the total output variance cannot be expressed
as a sum of the variances of the model inputs. An expert panel at the U.S. Environmental
Protection Agency recently reviewed methods for sensitivity analysis and listed preferred
attributes (EPA 2009). These included immunity to assumptions regarding linearity and
additivity, the ability to evaluate interaction effects between uncertain inputs, and the
capacity to assess one varying input while other inputs are also varied. The panel did not
recommend a particular sensitivity analysis method, but noted that only variance-based
methods meet all of these criteria.

4 Methods

Variance-based global sensitivity analysis has been applied to a number of international


social and environmental indices to evaluate their robustness to changes in methodology.
This includes studies of the Environmental Sustainability Index (Saisana et al. 2005a),
Environmental Performance Index (Saisana and Saltelli 2008), and the Human Develop-
ment Index (Aguña and Kovacevic 2011). This section provides an example application to
indices of social vulnerability to hazards. The overarching goals of the analysis are to
compare the robustness of deductive, hierarchical, and inductive index designs, and within
each design, distinguish between which construction stages are influential and which are
not.

4.1 Study area

Nueces County, Texas, serves as the study area for the investigation, with Norfolk, Vir-
ginia, and Sarasota County, Florida, selected as comparison sites (Fig. 3). Located on the
Gulf Coast of Texas, Nueces County is a primarily rural county spanning over 2,100 square
kilometers. In the year 2000, the total population was approximately 314,000, 55 % of
which were ethnic Hispanic. The per capita income was about $17,000, with nearly 18 %
of households below the poverty line and 36 % possessing some type of disability. The
confluence of minority status, disability, and poverty suggests areas within Nueces County


Fig. 3 Study area locations

may have a high degree of social vulnerability. All demographic variables used in the
vulnerability index construction were collected from the year 2000 US census.
The three selected study areas are similar in their exposure to flooding and hurricane
winds, number of census enumeration units, and membership in the nation’s top quintile of
most socially vulnerable counties (Cutter et al. 2003). However, they differ in other char-
acteristics. While Nueces County is vast and largely rural, Norfolk is completely urban and
confined to a relatively small area. Sarasota County is somewhere in between, with urban
areas along the coast and rural areas inland. Demographically, the population of Sarasota
County is older and wealthier. Norfolk has a large African-American population, whereas a
majority of the residents of Nueces County are Hispanic. In selecting the study areas, the goal
was to limit their key differences largely to those demographic characteristics used for
vulnerability modeling. The underlying question is which is a more important determinant of
index uncertainty and sensitivity: characteristics of the study area or the index construction
methods used to model them? If the sensitivity analysis results significantly diverge between
study areas, it can be concluded that index uncertainty and sensitivity is place-specific. If
however, the differences in sensitivity metrics are small, there is a firmer basis to argue that
the findings from one study area are generalizable to other places.

4.2 Analysis design

The parameters employed for the global sensitivity analysis are shown in Table 2. The
choice of which construction stages and options to include in the analysis varies somewhat


Table 2 Sensitivity analysis parameters

Index stage | Options | Deductive | Hierarchical | Inductive
Indicator set | Base, Alternative | ✓ | ✓ | ✓
Analysis scale | Census tract, Census block group | ✓ | ✓ | ✓
Measurement error | Census 2000, Census 2000 w/undercounts | ✓ | ✓ | ✓
Transformation | None (counts), Population, Area | ✓ | ✓ |
Normalization | Min–max scaling, Max value scaling, Z-scores | ✓ | ✓ |
Factor retention | Kaiser criterion, Parallel analysis | | | ✓
Weighting | Equal, Expert | ✓ | ✓ | ✓

with the index design. For instance, the normalization stage is not evaluated for inductive
designs because z-score standardization has been applied in nearly all previous studies.
Likewise, factor retention is only relevant to principal components analysis used for
inductive indices. The first-listed option for each stage in Table 2 (e.g., base, census tract) is the most commonly used method in the peer-reviewed literature and is used to generate a baseline
index for each of the three index designs. Producing a baseline index configuration is an
important precursor to conducting robustness analysis (Cherchye et al. 2007), providing a
measuring stick against which to evaluate alternative construction methods.
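The three normalization options in Table 2 differ only in how an indicator vector is rescaled. A minimal sketch (the function name and sample values are illustrative, not drawn from the study):

```python
import numpy as np

def normalize(x, method="minmax"):
    """Rescale an indicator vector with one of the schemes in Table 2."""
    x = np.asarray(x, dtype=float)
    if method == "minmax":      # min-max scaling to [0, 1]
        return (x - x.min()) / (x.max() - x.min())
    if method == "maxvalue":    # max-value scaling to (0, 1]
        return x / x.max()
    if method == "zscore":      # standardization (mean 0, sd 1)
        return (x - x.mean()) / x.std()
    raise ValueError(f"unknown method: {method}")

# e.g., a hypothetical poverty-rate indicator across four census units
poverty_rate = np.array([0.05, 0.12, 0.30, 0.18])
scaled = normalize(poverty_rate, "minmax")
```

Min–max and max-value scaling preserve the sign of the indicator, while z-scores center each indicator at zero; the choice matters most when indicators are later summed.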
The design includes the choice of two indicator sets for each index structure; these are
detailed in Tables 4, 5, and 6 in the "Appendix". For the inductive structure, the base
indicators mimic the recent incarnation of the SoVI algorithm (Finch et al. 2010; Wood et al.
2010). The alternative indicators largely overlap the baseline indicators, but a few variables
are replaced, either to represent the same vulnerability dimensions with different variables
or to induce minor shifts in the dimensions included in the index. To assess the importance
of scale, the design includes analysis at both the census block group and tract levels. The goal
is to determine whether a given area shows the same level of vulnerability when analyzed at
different scales. To do so, index scores computed at the block group level are averaged
within each tract, producing tract-level rankings. These ranks are then compared with those
originally computed at the tract level.
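The tract-from-block-group averaging described above can be sketched with hypothetical FIPS codes and scores (none drawn from the study data); the first eleven digits of a twelve-digit block-group FIPS code identify its parent census tract:

```python
import pandas as pd

# Hypothetical block-group vulnerability scores
bg = pd.DataFrame({
    "bg_fips":  ["480010201001", "480010201002", "480010202001", "480010202002"],
    "bg_score": [0.40, 0.60, 0.10, 0.30],
})
bg["tract_fips"] = bg["bg_fips"].str[:11]

# Average block-group scores within each tract, then rank the tracts;
# these ranks would be compared with ranks from a direct tract-level analysis
tract_rank = bg.groupby("tract_fips")["bg_score"].mean().rank(ascending=False)
```

If the two rankings diverge, the index is scale-sensitive for that study area.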
To assess the influence of indicator measurement error, undercount data from the 2000
census were acquired (FairData 2003). Unfortunately, the undercount data were not
available for all of the vulnerability indicators, allowing only for alternative calculation of
race and ethnicity indicators. When creating an inductive vulnerability index, one key
decision is how many factors to retain from the PCA. The factor retention parameter thus
includes options of the Kaiser criterion and parallel analysis. The capability for parallel
analysis with PCA is currently lacking from most of the popular statistical packages, so a
web-based utility for parallel analysis was employed (Patil et al. 2007).
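The logic of Horn's parallel analysis is simple enough to sketch: retain components whose observed eigenvalues exceed those obtained from random data of the same dimensions. A simplified version using mean (rather than percentile) thresholds; the function name is illustrative:

```python
import numpy as np

def parallel_analysis(data, n_sims=200, seed=0):
    """Count components whose observed correlation-matrix eigenvalue
    exceeds the mean eigenvalue from random data of the same shape."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    observed = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    random_eigs = np.empty((n_sims, p))
    for s in range(n_sims):
        noise = rng.standard_normal((n, p))
        random_eigs[s] = np.linalg.eigvalsh(np.corrcoef(noise, rowvar=False))[::-1]
    # positions where the observed eigenvalue beats the random threshold
    return int(np.sum(observed > random_eigs.mean(axis=0)))
```

By contrast, the Kaiser criterion simply retains every component with an eigenvalue above one, which is why it tends to over-extract.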
The weighting parameter includes options of equal and expert weights. One potential
reason for the ubiquity of usage of equal weights is the scarcity of available alternatives.


The expert weights for this study are provided in Tables 4, 5, and 6 in the "Appendix" and are
drawn from a survey of hazards and vulnerability practitioners and researchers (Emrich
2005). For hierarchical models using differential weights, index developers face a decision
about whether to apply the weights at the indicator or the sub-index level. Previous work has
stressed the importance of using equal weights at the sub-index level unless there is
important justification for doing otherwise (Villa and McLeod 2002). This guidance is
followed herein: for the expert-weighting scheme, differential weights are applied at the
indicator level, with equal weights assigned to each of the three sub-indices. In the
inductive design, the weight of each factor is determined by summing the weights
of all constituent indicators with a component loading above 0.5 (HVRI 2008). Notably
absent from the analysis design in Table 2 is a parameter for indicator aggregation.
Although aggregation is an important stage of index construction, previous social
vulnerability indices have nearly always applied additive aggregation. As the goal of this
article is to compare the options used in the most popular index designs, aggregation is
omitted from the analysis.
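The weighting guidance above — expert weights at the indicator level, equal weights across the sub-indices — can be sketched as follows (the data layout and function name are illustrative):

```python
import numpy as np

def hierarchical_index(sub_indices):
    """Combine sub-indices: expert weights within each sub-index,
    equal weights across sub-indices (per Villa and McLeod 2002)."""
    scores = []
    for indicators, weights in sub_indices:
        w = np.asarray(weights, dtype=float)
        # weighted mean of normalized indicators within one sub-index
        scores.append(np.asarray(indicators, dtype=float) @ (w / w.sum()))
    return np.mean(scores, axis=0)  # equal weights across sub-indices

# two census units; two sub-indices of two indicators each (values illustrative)
units = np.array([[0.2, 0.8],
                  [0.6, 0.4]])
combined = hierarchical_index([(units, [1, 1]),    # equal weights
                               (units, [3, 1])])   # expert weights
```

Applying the same expert weights at the sub-index level instead would generally produce different unit scores, which is why the level of application is itself a modeling decision.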
The Monte Carlo simulation for each index design structure begins with construction of
the baseline index. During each simulation run, a vulnerability index is constructed and the
average rank deviation from the baseline index is calculated:

D_{Bas} = \frac{1}{M} \sum_{i=1}^{M} \left| \mathrm{Rank}_{Bas}(CU_i) - \mathrm{Rank}(CU_i) \right|    (2)

where D_{Bas} = average deviation from the baseline rank, M = number of census units, and
CU_i = census unit i.
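A direct implementation of Eq. (2), with hypothetical rank vectors:

```python
import numpy as np

def avg_baseline_deviation(baseline_ranks, run_ranks):
    """Eq. (2): mean absolute deviation of one run's ranks from the baseline."""
    b = np.asarray(baseline_ranks, dtype=float)
    r = np.asarray(run_ranks, dtype=float)
    return float(np.abs(b - r).mean())

# four census units whose ranks swap in pairs under an alternative configuration
deviation = avg_baseline_deviation([1, 2, 3, 4], [2, 1, 4, 3])  # 1.0
```

One such deviation value is produced per simulation run, yielding the output distribution analyzed in Sect. 5.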
The global sensitivity analysis software SimLab version 2.2 (JRC 2011) was used for
the sample selection that begins the analysis and for the computation of sensitivity metrics
that completes it. In between, Visual Basic code working within ArcGIS 10 was developed to
read input samples from SimLab, perform the Monte Carlo simulation, compute statistics
for the uncertainty distribution, and export the average deviation results back to SimLab
for the sensitivity analysis (Tate 2011). One commonly expressed downside of global
sensitivity analysis is its computational expense (EPA 2009). However, the variance
decomposition algorithms in SimLab can produce sensitivity measures with far fewer
simulations than a full factorial analysis would require (Saisana et al. 2005b):
N = 2B(k + 1)    (3)

where N = number of runs in the Monte Carlo simulation, B = base sample size (default = 512),
and k = number of input parameters.
Based on the analysis design outlined in Table 2, the number of parameters (stages)
analyzed for the deductive and hierarchical designs is six. Using Eq. (3), this requires
7,168 runs to produce reliable sensitivity measures. Similarly, the sensitivity
analysis for the inductive design, with five parameters, necessitates 6,144 model runs.
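The run counts follow directly from Eq. (3):

```python
def n_runs(k, base=512):
    """Eq. (3): Monte Carlo runs required for k input parameters."""
    return 2 * base * (k + 1)

deductive_runs = n_runs(6)  # six stages -> 7,168 runs (also hierarchical)
inductive_runs = n_runs(5)  # five stages -> 6,144 runs
```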

5 Results

The Monte Carlo simulation produces a distribution of the average baseline deviation
metric for each of the three model structures, providing a means to assess uncertainty in the
baseline indices. Two important descriptive statistics of the distributions are the median


Fig. 4 Uncertainty analysis comparison

and the variance. The median offers a measure of the accuracy of the baseline ranking
when alternative construction approaches are considered. The term 'accuracy' here connotes
closeness of the index ranking to its true value. This presents a challenge because
social vulnerability cannot be directly measured; as such, the median, an unbiased
estimator of central tendency, is taken here to represent the 'true' index value. Accuracy
thus increases as the median baseline deviation decreases. The variance is a proxy for
index precision: how close are repeated index rankings to one another? A robust index is
one that is both accurate and precise.
The histograms of the Monte Carlo output distributions for Nueces County are shown in
Fig. 4. Five percent of observations are culled from each end of the distribution to remove
outliers; the values in the figure thus represent the 90 % confidence interval. The
deductive baseline index is easily the worst performer of the three, displaying both a high
median value (11.2 rank positions out of 64 census tracts) and a high variance (σ = 4.7
positions). The hierarchical model is more accurate, but has a similar variance; the key to
improving it lies in increasing its precision by identifying which construction
stages contribute most to the uncertainty. For the inductive index, the low variance of
average deviation values demonstrates that it is the most precise social vulnerability index
structure (σ = 3.3). However, its accuracy is poor: the particular set of parameters
embodied in the baseline index (SoVI) produces rankings that are outliers when compared
with the full set of potential outcomes. Ideally, the index developer would experiment with
the configuration of the inductive baseline so that its rankings reside in the low-variance
cluster on the right side of the histogram.
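The trimming and summary statistics described above can be sketched as follows (the simulated input is illustrative, not the study's output distribution):

```python
import numpy as np

def uncertainty_summary(deviations, trim=0.05):
    """Drop `trim` from each tail (here 5%, leaving the 90% interval),
    then report the median (accuracy) and standard deviation (precision)."""
    d = np.sort(np.asarray(deviations, dtype=float))
    lo = int(trim * d.size)
    kept = d[lo:d.size - lo]
    return np.median(kept), kept.std(ddof=1)

# illustrative distribution of average baseline deviations from a simulation
rng = np.random.default_rng(0)
median_dev, sigma = uncertainty_summary(rng.normal(11.2, 4.7, 5000))
```

A lower median indicates a more accurate baseline; a lower standard deviation, a more precise one.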
The median and variance are useful tools to assess the reliability of the index designs,
but left unknown is which model parameters are the main drivers of uncertainty. In other
words, which stages of index construction are the most important? Global sensitivity
analysis provides diagnostic tools to answer these questions, producing sensitivity metrics
that assess the behavior of index parameters in terms of both main (first order) and
interaction (higher order) effects. The main effect is the proportion of the total variance
V(Y) accounted for solely by stage i (Saisana et al. 2005b; Aguña and Kovacevic 2011):


Table 3 Sensitivity analysis results

                    Deductive               Hierarchical            Inductive
Index stage         First   Total   Diff    First   Total   Diff    First   Total   Diff
                    order   effect          order   effect          order   effect
Indicator set       0.08    0.11    0.02    0.06    0.06    0.00    0.10    0.80*   0.70
Measurement error   0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00
Analysis scale      0.05    0.09    0.04    0.02    0.04    0.02    0.04    0.67*   0.63
Transformation      0.80*   0.87    0.07    0.31*   0.36    0.06    –       –       –
Normalization       0.00    0.00    0.00    0.00    0.00    0.00    –       –       –
Factor retention    –       –       –       –       –       –       0.00    0.53*   0.53
Weighting           0.00    0.01    0.01    0.58*   0.64    0.06    0.00    0.38*   0.38
Sum                 0.92                    0.97                    0.14

* Influential stage

S_i = V_i / V(Y)    (4)

The variable S_i is the first-order or main effect for index stage i. Values of S_i range
between zero and one, with higher values indicating greater importance (Saltelli et al.
2008). However, a low first-order sensitivity value cannot be assumed to indicate a
non-influential parameter, because the index stage may be involved in interactions with
other construction stages. To avoid this error, the total effect sensitivity metric is
computed, which combines the main and interaction effects for each stage. Only if the total
effect is near zero can it be concluded that the construction stage is truly non-influential.
The overall presence of interactions can be evaluated by summing the first-order indices: if
the sum is less than one, some degree of interaction exists between construction stages.
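SimLab's estimators are sample-based, but for a handful of discrete construction options the same quantities can be computed exactly by brute force over the full factorial grid. The sketch below is a simplified stand-in for SimLab's variance decomposition (function name illustrative; options assumed equally likely):

```python
import numpy as np
from itertools import product

def sensitivity_indices(model, levels):
    """First-order (S_i) and total-effect (ST_i) indices for discrete,
    equally likely parameter options, via a full factorial grid."""
    grid = list(product(*levels))
    y = np.array([model(run) for run in grid], dtype=float)
    vy = y.var()
    k = len(levels)
    first, total = [], []
    for i in range(k):
        # S_i = Var(E[Y | X_i]) / Var(Y): variance explained by stage i alone
        cond = [y[[run[i] == lv for run in grid]].mean() for lv in levels[i]]
        first.append(np.var(cond) / vy)
        # ST_i = E[Var(Y | X_~i)] / Var(Y): main effect plus all interactions
        groups = {}
        for run, val in zip(grid, y):
            groups.setdefault(run[:i] + run[i + 1:], []).append(val)
        total.append(np.mean([np.var(g) for g in groups.values()]) / vy)
    return first, total
```

For a purely additive model the first-order and total effects coincide and sum to one; a first-order sum below one signals interactions, exactly the diagnostic used above.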
The sensitivity metrics for the three model structures are shown in Table 3. The first-
order values are the key to identifying highly influential parameters. There is no fixed
threshold for what precisely constitutes a ‘high’ value, but one rule of thumb is to use the
inverse of the number of factors as an importance cutoff (Saisana et al. 2005a, b). This
equates to a threshold of 1/6 or 0.17 for the deductive and hierarchical structures, and 1/5
or 0.20 for the inductive. Low values in the total effect column are used to identify non-
influential parameters. High values in the difference column identify parameters involved
in interactions with one another.
For the deductive index, the transformation stage is the only parameter with a high first-
order effect, explaining 80 % of the output variance. Taken together, the first-order effects
account for 92 % of the variance. This means that few interactions between index stages
are occurring. The roadmap to reducing the uncertainty in the deductive model is clear:
focus effort on choosing the transformation approach that best represents vulnerability. The
hierarchical model also has few interactions, with the first-order effects responsible for
over 97 % of the output variance. The main drivers of uncertainty are the weighting and
transformation stages. Based on the total effect values, all other stages are non-influential.
The robustness of the hierarchical model could be improved by devoting additional
methodological attention to the weighting algorithm. For the inductive model, none of the
parameters is influential when taken independently; collectively, they account for only 14 % of
the output variance. But except for measurement error, all of the parameters are involved in
a significant degree of interaction with one another. The choice of indicator set is the


Fig. 5 Sensitivity analysis comparison

greatest contributor to the uncertainty, but analysis scale, factor retention, and weighting
are also important drivers. The high degree of interactions makes it difficult to determine
which development stages should receive the most attention. What is known is that only
the measurement error parameter can be discounted as non-influential.


Identifying the influential parameters for each model structure can be used to improve
the social vulnerability indices for Nueces County, but are the findings generalizable to
other study areas? The global sensitivity metrics for Norfolk and Sarasota can be used to
address this question. Figure 5 displays the distribution of total effect values when the
sensitivity metrics for the three study areas are averaged. The drivers of uncertainty in the
deductive model continue to be the transformation (accounts for 57 % of variance) and
indicator selection (23 %) stages. For the hierarchical model, the weighting scheme (60 %)
is the most influential, followed by indicator transformation (26 %). The contribution to
uncertainty of the other parameters is negligible. The parameters for the inductive structure
are highly interactive in their effect on index uncertainty. The inductive model is highly
sensitive to the choice of indicator set (30 %) and analysis scale (28 %), and moderately
sensitive to the weighting scheme (20 %) and factor retention method (19 %). Overall, the
sensitivity metrics from Nueces County are consistent across the comparison areas,
suggesting that the uncertainty and sensitivity of social vulnerability indices are more a
function of the construction methodology of the index than of demographic differences
between places.

6 Conclusions

There is a growing recognition that hazard vulnerability reduction efforts must include
social dimensions if they hope to be successful. Refocused efforts to mitigate social
vulnerabilities have driven a rising interest in quantitative metrics to evaluate them. In the
U.S., social vulnerability indicators and indices are beginning to become part of the process
at local (City of Cedar Rapids 2010), state (California OES 2010), and federal (Dunning
and Durden 2011) levels. Parallel trends are occurring in the related fields of sustainability
and disaster resilience. The burgeoning transition from academic construct to decision-making
tool makes it even more important that the indices be robust. The big question
facing proponents of vulnerability indices is: how do you know that the model is correct?
One line of vulnerability index research has focused on external validation using
independent proxy data, such as mortality, economic loss, and household surveys.
Through an emphasis on the methods used for index construction, global sensitivity
analysis provides a path for equally important internal validation. The two are inextricably
linked: if an index lacking internal robustness is used as the basis for an external
validation study, the strength of any statistical relationships found with external data will
be tenuous at best. This article has pursued the internal validation path, using global
sensitivity analysis to provide a systematic evaluation of the most popular social
vulnerability index configurations. The main findings and recommendations are summarized
as follows:
• Model structure. Deductive, hierarchical, and inductive index structures were
evaluated. No structure is inherently better or worse for social vulnerability indices,
but the sensitivity analysis findings varied greatly by model structure. The most robust
index configuration is likely to differ for each design structure.
• Baseline index. Based on the median of the uncertainty distribution, the hierarchical
baseline was the most accurate. Based on the variance, the inductive model was
found to be the most precise. However, the inductive index configuration based on
the SoVI algorithm was an outlier. The deductive baseline was the worst performer in
terms of both accuracy and precision. Given the prominence of the SoVI model,


changes in its construction methodology should be considered so that its rankings are
more robust.
• Indicator set. The deductive and inductive designs are highly sensitive to the choice of
input indicators. In the bulk of the previous studies, only limited justification has been
given regarding the selection of specific indicators. Index developers as a whole should
devote more attention to the theoretical links between indicators and underlying
vulnerability dimensions, as well as deeper consideration of the statistical properties of
indicators.
• Analysis scale. The deductive and hierarchical models were found to be scale
invariant. However, the analysis scale is highly influential for the inductive model
structure.
• Measurement error. Census undercounts were found to be a non-influential parameter
for all model structures. However, the U.S. Census Bureau is transitioning away from
the use of the decennial census toward the annual American Community Survey (ACS).
This will impact many of the indicators used for social vulnerability modeling. The
ACS data should not be treated as counts, but as the sampling estimates they truly are,
with the margin of error explicitly accounted for in the modeling process and outputs.
Global sensitivity analyses can be applied to characterize the importance of the
measurement error of this new dataset.
• Transformation. The data transformation step was found to be highly influential in each
situation for which it was evaluated. This stage should receive the highest level of
methodological scrutiny in the development of social vulnerability indices.
• Normalization. Data normalization exerts a negligible degree of influence on modeled
social vulnerability ranks. Normalization alternatives should receive no more than
limited methodological attention during vulnerability index development.
• Factor retention. Choices between factor retention approaches were found to
moderately influence the variation in vulnerability ranks. Recent statistical studies
indicate the use of the Kaiser criterion leads to factor over-extraction and should be
curtailed in favor of parallel analysis.
• Weighting. The effect of the weighting scheme varies based on model structure. The
inductive model was moderately sensitive to weighting, while the deductive model was
insensitive. However, the weighting approach is the main driver of uncertainty for the
hierarchical model. As such, developers of hierarchical social vulnerability indices
should devote additional methodological attention to this step. Further research should
also be conducted to better understand the theoretical and statistical implications of
applying weights at the indicator vs. the sub-index level.
• Aggregation. This parameter was not evaluated due to the predominance of additive
aggregation in existing studies. Future research should investigate alternative
aggregation schemes.
Previous work in the field of vulnerability science has done a good job of taking
qualitative work describing social stratification in pre- and post-disaster settings,
categorizing the findings into vulnerability dimensions, and developing quantitative
indicators to describe them. As a generalization, efforts to build social vulnerability
indices have consisted of selecting a subset of normalized indicators and computing their
arithmetic mean. Granted, a minority of published efforts has invested more theoretical
heft into steps such as indicator selection and factor analysis, but the overall approach
has been similar. From research in composite indicator development, we know that
the creation of a robust index requires much more than this. There are pros, cons, and


nuance involved in the methods applied during each stage of index development; these
should be deeply considered and subsequently evaluated using uncertainty and sensitivity
analyses.
If hazard risk reduction efforts are to be truly effective, decision-making in pre-disaster
planning, post-disaster response, and resource allocation must be based on reliable infor-
mation. Social vulnerability indices are a potentially powerful tool for these purposes in
that they summarize complexity, provide quantitative metrics to compare places and track
progress, and are relatively easy for non-experts to interpret. These benefits of indices
make it likely they will continue to be attractive to stakeholders, practitioners, and decision
makers, despite counterarguments that generalized aggregated indices are too blunt an
instrument (Saisana et al. 2005b) to be used for specific hazard mitigation interventions.
Therefore, it is important to understand whether the set of practices currently used for
vulnerability index development is up to the task. To date, the designs of social
vulnerability indices have been based largely on a mix of ad hoc modeling decisions and
precedent established by previous efforts. Global sensitivity analysis offers a way to assess
the effect of these design choices, helping to produce more transparent, robust, and reliable
indices. The technique should be a standard element of the social vulnerability index
construction process.

Acknowledgments This research was funded by the National Science Foundation, through the Graduate
Research Fellowship Program. In addition to the insightful feedback provided by the three anonymous
reviewers, I would like to thank Shama Perveen and Tim Frazier, who provided comments on an earlier
draft.

Appendix

See Tables 4, 5, and 6.

Table 4 Deductive model variable set and expert weights

Dimension                Description                       Indicator set  Expert weight* (%)
Socioeconomic status     Adults, no high school education  Base           7
                         Per capita income                 Base           21
                         Income below poverty level        Alt            22
                         Households with no vehicles       Alt            8
                         Mobile homes                      Alt            8
Age                      Children under 5 years            Base, Alt      12, 13
                         Elderly over 64 years             Base, Alt      14, 15
Race and ethnicity       Non-white and non-Anglo           Base           16
Gender                   Female-headed households          Base           8
Home ownership           Rental housing units              Base           9
Population distribution  Housing density                   Alt            8
Special needs            Disabled                          Base, Alt      6, 8
                         Poor English proficiency          Base, Alt      6, 8
                         Nursing home residents            Alt            11

* Equal weight = 11 %


Table 5 Hierarchical model variable set and expert weights

Sub-index                         Indicator                         Indicator set  Expert weight* (%)
Differential access to resources  Mean house value                  Base, Alt      23, 22
                                  Adults, no high school education  Base, Alt      15
                                  Rental housing units              Base, Alt      17
                                  Per capita income                 Base           29
                                  Income below poverty level        Alt            28
                                  Service sector employment         Base           16
                                  Civilian unemployment             Alt            19
Demographic structure             Children under 5 years            Base, Alt      21, 21
                                  Elderly over 64 years             Base, Alt      23, 22
                                  Black or African-American         Base           20
                                  Hispanic or Latino                Base           18
                                  Non-white and non-Anglo           Alt            25
                                  Female-headed households          Base           17
                                  Females                           Alt            16
                                  People per housing unit           Alt            17
Special needs                     Disabled, over 5 years old        Base, Alt      19
                                  Nursing home residents            Base, Alt      23, 19
                                  Housing density                   Base, Alt      19
                                  Households w/ no vehicles         Base, Alt      19
                                  Poor English proficiency          Base           19
                                  Recent immigrants                 Alt            22

* Equal weight = 20 %

Table 6 Inductive model indicators

Dimension                 Indicator                      Alternative index
Race and ethnicity        Black or African-American
                          Native American
                          Asian, Pacific Islander
                          Hispanic or Latino
Age                       Children under 5 years
                          Elderly over 64 years
                          Median age                     Removed: both low and high values
                                                         indicate high vulnerability
Social dependence         Social security recipients
Gender, family structure  Females
                          Female-headed households
                          People per housing unit
Special needs             Nursing home residents
                          Recent immigrants
                          Disabled                       Added: special needs dimension
                                                         underrepresented
                          Households with no vehicles    Added: special needs dimension
                                                         underrepresented
Material resources        Per capita income
                          Income > $100,000
                          Income below poverty level
                          Mean house value
                          Mean contract rent
Human resources           No high school education
                          Service sector employment
                          Civilian workers
                          Unemployed civilian workers
                          Female civilian workers
Housing                   Rental housing units
                          Mobile homes                   Removed: built environment, not a
                                                         social vulnerability metric
Health                    Health care practitioners
                          Hospitals in the census unit   Removed: accessibility more important
                                                         than presence/absence
Economic sector           Extractive industry workers
                          Public infrastructure workers  Removed: workers likely in demand
                                                         during recovery
Population distribution   Housing density
                          Urban population
                          Rural farm population
References

Aguña CG, Kovacevic M (2011) Uncertainty and sensitivity analysis of the human development
index. United Nations Development Programme, Research Paper 2010/47
Birkmann J (2006a) Indicators and criteria. In: Birkmann J (ed) Measuring vulnerability to natural
hazards: towards disaster resilient societies. United Nations University Press, Tokyo, pp 55–77
Birkmann J (2006b) Measuring vulnerability to natural hazards: towards disaster resilient societies. United
Nations University Press, Tokyo
Borden KA, Schmidtlein MC, Emrich CT, Piegorsch WW, Cutter SL (2007) Vulnerability of US cities to
environmental hazards. J Homel Secur Emerg Manage 4(2):1–21
Burton CG (2010) Social vulnerability and hurricane impact modeling. Nat Hazards Rev 11(2):58–68
Burton C, Cutter SL (2008) Levee failures and social vulnerability in the Sacramento-San Joaquin Delta
area, California. Nat Hazards Rev 9(3):136–149
California OES (2010) State of California multi-hazard mitigation plan. Governor’s Office of Emergency
Services, Sacramento, CA


Chakraborty J, Tobin GA, Montz BE (2005) Population evacuation: assessing spatial variability in geo-
physical risk and social vulnerability to natural hazards. Nat Hazards Rev 6(1):23–33
Cherchye L, Lovell K, Moesena W, Puyenbroecka TV (2007) One market, one number? A composite
indicator assessment of EU internal market dynamics. Eur Econ Rev 51:749–779
City of Cedar Rapids (2010) Other social effects report: city of Cedar Rapids, Iowa - Flood of 2008. Cedar
Rapids
Clark JR, Moul D (2003) Coverage improvement in Census 2000 Enumeration. US Census Bureau, Census
2000 testing, experimentation and evaluation program
Clark G, Moser S, Ratick S, Dow K, Meyer W, Emani S, Jin W, Kasperson J, Kasperson R, Schwarz H
(1998) Assessing the vulnerability of coastal communities to extreme storms: the case of revere, MA,
USA. Mitig Adapt Strat Glob Change 3(1):59–82
Collins TW, Grineski SE, Aguilar M (2009) Vulnerability to environmental hazards in the Ciudad Juárez
(Mexico)-El Paso (USA) metropolis: A model for spatial risk assessment in transnational context. Appl
Geogr 29(3):448–461
Cutter SL (1996) Vulnerability to environmental hazards. Prog Hum Geogr 20:529–539
Cutter SL, Mitchell JT, Scott MS (2000) Revealing the vulnerability of people and places: a case study of
Georgetown County, South Carolina. Ann Assoc Am Geogr 90(4):713–737
Cutter SL, Boruff BJ, Shirley WL (2003) Social vulnerability to environmental hazards. Soc Sci Q
84(1):242–261
Cutter SL, Burton CG, Emrich CT (2010) Disaster resilience indicators for benchmarking baseline condi-
tions. J Homel Secur Emerg Manage 7:1–24
Damm M (2010) Mapping social-ecological vulnerability to flooding: a sub-national approach for Germany.
University of Bonn, PhD Dissertation
de Oliveira Mendes JM (2009) Social vulnerability indexes as planning tools: beyond the preparedness
paradigm. J Risk Res 12(1):43–58
UNDP (2004) Reducing disaster risk: a challenge for development. In: Pelling M, Maskrey A, Ruiz P, Hall L
(eds). United Nations Development Programme, Bureau for Crisis Prevention and Recovery, New York
Dunning MC, Durden S (2011) Social vulnerability analysis methods for corps planning. US Army Corps
of Engineers, Institute for Water Resources, Washington DC
Dwyer A, Zoppou C, Nielsen O, Day S, Roberts S (2004) Quantifying social vulnerability: a methodology
for identifying those at risk to natural hazards. Canberra, Geoscience Australia
Emrich CT (2005) Social vulnerability in US metropolitan areas: improvements in hazard vulnerability
assessment. University of South Carolina, PhD Dissertation
EPA (2009) Guidance on the development, evaluation, and application of environmental models. Office of
the Science Advisor: Council for Regulatory Environmental Modeling, Washington DC
Esty DC, Levy M, Srebotnjak T, de Sherbinin A (2005) 2005 environmental sustainability index:
benchmarking national environmental stewardship. Yale Center for Environmental Law & Policy,
New Haven
FairData (2003) Undercount mapper: statistically adjusted PL94 data. Nationwide dataset of block-level
data extracted from the statistically adjusted 2000 census PL94-171. Retrieved April 18, 2009, from
http://www.fairdata2000.com/Adjusted/index.html
Fekete A (2009) Validation of a social vulnerability index in context to river-floods in Germany. Nat
Hazards Earth Syst Sci 9(2):393–403
Fekete A (2011) Spatial disaster vulnerability and risk assessments: challenges in their quality and
acceptance. Natural hazards: 1–18
Finch C, Emrich CT, Cutter SL (2010) Disaster disparities and differential recovery in New Orleans. Popul
Environ 31(4):179–202
Flanagan BE, Gregory EW, Hallisey EJ, Heitgerd JL, Lewis B (2011) A social vulnerability index for
disaster management. J Homel Secur Emerg Manage 8(1):1–22
Gall M (2007) Indices of social vulnerability to natural hazards: a comparative evaluation. University of
South Carolina, PhD Dissertation
Hebb A, Mortsch L (2007) Floods: mapping vulnerability in the upper Thames Watershed under a changing
climate. University of Waterloo, Project Report XI, pp 1–53
Holand IS, Lujala P, Rød JK (2011) Social vulnerability assessment for Norway: a quantitative approach.
Norsk Geografisk Tidsskrift - Norwegian Journal of Geography 65(1):1–17
HVRI (2008) The SoVI Recipe from http://webra.cas.sc.edu/hvri/docs/SoVIRecipe.pdf
Jones B, Andrey J (2007) Vulnerability index construction: methodological choices and their influence on
identifying vulnerable neighborhoods. Int J Emerg Manag 4(2):269–295
JRC (2011) Simlab. Software package for uncertainty and sensitivity analysis. Joint Research Centre of the
European Commission, Italy


King D, MacGregor C (2000) Using social indicators to measure community vulnerability to natural
hazards. Australian J Emerg Manag 15(3):52–57
Ledesma RD, Valero-Mora P (2007) Determining the number of factors to retain in EFA: an easy-to-use
computer program for carrying out parallel analysis. Pract Assess, Res Eval 12(2):1–11
Lein JK, Abel LE (2010) Hazard vulnerability assessment: how well does nature follow our rules? Environ
Hazards 9(2):147–166
Mileti D (1999) Disasters by design: a reassessment of natural hazards in the United States. Joseph
Henry Press, Washington, DC
Montz BE, Evans TA (2001) GIS and social vulnerability analysis. In: Gruntfest E, Handmer J (eds)
Coping with flash floods. Kluwer Academic Publishers, Netherlands, pp 37–48
Mustafa D, Ahmed S, Saroch E, Bell H (2011) Pinning down vulnerability: from narratives to numbers.
Disasters 35(1):62–86
Myers CA, Slack T, Singelmann J (2008) Social vulnerability and migration in the wake of disaster: the case
of Hurricanes Katrina and Rita. Popul Environ 29(6):271–291
O’Connor BP (2000) SPSS and SAS programs for determining the number of components using parallel
analysis and Velicer’s MAP test. Behav Res Methods Instr Comput 32(3):396–402
OECD (2008) Handbook on constructing composite indicators: methodology and user’s guide. Organization
for Economic Co-operation and Development, Paris
Parris TM, Kates RW (2003) Characterizing and measuring sustainable development. Annu Rev Environ
Resour 28(1):559–586
Passel JS (2005) Estimates of the size and characteristics of the undocumented population. Pew Hispanic
Center, Washington, DC
Patil VH, Singh SN, Mishra S, Donavan DT (2007) Parallel analysis engine to aid in determining the
number of factors to retain. Retrieved January 11, 2011, from http://ires.ku.edu/~smishra/
parallelengine.htm
Patil VH, Singh SN, Mishra S, Donavan DT (2008) Efficient theory development and factor retention
criteria: abandon the 'eigenvalue greater than one' criterion. J Bus Res 61(2):162–170
Rygel L, O’Sullivan D, Yarnal B (2006) A Method for constructing a social vulnerability index: an
application to hurricane storm surges in a developed country. Mitig Adapt Strat Glob Change 11(3):
741–764
Saisana M, Saltelli A (2008) Uncertainty and sensitivity analysis of the 2008 environmental performance
index. Joint Research Centre of the European Commission, Ispra
Saisana M, Nardo M, Srebotnjak T (2005a) Robustness analysis of the 2005 environmental sustainability
index. Joint Research Centre of the European Commission and the Yale Center for Environmental Law
and Policy, Ispra
Saisana M, Saltelli A, Tarantola S (2005b) Uncertainty and sensitivity analysis techniques as tools for the
quality assessment of composite indicators. J Roy Stat Soc 168(2):307–323
Saltelli A, Tarantola S (2002) On the relative importance of input factors in mathematical models: safety
assessment for nuclear waste disposal. J Am Stat As 97(459):702–709
Saltelli A, Ratto M, Andres T, Campolongo F, Cariboni J, Gatelli D, Saisana M, Tarantola S (2008) Global
sensitivity analysis: the primer. Wiley, West Sussex
Schmidtlein MC, Deutsch RC, Piegorsch WW, Cutter SL (2008) A sensitivity analysis of the social vul-
nerability index. Risk Anal 28(4):1099–1114
Schmidtlein MC, Shafer JM, Berry M, Cutter SL (2010) Modeled earthquake losses and social vulnerability
in Charleston, South Carolina. Appl Geogr 31(1):269–281
Schneiderbauer S, Ehrlich D (2006) Social levels and hazard (in)dependence in determining vulnerability.
In: Birkmann J (ed) Measuring vulnerability to natural hazards: towards disaster resilient societies.
United Nations University Press, Tokyo, pp 78–102
Tate E (2011) Indices of social vulnerability to hazards: model uncertainty and sensitivity. PhD
dissertation, Department of Geography, University of South Carolina
Tate E (forthcoming) Uncertainty analysis for a social vulnerability index. Annals of the Association of
American Geographers
Tate E, Cutter SL, Berry M (2010) Integrated multihazard mapping. Environ Plan B: Plan Des 37(4):
646–663
UNDP (2010) The real wealth of nations: pathways to human development. Human development report.
United Nations Development Programme, New York
Villa F, McLeod H (2002) Environmental vulnerability indicators for environmental planning and decision-
making: guidelines and applications. Environ Manage 29(3):335–348
Vincent K (2004) Creating an index of social vulnerability to climate change in Africa. Working paper 56.
Tyndall Centre for Climate Change Research, London
Wisner B, Blaikie P, Cannon T, Davis I (2004) At risk: natural hazards, people's vulnerability and
disasters. Routledge, New York, NY
Wood NJ, Burton CG, Cutter SL (2010) Community variations in social vulnerability to Cascadia-related
tsunamis in the US Pacific Northwest. Nat Hazards 52(2):369–389
Wu SY, Yarnal B, Fisher A (2002) Vulnerability of coastal communities to sea level rise: a case study of
Cape May County, New Jersey, USA. Clim Res 22:255–270
Xu C, Gertner GZ (2008) A general first-order global sensitivity analysis method. Reliab Eng Syst Saf
93(7):1060–1071