Hydrogeological Conceptual Site Models
and Visualization
Neven Kresic
Alex Mikszewski
Microsoft® Access® screen shots used with permission from Microsoft based on “Use of Microsoft Copyrighted Content”
guidelines.
Downloaded by [University of Auckland] at 23:39 09 April 2014
CRC Press
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742
This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been made to
publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials
or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material repro-
duced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any
copyright material has not been acknowledged please write and let us know so we may rectify in any future reprint.
Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any
form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming,
and recording, or in any information storage or retrieval system, without written permission from the publishers.
For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400.
CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been
granted a photocopy license by the CCC, a separate system of payment has been arranged.
Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identifica-
tion and explanation without intent to infringe.
Visit the Taylor & Francis Web site at
http://www.taylorandfrancis.com
Contents

Preface
Authors

1. Introduction
   1.1 Historical Example
   1.2 Example Uses of This Book
   References
4. Contouring
   4.1 Introduction
   4.2 Contouring Methods
      4.2.1 Manual Contouring
      4.2.2 Contouring with Computer Programs
      4.2.3 Spatial Interpolation Models
         4.2.3.1 Deterministic Models
         4.2.3.2 Geostatistical Models
         4.2.3.3 Trend and Anisotropy
         4.2.3.4 Error and Uncertainty Analysis
   4.3 Kriging
      4.3.1 Variography
         4.3.1.1 Semivariogram Curve-Fitting
5. Groundwater Modeling
   5.1 Introduction
   5.2 Misuse of Groundwater Models
6. Three-Dimensional Visualizations
   6.1 Introduction
   6.2 3D Conceptual Site Model Visualizations
      6.2.1 3D Views of Geologic Model
      6.2.2 4D Views of Groundwater Chemistry
      6.2.3 Views of 3D Plumes and Soil Plumes
      6.2.4 Specialty 3D Visualizations
   Citations
7. Site Investigation
   7.1 Data and Products in Public Domain
      7.1.1 USGS Data and Publications
      7.1.2 State GIS Data
   7.2 Database Coordination
   7.3 Georeferencing
      7.3.1 Georeferencing AutoCAD Data
      7.3.2 Georeferencing Raster Data
   7.4 Developing a Site Basemap
   7.5 Developing and Implementing Sampling Plans
      7.5.1 Developing Sampling Plans
         7.5.1.1 Systematic Planning to Balance Cost and Risk
         7.5.1.2 Example Application of Visual Sample Plan
      7.5.2 Implementing Sampling Plans
         7.5.2.1 Data Collection
         7.5.2.2 Real-Time Data Management, Analysis, and Visualization
   7.6 Example Visualizations for Site Investigation Data
      7.6.1 Plan-View Maps
      7.6.2 Boring Logs and Cross Sections
      7.6.3 Graphs and Charts
   7.7 Toxic Gingerbread Men and Other Confounders
   References
8. Groundwater Remediation
   8.1 Introduction
   8.2 Pump and Treat
      8.2.1 Introduction
      8.2.2 Design Concepts
      8.2.3 System Optimization
   8.3 In Situ Remediation
      8.3.1 Introduction
      8.3.2 In Situ Thermal Treatment
         8.3.2.1 Design Concepts
         8.3.2.2 Case Study
      8.3.3 In Situ Chemical Oxidation
         8.3.3.1 Design Concepts
Preface
From their origins, exploration and inquiry in the Earth sciences have been dependent
on conceptual models and data visualizations to test theories and convey findings to the
general public. One can appreciate the power and importance of conceptual graphics by
flipping through the pages of a National Geographic magazine. Data visualization is inex-
tricably linked to quantitative spatial data analysis—the two major forms of which, for the
Earth sciences, are statistical interpolation and modeling. Data analysis and visualization
are invaluable in assessing the efficacy of current regulatory and consulting practices to
ensure that political and technical interventions related to the management of groundwater
resources and contaminated sites are evidence based and lead to desirable outcomes.
This book covers conceptual site model development, data analysis, and visual data pre-
sentation for hydrogeology and groundwater remediation. While this book is technical
in nature, equations and advanced theoretical discussions are minimized with the focus
instead placed on key concepts and practical data analysis and visualization strategies. As
a result, we believe that nontechnical stakeholders involved in groundwater projects will
find this book interesting and relevant as well. We sincerely hope that the reader’s aca-
demic or professional practice, whatever that may be, benefits from the tips and techniques
contained herein. We wish to thank Hisham Mahmoud, Don Chandler, Dave Goershel,
Dan Grogan, Allen Kibler, Leonard Ledbetter, Ann Massey, Larry Neal, and Steve Youngs
of AMEC for their continuing support and advice, and Ted Chapin and Karl Kasper of
Woodard & Curran for their support in the completion of this book.
Authors
1. Introduction
The physics of groundwater flow, geochemistry, contaminant fate and transport, ground-
water remediation, and groundwater resources development and management are all
subjects that have been covered extensively in innumerable textbooks. Thus, a student
or a practicing groundwater professional has access to a wealth of information regarding
hydrogeological theory. A strong technical background in hydrogeology and related dis-
ciplines, such as fluid mechanics, forms the foundation for a successful career in academia
or the public or private sectors. However, this is typically where the education ends, and
continued development is generally only possible by obtaining real-world experience in
field hydrogeology, quantitative spatial data analysis, and data visualization that includes
mapping. The novice groundwater professional may also find that there are critical hydro-
geological concepts applicable at varying investigatory scales that are not typically covered
in conventional textbooks. The political and regulatory framework that a hydrogeologist
must operate within is another area where improved educational materials are desirable
but lacking.
The intention of this book is to fill the void in hydrogeological literature through identification
and explanation of key concepts in professional hydrogeology and to provide practical
guidance and real-life examples relevant to the following stakeholders:
• Regulators such as the United States Environmental Protection Agency (US EPA)
• Commercial and industrial clients
• Attorneys involved in litigation or real-estate transactions
• Juries
• Communities affected by a contaminated site or a water supply project
It is the hope of the authors that this book will be interesting and useful to any of the above
stakeholders involved in groundwater projects. While this book is technical in nature,
equations and advanced theoretical discussions are minimized, with the focus placed on
key concepts, practical data analysis, and visualization strategies.
In addition, concepts are presented throughout this book related to the current state of the
hydrogeological practice, focusing on prevailing ideologies and recommendations for
improvement. These topics are often controversial, and the authors hope that this book
provokes thought and discussion on how we can evolve current policies and practices to
achieve better outcomes at a lesser cost to society. The authors have no agenda or underly-
ing motivation in these discussions, and it should be noted that this book was completed
without financial support from any public or private entity.
One example of a thought-provoking topic similar to others included in this book is
the current regulatory policy related to arsenic (As) in private drinking-water supplies in
eastern New England. Arsenic occurs naturally in metasedimentary bedrock units in the
region that are extensively tapped by private water-supply wells. In 2003, it was estimated
that more than 100,000 people across eastern New England were using private water sup-
plies with arsenic concentrations above the federal maximum contaminant level (MCL)
of 10 µg/L (Ayotte et al. 2003). This represents a widespread exposure to a chemical at
dangerously high levels.
Figure 1.1 is a map of arsenic concentrations measured at private bedrock wells in south-
eastern New Hampshire during a 2003 study performed by the United States Geological
Survey. Despite these alarming data, arsenic is not regulated by the state of New Hampshire
in private drinking-water wells, and there are no current requirements to even test exist-
ing wells for the contaminant. In 2010, a bill (HB 1685) that would have made it a require-
ment to test new wells and wells involved in home sales was killed by the New Hampshire
Legislature (Susca and Klevens 2011).
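Comparing results like these against the MCL is simple arithmetic, but the mixed reporting units that appear in practice (the MCL is 10 µg/L, equivalently 0.010 mg/L) are a common source of screening errors. The sketch below normalizes units before comparison; the MCL value comes from the text above, while all well identifiers and results are invented for illustration:

```python
# Screen reported arsenic concentrations against the federal MCL of 10 ug/L
# (0.010 mg/L). The MCL is from the text; all well results are invented.
MCL_UG_PER_L = 10.0

def to_ug_per_l(value, unit):
    """Normalize a reported concentration to ug/L."""
    factors = {"ug/L": 1.0, "mg/L": 1000.0}
    return value * factors[unit]

# (well_id, reported value, reported unit) -- illustrative only
results = [
    ("MW-01", 0.004, "mg/L"),
    ("MW-02", 23.0, "ug/L"),
    ("MW-03", 0.012, "mg/L"),
]

exceedances = [
    (well, to_ug_per_l(value, unit))
    for well, value, unit in results
    if to_ug_per_l(value, unit) > MCL_UG_PER_L
]
for well, conc in exceedances:
    print(f"{well}: {conc:.1f} ug/L exceeds the MCL of {MCL_UG_PER_L:.0f} ug/L")
```

Converting every value to a single unit before the comparison avoids the classic mistake of screening a 0.012 mg/L result against a 10 µg/L standard and declaring it clean.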
In contrast to this policy of allowing arsenic exposure, environmental regulations
require the expenditure of millions of dollars to remediate Superfund and state-led con-
taminated sites where the exposure often constitutes a very low risk (e.g., one in a million
excess lifetime cancer risk) or is hypothetical in nature (e.g., potential future consumption
of groundwater). For example, at the Visalia Pole Yard Superfund site, well over $20 million
was spent to remediate groundwater contamination that was not posing an actual risk (see
Chapter 8 for a more detailed discussion). This is a classic example of policy that permits
self-inflicted risk while disproportionately targeting externally inflicted risk, ignoring the
relative costs and benefits of the overall outcome. One potential declaration of this ideol-
ogy is
When protecting human health and the environment, it is not our place to address risk
related to naturally occurring contamination or individual lifestyle choices, but we will
act aggressively to remedy any minimal level of risk caused by a third-party agent.
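The "one in a million excess lifetime cancer risk" figure cited in the paragraph above comes from a standard intake-times-slope-factor calculation used in US EPA-style risk assessment. The sketch below shows that arithmetic; it is not a method taken from this book, and every parameter value is an illustrative assumption rather than site data:

```python
# Sketch of the arithmetic behind an "excess lifetime cancer risk" estimate:
# chronic daily intake (CDI) multiplied by a cancer slope factor. All
# parameter values below are illustrative assumptions.

def excess_lifetime_cancer_risk(
    conc_mg_per_l,             # contaminant concentration in drinking water (mg/L)
    slope_factor,              # cancer slope factor, (mg/kg-day)^-1
    intake_l_per_day=2.0,      # drinking-water ingestion rate
    exposure_freq_days=350,    # days per year exposed
    exposure_dur_years=30,     # years of exposure
    body_weight_kg=70.0,
    averaging_time_days=70 * 365,  # averaged over a lifetime for carcinogens
):
    cdi = (
        conc_mg_per_l * intake_l_per_day * exposure_freq_days * exposure_dur_years
    ) / (body_weight_kg * averaging_time_days)  # chronic daily intake, mg/kg-day
    return cdi * slope_factor

# With these hypothetical inputs the estimate lands near the one-in-a-million
# (1e-6) threshold discussed above.
risk = excess_lifetime_cancer_risk(conc_mg_per_l=0.001, slope_factor=0.085)
print(f"{risk:.1e}")
```

Note how strongly the result depends on assumed exposure parameters such as duration and ingestion rate, which is precisely why hypothetical future-consumption scenarios can drive large remediation expenditures.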
The reader should consider how this logic impedes efforts to protect human health and
the environment. Developing sound conceptual models and using effective data analysis
and visualization tools can help address problems even at this philosophical scale; practic-
ing groundwater professionals are encouraged to use their expertise to be active agents of
change. A historical example of the power of these methods is provided in the following
section.
1.1 Historical Example
FIGURE 1.1
Arsenic concentrations in private bedrock wells in southeastern New Hampshire and grouped geologic units
showing the percentage of wells with concentrations of arsenic greater than the current MCL of 0.010 mg/L.
(Modified from USGS, 2003. Arsenic Concentrations in Private Bedrock Wells in Southeastern New Hampshire.
US Department of the Interior, USGS Fact Sheet 051-03.)
high mortality rates. At the time, the spread of cholera and most other diseases was
blamed on foul inner-city air. This conceptual model for disease transmission by odors
was termed the miasmatic theory and was widely accepted by sanitation professionals,
public officials, and Parliament in London by the late 1840s (Johnson 2006). Dr. Snow will
forever be remembered for his fight against this flawed, superstition-based theory.
Dr. Snow’s interest in cholera was likely spurred by the London cholera outbreak of
1848–1849, which killed 50,000 people (Johnson 2006). The doctor became obsessed with
the disease and, during that outbreak, developed an original conceptual model for cholera
transmission based on his knowledge and experience as a medical doctor. He reasoned that
cholera is fundamentally a diarrheic disease of the gut and, therefore, is caused by some-
thing ingested rather than inhaled. Where advocates of the miasmatic theory argued that
cholera was a poison inhaled and circulated through the blood, causing fever, Dr. Snow
argued that the pathology of cholera is caused by dehydration from severe diarrhea (Koch
2011). He further built his argument on waterborne transmission through two population-
based studies conducted during the 1848–1849 epidemic. His findings were communicated
through a landmark 1849 publication On the Mode of Communication of Cholera. While
Dr. Snow’s work garnered much public interest, it was generally concluded, at the time,
that his publication failed to provide sufficient evidence linking cholera to water supply.
He therefore stewed for an additional five years before getting another chance at conclu-
sively proving the accuracy of his conceptual model. This opportunity came in the form of
another cholera outbreak in the Soho neighborhood centered on the famous Broad Street
pump (Johnson 2006).
The Soho outbreak was particularly swift and virulent, yet both Dr. Snow and his rival
working for the Board of Health, Reverend Henry Whitehead, were able to conduct rig-
orous, on-the-ground data collection during the outbreak itself. Armed with his correct
conceptual model, Dr. Snow collected site-specific data linking the spread of disease to the
Broad Street pump. He presented his immediate findings to the Board of Governors of St.
James Parish, and the evidence was compelling enough to convince the board to remove
the handle from the pump, thereby eliminating public access to the well. The action was
met with jeers by the observing public. While the data indicate that the outbreak was
already waning by the time of the pump handle removal, Dr. Snow’s actions likely con-
tributed to its decline and, at the minimum, prevented a second wave of disease spread
(Tufte 1997). The toll of the cholera outbreak was devastating; 90 out of the 896 Broad Street
residents died within two weeks (Johnson 2006).
Seizing the opportunity to further promote his theory, Dr. Snow quickly compiled his
data on the Soho outbreak for scientific publication. He summarized his findings in a
now famous map originally presented to the Epidemiological Society in December 1854
and included as Figure 1.2. Cholera deaths are represented as thick black bars, which are
clearly clustered around the Broad Street pump. While many declare that Dr. Snow’s map
“solved the mystery” of cholera (e.g., FlowingData 2007), it was not used to get the pump
handle removed (Dr. Snow’s weight of on-the-ground evidence was sufficient to get a des-
perate board to try anything), and it did not convince the board or the general public
of the waterborne theory of cholera transmission. The impact of the map has therefore
been somewhat exaggerated. The miasmatic theory persisted for several decades after Dr.
Snow's work until it was replaced by the germ theory; German scientist Robert Koch
isolated the cholera microbe in 1883. Ironically, Vibrio cholerae had already been identified
in 1854—the same year as the Soho outbreak—by the Italian Filippo Pacini, a finding
that was largely ignored by his contemporaries but later acknowledged by the bacterium's
renaming in 1965 to Vibrio cholerae Pacini 1854 (Johnson 2006).
FIGURE 1.2
Dr. John Snow’s famous map of the 1854 Broad Street cholera outbreak first presented in December 1854 and
later published in 1855. Available at http://www.ph.ucla.edu/epi/snow.html.
This fascinating story is presented in detail in the work of Johnson (2006). It has been
summarized here to provide a historical example of how conceptual models, data analysis,
and data visualization can be used to tackle even the most difficult scientific and soci-
etal problems. Dr. Snow developed a conceptual model based on his professional knowl-
edge, collected and analyzed data quantitatively, and presented his results in an effective
visualization (which also served as an additional test on his original theory). However,
as previously stated, Dr. Snow unfortunately did not solve the mystery as his contempo-
raries remained unconvinced. Koch (2011) proposes that Dr. Snow could have used more
detailed quantitative analysis to bolster his study and potentially win over even the most
ardent miasma believers. Dr. Snow did not calculate relative mortality rates in the indi-
vidual pump catchments, which is a form of quantitative analysis that Koch (2011) asserts
was practiced at the time. A rendering of Dr. Snow’s data created using modern mapping
techniques is presented as Figure 1.3, including clear delineation of the pump catchments
and labeling the number of cholera deaths per 1000 persons in each catchment. The
mortality per 1000 persons in the Broad Street pump catchment (149 per 1000 persons)
clearly overwhelms the rates of the adjacent catchments (Koch 2011).

FIGURE 1.3
Cholera deaths per 1000 persons for the pump catchments in the area of the 1854 cholera outbreak. (Mortality
rates and approximate georeferenced catchment, cholera death, and pump locations from Koch, T., Disease
Maps: Epidemics on the Ground, University of Chicago Press, Chicago, 2011, 330 pp.) World Street Map sources:
Esri, DeLorme, NAVTEQ, TomTom, USGS, Intermap, iPC, NRCAN, Esri Japan, METI, Esri China (Hong Kong),
Esri (Thailand).

It is important to note that
Dr. Snow produced a second version of his original map that innovatively used a Voronoi
diagram to delineate the area where the Broad Street pump was the closest source of water.
This results in a similar effect to the catchment-area delineations presented in Figure 1.3.
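The two analytical steps behind these delineations, a Voronoi-style assignment of each location to its nearest pump followed by normalization of death counts to deaths per 1000 persons, can be sketched in a few lines of Python. All coordinates, death locations, and populations below are invented for illustration; only the per-1000 normalization mirrors the text:

```python
import math

# (1) Assign each death location to its nearest pump (a Voronoi-style
# partition), then (2) normalize death counts by catchment population to
# deaths per 1000 persons. All numbers below are invented for illustration.

pumps = {"Broad St": (0.0, 0.0), "Rupert St": (3.0, 1.0), "Warwick St": (-2.5, 2.0)}
catchment_population = {"Broad St": 1000, "Rupert St": 1200, "Warwick St": 900}

# (x, y) locations of individual cholera deaths -- illustrative
deaths = [(0.2, -0.1), (0.5, 0.3), (-0.3, 0.2), (2.8, 1.1), (-2.0, 2.2)]

def nearest_pump(point):
    """Return the pump minimizing Euclidean distance to the point."""
    return min(pumps, key=lambda name: math.dist(point, pumps[name]))

death_counts = {name: 0 for name in pumps}
for location in deaths:
    death_counts[nearest_pump(location)] += 1

mortality_per_1000 = {
    name: 1000 * death_counts[name] / catchment_population[name] for name in pumps
}
print(mortality_per_1000)
```

Normalizing by population is the step Koch (2011) argues Dr. Snow omitted: raw death counts can simply reflect where more people live, whereas rates per 1000 persons isolate the effect of the water source itself.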
If Dr. Snow had performed these mortality calculations and presented them in such a
manner, might he have ended the cholera debate once and for all? The authors believe it
is highly unlikely. While the addition of mortality rates does enhance the visualization, it
often takes generations for entrenched ideologies to be purged from the public mind. In
some cases, it takes extreme acts of self-sacrifice, such as self-experimentation, to prove the
validity of a scientific concept. While not necessary for cholera, self-experimentation was
critical in demonstrating the role of the mosquito in yellow fever transmission. The yellow
fever saga is brilliantly chronicled by Crosby (2006). If Dr. Snow had voluntarily consumed
cholera-impacted water, or conducted a study using other human subjects, maybe the tran-
sition to the waterborne theory would have been expedited. However, apart from martyr-
dom or unethical experimentation, Dr. Snow contributed as much as humanly possible to
the fight against cholera. At the time of this writing, cholera has still not been eradicated,
and a deadly outbreak continues in the Caribbean country of Haiti. As of July 31, 2011,
there have been more than 400,000 reported cases of cholera associated with the epidemic
in Haiti that began in fall 2010 (World Health Organization 2011). The reader may explore
how entrenched ideologies have contributed to the persistence of this outbreak.
The John Snow story is relevant to this publication for multiple reasons. For starters, it
involves contaminated groundwater and associated impacts on public health. More impor-
tantly, though, it outlines the framework for conducting spatial scientific studies that is the
fundamental topic of this book. The key elements of this framework are

• Development of a conceptual model
• Data collection
• Data analysis
• Data visualization

A flow chart illustrating the relationship of these elements is provided in Figure 1.4. Note
that this framework is cyclical as it is valuable to perform data visualization or analysis
first before focusing on the conceptual model, particularly where historical data are lim-
ited or completely absent. However, without a conceptual model, data collection, analysis,
and visualization are uninformed and can lead to erroneous interpretations. If Dr. Snow
had blindly plotted the cholera deaths on his map without providing substantive technical
and conceptual justification for his theory, the map could have just as easily linked cholera
FIGURE 1.4
Flow chart of the framework for spatial investigation advanced by Dr. Snow and applicable to modern hydroge-
ology. Note that data analysis and visualization often occur cooperatively.
to a former plague burial site in the Broad Street area, which would have fit nicely into the
miasmatic model (Koch 2011).
The failure to include conceptual models in hydrogeological studies results in the propagation
of major errors in professional practice. Examples of such fundamental errors are
highlighted throughout this book, notably in the discussion of groundwater remediation
in Chapter 8.
It is the hope of the authors that this book educates groundwater professionals and stake-
holders alike about these major errors. However, more important objectives are to encour-
age independent thinking about current groundwater issues and to promote the use of
conceptual models and advanced data analysis and visualization tools to better solve
hydrogeological problems.
The breakdown of independent analysis and the failure to use appropriate conceptual and
quantitative models are symptoms of groupthink, a term discussed further in Chapter
5. Groupthink has led to innumerable engineering failures including such disasters as the
Space Shuttle Columbia accident in 2003. According to the Columbia Accident Investigation
Board (CAIB), foam shedding from space shuttles was originally viewed as a potential safety
issue early in the shuttle program. However, foam shedding occurred so frequently over the
course of 112 missions without major incident that it was eventually accepted as a nuisance
management issue rather than a significant hazard. Even when it became apparent from
analytic evidence that the Columbia accident was caused by damage to the shuttle’s thermal
protection system from a collision with detached foam debris, there remained “lingering
denial” that foam could really be the root cause (CAIB 2003). As a result, the CAIB had to
conduct impact and analysis testing using a real-life physical model to provide irrefutable
proof that foam can inflict potentially catastrophic damage to shuttle paneling.
Volume I of the CAIB report is included on the companion DVD for reference. In addi-
tion to the flawed notion that foam shedding was solely a maintenance problem, the report
identifies many other factors that contributed to the fatal accident:
• The use of a semiempirical quantitative model beyond its calibration range rather
than a physics-based model
• Poor communication of decision uncertainty and risk to National Aeronautics and
Space Administration (NASA) management (see also Tufte [2006])
• Concern regarding jumping the chain of command
• Fear of being ridiculed for expressing dissenting opinion
• Decision-making processes that were obscured by scheduling metrics and politi-
cal pressures
All the above factors can similarly affect projects in hydrogeology and groundwater reme-
diation, leading to engineering failure and associated consequences.
1.2 Example Uses of This Book
Example 1
A consulting firm has just been awarded a contract for a Phase II/Comprehensive Site
Investigation Assessment at a former industrial facility. The primary component of the
Phase II report is a conceptual site model (CSM), which will dictate where and how
environmental data will be collected and what the significance of the data will be. This
book can help the consultant develop an effective CSM for the site, leading to defensible
characterization strategies and study conclusions. Concepts related to CSMs and site
investigations are presented in Chapters 2 and 7. Data management and contouring are
also key elements of Phase II investigations, which are discussed at length in Chapters
3 and 4, respectively.
Example 2
A hydrogeologist becomes an expert witness in a lawsuit regarding the contamination
of several public water-supply wells. The hydrogeologist develops a fate and transport
groundwater model that demonstrates the client is not responsible for the contamina-
tion. For the upcoming trial, the hydrogeologist has been asked to produce simplified
graphics illustrating the principles behind the groundwater model and its overall con-
clusions. The hydrogeologist can use this book as a resource for producing data tables,
graphs, maps, illustrations, and animations of modeling results that may be easily
understood by the nontechnical trial jury. The hydrogeologist can also find key insight
in this book regarding the use of groundwater models in professional practice. Concepts
and visualizations related to groundwater models are presented in Chapter 5. Chapter
6, covering three-dimensional visualizations, may also be useful for this application.
Numerous examples including animations are provided on the companion DVD.
Example 3
An environmental engineer is responsible for the design and operation of an in situ
chemical oxidation (ISCO) and monitored natural attenuation (MNA) remedy at a high-
profile Superfund site. An initial round of ISCO injections at the contaminant source
area has been completed. The potentially responsible parties (PRPs) paying for the
cleanup have just asked the environmental engineer to demonstrate to the US EPA that
the source area remediation has been completed to the extent practicable and that the
remedy can fully transition to long-term MNA. Similarly, the US EPA has asked the
engineer to verify that MNA processes are occurring at the site to substantiate this
transition. The engineer can use this book to learn about key concepts in ISCO, technical
impracticability, and MNA and also as a reference for developing compelling visualiza-
tions of field data to justify remedial decisions to the US EPA. Groundwater remediation
is discussed at length in Chapter 8.
Example 4
A municipality has just completed a long-term pumping test at an extraction well that
is being considered for use as a public water supply. The town hydrogeologist needs to
present the results of the test at a town hall meeting to local conservation committees,
state regulators, and the general public. A major concern of the conservation groups and
regulators is the dewatering of a small river located near the extraction well site. The
hydrogeologist can use this book to better understand surface water and groundwater
interactions, which are described in Chapters 2, 4, and 9. In addition, this book can
help the hydrogeologist perform groundwater modeling and contouring that assess the
potential for induced infiltration under pumping conditions (Chapters 4 and 5). Lastly,
the hydrogeologist can find example visualizations throughout this book and the com-
panion DVD that may be helpful in developing simplified data tables, graphs, maps,
and illustrations for the town hall meeting, helping nontechnical stakeholders clearly
understand the study conclusions.
2
Conceptual Site Models
2.1 Definition
A hydrogeological conceptual site model (CSM) is a description of various natural and
anthropogenic factors that govern and contribute to the movement of groundwater in the
subsurface. Simply put, it answers key questions about where groundwater originates, how and where it moves through the subsurface, and where and how it discharges or is extracted.
When the groundwater is contaminated, a CSM also includes answers to similar general
questions regarding the contaminant(s). ASTM International (2008), formerly the American
Society for Testing and Materials, defines a conceptual site model for contaminated sites as
follows: “A written or pictorial representation of an environmental system and the biologi-
cal, physical, and chemical processes that determine the transport of contaminants from
sources through environmental media to environmental receptors within the system.”
An accurate CSM is critical in satisfying the ultimate goal of any project, which, in
hydrogeological practice, typically involves a decision regarding water supply, protection
of human health and the environment, or both.
A schematic (qualitative) CSM of an alluvial hydrogeological system consisting of sev-
eral water-bearing zones (aquifers, hydrostratigraphic units) is presented in Figure 2.1
together with major areas of groundwater recharge. Insets show schematic CSMs focused
on groundwater contamination discovered at two sites. Although these three CSMs, one
regional and two local, may have been developed independently, it is obvious that each
would benefit greatly from incorporating information collected for seemingly different
purposes at the other sites. In many instances, the success of a CSM will depend on the
ability of the project team to gather relevant information at different scales and from dif-
ferent sources, and integrate it with the data collected during site-specific investigations.
This concept is discussed in detail in Chapter 7.
The complexity and quantitative aspects of a CSM vary broadly depending on project
goals and the investigative stages of data collection. For example, a preliminary model
developed during early phases of water resource planning on a watershed scale may be
qualitative in nature and limited to general descriptions of underlying aquifers, their likely
FIGURE 2.1
Schematic regional and two local CSMs in an alluvial aquifer. (a) Three-phase contaminant plume developed
from a leaky underground storage tank at a gas station. (Modified from U.S. EPA 2000.) (b) Contaminant plumes
developed from a source of DNAPLs at the land surface.
recharge and discharge areas, and existing groundwater users. This information may be
readily available from various government agencies such as geological surveys in a for-
mat appropriate for direct presentation to applicable decision makers. In contrast, a CSM
adopted to serve as the basis for the design of a groundwater remediation (aquifer restora-
tion) system is, by default, very detailed and quantitative. Such a CSM is usually the result
of lengthy and expensive site-specific investigations and includes quantification of risk,
time-dependent (changing in time) groundwater flow rates and velocities, and fate and
transport of contaminants. CSMs that include quantitative aspects are increasingly relying
on mathematical models (analytic and numeric) during various stages of CSM develop-
ment. Mathematical models enable testing of hypotheses and can make predictions on,
for example, sustainable rates of groundwater extraction for water supply or probable con-
taminant concentrations at points of exposure. Quantitative models also allow evaluation
of the uncertainty associated with management decisions.
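As a small illustration of such analytical tools, the classic Theis solution for drawdown around a pumping well in a confined aquifer can be sketched in a few lines of code. The pumping rate and aquifer parameters below are hypothetical (consistent metric units are assumed), and a real CSM would rely on a calibrated analytical or numerical model rather than this simplified sketch:

```python
import math

def well_function(u, terms=30):
    """Theis well function W(u), evaluated with its convergent series
    (accurate for the small u values typical of pumping tests)."""
    w = -0.5772156649 - math.log(u)  # Euler's constant and log term
    sign = 1.0
    for n in range(1, terms + 1):
        w += sign * u**n / (n * math.factorial(n))
        sign = -sign
    return w

def theis_drawdown(Q, T, S, r, t):
    """Drawdown at radius r (m) and time t (d) for pumping rate Q (m3/d),
    transmissivity T (m2/d), and storativity S (dimensionless)."""
    u = r**2 * S / (4.0 * T * t)
    return Q / (4.0 * math.pi * T) * well_function(u)

# Hypothetical example: 1000 m3/d pumped from a confined aquifer.
s_near = theis_drawdown(Q=1000.0, T=500.0, S=1e-4, r=10.0, t=1.0)
```

Evaluating the function for a range of radii or times produces the distance-drawdown and time-drawdown curves used to test a CSM hypothesis against observed water levels.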
The main purpose of a CSM is to provide a single visual product composed of text,
pictures, and, if necessary, animations, where all the information about the site is easily
accessible and can be used for decision making at any stage of project implementation. At
the same time, it is very important to understand that the CSM is a dynamic entity, con-
tinuously refined and updated. At the initial stage of the project, it is possible to have two
or three competing preliminary CSMs because the readily available information may not
lead to a definitive concept. As the project progresses and new information is collected,
the CSM becomes more detailed and quantitative, helps plan additional investigations,
and focuses the project team on feasible solutions. These solutions (such as design of a well
field for water supply or a bioremediation system for aquifer restoration) will be possible
only when there is consensus among all involved parties that the final CSM accurately
represents the hydrogeology of the site at the scale of interest—a concept further discussed
below. The remainder of this chapter focuses on key physical elements of hydrogeologi-
cal conceptual site models. Elements of the CSM related to contaminant exposure (i.e., the
exposure or receptor profile) are discussed in Chapter 8.
dated or obsolete information. In addition, the key components of a site physical profile
listed above are usually described in a static manner even though most, if not all, of them
change in time as a result of various natural and anthropogenic factors.
LeGrand (2007) prepared a guidance manual on developing conceptual hydrogeological models for fractured rock aquifers in the
Piedmont and Mountain regions of North Carolina, USA. This manual of the North Carolina
Department of Environment and Natural Resources explains, in detail, the importance of
topography for determining groundwater recharge and discharge zones and flow directions.
Figure 2.2 shows that the path of natural groundwater movement is relatively short
and almost invariably restricted to the zone underlying the topographic slope extend-
ing from a topographic divide to an adjacent stream. Thus, the concept of a local slope–
aquifer system applies. On the opposite sides of an interstream topographic divide are
two similar slope–aquifer systems as shown by A and B. The region as a whole is a
FIGURE 2.2
Conceptual view of double slope–aquifer systems and their compartments (C). All arrows indicate groundwater flow directions. (Modified from LeGrand, H. W., Sr., A Master Conceptual Model for Hydrogeological Site
Characterization in the Piedmont and Mountain Region of North Carolina: A Guidance Manual, North Carolina
Department of Environment and Natural Resources, Division of Water Quality, Groundwater Section, 2007.)
Open-pit and underground mine workings can act as new groundwater discharge zones. For example, Figure 2.3 shows the region of Butte, MT, which had already earned the nickname “The Richest Hill on Earth”
FIGURE 2.3
This image of the Berkeley Pit in Butte, MT, shows many features of the mine workings, such as the terraced
levels and access roadways of the open mine pits (gray and tan sculptured surfaces). A large gray tailings pile
of waste rock and an adjacent tailings pond appear to the north of the Berkeley Pit. Color changes in the tailings
pond result primarily from changing water depth. This astronaut photograph ISS013-E-63766 was acquired
August 2, 2006, with a Kodak 760C digital camera using an 800-mm lens and is provided by the ISS Crew Earth
Observations experiment and the Image Science & Analysis Group, Johnson Space Center. (From NASA, Earth
Observatory, http://earthobservatory.nasa.gov/, accessed March 2011.)
by the end of the 19th century because of mining for gold, silver, and copper. Demand for
electricity increased demand for copper so much that by World War I, the city of Butte was
a boomtown. Well before World War I, however, copper mining had spurred the creation
of an intricate network of underground drains and pumps to lower the groundwater level
and continue the extraction of copper. Water extracted from the mines was so rich in dis-
solved copper sulfate that it was also mined (by chemical precipitation) for the copper it
contained. In 1955, copper mining in the area expanded with the opening of the Berkeley
Pit. The mine took advantage of the existing subterranean drainage and pump network to
lower groundwater until 1982 when a new owner suspended operations. After the pumps
were turned off, water from the surrounding rock basin began seeping into the pit. By the
time an astronaut on the International Space Station took this picture, water in the pit was
more than 275 meters (900 feet) deep. Because its water contains high concentrations of
metals such as copper and zinc, the Berkeley Pit is listed as a federal Superfund site. The
Berkeley Pit receives groundwater flowing through the surrounding bedrock and acts as a
terminal pit or sink for these heavy metal–laden waters, which can be as strong as battery
acid. Ongoing cleanup efforts include treating and diverting water at locations upstream
of the pit to reduce inflow and decrease the risk of accidental release of contaminated
water from the pit into local aquifers or surface streams (NASA 2011).
Another key concept of geomorphology is that specific landforms are closely related
to the underlying geology, that is, rock types and the tectonic fabric that includes folds,
fractures, faults, and other discontinuities in the rock mass. Together, the geology and
the geomorphologic processes shaping the land surface play key roles in the formation of
aquifers and the resulting characteristics of groundwater flow at the site. Thanks to rapid
developments in remote sensing technology and easy access to various Internet (online)
sources of Earth imagery and digital elevation data, it is now possible to perform a very
detailed visual analysis of geomorphologic features without even visiting a site. This, how-
ever, is not recommended by the authors—one should always make every attempt to visit
the site he or she is working on. As emphasized throughout this book, site topography can
be displayed in three dimensions, rotated and viewed from different angles by using a
variety of commercial and public-domain computer programs. Aerial and satellite images,
geologic maps, and other thematic maps can be easily draped over digital 3D topography
and analyzed in the same fashion (Figures 2.4 and 2.5). In addition, every working profes-
sional should have ready access to Google Earth, which is now the default, free platform for
visualization of land surface features. Figures 2.6 through 2.18 illustrate many benefits of
analyzing remote sensing imagery and digital elevation models (DEMs) when developing
hydrogeological CSMs.
The Kunlun fault shown in Figure 2.6 is one of the gigantic strike-slip faults that bound
the north side of Tibet. Left-lateral motion along the 1500-km (932-mi) length of the Kunlun
has occurred uniformly for the last 40,000 years at a rate of 1.1 cm/year, creating a cumula-
tive offset of more than 400 m. In this image, two splays of the fault are clearly seen cross-
ing from east to west. The northern fault juxtaposes sedimentary rocks of the mountains
against alluvial fans. Its trace is also marked by lines of vegetation, which appear red
in the image. The southern, younger fault cuts through the alluvium. Box A shows wet
ground caused by discharge of groundwater from the alluvial fans. The dark linear area in
the outlined box B is wet ground where groundwater has ponded against the fault (NASA
2011).
The Songhua River just upstream (west) of the city of Harbin, China, is shown in Figure 2.7.
The main stem of the river and its myriad channels appear deep blue, winding from bot-
tom left toward center right. To the west of the river, shallow lakes appear electric blue.
FIGURE 2.4
Portion of the geologic map from Hydrogeologic Atlas of the Hill Country Trinity Aquifer, Blanco, Hays, and Travis Counties, Central Texas. (Modified from
Wierman, D. A. et al., Hydrogeologic Atlas of the Hill Country Trinity Aquifer, Blanco, Hays, and Travis Counties, Central Texas, 2010.)
FIGURE 2.5
Geologic map shown in Figure 2.4 draped over a DEM. (Courtesy of Gavin Hudgeons, AMEC.)
The surrounding landscape reveals the Manchurian Plain in shades of brown, crossed
by pale lines (roads) and spots representing villages and towns. The extreme flatness of
the Manchurian Plain has caused the river to meander widely over time. The result of the
meandering is that the river is surrounded by a wide plain that is filled with swirls and
curves, showing paths the river once took (NASA 2011). The plain includes classic features
of meandering rivers, such as oxbow lakes—semicircular lakes formed when a meander
is cut off from the main channel by river-deposited sediment. As meandering rivers, such
as this one, shift their positions across the valley bottom, they create a complicated pattern
of heterogeneous sediment deposits of varying grain sizes both laterally and vertically.
Consequently, groundwater flow rates, directions, and velocities are quite convoluted, not
just in the three-dimensional space but also in time because of changing water levels in the
main river and its tributaries, including flooding (see also Figure 4.40).
Analysis of topography is particularly useful when deciphering possible geologic
reasons for a certain type of surface drainage as illustrated schematically in Figure 2.8.
Although topographic maps printed on paper will continue to be utilized for years by
default (and in some parts of the world may still be the only available option), DEMs offer
many advantages as illustrated in Figures 2.9 and 2.10. For example, areas denoted as A
and B have a higher density of surface drainage features and steeper slopes (more closely
spaced contours), which are clearly visible on both the topographic map in Figure 2.9 and
the DEM in Figure 2.10. This difference may be the result of a less permeable rock type, dif-
ferent slopes of sedimentary layers, or some other geologic reason such as local uplifting
(tectonic movement) that promotes vertical erosion. However, more subtle landforms in
the central flood plain such as the oxbow lakes and river terraces that are clearly visible on
the bottom of Figure 2.10 are less obvious on the printed map or not even depicted because
of a relatively coarse contour interval of 20 ft. In contrast, the high-resolution DEM derived
from Light Detection and Ranging (LiDAR) topographic data even shows rows of crops in
some of the flood-plain fields. In terrains like this, a hydrogeologic site visit would focus
FIGURE 2.6
Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) satellite image of the Kunlun
fault, north Tibet. This visible light and near infrared scene was acquired on July 20, 2000. (Courtesy of NASA/
GSFC/MITI/ERSDAC/JAROS and the U.S./Japan ASTER Science Team. From NASA, Earth Observatory, http://
earthobservatory.nasa.gov/, accessed March 2011.)
FIGURE 2.7
ASTER image of the Songhua River upstream (west) of the city of Harbin, China, acquired on April 1, 2002.
(Courtesy of NASA/GSFC/METI/ERSDAC/JAROS and the U.S./Japan ASTER Science Team. From NASA,
Earth Observatory, http://earthobservatory.nasa.gov/, accessed March 2011.)
FIGURE 2.8
Criteria for interpretation of drainage patterns on topographic maps and remote sensing imagery. Top left:
(a) Drainage density is high in less permeable rocks; (b) more permeable rocks have fewer drainage features;
(c) surface drainage is disintegrated or missing in karstified rocks. Top right: (a) Dendritic drainage is character-
istic of homogeneous and isotropic geologic terrains; (b) rectangular drainage is common in folded, stratified
(layered) sedimentary rocks dissected by perpendicular fractures and faults; (c) circular drainage (ring pat-
tern) is characteristic for domes or partially destroyed calderas. Bottom: Faults can be inferred from drainage
features such as long, straight stream segments (a), aligned segments of neighboring streams that change flow
directions abruptly (b), and segments of different streams extending (aligning) over the ridges (c). (Modified
from Dimitrijević, M., Geolosko Kartiranje (Geological Mapping), ICS, Beograd, 1978.)
FIGURE 2.9
Part of printed 1:24,000 USGS topographic quadrangle with a contour interval of 20 ft. Note denser surface
drainage in areas A and B (see also Figure 2.8).
on the river terrace and other escarpments looking for springs, seeps, and rock (sediment)
outcrops. Planning such a visit would greatly benefit from having a “living” 3D image
of the topography. Another advantage of the DEM visualization is that it is likely easier
for nontechnical audiences to understand than 2D contour maps, and it is therefore well
suited for presentations and reports prepared for the general public.
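This kind of terrain analysis is straightforward to automate once a DEM is available as a grid. The sketch below computes a percent-slope surface with central differences; the tiny elevation array and cell size are hypothetical, and GIS packages provide production-grade equivalents of this calculation:

```python
import math

def slope_percent(dem, cell_size):
    """Return a grid of slope (%) for the interior cells of a DEM,
    using central differences in x and y (edge cells are left at zero)."""
    rows, cols = len(dem), len(dem[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            dzdx = (dem[r][c + 1] - dem[r][c - 1]) / (2.0 * cell_size)
            dzdy = (dem[r + 1][c] - dem[r - 1][c]) / (2.0 * cell_size)
            out[r][c] = 100.0 * math.hypot(dzdx, dzdy)
    return out

# Hypothetical 3 x 3 DEM (elevations in ft, 10-ft cells) rising northward.
slopes = slope_percent([[0.0, 0.0, 0.0],
                        [10.0, 10.0, 10.0],
                        [20.0, 20.0, 20.0]], cell_size=10.0)
```

Thresholding such a slope grid is one simple way to flag steeper, more dissected areas (like A and B in Figures 2.9 and 2.10) for closer inspection.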
National Elevation Dataset (NED) high-resolution data from the USGS is typically derived
from LiDAR technology or digital photogrammetry and is often break-line enforced to
account for linear relief features. If collected at a ground sample distance no coarser than
5 m, such data may also be available within the NED at a resolution of 1/9 arcsec, now
FIGURE 2.10
Top: View of 3D surface created from high-resolution digital elevation data for the same area shown in Figure
2.9. Bottom: Enlarged view of the 3D surface showing fine detail such as roads, meanders, terrace scarps, and
rows of crops.
downloadable for some portions of the United States (see USGS’s Seamless Data Warehouse
at http://seamless.usgs.gov/ or The National Map Viewer at http://viewer.nationalmap.gov/viewer/). The new generation of USGS topographic maps at scale 1:24,000 (topographic
quadrangles) is based on these high-resolution elevation data, incorporates high-resolution
photo images, and has various layers of information all available in digital, georeferenced,
pdf format. An example is shown in Figures 2.11 through 2.15. Both the digital map with
all its layers, including the aerial photograph, and the accompanying high-resolution NED
(digital raster file) can be downloaded and analyzed as illustrated. The NED file can be
FIGURE 2.11
Part of the new 2011 edition of the USGS Lewisburg, WV, quadrangle showing contour and hydrography layers
only.
contoured and displayed in 3D with some of the commonly used programs such as Surfer
by Golden Software, Inc. or Esri ArcScene (see Chapter 4).
Figure 2.11 shows contours and hydrography layers of a part of the new 2011 USGS
Lewisburg, WV, quadrangle illustrating unique features of karst topography. Numerous
sinkholes including deep, uvala-like closed depressions developed in the Greenbrier
Limestone are visible in the left portion of the map and a smaller area in the southeast.
Closed depressions, such as these, and an absence of surface drainage (flowing streams)
are the main characteristics of mature karst terrains. In contrast, less permeable rocks
such as shales of the McCrady Formation and Pocono Group in the central and eastern
portions of the map have densely developed surface drainage. Figure 2.12 shows two
views of the high-resolution NED for another portion of the same quadrangle and some
of the adjacent areas where pronounced lines of sinkholes and other relief lineaments are
clearly visible. These linear features, likely formed by faulting and contrasting lithology,
are main indicators of preferential flow paths within the underlying karst aquifer. The
entire karst area of the Lewisburg quadrangle is draining at Davis Spring, the largest
spring in West Virginia (the spring is located to the southeast on the adjacent Asbury,
WV, quadrangle).
Because of legibility issues, printed topographic maps are limited by the contour interval even when finer-resolution elevation data are available. This is illustrated in Figures 2.13 through 2.15. The map's contour interval of 20 ft is not sufficient to depict
smaller sinkholes, which are clearly visible on the aerial photograph. Having the NED file
of the same quadrangle enables contouring at any desired interval including 3 ft (Figure
2.14), which should suffice for identifying virtually all sinkholes visible on the aerial
photograph. Better yet is to use a combination of contours and a color-shaded surface
map (Figure 2.15), which can be rotated, zoomed in, and displayed with different vertical
exaggeration. As a small incentive to our speleological (karst) colleagues, the authors will
FIGURE 2.12
3D view of a portion of the Lewisburg, WV, quadrangle and the adjacent areas to the west. Bottom: Blowup of
the 3D surface showing fine detail depicted by the high-resolution NED.
FIGURE 2.13
Karst features depicted with the 20-ft contours versus all features visible on the aerial photograph of the new
2011 edition of the Lewisburg, WV, quadrangle. The photograph has a resolution of approximately 0.5 m.
FIGURE 2.14
Contours of the same area shown in Figure 2.13 (top), created from the NED file; the contour interval is 3 ft.
FIGURE 2.15
Contours with the 5-ft interval superimposed on the shaded color surface of the same area shown in Figures
2.13 and 2.14.
FIGURE 2.16
Shaded relief map of the greater San Antonio, TX, area available from the National Atlas of the United States
(www.nationalatlas.gov). Arrows indicate some of the lineaments visible on this small-scale map.
donate a signed copy of this book (and possibly some other goodies as well) to the first
speleological team that sends us a nice-looking georeferenced map that shows the spatial
relationship between the land surface features visible on the previous figures and the lon-
gest explored cave in the area.
For various, and sometimes mystifying, reasons, geologic maps do not always show otherwise obvious features that may have an underlying geologic cause and are of critical
importance to a hydrogeological conceptual site model. One such example is the Balcones
Fault Zone in Texas, which is responsible for the current geometry of the Edwards
Aquifer, one of the most extensive and prolific karst aquifers in the world. Most, if not
all, geologic maps of this large area, at varying scales, only show faults of the northeast–
southwest strike, which are universally interpreted as the most important geologically.
However, even very general relief maps at small scales such as in Figure 2.16 clearly show
long prominent lineaments trending in other directions, including perpendicular to the
main northeast–southwest system of faults. Even a nongeologist would be able to iden-
tify northwest–southeast striking faults in this figure. In addition to being perfect candi-
dates for preferential groundwater flow paths within the Edwards Aquifer, these faults
(deemed unimportant lineaments by some) may be transferring significant quantities of
groundwater from the adjacent Trinity Aquifer to the Edwards Aquifer. Figure 2.17 illus-
trates advantages of DEMs for analyzing topographic lineaments in the midsection of the
Balcones Fault Zone between San Antonio and Austin.
FIGURE 2.17
Two DEM views of the central portion of the Balcones Fault Zone between San Antonio and Austin, TX. Some
topographic lineaments may be more apparent when viewed from different angles. Note the two blue ellipses
for orientation.
2.2.2 Hydrology
Surface water and groundwater are inseparable parts of the same hydrologic cycle and, as
such, influence each other at practical (site-specific) scales for most projects. However, some
working professionals do not always appreciate this simple fact and may, for example, study
a groundwater problem without paying any attention to a surface stream in the vicinity.
Even when a project at hand involves a deep, confined aquifer seemingly separated from the
rest of the world, it is highly recommended to make an attempt to understand its recharge
and discharge areas. For all projects involving unconfined aquifers, it is mandatory to define
likely hydraulic roles of nearby surface water bodies (e.g., streams, lakes, drainage ditches)
with respect to the movement of groundwater. In this sense, the term hydrology refers to
flow rates and stages (water elevation) of surface water bodies and their influence on the
exchange of flow between surface water and groundwater. In the field of water resources
management, groundwater and surface water are increasingly seen as a single intercon-
nected resource that must be managed holistically. Additional discussion regarding inte-
grated management of surface water and groundwater resources, including examples of
combined surface water–groundwater numeric models, is provided in Chapter 9.
The majority of perennial surface streams would not have permanent flow without
groundwater contribution called baseflow. Excessive withdrawal of groundwater may
cause depletion or complete cessation of baseflow and disappearance of wetlands abutting
the surface stream in question. Conversely, changes in land use, such as urban develop-
ment, may alter patterns of surface water runoff, increase average flow in the receiving
streams, and reduce aquifer recharge. Contamination of surface streams adjacent to well
fields used for water supply may threaten groundwater quality if the contaminant enters
the underlying aquifer, and discharge of contaminated groundwater into a surface water
body may have negative impact on human health and the environment.
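Baseflow is commonly estimated from a gauged streamflow record by hydrograph separation. The sketch below applies a one-parameter recursive digital filter of the Lyne–Hollick type to a daily flow series; the filter parameter, initialization, and single forward pass are simplifying assumptions (practice typically uses three filter passes and a locally calibrated parameter):

```python
def baseflow_lyne_hollick(q, alpha=0.925):
    """Split a daily streamflow series q into its baseflow component
    using a one-pass Lyne-Hollick recursive filter."""
    base = []
    quick = 0.0
    for i, qt in enumerate(q):
        if i == 0:
            quick = qt / 2.0  # simple initialization assumption
        else:
            quick = alpha * quick + 0.5 * (1.0 + alpha) * (qt - q[i - 1])
        quick = min(max(quick, 0.0), qt)  # keep 0 <= quickflow <= total flow
        base.append(qt - quick)
    return base
```

Summing the separated baseflow over a dry-season period gives a first estimate of the groundwater contribution that excessive pumping could deplete.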
Examples in Chapter 4 illustrate various hydraulic relationships between surface streams
and underlying unconfined aquifers including their importance for drawing contour maps
of the potentiometric surface and determining groundwater flow directions. However,
depending on local hydrogeologic and climatic conditions and human impacts, such
as stream regulation with dams and locks, the same stream may be losing or gaining
water in different sections (reaches), and this pattern may change in time. Major unregu-
lated meandering streams with large flood plains that are seasonally flooded (see Figure
2.18) may have quite complicated surface water–groundwater relationships, especially if
there are oxbow lakes and buried meanders. In addition, large streams are often regional
groundwater discharge locations for deeper confined aquifers, which complicates things
even further (Figure 2.19). Consequently, trying to determine representative groundwater
flow directions and contaminant plume geometry at a local site based solely on quarterly
water level measurements may be a daunting, if not impossible, task.
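Even so, when synoptic water-level measurements are available, the classic three-point problem provides a first estimate of the hydraulic gradient and flow direction by fitting a plane through three wells. The well coordinates and heads below are hypothetical:

```python
import math

def three_point_gradient(wells):
    """wells: three (x, y, head) tuples in consistent units.
    Returns (gradient magnitude, flow azimuth in degrees from north)."""
    (x1, y1, h1), (x2, y2, h2), (x3, y3, h3) = wells
    # Fit the plane h = a*x + b*y + c through the three points
    # by solving the 2 x 2 system of head differences (Cramer's rule).
    det = (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)
    a = ((h2 - h1) * (y3 - y1) - (h3 - h1) * (y2 - y1)) / det
    b = ((x2 - x1) * (h3 - h1) - (x3 - x1) * (h2 - h1)) / det
    # Flow is down-gradient, i.e., along -(a, b).
    azimuth = math.degrees(math.atan2(-a, -b)) % 360.0
    return math.hypot(a, b), azimuth

# Hypothetical wells: head drops 1 m over 100 m toward the north.
grad, azim = three_point_gradient([(0.0, 0.0, 10.0),
                                   (100.0, 0.0, 10.0),
                                   (0.0, 100.0, 9.0)])
```

With only three wells, the result assumes a planar water table; convoluted flood-plain settings like those described above require many more measuring points and repeated surveys.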
Streams without permanent gauges and sufficient data records present special chal-
lenges when determining characteristic flows needed for various calculations. Long-term
minimum baseflow is of particular importance for various applications, such as determin-
ing maximum allowable loading rates of a groundwater contaminant still protective of
in-stream water-quality standards. In the United States, this flow is typically referred to as
7Q10, which means seven-day, consecutive low flow with a 10-year return frequency (i.e.,
the lowest stream flow for seven consecutive days that would be expected to occur once in
10 years). Flow measurement at successive stream segments is a common method of deter-
mining if the stream is losing or gaining water between the segments. However, because
of the variability of flow conditions in the same stream and the associated potential
FIGURE 2.18
Two composite color satellite images of the White River, AR. ASTER on NASA’s Terra satellite captured the top
image on April 7, 2008, while the river was still rising before reaching one of the worst flood levels on record.
The bottom image is from April 14, 2006, when water levels were closer to normal. Bright green vegetation
flanks the river in the 2006 image. Much of the land is forested and preserved in the Cache River National
Wildlife Refuge. Around the forest, agricultural fields create a checkerboard of tan and green. Some of the fields
are flooded in the 2008 image. (NASA image created by Jesse Allen, using data provided courtesy of NASA/
GSFC/METI/ERSDAC/JAROS and U.S./Japan ASTER Science Team; caption by Holli Riebeek. From NASA,
Earth Observatory, http://earthobservatory.nasa.gov/, accessed March 2011.)
FIGURE 2.19
Groundwater flow directions (shown with arrows) in a flood plain of a meandering stream, which also acts as a
groundwater discharge zone for the regional aquifer. (Modified from Kresic, N., Hydrogeology and Groundwater
Modeling, Second Edition, CRC Press, Taylor & Francis Group, Boca Raton, FL, 807 pp., 2007.)
measurement errors, this method should be applied with great care in order to avoid false
conclusions. For example, if only one set of measurements is made at a few locations with
several or more hours separating them and the stream flow is under the influence of recent
precipitation, the results would almost certainly be misleading because the flow wave is
moving rapidly. It is therefore best if the flow measurements are performed after a long
period without precipitation and the applied method is based on a continuous record-
ing of the stream stage at successive stream segments. Flow hydrographs derived in this
way provide information on the actual change of volume of water between the segments,
which is the only real measure of gain or loss.
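As a rough illustration, the 7Q10 statistic defined above can be computed from a multiyear daily flow record: take each year's minimum 7-day average flow, then estimate the value with a 10-year recurrence interval. The sketch below is illustrative only (the function names and the Weibull plotting-position interpolation are assumptions, not a prescribed regulatory procedure):

```python
def annual_min_7day(daily_flows_by_year):
    """For each year (a list of daily flows, cfs), return the minimum
    7-day moving-average flow."""
    minima = []
    for flows in daily_flows_by_year:
        means = [sum(flows[i:i + 7]) / 7.0 for i in range(len(flows) - 6)]
        minima.append(min(means))
    return minima

def q7_10(minima):
    """Interpolate the annual 7-day minimum with a 10-year recurrence
    interval using the Weibull plotting position P = m / (n + 1)."""
    ranked = sorted(minima)
    n = len(ranked)
    target = 0.1 * (n + 1) - 1.0      # zero-based fractional rank for P = 0.1
    lo = int(target)
    if lo < 0:
        return ranked[0]              # record too short; return the smallest value
    if lo >= n - 1:
        return ranked[-1]
    frac = target - lo
    return ranked[lo] + frac * (ranked[lo + 1] - ranked[lo])
```

In practice, agencies often fit a statistical distribution (commonly log-Pearson Type III) to the annual minima rather than interpolating plotting positions directly.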
One of the most precise methods for measuring flow of smaller streams is tracer dilution
gauging, which can be performed with a fluorescent dye or, more simply, a salt solution. In
addition, when accompanied with chemical analyses, it is very useful when determining if
and where there is an increased inflow of contaminated groundwater. Figures 2.20 and 2.21
show results of a dye tracer study used to assess the dry-weather baseflow of a stream and
groundwater seepage inflows to the stream segment. The rhodamine dye was injected at the
most upstream location of the study segment at a constant rate and constant known concentra-
tion. A water-quality meter with a rhodamine sensor was placed in-stream at the most down-
stream sampling location and programmed to continuously record rhodamine concentrations.
The dye was injected continuously into the stream until the most downstream dye concentra-
tions reached a plateau. Note that if a salt solution were used as the tracer, one would mea-
sure in-stream conductivity at the downstream location. Once dye concentrations along
the stream reached a plateau, surface water samples were collected at each sampling loca-
tion and analyzed for rhodamine concentrations and concentrations of a constituent of
concern (COC). Stream flow rate was determined for each stream sampling location using
the rhodamine concentration based on the observed dilution of the tracer. As can be seen
in Figure 2.21, near the middle of the study reach, the concentration of the COC increases
notably and then decreases gradually, indicating that an influx of impacted groundwater
to the stream is occurring along a preferential flow path through a short stream segment.
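The flow computation behind such a constant-rate dilution study follows from a simple tracer mass balance at the plateau. A minimal sketch, with illustrative variable names:

```python
def stream_flow_from_dilution(q_inj, c_inj, c_plateau, c_background=0.0):
    """Constant-rate tracer dilution gauging. At the concentration plateau,
    mass balance gives
        q_inj * c_inj + Q * c_background = (Q + q_inj) * c_plateau
    Solving for the stream flow Q upstream of the injection point:
        Q = q_inj * (c_inj - c_plateau) / (c_plateau - c_background)
    Any consistent flow and concentration units may be used."""
    return q_inj * (c_inj - c_plateau) / (c_plateau - c_background)
```

Applying this equation to the plateau concentration measured at each sampling location yields a flow profile like that in Figure 2.21; increases in computed flow between successive locations quantify groundwater inflow along the reach.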
FIGURE 2.20
Rhodamine concentration (µg/L) versus time at the downgradient location during an in-stream dye tracing study used to determine baseflow in a small stream. (Courtesy of Lisa Pfau, Larry Neal, and Margaret Tanner, AMEC.)
FIGURE 2.21
Contaminant concentration (µg/L) and stream flow rate (cfs) along a stream segment, plotted against distance from the injection point (ft), determined from the in-stream dye tracer study. (Courtesy of Lisa Pfau, Larry Neal, and Margaret Tanner, AMEC.)
FIGURE 2.22
Estimation of aquifer recharge from surface stream baseflow: the recharge rate R (ft/d), expressed as a percentage of precipitation, multiplied by the drainage area A (ft²) gives the baseflow rate R × A (ft³/d). (Modified from Kresic, N., Hydrogeology and Groundwater Modeling, Second Edition, CRC Press, Taylor & Francis Group, Boca Raton, FL, 807 pp., 2007.)
However, estimating recharge from baseflow is not always straightforward, and it should be based on a thorough understanding of the geologic and hydrogeologic characteristics of the basin. The
following examples illustrate some situations where baseflow alone should not be used to
estimate actual groundwater recharge (Kresic 2007):
• Surface stream flows through a karst terrain where topographic and groundwater
divides are not the same. The groundwater recharge based on baseflow may be
grossly overestimated or underestimated depending on the circumstances.
• The stream is not permanent, or some river segments are losing water (either
always or seasonally); locations and timing of the flow measurements are not ade-
quate to assess such conditions.
• There is abundant riparian vegetation in the stream floodplain, which extracts a
significant portion of groundwater via evapotranspiration.
• There is discharge from deeper aquifers, which have remote recharge areas in
other drainage basins.
• A dam regulates the flow in the stream.
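Where none of these complications apply, the arithmetic behind Figure 2.22 is simply baseflow divided by drainage area. A minimal sketch, with assumed unit conversions and illustrative function names:

```python
def recharge_from_baseflow(baseflow_cfs, drainage_area_ft2):
    """Average recharge rate R (ft/d) implied by baseflow: R = Q / A,
    with Q converted from cfs to ft3/d (86,400 s/d)."""
    return baseflow_cfs * 86400.0 / drainage_area_ft2

def recharge_as_pct_of_precip(recharge_ft_per_day, precip_in_per_year):
    """Express the recharge rate as a percentage of mean annual precipitation."""
    recharge_in_per_year = recharge_ft_per_day * 12.0 * 365.0
    return 100.0 * recharge_in_per_year / precip_in_per_year
```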
Most techniques for estimating baseflow are based on graphical separation of surface
stream hydrographs into two major components: flow generated by surface and near-
surface runoff and flow generated by discharge of groundwater. Although some profes-
sionals view this approach as a convenient fiction because of its subjectivity and lack of
rigorous theoretical basis, it does provide useful information in the absence of detailed
(and expensive) data on many surface water runoff processes and drainage basin char-
acteristics that contribute to streamflow generation. Risser et al. (2005) present a detailed
application and comparison of two automated methods of hydrograph separation for
estimating groundwater recharge based on data from 197 streamflow gauging stations in
Pennsylvania. The two computer programs—PART and RORA (Rutledge 1993, 1998, 2000)
developed by the USGS—are in the public domain and available for free download from the
USGS Web site. The PART computer program uses a hydrograph separation technique
to estimate baseflow from the streamflow record. The RORA computer program uses
the recession-curve displacement technique of Rorabaugh (1964) to estimate groundwa-
ter recharge from each storm period. The RORA program is not a hydrograph-separation
method; rather, recharge is determined from displacement of the streamflow–recession
curve according to the theory of groundwater drainage.
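PART and RORA implement specific USGS procedures. As a generic illustration of automated hydrograph separation, the sketch below instead implements the widely used one-parameter Lyne–Hollick digital filter; it is not the PART or RORA algorithm, and the default parameter is only a common choice:

```python
def lyne_hollick_baseflow(q, alpha=0.925):
    """One-parameter recursive digital filter (Lyne and Hollick, 1979).
    Filtered quickflow: f[k] = alpha*f[k-1] + 0.5*(1 + alpha)*(q[k] - q[k-1]);
    baseflow is the remainder, constrained so that 0 <= baseflow <= total flow.
    q is a daily streamflow series; alpha of 0.9-0.95 is a common choice."""
    f = 0.0                      # quickflow component of the filter
    baseflow = [q[0]]            # assume the record starts at baseflow conditions
    for k in range(1, len(q)):
        f = alpha * f + 0.5 * (1.0 + alpha) * (q[k] - q[k - 1])
        f = min(max(f, 0.0), q[k])   # keep 0 <= quickflow <= total flow
        baseflow.append(q[k] - f)
    return baseflow
```

Practitioners commonly run the filter in multiple forward and backward passes and calibrate alpha against tracer or field data; a single pass, as here, is only a first approximation.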
Rorabaugh’s method utilized by RORA is a one-dimensional analytical model of ground-
water discharge to a fully penetrating stream in an idealized, homogeneous aquifer with
uniform spatial recharge. Because of the simplifying assumptions inherent to the equa-
tions, Halford and Mayer (2000) caution that RORA may not provide reasonable estimates
of recharge for some watersheds. In fact, in some extreme cases, RORA may estimate
recharge rates that are higher than the precipitation rates. Rutledge (2000) suggests that
estimates of mean monthly recharge from RORA are probably less reliable than estimates
for longer periods and recommends that results from RORA not be used at time scales
smaller than seasonal (three months) because results differ most greatly from manual
application of the recession-curve displacement method at small time scales.
A method proposed by Pettyjohn and Henning (1979) includes the effects of riparian
evapotranspiration and, therefore, usually provides lower estimates than the hydrograph-
separation technique based on recession-curve displacement (Rutledge 1992). Groundwater
recharge estimates produced by the Pettyjohn–Henning and similar methods are some-
times called effective (or residual) groundwater recharge because the estimates represent
the difference between actual recharge and losses to riparian evapotranspiration.
As illustrated in Figure 2.23, graphical methods of baseflow separation may not be appli-
cable at all in some cases. A stream with alluvial sediments having significant bank storage
FIGURE 2.23
Stream hydrograph showing flow components after a major rise resulting from rainfall when (a) the stream
stage is higher than the water table; (b) the stream stage is higher than the water table in the shallow aquifer but
lower than the hydraulic head in the deeper aquifer, which is discharging into the stream. The initial stream
stage before rainfall is marked as 1, and the stream stage during peak flow is marked as 2. (Modified from Kresic,
N., Groundwater Resources: Sustainability, Management, and Restoration, McGraw Hill, New York, 235–292, 2009.)
capacity may, during floods or high river stages, lose water to the subsurface so that no
baseflow is occurring (Figure 2.23a). Or a stream may continuously receive baseflow from
a regional aquifer that has a different primary recharge area than the shallow aquifer and
maintains a higher head than the stream stage (Figure 2.23b). Although one may attempt to
graphically separate either of the two hydrographs using some common method, it would
not be possible to make any conclusions as to the groundwater component of the surface
stream flow without additional field investigations. One such field method is hydrochem-
ical separation of the streamflow hydrograph using dissolved chemical constituents or
environmental tracers. It is often more accurate than simple graphoanalytical techniques
because surface water and groundwater usually have significantly different chemical sig-
natures (Kresic and Mikszewski 2009).
The rate of flow exchange between surface water and groundwater depends on two
main factors: the hydraulic gradient between the two and the conductance of the riverbed
(lakebed) sediments. This is schematically illustrated in Figure 2.24. The hydraulic gradi-
ent or the difference between the hydraulic head in the aquifer adjacent to the river and
the river stage (hydraulic head of the river) is the same in all four cases but with different
FIGURE 2.24
River hydraulic boundary represented with a head-dependent flux. K is hydraulic conductivity of the riverbed,
C is riverbed conductance, Q is the flow rate between the aquifer and the river, Δh is the hydraulic gradient
between the aquifer and the river (same in all four cases). (a, b) Gaining stream; (c, d) losing stream. Lower
hydraulic conductivity of the riverbed sediments and their greater thickness result in lower conductance and
a lower flow rate. (Modified from Kresic, N. Groundwater Resources: Sustainability, Management, and Restoration,
McGraw Hill, New York, pp. 235–292, 2009.)
signs. In cases a and b, the hydraulic gradient is toward the river, which therefore gains water,
whereas in cases c and d, the hydraulic gradient is from the river toward the aquifer (the river
loses water). The lower conductance corresponds to more fines (e.g., silt) in the riverbed sedi-
ment and a lower hydraulic conductivity, resulting in a lower water flux between the aquifer
and the river. Thicker low-permeability riverbed sediments will have the same effect, as shown
with cases b and d. All other things being equal, an increase in the hydraulic gradient will
result in an increased flux of water between the aquifer and the river.
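The head-dependent flux relationship illustrated in Figure 2.24 can be written compactly as Q = C·Δh, with the conductance C lumping the riverbed hydraulic conductivity and geometry (C = K·L·W/b, the formulation used, for example, by the MODFLOW River package). A minimal sketch with illustrative variable names:

```python
def riverbed_conductance(k, length, width, thickness):
    """C = K * L * W / b: riverbed hydraulic conductivity K (ft/d) times the
    wetted area (reach length L times width W, ft2), divided by the riverbed
    sediment thickness b (ft). Result is in ft2/d."""
    return k * length * width / thickness

def river_exchange_rate(conductance, head_aquifer, river_stage):
    """Q = C * (h_aquifer - river_stage), in ft3/d.
    Positive Q: flow toward the river (gaining reach); negative Q: losing reach."""
    return conductance * (head_aquifer - river_stage)
```

Note how the two controlling factors named in the text enter separately: the gradient sets the sign and scales Q linearly, while lower conductivity or thicker sediments reduce C and hence the flux, for the same gradient.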
As discussed earlier, one of the most important aspects of surface water stages is that
they change over time. The time interval used for their inevitable averaging depends on the goals of each particular study. Seasonal or perhaps annual periods may
be adequate for a long-term water supply evaluation when considering recharge from pre-
cipitation. When a hydraulic boundary is quite dynamic and the required accuracy of pre-
dictions is high, the time interval for describing a changing river (lake) stage may have to
be much shorter. For example, Figure 2.25 shows a comparison of two time intervals used
to model the interaction between a large river and a highly transmissive alluvial aquifer.
The Columbia River stage at this site is dominated by higher frequency diurnal fluctua-
tions that are principally the result of water released at Priest Rapids Dam to match power-
generation needs. The magnitude of these diurnal river-stage fluctuations can exceed the
FIGURE 2.25
River water tracer concentrations at the end of a model simulation. 2D numeric model of interaction between
the aquifer, vadose zone, and the Columbia River in the Hanford 300 area, Washington. Top: Hourly boundary
conditions. Bottom: Monthly boundary conditions. (Modified from Waichler, S. R., and Yabusaki, S. B., Flow
and Transport in the Hanford 300 Area Vadose Zone–Aquifer–River System, Pacific Northwest National Laboratory,
Richland, WA, 2005.)
seasonal fluctuation of monthly average river stages. During the simulation period, the
mean 24-hour change (difference between minimum and maximum hourly values) in
river stage was 0.48 m, and the maximum 24-hour change was 1.32 m. Groundwater levels
are significantly correlated with river stage although with a lag in time and decreased
amplitude of fluctuations. A two-dimensional, vertical, cross-sectional model domain was
developed to capture the principal dynamics of flow to and from the river as well as the
zone where groundwater and river water mix (Waichler and Yabusaki 2005).
Running the model with hourly boundary conditions resulted in frequent direction and
magnitude changes of water flux across the riverbed. In comparison, the velocity fluctua-
tions resulting from averaging the hourly boundary conditions over a day were consid-
erably attenuated, and for the month average, fluctuations were nonexistent. A similar
pattern held for the river tracer, which could enter the aquifer and then return to the river
later. Simulations based on hourly water-level boundary conditions predicted an aquifer–
river water mixing zone that reached 150 m inland from the river based on the river tracer
concentration contours. In contrast, simulations based on daily and monthly averaging of
the hourly water levels at the river and interior model boundaries were shown to signifi-
cantly reduce predicted river water intrusion into the aquifer, resulting in underestimation
of the volume of the mixing zone. The relatively high-frequency river-stage changes asso-
ciated with diurnal release schedules at the dams generated significant mixing of the river
and groundwater tracers and flushing of the subsurface zone near the river. This mixing
was the essential mechanism for creating a fully developed mixing zone in the simula-
tions. Although the size and position of the mixing zone did not change significantly on a
diurnal basis, they did change in response to seasonal trends in the river stage. The largest
mixing zones occurred with the river-stage peaks in May–June and December–January,
and the smallest mixing zone occurred in September when the river stage was relatively
low (Waichler and Yabusaki 2005).
Urban areas present special challenges for understanding relevant hydrologic factors
and developing an accurate CSM. Examples include rerouted streams and filled stream-
beds resulting in complex groundwater flow patterns that may not be easily deciphered
based on current land surface topography. Considering these complicating factors is
particularly important for interpretation of present shapes of groundwater contaminant
plumes emanating from historical source(s). A list of possible artificial hydrographic
features often only marginally (or not at all) described in CSMs includes storm water and
drainage ditches, storm water collection basins, leaky sewer and water lines, culverts, and
infrastructure tunnels. All of them can cause either local or regional effects and can sig-
nificantly influence groundwater recharge, discharge, and flow directions.
The main difference between weather and climate is the time scale at which these basic
elements change. Weather is constantly changing, sometimes from hour to hour, and these
changes create an almost infinite variety of weather conditions at any given time and place.
In comparison, climate changes are more subtle and were, until relatively recently, considered
important for time scales of hundreds of years or more and usually only discussed in aca-
demic circles. A broader definition of climate is that it represents the long-term behavior
of the interactive climate system, which consists of the atmosphere, hydrosphere, lithosphere,
biosphere, and cryosphere or the ice and snow that are accumulated on the Earth’s surface
(Lutgens and Tarbuck 1995). At a minimum, and regardless of the project scale and scope, long-
term precipitation data and air temperatures for the closest available climate-gauging station
should be analyzed as they relate to the site’s groundwater. It is also generally recommended to
include a rain gauge at hydrogeological study sites as precipitation can vary greatly over small
distances. Barometric pressure is another parameter typically monitored by hydrogeologists at
the site level, as many pressure transducers placed in monitoring or production wells are not
vented to the atmosphere, which means that the barometric pressure must be subtracted from
all measurements in order to obtain the water-level measurement.
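The barometric correction for a non-vented transducer is a simple subtraction followed by a unit conversion. A minimal sketch, assuming pressures in psi and the standard freshwater conversion of about 2.307 ft of water per psi:

```python
PSI_TO_FT_WATER = 2.3067   # assumed: ft of water per psi, freshwater near 60 deg F

def water_column_above_sensor(p_abs_psi, p_baro_psi):
    """Submergence (ft of water) of a non-vented (absolute) transducer.
    The absolute reading includes atmospheric pressure, so the barometric
    pressure must be subtracted before converting to a water-column height."""
    return (p_abs_psi - p_baro_psi) * PSI_TO_FT_WATER
```

Depth to water is then obtained by subtracting this submergence from the known depth of the sensor below the measuring point.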
An example illustrating interconnections between recharge from precipitation and
water table fluctuations in a fractured rock aquifer is given by Harned (1989) and shown
in Figure 2.26. In this example, the main period of groundwater recharge results from
heavy rains in late winter when evapotranspiration is low. It is reflected by a peak in the
water table hydrograph appearing a few days after heavy rainfall in late March. The time
FIGURE 2.26
Response of water level in a well (feet below land surface) to rainfall (precipitation in inches at Atlanta), January through December. (From Cressler, C. W. et al., Ground Water in the Greater Atlanta Region, Georgia, Georgia Department of Natural Resources, U.S. Environmental Protection Agency, and The Georgia Geologic Survey, in cooperation with U.S. Geological Survey, Information Circular 63, 1983.)
after a storm when the peak appears in the water level is directly related to the vertical
hydraulic conductivity of the material in the unsaturated zone and the depth-to-water
table. The hydrograph also shows that little recharge took place during the growing sea-
son (April through September) even though the area received significant rainfall during
these months. The declining water level indicates continuing groundwater discharge that
is not equaled or exceeded by recharge until fall when evapotranspiration is low.
While the above example represents a very useful form of analysis, it is important to remember that water-level fluctuations in a well are not caused entirely by areal recharge (i.e., recharge from infiltration of precipitation immediately above the well). Local and regional recharge from up-gradient areas will cause a water table rise across an aquifer independent of areal recharge in some locations. For example, it is often counterintuitive to the nonhydrogeologist that a paved area can experience a significant water table rise. In groundwater remediation applications, areas are often paved to limit areal recharge and support dewatering; however, recharge in other areas of the watershed must also be accounted for to predict the relative impact of the paving on water table fluctuations. Failure to do so may result in the compromise of a dewatering system by unacceptable water table rise. Additional related discussion is provided in Section 2.3.1.2.
One common limiting factor for many groundwater projects is a relatively short period of site-specific data on hydraulic head (water table, potentiometric surface) fluctuations. Even when this record is longer than several years, it usually contains only quarterly water-level data, which does not allow for a more detailed analysis of aquifer recharge and its natural cycles. In order to determine where the site-specific data belong
in terms of natural hydrologic and climatic cycles, it is very helpful to analyze long-term
data from wells or springs in the same or similar hydrogeologic and climate settings
that may be available from government agencies. Figure 2.27 shows the daily discharge
FIGURE 2.27
Hydrograph of daily flows (cfs) of Annie Spring near Crater Lake, OR, versus daily precipitation (in.) at Crater Lake. Red line represents a sixth-order polynomial trend.
rate of Annie Spring in Oregon versus daily precipitation at Crater Lake gauging sta-
tion for 28 years. The graphs illustrate the presence of both long-term and annual natural cycles and, in this case, a delay of about six months in the spring's response to precipitation. The highest precipitation, mainly in the form of snowfall, occurs in November and December, whereas the discharge peaks in June following complete snowmelt. Annie Spring's hydrologic system behaves like a perfect clock, without any visible long-term trend unrelated to natural periodicity. However, in some other settings, such a
trend may be present, indicating possible anthropogenic influences as illustrated in the
next section.
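A response delay such as Annie Spring's roughly six-month lag can be estimated by cross-correlating the precipitation and discharge series over a range of lags. A minimal, illustrative sketch (real records would first be cleaned and perhaps aggregated to monthly values; function names are assumptions):

```python
from statistics import mean, pstdev

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    mx, my = mean(x), mean(y)
    sx, sy = pstdev(x), pstdev(y)
    if sx == 0.0 or sy == 0.0:
        return 0.0
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) * sx * sy)

def best_lag(precip, flow, max_lag):
    """Lag (in samples) at which the flow series correlates most strongly
    with earlier precipitation: compare flow[lag:] against precip[:n - lag]."""
    scores = {}
    for lag in range(max_lag + 1):
        scores[lag] = pearson(precip[:len(precip) - lag], flow[lag:])
    return max(scores, key=scores.get)
```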
Hydrologic and hydrogeologic conditions are directly influenced by current land use and land cover at both regional and local scales. Understanding historic
land use/land cover and comparing it with current and projected land use/land cover is
therefore of critical importance to the project at hand. Three main trends in land use and
the associated human-induced changes in land cover have been taking place worldwide, disrupting natural hydrologic cycles.
Urban development and the creation of impervious surfaces beyond a city core inevitably result in increased runoff and soil erosion. In turn, this reduces infiltration poten-
tial and groundwater recharge. Increased sediment load carried by surface streams
often results in the formation of fine-sediment deposits along stream channels, which
reduces hydraulic connectivity and exchange between surface water and groundwater
in river flood plains. Clear-cutting of forests also alters the hydrologic cycle and results
in increased erosion and sediment loading to surface streams. Conversion of low-lying
forests into agricultural land may increase groundwater recharge, especially if it is followed by irrigation. (It is important to remember, however, that such recharge is often irrigation return, whose origin may be the underlying aquifer itself; irrigation return may be only a small fraction of the groundwater originally pumped.) In con-
trast, cutting of forests in areas with steeper slopes will generally decrease groundwater
recharge because of the increased runoff except in cases of very permeable bedrock such
as karstified limestone.
In general, urban development results in decreased infiltration rates and increased
surface runoff because of the increasing area of various impervious surfaces (rooftops,
asphalt, concrete). However, the infiltration rate varies significantly within an urban area
based on actual land use. This is particularly important when evaluating fate and trans-
port of contaminant plumes, including development of groundwater models for such
diverse areas. For example, a contaminant plume may originate at an industrial facility
with a high percentage of impervious surfaces, resulting in negligible infiltration, and then
migrate toward a residential area where infiltration rates may be rather high because of the
open space (yards) and watering of lawns. By eliminating infiltration as a driving force,
paving and rooftops may play a positive role in preventing further downward migration
of contaminants through the vadose zone. In developing countries, where booming megacities and associated slums cause many societal and environmental problems,
groundwater recharge rates may actually increase as a result of leaky water lines and
unregulated sewage disposal, contaminating groundwater resources (particularly shal-
low and dug wells) and adding yet another problem to the list.
Agricultural activities have had direct and indirect effects on the rates and compositions
of groundwater recharge and aquifer biogeochemistry. Direct effects include dissolution
and transport of excess quantities of fertilizers and associated materials and hydrologic
alterations related to irrigation and drainage. Some indirect effects include changes in
water-rock reactions in soils and aquifers caused by increased concentrations of dis-
solved oxidants, protons, and major ions. Agricultural activities have directly or indirectly
FIGURE 2.28
Pesticide application on leaf lettuce in Yuma, AZ. (Courtesy of Jeff Vanuga, Natural Resources Conservation Service.)
Figure 2.29 is a false color image of central Florida east-northeast of Orlando created
from data collected by the Enhanced Thematic Mapper Plus (ETM+) instrument on the
Landsat 7 satellite. The image combines ETM+ bands 5, 4, and 3. Green shows vegetation,
exposed land is pink, lakes and water are deep blue, and concrete-based structures (roads,
towns) appear in various tones of purple and gray. The area on the image is part of the
main recharge zone of the Floridan Aquifer, which is increasingly under stress because
of clear-cutting of natural vegetative cover for residential and agricultural development,
both of which increase groundwater extraction from the aquifer. A new major subdivi-
sion, about 3 mi across, is noted with a circle; this subdivision, being developed north of
Lacoochee and State Road 50, east of U.S. Interstate 75, is just one of many similar ones
changing the Florida landscape forever.
Figure 2.30 shows land cover/land use change in the area of Rainbow Springs near
Dunnellon, FL, between 1995 and 2007. Discharge of this first magnitude spring exhibits
a decreasing linear trend as illustrated in Figure 2.31. Likely explanations for this trend
include reduced aquifer recharge because of increased residential, industrial, commercial,
and transportation developments (i.e., significantly more red color on the 2007 map north
and northeast of the spring) and possibly larger groundwater withdrawals via wells for
water supply and irrigation. In any case, as can be seen in Figure 2.31, the precipitation
pattern did not change notably during the same period, indicating a cause other than reduced natural recharge.
Figure 2.32 illustrates land cover/land use change in the greater San Antonio, TX, area
between 1992 and 2006 where urban development is encroaching on the recharge zone
FIGURE 2.29
False color Landsat ETM+ image of central Florida east-northeast of Orlando. Explanation in text.
FIGURE 2.30
Land use/land cover maps for the area of Rainbow Springs, FL. These maps are simplified versions of the
original shapefiles provided by the Southwest Florida Water Management District. Both maps were categorized
according to the Florida Land Use and Cover Classification System. The 1995 features (top) were photointerpreted from 1:12,000 USGS color infrared (CIR) digital orthophoto quarter quadrangles by the Mapping and
GIS Section, Southwest Florida Water Management District. The 2007 features (bottom) were photointerpreted
at 1:8000 using 2007 1-ft CIR digital aerial photographs.
FIGURE 2.31
Hydrograph of the daily flow of Rainbow Springs near Dunnellon, FL, versus daily precipitation at Blitchton
Tower (October 1975 to July 1996) and Rainbow Springs (August 1996 to January 2009) gauging stations. Red
lines represent the linear trend.
of the Edwards Aquifer north of the city. The maps were derived from satellite imagery
using supervised classification of different reflective signatures of the land surface (cover).
The 2006 map is based on newer satellite imagery and appears sharper, providing ground
resolution of 30 × 30 m. For comparison, land use/land cover maps of the Rainbow Springs
area in Figure 2.30 are based on high-resolution aerial photographs and their direct visual
interpretation resulting in depiction of fine detail, sometimes showing features smaller
than 10 × 10 ft.
For hazardous waste site investigation and remediation, the current and historical indus-
trial/commercial land uses at a site comprise what is typically termed the facility profile
of the CSM. The nature of manufacturing processes and historical waste handling prac-
tices must be understood to identify potential contaminants and their disposal locations.
Historical aerial photographs are often useful in identifying former burn pits, infiltration
ponds, surface water discharge points, and lagoons or cesspools. In New England, which
has long been a manufacturing center, legacy contamination at hazardous waste sites may
date back to the 1700s. In many cases, these sites have been converted for different indus-
trial uses over time, leading to a wide variety of potential contaminants that could be
found in soil or groundwater. A classic example is the use of mill buildings dating from
the 1700s to 1800s for electronics or other manufacturing purposes in the 1950s through
the 1970s. Metal contamination may be caused by the former use, and chlorinated solvent
contamination may be caused by the latter. The presence of urban fill material at such sites
from different eras, involving various workings and reworkings of the surface topography,
may also complicate the CSM as contaminant origins may be unclear.
Conceptual Site Models 45
FIGURE 2.32
Top: Greater San Antonio, TX, land cover map for year 1992. Bottom: Land cover map for year 2006. Shades of
red denote developed areas of different intensity (deep red = highest intensity); shades of green denote forests;
shades of ocher, yellow, and light gray are shrubs/scrubs and grassland, pasture, and barren land, respectively;
brown is cultivated crops; open water is blue. Note that the 2006 map is derived from higher-resolution satel-
lite imagery and appears sharper. This is based on the National Land Cover Database, available for download
at http://seamless.usgs.gov. Explanation of land-use classes, color legend, and various related publications are
available at http://www.mrlc.gov/nlcd_definitions.php.
face. This water is sometimes called potential recharge, indicating that only a portion of it
may eventually reach the water table (saturated zone). The term actual recharge is being
increasingly used to avoid any possible confusion; it is the portion of infiltrated water that
reaches the aquifer, and it is confirmed based on groundwater studies. The most obvi-
ous confirmation that actual groundwater recharge is taking place is a rise in water table
(hydraulic head). Effective (net) infiltration or deep percolation refers to water movement
below the root zone and is often equated to actual recharge. In hydrologic studies, the term
effective rainfall describes the portion of precipitation that reaches surface streams via
direct overland flow or near-surface flow (interflow). Rainfall excess describes the compo-
nent of rainfall that generates surface runoff, and it does not infiltrate into the subsurface.
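Because a rise in the water table is the most direct confirmation of actual recharge, it also underlies the widely used water-table fluctuation method of recharge estimation. A minimal sketch in Python, with hypothetical specific-yield and head-rise values:

```python
# Water-table fluctuation method: actual recharge R = Sy * dh,
# where Sy is specific yield and dh is the water-table rise attributed
# to a recharge event. All numbers below are hypothetical.

def recharge_wtf(specific_yield, head_rise_m):
    """Estimate actual recharge (meters of water) from a water-table rise."""
    return specific_yield * head_rise_m

# Example: a 0.75-m rise in an unconfined aquifer with Sy = 0.15
r = recharge_wtf(0.15, 0.75)
print(f"Estimated actual recharge: {r * 1000:.1f} mm")  # 112.5 mm
```

The method assumes the observed rise is caused by recharge alone, so pumping, barometric, and other effects must first be removed from the well hydrograph.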
FIGURE 2.33
Elements of the water budget of a groundwater system. (Modified from Kresic, N., Groundwater Resources:
Sustainability, Management, and Restoration, McGraw Hill, New York, 852 pp., 2009.)
Interception is the part of rainfall intercepted by vegetative cover before it reaches the
ground surface, and it is not available for either infiltration or surface runoff. The term net
recharge (synonymous with actual recharge) is being used to distinguish between the follow-
ing two water fluxes: recharge reaching the water table via vertical downward flux from the
unsaturated zone and evapotranspiration from the water table, which is an upward flux (nega-
tive recharge). Areal (or diffuse) recharge refers to recharge derived from precipitation and
irrigation that occurs fairly uniformly over large areas, whereas concentrated recharge refers
to loss of stagnant (ponded) water from playas, lakes, and recharge basins or loss of flowing
stream water to the subsurface via sinks.
The complexity of the water budget determination depends on many natural and anthro-
pogenic factors present in the general area of interest:
• Climate
The most general equation of the water budget, applicable to any water system, has the
following form:

Water Input − Water Output = Change in Storage
Water budget equations can be written in terms of volumes (for a fixed time interval),
fluxes (volume per time, such as cubic meters per day or acre-feet per year), and flux densi-
ties (volume per unit area of land surface per time, such as millimeters per day). Following
are some of the relationships between the components shown in Figure 2.33 that can be
utilized in quantitative water budget analyses:
I = P − SR − ET
I = Isr + Ires + Isp
R = I − SMD − ETwt
Pef = SR + Ifl
Qss = Pef + Qout^ua + Qout^ca
Qout^ua = R + Qin^ua − L
Qout^ca = Qin^ca + L − Qout^w
ΔS = R + Qin^ua − L − Qout^ua
(2.2)
where I is the infiltration in general, SR is the surface water runoff, ET is the evapotranspiration, Isr is the infiltration from surface runoff, Ires is the infiltration from surface water reservoirs, Isp is the infiltration from snow pack and glaciers, R is the groundwater recharge, SMD is the soil moisture deficit, ETwt is the evapotranspiration from the water table, Pef is the effective precipitation, Ifl is the interflow (near-surface flow), Qss is the surface stream flow, Qout^ua is the direct discharge of the unconfined aquifer, Qout^ca is the direct discharge of the confined aquifer, Qin^ua is the lateral groundwater inflow to the unconfined aquifer, L is the leakage from the unconfined aquifer to the underlying confined aquifer, Qin^ca is the lateral groundwater inflow to the confined aquifer, Qout^w is the well pumpage from the confined aquifer, and ΔS is the change in storage of the unconfined aquifer. If the area is irrigated, two more components would be added to the list: infiltration and runoff of the irrigation water.
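These component relationships can be chained in a short script to close a budget; all fluxes below are hypothetical values (in millimeters per year) chosen only to illustrate how the components combine:

```python
# Hypothetical water budget for an unconfined aquifer (all values mm/year),
# following the component relationships of Equation 2.2.

P, SR, ET = 1000.0, 250.0, 400.0    # precipitation, surface runoff, evapotranspiration
SMD, ET_wt = 60.0, 30.0             # soil moisture deficit, ET from the water table
Q_in_ua, L = 40.0, 50.0             # lateral inflow, leakage to the confined aquifer
Q_out_ua = 230.0                    # independently measured aquifer discharge

I = P - SR - ET                     # total infiltration
R = I - SMD - ET_wt                 # actual groundwater recharge
dS = R + Q_in_ua - L - Q_out_ua     # change in storage of the unconfined aquifer

print(I, R, dS)  # 350.0 260.0 20.0 -> positive dS: the water table is rising
```

Setting the change in storage to zero and solving for a single unknown component is the usual way such budgets are applied under a steady-state assumption.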
Ideally, all applicable relationships at a given site would have to be established to fully
quantify the processes governing the water budget, including volumes of water stored in,
and flowing between, three general reservoirs—surface water, the vadose zone, and the
saturated zone. By default, change in one of the many water budget components causes a
chain reaction and influences all other components. These reactions take place with more
or less delay, depending on both the actual physical movement of water and the hydraulic
characteristics of the three general reservoirs.
2.3 Hydrogeology
For reasons of simplicity or feasibility, one may decide that the groundwater system under con-
sideration, including any of its parts (e.g., individual aquifers, aquitards, layers and lenses of
different permeability), could be represented by a volume that includes all important aspects
of heterogeneity and anisotropy of the porous media present. Such volume is sometimes called
representative elementary volume (REV) and is defined by only one value for each of the many
quantitative parameters describing groundwater flow and fate and transport of contaminants.
The REV concept is considered by many to be rather theoretical because it is not independent
of the nature of the practical problem to be solved. For example, < 1 m3 (several cubic feet) of
rock may be more than enough for quantifying the phenomena of contaminant diffusion into
the rock matrix, whereas this volume would be completely inadequate for calculating ground-
water flow rate in a fractured rock aquifer where major transmissive fractures are spaced more
than 1 m apart. In contrast, dimensionless parameters commonly used in fluid mechanics,
such as the Reynolds number, are independent of the nature of the problem to be solved (e.g.,
the Reynolds number is applicable to a conduit or a channel of any size).
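For porous-media flow, the Reynolds number is commonly written Re = ρqd/μ, where q is the Darcy flux and d a representative grain diameter, and Darcy's law is generally considered valid for Re below roughly 1 to 10. A sketch with hypothetical inputs:

```python
# Reynolds number for flow through granular porous media (hypothetical inputs).

RHO = 1000.0   # water density, kg/m^3
MU = 1.0e-3    # dynamic viscosity of water at ~20 deg C, Pa*s

def reynolds(darcy_flux_m_s, grain_diameter_m):
    """Re = rho * q * d / mu; dimensionless regardless of problem scale."""
    return RHO * darcy_flux_m_s * grain_diameter_m / MU

# Coarse sand: q = 1e-4 m/s, d = 1 mm
re = reynolds(1e-4, 1e-3)
print(f"Re = {re:.2f}")  # well below 1 -> laminar flow, Darcy's law applicable
```

Because Re is dimensionless, the same check applies at any scale, which is exactly the contrast with the scale-dependent REV drawn above.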
Deciding on the representative volume will also depend on the funds and time available
for collecting field data and performing laboratory tests. Extrapolations and interpolations
based on data from several borings or monitoring wells will be very different from those
using data from tens of wells. Another related difficulty, which always presents a major
challenge, is upscaling. This term refers to assumptions made when using parameter val-
ues obtained from small volumes of porous media, such as laboratory samples, to solve
larger, field-scale problems. The problem of upscaling has led to mathematical constructs
such as dispersivity, which is described in detail in Chapter 5. Whatever the final choice
for each quantitative parameter may be, every attempt should be made to fully describe
and quantify the associated uncertainty and sensitivity of that parameter.
Regardless of the scale at which a CSM is being developed, the first step is to identify the
presence of any aquifers and low-permeable porous media in the area of interest. Even when
the site is relatively small, which is often the case when investigating point sources of shallow
groundwater contamination, such as a leaky underground storage tank at a gas station, the
CSM should include all major water-bearing zones underlying the site at different depths.
Their description will be more general in nature when based on regional studies performed by
government agencies such as geologic surveys. When field investigations, including drilling,
are conducted at the site, the associated boring logs, aquifer tests, and laboratory data enable
detailed quantitative evaluation of the porous media underlying the site and, therefore, pro-
vide for accurate hydrogeological classification of aquifers and aquitards.
An aquifer, the focal point of any hydrogeologic CSM, is defined as a geologic formation, or
a group of hydraulically connected geologic formations, storing and transmitting significant
quantities of potable groundwater. However, two key terms in this definition, significant and
potable, are not easily quantifiable. The common understanding is that an aquifer should pro-
vide more than just several gallons or liters per minute to individual wells and springs, and
that water should have less than 1,000 mg/L of total dissolved solids. However, in many parts of the
world, these rules of thumb do not apply. Examples include saturated fractured rock forma-
tions that can reliably provide one or two gallons per minute to individual wells or springs and
are thus often referred to as fractured rock aquifers (bedrock aquifers). The issue of ground-
water quality is similarly relative. For example, if the groundwater has naturally elevated total
dissolved solids of 1000–4000 mg/L, it is traditionally disqualified from consideration as a
significant source of potable water in water-rich regions regardless of the groundwater quan-
tity. At the same time, such water is routinely utilized for both human and livestock consump-
tion in water-scarce areas. Moreover, advanced water treatment technologies, such as reverse
osmosis, enable development of aquifers containing brackish groundwater, which are increas-
ingly considered integral parts of water-resources management around the world.
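The potability thresholds discussed above can be expressed as a simple screening function; the fresh/brackish/saline class limits below follow the rules of thumb cited in the text and are not regulatory definitions:

```python
def classify_tds(tds_mg_l):
    """Screen groundwater by total dissolved solids (mg/L) - rule of thumb only."""
    if tds_mg_l < 1000:
        return "fresh"       # traditionally considered potable
    if tds_mg_l <= 10000:
        return "brackish"    # potentially usable with treatment such as reverse osmosis
    return "saline"

print(classify_tds(600), classify_tds(2500), classify_tds(35000))
# fresh brackish saline
```

As the text emphasizes, such a screen says nothing about local practice: in water-scarce regions, water classed here as brackish may be the principal supply.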
When developing a detailed, site-specific CSM, the terms hydrostratigraphic unit and water-
bearing unit are sometimes used to differentiate between more or less transmissive zones
within an aquifer or aquifer system containing lenses or thin layers of low-permeable porous
media.
Aquitards, like aquifers, do store water and are capable of transmitting it but at a much slower
rate, so they cannot provide significant quantities of potable groundwater to wells and springs.
Determining the nature and the role of aquitards in groundwater systems is very important in
both water supply and groundwater contamination studies. When the available information
suggests there is a high probability for water and contaminants to move through an aquitard
within a practical timeframe of, say, less than 100 years, such an aquitard is called leaky (semi-
confining bed is often used as a synonym). When the potential movement of groundwater and
contaminants through an aquitard is estimated in hundreds or thousands of years, such an
aquitard is called competent, of high integrity, or nonleaky.
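The leaky-versus-competent screening above can be approximated with an advective travel-time estimate through the aquitard, t = b / (Kv·i / ne), where b is aquitard thickness, Kv vertical hydraulic conductivity, i the vertical hydraulic gradient, and ne effective porosity. A sketch with hypothetical values (note it ignores diffusion, which can dominate transport in very tight aquitards):

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600.0

def aquitard_travel_time_years(b_m, kv_m_s, gradient, n_eff):
    """Advective travel time through an aquitard, in years (screening estimate)."""
    seepage_velocity = kv_m_s * gradient / n_eff   # m/s
    return b_m / seepage_velocity / SECONDS_PER_YEAR

# 5-m clayey aquitard: Kv = 1e-9 m/s, vertical gradient 0.5, n_eff = 0.3
t = aquitard_travel_time_years(5.0, 1e-9, 0.5, 0.3)
print(f"~{t:.0f} years")  # just under a 100-year criterion -> would screen as leaky
```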
Aquiclude is another related term, generally much less used today in the United States but
still in relatively wide use elsewhere (the Latin word claudo means to confine, close, make
inaccessible). An aquiclude is equivalent to an aquitard of very low permeability, which, for all
practical purposes, acts as an impermeable barrier to groundwater flow. (Note that there still
is some groundwater stored in an aquiclude, but it moves very, very slowly.) Aquicludes and
aquitards are often referred to as confining beds. As opposed to lenses of low-permeable mate-
rials, they extend over relatively large areas and have major impact on the regional groundwa-
ter flow directions.
aquifers. Depending on the predominance of certain grain fraction, such aquifers may be
called sand aquifers or sand-and-gravel aquifers, for example. It is also common to call a par-
ticular intergranular aquifer by the depositional process that created it. One such classification
by the USGS groups unconsolidated sand-and-gravel aquifers into five broad categories: (1)
stream-valley, or alluvial aquifers located beneath channels, floodplains, and terraces in the
valleys of major streams; (2) basin-fill aquifers, also referred to as valley-fill aquifers because
they commonly occupy topographic valleys; (3) blanket sand-and-gravel aquifers; (4) aquifers
in semiconsolidated sediments; and (5) glacial-deposit aquifers. In many cases, more than just
one process is responsible for creating unconsolidated deposits (e.g., stratified glacial-drift
aquifer systems in stream valleys), and every attempt should be made to at least understand
the most important depositional mechanisms. This is because important characteristics of the
intergranular porous media, such as anisotropy and heterogeneity, are a direct result of deposi-
tional processes. For example, an aquifer developed in thick aeolian sands (former sand dunes)
should be very prolific, given enough historic or current natural recharge, because of the high
storage capacity and effective porosity of uniform (homogeneous) clean sands. On the other
hand, alluvial deposits around a stream in a drainage area consisting of many different rock
types may create very heterogeneous local flood plain aquifers.
FIGURE 2.34
Generalized hydrogeologic section showing idealized flow through Potomac River alluvial deposits near
Washington, DC. (Modified from Ator, S. W. et al., A Surficial Hydrogeologic Framework for the Mid-Atlantic
Coastal Plain, U.S. Geological Survey Professional Paper 1680, 44 pp., 2005.)
FIGURE 2.35
Alluvial aquifers often consist of layers and lenses of gravel, sand, silt, and clay mixed in various proportions
and arranged spatially in many different ways depending on mechanisms of sediment deposition.
aquifers show some degree of heterogeneity and stratification as illustrated in Figure 2.35.
This is particularly important at contaminated sites where dissolved contaminants may
move faster through layers of more permeable porous media, creating convoluted pref-
erential pathways intersecting a well at discrete intervals (Figure 2.36). Detecting such
pathways, although difficult, is often the key for successful groundwater remediation,
whereas it may not be of much importance when quantifying groundwater flow rates for water supply.
FIGURE 2.36
Aquifer consisting of predominantly gravel and sand provides water to a well through the entire screen length. At the same time, dissolved contaminants may enter the well through just a few discrete intervals following more permeable (preferential) pathways.
As a result, regulators often require vertical profiling of groundwater
at hazardous waste sites (for example, collecting a groundwater sample every 5 or 10 ft
of depth) to ensure that the screened interval of a monitoring well does not miss a lens
of preferential contaminant transport. Notoriously difficult to characterize are aquifers
developed in deposits left by braided streams. Such deposits exhibit rapid vertical and horizontal changes in sediment type and therefore should not be represented by simple continuous layers of sand, gravel, or clay. This is illustrated in Figure 2.37,
which shows a draft cross section prepared as part of the subsurface characterization at a
site in the Los Angeles basin.
The areal extent and thickness of an alluvial aquifer depend on the size of the parent
stream and the aquifer’s location in the drainage area. Aquifers in flood plains of smaller
streams and in higher upstream areas are of limited extent, rarely exceeding 10 m in thick-
ness (Figure 2.38). On the other hand, alluvial aquifers developed in flood plains of major
rivers (Figure 2.39) are among the most prolific and widely used for water supply through-
out the world. In addition to thick, extensive deposits of sand and gravel, they are typi-
cally in direct hydraulic connection with the river, which provides for abundant aquifer
FIGURE 2.37
Draft cross section based on data from geotechnical borings completed at a site in the Los Angeles basin.
Relatively more permeable sediments, including poorly graded sand, silty sand, and gravel, are colored in red.
(Courtesy of Tony Marino, AMEC.)
FIGURE 2.38
Floodplain of a small perennial river near Culpeper, VA. Fine alluvial sands with darker bands (indicating
the likely presence of organic material and/or different chemical composition) are exposed in the river cut.
Determining the presence and contents of natural soil organic carbon is important for quantifying sorption of
most organic contaminants.
FIGURE 2.39
Merging of two large floodplains at the confluence of the Sava and Danube rivers at Belgrade, Serbia, provides
favorable conditions for groundwater supply. A recharge basin serving one portion of the city’s well field is in
the lower center.
FIGURE 2.40
Schematics of one of the numerous Ranney (collector) wells in Makiš, the Sava River floodplain, serving the city of
Belgrade, Serbia. (From Milojević, N., Hidrogeologija (Hydrogeology), Univerzitet u Beogradu, Zavod za izdavanje
udzbenika Socijalisticke Republike Srbije, Beograd, 1967.)
recharge. Large well fields for public and industrial water supply are often designed to
induce additional recharge from the river by creating increased hydraulic gradients from
the river to the aquifer as a result of pumping (Figures 2.40 and 2.41). This is a good exam-
ple of why surface water and groundwater should be managed as a singular resource.
Note that induced infiltration to alluvial aquifers from surface water is not always permissible, as dewatering of small streams may occur during low-flow conditions, severely threatening the ecological health of the stream and/or affecting downstream human consumptive uses. In some cases, alluvial water supply wells that have been operating for decades are now under increased regulatory scrutiny for inducing infiltration during low-flow conditions.
FIGURE 2.41
Three of many Ranney wells lining the shores of the recharge basin shown in Figure 2.39.
ited by streams that flow through the basins. Coarser sediment (boulders, gravel, sand) is
deposited near the basin margins, and finer sediment (silt, clay) is deposited in the central
parts of the basins. This depositional pattern occurs because finer particles have longer
settling times and, therefore, do not fall out of suspension until more quiescent flow is
achieved in areas of flatter topography further from erosional sources. Some basins con-
tain lakes or playas (dry lakes) at or near their centers; it is not uncommon for very thick
clay deposits (e.g., up to 100 ft or more in thickness) to accumulate beneath these current
or historical (e.g., glacial) lake centers. Windblown sand might be present as local beach
or dune deposits along the shores of lakes. Deposits from mountain, or alpine, glaciers
locally form permeable beds where the deposits consist of outwash transported by glacial
meltwater. Sand and gravel of fluvial origin are common in and adjacent to the channels of
through-flowing streams. Basins in arid regions might contain deposits of salt, anhydrite,
gypsum, or borate produced by evaporation of mineralized water in their central parts
(Miller 1999).
Basin and range provinces in the western United States, basins in Southern California
(see Figure 2.42), and the northern Rocky Mountain basins are examples of basin-fill aqui-
fers that are heavily utilized for drinking water supply and irrigation. Large-scale ground-
water extraction is usually from deeper portions of basins, which can potentially cause
such unwanted effects as induced upconing (vertical upward migration) of highly miner-
alized saline groundwater. This water resides at greater depths where there is no flushing
by fresh meteoric water. Another negative effect of groundwater extraction from basin-fill
aquifers in arid climates is aquifer mining because of the lack of significant present-day
natural aquifer recharge.
Where precipitation is significant on the surrounding mountains, concentrated recharge
occurs when surface water flowing from the mountains infiltrates into the permeable
coarse-fill deposits, such as colluvial/alluvial fans along mountain fronts (Figure 2.43).
In many cases, however, this recharge, called mountain-front recharge, is intermittent
because the streamflow that enters the basins is also mostly intermittent. As the streams
exit their bedrock channels and flow across the surface of the alluvial fans, infiltration
occurs through the permeable deposits on the fans and moves downward to the water
table. In arid or semiarid basins, much of the infiltrating water is lost by evaporation or as
transpiration by riparian vegetation (plants on or near stream banks).
Open basins contain through-flowing streams and commonly are hydraulically con-
nected to adjacent basins. Some recharge may occur in an open basin as streamflow infil-
tration from the through-flowing stream and as underflow (groundwater that moves in
the same direction as streamflow) from an upgradient basin. Before development, water
FIGURE 2.42
Generalized cross section of Central Basin, Los Angeles, CA, showing different aquifers in shades of blue;
brown color indicates low-permeable aquitards in between. (Modified from Santa Ana Watershed Project
Authority, Chapter IV, Groundwater Basin Reports, Los Angeles County Coastal Plain Basins, http://www.sawpa.org,
accessed October 2007. From Water Replenishment District of Southern California (WRD), Technical Bulletin—
An Introduction to the Central and West Coast Groundwater Basins, 2004. Modified from California Department of
Water Resources, Bulletin 104: Planned Utilization of the Ground Water Basins of Coastal Plain of Los Angeles County,
Appendix A, Ground Water Geology, 1962, Plate 4.)
FIGURE 2.43
Perspective block diagram of the Gallatin Local Water Quality District, Montana. (Modified from Taylor, C. J.,
and Alley, W. M., Ground-Water-Level Monitoring and the Importance of Long-Term Water-Level Data, U.S.
Geological Survey Circular 1217, Denver, CO, 2001; Kendy, E., Magnitude, Extent, and Potential Sources of
Nitrate in Ground Water in the Gallatin Local Water Quality District, Southwestern Montana, 1997–98, U.S.
Geological Survey Water-Resources Investigations Report 01-4037, 2001.)
groundwater movement in the basins as something other than simple intergranular flow
through sand and gravel.
Figure 2.46 illustrates another related difficulty caused by faults in different portions of
the basins and at varying scales. Because it is hard to associate faults with loose sand and
gravel, and even harder to consider them as barriers to groundwater flow within deposits
of loose (real) sand and gravel, any information collected in the field similar to the situa-
tion presented in Figure 2.46 may be misinterpreted because of the erroneously precon-
ceived concept.
Faults often form hydraulic boundaries for groundwater flow in both consolidated and
unconsolidated rocks. In general, however, they may have one of the following three roles:
(1) conduits for groundwater flow, (2) storage of groundwater because of increased porosity
within the fault (fault zone), and (3) barriers to groundwater flow because of decrease in
porosity within the fault. The following discussion by Meinzer (1923) illustrates this point:
“Faults differ greatly in their lateral extent, in the depth to which they reach, and in the
amount of displacement. Minute faults do not have much significance with respect to
ground water except, as they may, like other fractures, serve as containers of water. But
the large faults that can be traced over the surface for many miles, that extend down to
great depths below the surface, and that have displacements of hundreds or thousands of feet are very important in their influence on the occurrence and circulation of ground water. Not only do they affect the distribution and position of aquifers, but they may also act as subterranean dams, impounding the ground water, or as conduits that reach into the bowels of the earth and allow the escape to the surface of deep-seated waters, often in large quantities. In some places, instead of a single sharply defined fault, there is a fault zone in which there are numerous small parallel faults or masses of broken rock called fault breccia. Such fault zones may represent a large aggregate displacement and may afford good water passages.”
FIGURE 2.44
Flowing artesian well at the Bonham Ranch, southern Smoke Creek Desert, NV. (Courtesy of Terri Garside.)
FIGURE 2.45
Rock core samples collected from a fault zone in an aquifer developed in semiconsolidated Quaternary sediments near Phoenix, AZ (core diameter approximately 6 cm). Top left photo depicts conglomerate clasts vertically oriented along their long axis, likely a post-depositional deformational feature. Top right photo illustrates potential groundwater circulation pathways through sediments partially cemented with calcium carbonate. Bottom photo depicts clear evidence of brittle deformation (faulting) with the observation of slickensides within the rock core. The cores are from borings completed in sediments routinely described as sand and gravel by water supply–well drillers and in various published reports. (Courtesy of Jeff Manuszak.)
FIGURE 2.46
Hydrogeologic cross section at a basin margin near Phoenix, AZ.
Mozley et al. (1996) discuss reduction in hydraulic conductivity associated with high-angle
normal faults that cut poorly consolidated sediments in the Albuquerque Basin in New
Mexico. Such fault zones are commonly cemented by calcite, and their cemented thickness
ranges from a few centimeters to several meters as a function of the sediment grain size
on either side of the fault. Cement is typically thickest where the host sediment is coarse
grained and thinnest where it is fine grained. In addition, the fault zone is widest where
it cuts coarser-grained sediments. Extensive discussion on deformation mechanisms and
FIGURE 2.47
Groundwater basins in Southern California separated by impermeable faults developed in alluvial-fill sedi-
ments. White lines are contours of hydraulic head; white arrows are general directions of groundwater flow;
dashed lines are surface streams; bold black lines are major faults. Also shown are state highways 10, 15, and
215. (Modified from Danskin, W. R. et al., Hydrology, Description of Computer Models, and Evaluation of
Selected Water-Management Alternatives in the San Bernardino Area, California, U.S. Geological Survey Open-
File Report 2005-1278, Reston, VA, 178 pp., 2006.)
The Rialto-Colton basin, which is heavily pumped for water supply, is almost completely surrounded by impermeable fault barriers and receives negligible recharge from precipitation
and very little lateral inflow in the far northwest from the percolating Lytle Creek waters.
In contrast, the Bunker-Hill basin to the north, which is also heavily pumped for water
supply, receives most of its significant recharge from numerous losing surface streams and
runoff from the mountain front. As a result, the hydraulic heads in the Rialto-Colton basin
are tens of feet lower than in the Bunker-Hill basin.
One important concept that often confounds the nonhydrogeologist is the impact of
focused recharge on groundwater levels in portions of a basin that receive limited or no
infiltration from directly overlying soils, such as sites with paved surfaces or very thick
vadose zones (>100 ft). For example, in semiarid and arid groundwater basins in the
American southwest, areal recharge above a well is often negligible, yet over the course
of the year, significant seasonal water-level fluctuations may be seen in a well even in the
Valley alluvial aquifer, which consists of sand and gravel deposited by the Mississippi
River as it meandered over an extremely wide floodplain; and the Pecos River Basin allu-
vial aquifer, which is mostly stream-deposited sand and gravel but locally contains dune
sands (Miller 1999).
ranged from fluvial to deltaic to shallow marine, and the exact location of each environment
depends upon the relative position of landmasses, shorelines, and streams at given points
in geologic time. Consequently, the position, shape, and number of the bodies of sand and
gravel that form aquifers in these sediments vary greatly from place to place (Miller 1999).
A good example of the complexity of coastal plain aquifers is the Potomac aquifer, described
by McFarland and Bruce (2006) as a heterogeneous aquifer composed of sediments deposited
by braided streams, meandering streams, and deltas that exhibit sharp contrasts in texture
across small distances as a result of highly variable and frequently changing depositional envi-
ronments (Figure 2.48). The Potomac aquifer is hydraulically continuous on a regional scale but
locally exhibits discontinuities where flow is impeded by fine-grained interbeds. Designation
of Potomac Formation sediments as composing a single Potomac aquifer or the equivalent was made in earlier studies of part or all of the Virginia Coastal Plain by the USGS and by the Virginia Division of Mineral Resources. As more field information became available, subsequent studies in Virginia by the USGS subdivided the Potomac aquifer into upper, middle, and lower aquifers separated by intervening confining units.
FIGURE 2.48
Simplified cross section showing conceptualized flow relations among homogeneous and heterogeneous aquifers, confining units, and confining zones in the Virginia Coastal Plain. (Modified from McFarland, E. R., and Bruce, T. S., The Virginia Coastal Plain Hydrogeologic Framework, U.S. Geological Survey Professional Paper 1731, 118 pp., 25 pls., 2006.)
Figure 2.49 illustrates typical sediment types associated with river valley glacial deposits that are
still being formed in glaciated terrains worldwide (Figure 2.50). Some of these deposits are
formed at the ice-bedrock contact, and some fill cracks or crevasses in the ice. As the ice
melts, outwash deposits of sand and gravel form deltas at the ice front or in glacial lakes
and fluvial valley-train deposits downstream from the ice front.
The glacial ice and meltwater derived from the ice deposit several types of sediments,
which are collectively called glacial drift. Till, which consists of dense, unsorted, and
unstratified material that ranges in size from boulders to clay, is deposited directly by the
ice (Figure 2.51). Outwash, which is mostly stratified sand and gravel (Figure 2.52), and
glacial-lake deposits consisting mostly of clay, silt, and fine sand are deposited by meltwater.
Ice-contact deposits consisting of local bodies of sand and gravel are deposited at the
face of the ice sheet or in cracks in the ice.

FIGURE 2.49
Perpendicular (a) and longitudinal (b) cross sections of typical glacial deposits formed in bedrock valleys. (Figure labels: crevasse fillings, ice-contact deposits, fluvial valley-train deposits, delta deposits, collapsed ice-contact deposits, lake-bottom fine-grained deposits, and bedrock.) (From Trapp, H., Jr., and Horn, M. A., Delaware, Maryland, New Jersey, North Carolina, Pennsylvania, Virginia, West Virginia. Ground Water Atlas of the United States, U.S. Geological Survey, HA 730-L, 1997. Modified from Lyford, F. P., In Regional Aquifer–System Analysis Program of the US Geological Survey–Summary of Projects, 1978–1984, edited by R. J. Sun, U.S. Geological Survey Circular 1002, 162–167, 1986.)

FIGURE 2.50
Glaciated terrain in Alaska with bedrock valleys filled with glacial deposits. (Courtesy of Jeff Manuszak.)
The distribution of the numerous sand-and-gravel beds that make up the glacial-deposit
aquifers and the clay and silt confining units that are interbedded with them is extremely
complex. The multiple advances of lobes of continental ice originated from different direc-
tions, and different materials were eroded, transported, and deposited by the ice, depend-
ing upon the predominant rock types in its path. When the ice melted, coarse-grained
sand-and-gravel outwash was deposited near the ice front, and the meltwater streams
deposited successively finer material farther and farther downstream. During the next
ice advance, heterogeneous deposits of poorly permeable till might be laid down atop the
sand-and-gravel outwash. Small ice patches or terminal moraines dammed some of the
meltwater streams, causing large lakes to form. Thick deposits of clay, silt, and fine sand
accumulated in some of the lakes, and these deposits form confining units where they
overlie sand-and-gravel beds. The glacial-deposit aquifers are either localized in bedrock
valleys or occur as sheet-like deposits on outwash plains (Miller 1999).

FIGURE 2.51
Glacial till of the Harbor Hill terminal moraine (Wisconsin glacial period) exposed on Long Island Motor Parkway, half a mile north of Creedmoor, Hempstead quadrangle, Queens County, NY, October 29, 1917. (Courtesy of USGS Photographic Library 2007.)

FIGURE 2.52
Stratified glacial sand and gravel, 1 mi north of Asticou, northeast of Lower Hadley Pond, Acadia National Park, ME, September 14, 1907. (Courtesy of USGS Photographic Library 2007.)
The glacial sand-and-gravel deposits form numerous local but highly productive aquifers.
Yields of wells completed in aquifers formed by continental glaciers are as much as 3000 gallons
per minute (1000 gallons per minute equals 63 L/s) where the aquifers consist of thick sand and
gravel. Locally, yields of 5000 gallons per minute have been obtained from wells completed in
glacial-deposit aquifers that are located adjacent to rivers and can obtain recharge from sur-
face water. Aquifers that were formed by mountain glaciers yield as much as 3500 gallons per
minute in Idaho and Montana, and wells completed in mountain-glacier deposits in the Puget
Sound, WA, area yield as much as 10,000 gallons per minute (Miller 1999).
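The well yields above are quoted in gallons per minute with a single metric equivalence (1,000 gallons per minute equals 63 L/s). A short conversion helper, offered here only as an illustrative sketch based on the exact definition of the US gallon, reproduces that rule of thumb:

```python
# US gallons per minute -> liters per second
# (1 US gallon = 3.785411784 L exactly; 1 min = 60 s)
GPM_TO_LPS = 3.785411784 / 60.0

def gpm_to_lps(gpm: float) -> float:
    """Convert a well yield from US gallons per minute to liters per second."""
    return gpm * GPM_TO_LPS

# Yields cited in the text for glacial-deposit aquifers
for q in (1000, 3000, 3500, 5000, 10000):
    print(f"{q:>6} gpm = {gpm_to_lps(q):6.1f} L/s")
```

Running the loop confirms that 1,000 gpm is about 63.1 L/s and 10,000 gpm about 631 L/s.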
FIGURE 2.53
Arenaceous sandstone, showing aeolian cross bedding, Sheridan County, MT, 1915. (Courtesy of USGS Photographic Library 2007.)
FIGURE 2.54
One of the canyons cut deep into the sandstones of the Permian De Chelly Formation, Canyon De Chelly
National Monument, AZ.
capacity of such deposits is high because of the considerable thickness of major sandstone
basins.
Sandstone aquifers are highly productive in many places and provide large volumes of
water for all uses. The Cambrian–Ordovician aquifer system in the north-central United States
is composed of large-scale, predominantly sandstone aquifers that extend over parts of seven
states. The aquifer system consists of layered rocks that are deeply buried where they dip into
large structural basins. It is a classic confined, or artesian, system and contains three aquifers
(Figure 2.55). In descending order, these are the St. Peter-Prairie du Chien-Jordan aquifer (sand-
stone with some dolomite), the Ironton-Galesville aquifer (sandstone), and the Mount Simon
aquifer (sandstone). Confining units of poorly permeable sandstone and dolomite separate
the aquifers. Low-permeability shale and dolomite compose the Maquoketa confining unit
that overlies the uppermost aquifer and is considered part of the aquifer system. Wells that
penetrate the Cambrian–Ordovician aquifer system commonly are open to all three aquifers,
which are collectively called "the sandstone aquifer" in many reports.

FIGURE 2.55
Regional hydrogeologic cross section through part of the Cambrian–Ordovician aquifer system showing groundwater flow toward the Mississippi River, the main discharge area for several aquifers. (Figure labels: the Cannon River and hydraulic heads in feet, on a section extending from 1,200 ft above to 200 ft below sea level; vertical scale greatly exaggerated; scale bars 0–15 miles and 0–15 kilometers.) (Modified from Miller, J. A., Introduction and National Summary. Ground-Water Atlas of the United States, United States Geological Survey, A6, 1999, http://caap.water.usgs.gov/gwa/index.html.)
The rocks of the sandstone aquifer system are exposed in large areas of northern
Wisconsin and eastern Minnesota. Regionally, groundwater in the system flows from
these topographically high recharge areas eastward and southeastward toward the
Michigan and Illinois Basins. Subregionally, groundwater flows toward major streams,
such as the Mississippi and Wisconsin rivers, and toward major well-pumping centers,
such as those at Chicago, IL, and Green Bay and Milwaukee, WI. One of the most dra-
matic effects of groundwater use known in the United States was caused by withdrawals
from the Cambrian–Ordovician aquifer system, primarily for industrial use in Milwaukee
and Chicago. This excessive sandstone aquifer pumping caused declines in water levels
of more than 375 ft in Milwaukee and more than 800 ft in Chicago from 1864 to 1980 with
the pumping influence extending over 70 mi. Beginning in the early 1980s, withdrawals
from the aquifer system decreased as some users, including the city of Chicago, switched
to Lake Michigan as a supply source. Water levels in the aquifer system began to rise in
1985 as a result of decreased withdrawals (Miller 1999).
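The reported declines can also be expressed as long-term average rates with simple arithmetic. The sketch below is only a back-of-the-envelope illustration; actual decline rates varied greatly over the period as pumping grew:

```python
# Average water-level decline rates in the Cambrian-Ordovician aquifer
# system, 1864-1980, computed from the totals cited in the text
period_years = 1980 - 1864  # 116 years of sustained pumping

for city, decline_ft in (("Milwaukee", 375), ("Chicago", 800)):
    rate = decline_ft / period_years  # average decline, ft/yr
    print(f"{city}: more than {decline_ft} ft total, "
          f"an average of about {rate:.1f} ft/yr")
```

The averages (roughly 3.2 ft/yr in Milwaukee and 6.9 ft/yr in Chicago) understate late-period rates, since withdrawals increased through most of the twentieth century.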
The chemical quality of the water in large parts of the sandstone aquifer system is suit-
able for most uses. The water is not highly mineralized in areas where the aquifers crop
out or are buried to shallow depths, but mineralization generally increases as the water
moves downgradient toward the structural basins. The deeply buried parts of the aquifer
system contain saline water.
Other large layered sandstone aquifers in the United States that are exposed adjacent
to domes and uplifts or that extend into large structural basins or both are the Colorado
Plateau aquifers; the Denver Basin aquifer system; the Upper and Lower Cretaceous aqui-
fers in North and South Dakota, Wyoming, and Montana; the Wyoming Tertiary aquifers;
the Mississippian aquifer of Michigan; and the New York sandstone aquifers (Miller 1999).
FIGURE 2.56
Crystalline metamorphic and magmatic rocks (fractured-rock aquifer type) are predominant in the Piedmont and Blue Ridge provinces, whereas carbonate rocks and sandstone are predominant in the Valley and Ridge province where there are fully developed karst aquifers in limestone and dolomite. (Map labels: the Valley and Ridge, Blue Ridge, and Piedmont provinces extending from New York and Pennsylvania through New Jersey, Delaware, Maryland, D.C., West Virginia, Kentucky, Virginia, Tennessee, North Carolina, South Carolina, Georgia, and Alabama to Florida; scale bars 0–200 mi and 0–200 km.) (Modified from Daniel, C. C., III et al., Hydrogeology and Simulation of Ground-Water Flow in the Thick Regolith-Fractured Crystalline Rock Aquifer System of Indian Creek Basin, North Carolina, Chapter C, Ground-Water Resources of The Piedmont-Blue Ridge Provinces of North Carolina, U.S. Geological Survey Water-Supply Paper 2341, C1–C137, 1997.)
bedrock much less or not at all affected some distance from the contaminant source zone. A
similar phenomenon can occur in glaciated fractured rock environments where weathered,
rotten bedrock acts as a zone of preferential flow between overlying, low-permeability till and
underlying competent bedrock.
Because regolith has a much higher storage capacity than bedrock, the regolith can be
thought of as a groundwater reservoir or sponge that feeds the underlying bedrock dis-
continuities. Joint concentrations, fractures enhanced by dissolution, and other disconti-
nuities in bedrock and combinations of these features also can store a substantial quantity
FIGURE 2.57
Residuum on Piedmont crystalline rocks, North Carolina, showing the preserved texture of the parent rock,
such as vertical fractures and horizontal layering. Bottom left: evidence of a partially dissolved portion of the
bedrock, now part of saprolite. Bottom right: a diffusion stain around a vertical fracture left by percolating fluids.
of water. The storage capacity of the regolith/bedrock system is mainly influenced by dif-
ferences in the weathering characteristics of various rock types. Thin saprolite is devel-
oped on more resistant, quartz-rich rock types, whereas thick saprolite is developed on
less resistant rock types rich in potassium feldspar. The biotite gneiss unit is particularly
susceptible to deep weathering and typically has a thick saprolite cover. Mafic rocks (such
as amphibolite) typically are characterized by a thin saprolite cover because of the gen-
eral lack of potassium feldspar. In compositionally layered rocks, saprolite may develop
between layers of more competent rock. This weathering profile is common where less
FIGURE 2.58
Idealized weathering profile through the regolith showing relative permeability. (Figure labels: the profile grades downward from the soil zone through regolith of clay, silt, and sand with residual quartz veins, and through a transition zone with weathered boulders, to bedrock; the clay fraction and degree of weathering increase upward, and relative permeability reaches a maximum at 30–40 ft depth.) (From Nutter, L. J., and Otton, E. G., Ground-Water Occurrence in the Maryland Piedmont, Maryland Geological Survey Report of Investigations no. 10, 56 pp., 1969.)
chemically resistant rock (such as biotite gneiss) is interlayered with more chemically
resistant rock such as amphibolites (Williams et al. 2005).
The bedrock, in which fractures typically decrease in number with increasing depth,
can generally be considered a zone of low permeability. This is illustrated in Figure 2.60,
which shows gneiss exposed in a road cut in Atlanta, GA. The rock has a limited number
of fine fissures and an absence of any major fractures for tens of feet at this particular site.
For all practical purposes, portions of bedrock such as this one would be considered as
barriers to groundwater flow. In general, however, understanding the hydrogeology of
crystalline rocks is complicated because of complex structure and porosity that is almost
exclusively secondary. As a result, the hydraulic conductivity of fractured bedrock aqui-
fers is extremely variable and not easily defined for a particular geologic formation or even
a particular rock type (Daniel and Dahlen 2002). Consequently, the distinction between
aquifers and confining units, which is the usual approach for describing the hydrogeologic
framework of a CSM, is not applicable in fractured-bedrock aquifers.
When present, fractures typically occur in sets, which are often composed of two sets of
nearly vertical fractures at approximately right angles to each other and a third, nearly hori-
zontal, set (Figure 2.61). As discussed by LeGrand (1967, 2007), in gneiss and schist, the orien-
tation of some joints and fractures tends to parallel the foliation and compositional layering,
which are rarely horizontal. In massive rocks, particularly granite, nearly horizontal ten-
sion joints often occur in the upper 100 ft of bedrock. Many nonhorizontal fracture patterns
can be traced by observing their topographic expression on the ground or on topographic
maps. Almost invariably, fractures that are not horizontal are represented by depressions
in the topography or by an alignment of topographic features such as stream segments (see
Figure 2.8 in Section 2.2.1). LeGrand (1954) demonstrated that many fractures are enlarged
FIGURE 2.59
Photograph taken during the excavation phase of a coal tar remediation project shows distinctive reddish-
brown mottling in a weathered siltstone where oil seeps occur along shallow dipping (3°–4°) bedding plane
fractures. (Courtesy of Peter Thompson, AMEC; photograph by Leo E. Arbaugh, Jr., of the U.S. Army Corps of
Engineers.)
by dissolution, especially in gneiss and schist containing silicates of calcium. Many of these
enlarged fractures underlie draws or linear depressions in surface topography.
Because fractures in the bedrock decrease in size and abundance with depth, con-
tamination of these aquifers is difficult to remediate, especially if the contaminants are
heavier than water and have low solubility in water (dense, nonaqueous phase liquids or
DNAPLs). Contaminants that settle or move into deeper parts of fractured-rock aquifers
tend to become trapped as fracture widths become narrower and groundwater velocities
diminish (Daniel and Dahlen 2002).
Prediction of the natural direction of groundwater flow in fractured rock aquifers can be
related to surface topography. Groundwater moves continuously from uphill areas toward
streams where it discharges as small springs and as bank channel seepage into streams. Small
springs and seeps are also common in draws and other topographic depressions, especially
near the base of valleys (Figure 2.62). Springs and seeps at higher elevations are commonly of
the wet-weather type and may suggest poorly fractured rocks below (LeGrand 2007).
As shown earlier in Figure 2.2, the perennial-stream drainage basin is a complete flow-
system cell, similar to and yet generally separate from surrounding basins (LeGrand 1958,
2007). Described differently, the hydraulic head beneath upland areas decreases with
depth, resulting in the overall downward movement of groundwater and providing the
FIGURE 2.60
Gneiss exposed in a road cut in Atlanta, GA (note lens cap for scale).
FIGURE 2.61
Granitic rocks in the Acadia National Park, ME, observed at two different scales (note lens cap on the bottom
photo for scale).
FIGURE 2.62
Two small springs issuing from fractured crystalline aquifers in the Piedmont physiographic province, north-
central Virginia. Top: a simple capture of a low-yielding spring tapped for the water supply of a farmhouse
before the Civil War. Many similar springs are still used as sources of potable water, although drilled wells are
now the main form of water supply in the region. Bottom: Small family farms in Virginia have been relying on
springs like this for their continuing operation.
mechanism for recharge to the aquifer. For example, a well 75 ft deep is likely to have a higher
water level than a well 300 ft deep at the same site. The hydraulic head beneath lowland areas
increases with depth, indicating upward movement of groundwater. For example, a well 300
ft deep is likely to have a higher water level than a well 75 ft deep at the same site. Although
this concept has been verified in the field countless times, every so often a regulatory agency
will insist that site-specific data, including installation of expensive, deep bedrock wells, be
collected to verify the same concept yet one more time. Figure 2.63 shows one such example
of groundwater flow.

FIGURE 2.63
Cross section showing monitoring wells and groundwater flow directions at a fractured rock site in Athens, GA. The wells were installed to prove the concept of groundwater discharge into the stream from both sides and to demonstrate an absence of risk for groundwater users located thousands of feet from the site, across several small stream drainages like this one. (Figure labels: wells PZ-1, PZ-2, PZ-3, and G-8 along an unnamed branch, with water elevations of 674.12, 671.92, 667.58, 666.38, and 665.73 ft; the legend distinguishes well screen intervals and water elevations in deep bedrock wells.)

FIGURE 2.64
Concept of contaminant migration in fractured rock aquifers with a stream receiving groundwater discharge from both sides. Based on the hydraulics of groundwater flow, Mr. Smith's well is the unlikely receptor of contaminated groundwater. (Figure labels: water table, leachate, transition zone, stream, fracture, and bedrock.) (Modified from LeGrand, H. W., Sr., A Master Conceptual Model for Hydrogeological Site Characterization in the Piedmont and Mountain Region of North Carolina: A Guidance Manual, North Carolina Department of Environment and Natural Resources, Division of Water Quality, Groundwater Section, 40 pp., 2007.)
The creek is a hydraulic boundary for natural flow, and it is extremely unlikely that the
water table during pumping of Mr. Smith’s well would be depressed to the level of the
creek. Even if that were possible, his well should theoretically dry the creek before a
true hydraulic gradient from the waste site to the well would be possible. The contami-
nated plume from the waste site may reach the creek, where the discharging contami-
nated groundwater would mix with downstream creek flow. Based on this CSM, it seems
unlikely that Mr. Smith’s well water would be contaminated by the waste disposal site
(LeGrand 2007).
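The nested-well comparisons described earlier (a 75-ft well with the higher head in upland recharge areas, and a 300-ft well with the higher head in lowland discharge areas) can be quantified as a vertical hydraulic gradient. The following sketch is illustrative only; the head values and the function name are hypothetical:

```python
def vertical_gradient(head_shallow_ft, head_deep_ft,
                      depth_shallow_ft, depth_deep_ft):
    """Vertical hydraulic gradient between two nested wells.

    Positive: head decreases with depth (downward flow, recharge area).
    Negative: head increases with depth (upward flow, discharge area).
    """
    dh = head_shallow_ft - head_deep_ft
    dz = depth_deep_ft - depth_shallow_ft  # vertical separation of the wells
    return dh / dz

# Hypothetical upland site: the 75-ft well has the higher water level
upland = vertical_gradient(652.0, 647.5, 75.0, 300.0)   # positive -> downward flow
# Hypothetical lowland site: the 300-ft well has the higher water level
lowland = vertical_gradient(640.0, 642.7, 75.0, 300.0)  # negative -> upward flow
print(upland, lowland)
```

The sign of the gradient, not its magnitude, is what distinguishes a recharge area from a discharge area in this simple field test.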
Well yields in igneous and metamorphic fractured rock aquifers are generally low com-
pared to those of sedimentary rock terrains, so use of groundwater as a supply has usually
been restricted to domestic wells or small municipal and industrial supplies. However, in
the Piedmont physiographic province of the United States, there is a growing interest in
the use of bedrock groundwater for larger supplies, especially during emergencies such
as prolonged droughts. This is mainly because most favorable surface-water sites have
already been developed and because of increased concerns regarding the environmental
impacts of reservoir construction, interbasin transfer of water, and declining surface-water
quality.
As discussed by Harned (1989) and Lindsay et al. (1991), although crystalline rocks in
the Piedmont and Blue Ridge physiographic provinces are typically described as yield-
ing only small quantities of groundwater to wells, this description is based upon large
numbers of shallow wells drilled for domestic supplies requiring 2–10 gallons per min-
ute. Unfortunately, the reported yield of a well drilled for a domestic supply is rarely an
indication of the full aquifer potential at that site. Thus, if many of these domestic wells
were drilled deeper, it is probable that additional water-producing fractures within an
aquifer would be intersected. In addition, most homes and their wells are located on
ridges, hilltops, or slopes, which are the least favorable areas for obtaining large yields of
groundwater.
Results of studies in several areas of the Piedmont physiographic province show that
the aquifers can sometimes provide significant amounts of water to wells. Daniel and
Sharpless (1983) identify more than 300 wells in an eight-county area of central North
Carolina with yields at or above 50 gallons per minute. Cressler et al. (1983) report that a
significant number of wells in the Piedmont physiographic province in Georgia yielded
more than 100 gallons per minute, and some wells yielded nearly 500 gallons per minute.
They also identify 66 wells used primarily for industrial and municipal supplies, at flow
rates significantly above those of domestic consumption, that had been in use for periods
of 12 to more than 30 years without declines in yield. Similarly, Cederstrom (1972) reports
that well yields of 100–300 gallons per minute were common for bedrock wells in the
Piedmont and Blue Ridge physiographic provinces from Maine to Virginia. Williams et al.
FIGURE 2.65
One of more than a dozen high-yielding wells in the Lawrenceville, GA, area completed in the crystalline rock aquifer.
FIGURE 2.66
Left: Heat-pulse flowmeter for characterization of fracture-specific groundwater flows in open bedrock bore-
holes under ambient and pumping conditions (shallow pump intake ~20 ft pumping at +0.1 gallons per minute).
Right: Graph of ambient versus pumping flow rates at discrete intervals in an open bedrock borehole. (Courtesy
of Peter Thompson and Scott Culkin, AMEC.)
FIGURE 2.67
Subsurface lithologic characteristics, water-bearing zones, a structural tadpole plot, and borehole camera images for part of the geophysical log of well 14FF59 in Lawrenceville, GA. Images A and B show a subhorizontal fracture at 297 ft below land surface formed parallel to compositional layering; high-angle joints (black arrows) terminate into the fracture from below; the aperture is 3–4 in. (Figure labels: dip angles of 0–90 degrees plotted against depths of 250–350 ft; lithologic unit codes bg, bg-bs, bg-w/a, and a-w/bg within the biotite gneiss unit; a downview at 266 ft below land surface looking at a subhorizontal fracture formed along foliation-compositional layering.) (Modified from Williams, L. J. et al., Influence of Geologic Setting on Ground-Water Availability in the Lawrenceville Area, Gwinnett County, Georgia, U.S. Geological Survey Scientific Investigations Report 2005-5136, Reston, VA, 50 pp., 2005.)
FIGURE 2.68
Orientation of transmissive fractures in bedrock boreholes at the University of Connecticut landfill study area, Storrs, CT. (Figure labels: bedrock boreholes MW101R, MW103R, MW104R, MW105R, MW109R, MW121R, MW122R, MW123SR, W80, W125, and W156; site features include the approximate landfill boundary, North Hillside Road, the Celeron Square apartment complex, a water pollution control facility, a motor pool, the F-Lot, and power lines; map symbols show single and multiple orientations of transmissive fractures; magnetic north is 14.5° from map north; scale bars 0–1,200 ft and 0–300 m.) (Modified from Johnson, C. D. et al., Borehole-Geophysical Investigation of the University of Connecticut Landfill, U.S. Geological Survey Water-Resources Investigations Report 01-4033, Storrs, CT, 42 pp., 2002.)
FIGURE 2.69
Montejaque Dam in the Sierra de Grazalema, Spain, was abandoned before the Second World War because of
large losses of water from the reservoir and at the dam site through numerous sinks and other karst features.
This arch dam on the River Compobuche-Guadares, a tributary of the Guadiaro River, is 83.75 m high with an
84-m-long crest. After heavy rains, the reservoir behind the dam fills with water only to lose it soon after rains
stop. (Courtesy of Dr. Petar Milanović.)
• Attempts to treat karst as an equivalent porous medium (EPM) and develop hydro-
geologic concepts only applicable to intergranular aquifers (sand and gravel).
This leads to numeric modeling of karst aquifers with inappropriate models (see
Chapter 5).
• Regulatory policies and politics that purposely or inadvertently result in mis-
classifying a site as nonkarstic. Typical examples include use of terms such as
carbonate aquifer or fractured-rock aquifer while ignoring clear evidence of
karstification.
Unfortunately, both unnatural reasons listed above remain very common in hydrogeologic
practice despite significant advances made worldwide over the last several decades in the
characterization, quantification, and management of karst hydrogeologic systems. While
there is no excuse for a working professional hydrogeologist to treat karst as an EPM or
fractured rock, some stakeholders may be less critical when it is government regulatory
agencies engaging in this erroneous practice. It is suspected that regulators often take
this stance because the very presence of karst may mean that a task at hand is infeasible
(a related term is impracticable; see discussion in Chapter 8). In many parts of the world,
however, the general public living in karst environments understands and can appreciate
this simple fact and is given enough credit by various government agencies when there is
a common problem to solve. This, however, is not the case in some highly litigious societ-
ies, such as the United States, where there are examples of continuous, misguided, and
expensive efforts in trying to deal with groundwater contamination in karst without even
acknowledging the nature of the problem. More discussions on various difficulties associ-
ated with groundwater remediation in karst, including technical impracticability, are given in
Chapter 8.
As discussed in Section 2.3.3, it is not uncommon that at fractured bedrock sites a zone
of concentrated groundwater flow and preferential flow of contaminants exists within
the weathered transition zone between the overlying clayey saprolite and the competent
unweathered rock (see Figure 2.59). In such cases, the saprolite and the underlying bed-
rock are much less or not at all affected downgradient of the contaminant source zone.
Unfortunately, although this concept is not applicable to karst, it has been applied indis-
criminately at some contaminated sites in a misguided effort to localize the problem and
justify various attempts at active groundwater remediation at any cost. Such approaches,
aimed at pleasing certain stakeholders and/or avoiding hard decisions, have a high
likelihood of failure in karst. This is simply because the karstification process creates
preferential flow paths deep into the underlying bedrock (such as limestone and dolomite). These
preferential flow paths can develop and interconnect at various depths and can be vertical,
horizontal, and anything in between. They also share one common element that may be
the reason for all the controversy: Their formation is often unpredictable in orientation
(Figure 2.70). This is true for karst aquifers worldwide: in Texas, Tennessee, Kentucky,
Virginia, or Florida in the United States; in France, China, Russia, or Jamaica; and in clas-
sic karst of the Dinarides (the area in the Balkans in Europe where the word karst comes
from). The reader can extend the list to include all areas, large and small, that are under-
lain by carbonate and other soluble rocks.
What some still find hard to understand, however, is that the terms karst and karst
aquifer do not necessarily imply the presence of sinkholes at the land surface and/or caves
that can be easily accessed by a passerby or a spelunker (Figures 2.71 through 2.74). In
other words, in an aquifer developed in limestone, dolomite, and other soluble rocks, there
will always be conduits and voids of varying size and (unknown) spatial extent deep in
the subsurface that even the most skillful caver cannot access (Figure 2.75) or the most
advanced technology cannot discover. Consequently, the role of a karst hydrogeologist in
developing CSMs is to cost-effectively use multiple investigative tools applicable to karst,
narrow down the inevitable uncertainties, and minimize risk associated with management
FIGURE 2.70
Image of surveyed cave passages of the Wind Cave created by the WinKarst computer program. (Courtesy of
Resurgent Software, available at http://www.resurgentsoftware.com/winkarst.html.)
FIGURE 2.71
Two sinkholes formed in surficial sand overlying the karstic Floridan aquifer in Florida. Left: Dairy Farm Sink
southeast of Bartow. Right: Sink in an orchard in south Hillsborough County. (Courtesy of Francis Sowers;
photographs by George Sowers.)
FIGURE 2.72
Convoluted pattern of sinkholes and residual limestone hills in an aerial photograph taken by George Sowers
in the Dominican Republic. Note the few houses on the right, near patches of brown bare soil. (Courtesy of
Francis Sowers.)
FIGURE 2.73
Naturally formed sinkholes of varying sizes and hydrologic function in different parts of the world all have one
thing in common: they are the main signatures of the underlying karst aquifers.
FIGURE 2.74
Hydrogeologist Radisav Golubović in front of the easily accessible Petnica Cave near Valjevo, Serbia.
FIGURE 2.75
Small karst conduits entering a larger passage in a cave developed in Edwards Limestone, TX. Note the camera
lens for scale. (From Kresic, N., Groundwater Resources: Sustainability, Management, and Restoration, McGraw Hill,
New York, 852 pp., 2009. With permission.)
decisions. Understanding that the risk can never be eliminated and conveying this to all
interested parties is arguably the most important aspect of karst hydrogeology regardless
of the specific project goal.
As discussed by Worthington and Ford (2009) and based on extensive research, includ-
ing laboratory and field experiments and geochemical modeling, the invariable result of
flow through limestone aquifers is that the positive feedback between increasing flow and
increasing dissolution results in a high-permeability, self-organized network of channels
(conduits). If the flux of water through an aquifer is minimal, such as in some confined
aquifers with limited recharge, then the formation of channel networks will be retarded.
Conversely, in confined aquifers where there is a substantial flux of water and in uncon-
fined aquifers, channel/conduit networks are likely to convey most of the flux of water
through the aquifer after periods of 10³–10⁶ years following the onset of substantial flow
through the aquifer. This range of times is short in comparison to the time that most
unconfined limestone aquifers have been functioning, so it is reasonable to infer that most
such aquifers should have well-developed channel/conduit networks.
The density, size, and geometry of individual conduits in the karst aquifer network can
vary by many orders of magnitude depending on site-specific characteristics such as min-
eral composition, stratigraphy, tectonic fabric, recharge mechanisms, and position of the
erosional base. One of the most common errors made by hydrogeologists not experienced
in karst is to rely solely on investigative borings and monitoring wells when developing a
CSM for a limestone (carbonate) aquifer site. Declaring that the aquifer (site) is nonkarstic
based on three, five, or even ten borings that did not encounter a large void is a typical
example. Similarly, a nonkarst hydrogeologist may conclude that the aquifer is not karstic
because test wells cannot produce large yields that, somehow, are expected from a karst
aquifer. Figure 2.76 illustrates these points. One can easily imagine a boring advanced
into the large cavity shown in the top photograph or the smaller cavity in the bottom
FIGURE 2.76
Cavities of varying sizes at a deep construction site in Paleozoic limestone, Hartsville, TN. The measured matrix
porosity is 2%. (Courtesy of Francis Sowers; photographs by George Sowers.)
photograph. In such a case, hardly any hydrogeologist would argue that the limestone is
not karstified (an inexperienced driller not equipped for drilling in karst may even lose
some equipment when this happens). One could also imagine another boring
completed in solid rock adjacent to either cavity that could be used, by some, as evidence of
nonkarst. Figure 2.77 shows that, unfortunately, surprises can happen even after extensive
and expensive investigations performed by professionals well versed in the nature of karst.
FIGURE 2.77
Cave in the left abutment of Sklope Dam, Croatia, discovered with an exploration gallery excavated after extensive
grouting failed to achieve backpressure. Plan-view labels: Dam, Reservoir, Cave, Borehole, Limestone, borings
#1–#4, axis of grout curtain, north arrow, 0–50 m scale. (Modified from Božičević, S., Primjena Speleologije Pri
Injektiranjima u Krsu (Application of Speleology in Grouting of Karst Terranes), First Yugoslav Symposium on
Hydrogeology and Engineering Geology, Herceg Novi, 1971.)
Borings #1 and #4 at a future dam site encountered single small voids, whereas borings #2 and
#3 were completed in solid limestone. A grout curtain subsequently designed to deal with
limited karstification failed to achieve results even after injecting disproportionately more
grout than designed. A 20-m-long exploration gallery was then excavated into the dam
abutment, and a cave was discovered with a large hall up to 20 m high. The cave was surveyed
in detail, resulting in a complete redesign of the grout curtain. As can be seen, parts of the cave
almost completely surround boring #3. Many similar examples illustrating the elusive nature
of karst aquifers, including a detailed discussion of various risks associated with engineering
projects in karst, are provided in an excellent book by Milanović (2004).
The situations shown in Figures 2.76 and 2.77 can happen at any depth within a karst
aquifer including hundreds or even thousands of feet below the ground surface and below
the water table, provided, of course, that the carbonate sediments are that thick. The depth
to which karstification has progressed, the so-called “base of karstification,” is not uni-
form and varies from site to site depending on usual factors—rock (mineral) composi-
tion and stratigraphy, tectonic fabric, and recharge/discharge mechanisms. The degrees
of karstification and fracture density generally decrease with depth but can locally reach
much greater depths in fault zones as shown in Figure 2.78 (Milanović 2004). The erosional
base for karst groundwater discharge is not necessarily at a static elevation and may be
progressively lowered as a result of surface stream incision, sea level regression, or other
factors. The depth of karstification and the depth to the water table would consequently
increase over time as well. For example, in some areas of the Dinaric carbonate platform
in Europe, the depth to the water table exceeds 1000 meters (Figure 2.79), and cavities
and voids of varying size have been discovered with investigative drilling at depths of
several thousand meters. Numerous large ascending and submarine springs along the
FIGURE 2.78
Schematic representation of the depth of karstification: (1) relationship between karstification and depth;
(2) zone of base flow; (3) zone of water table fluctuation; (4) distribution of active karst porosity; (5) base of
karstification; (6) curve of electrical sounding; (7) nonkarstified limestone; (8) level of water table. (Modified
from Milanović, P. T., Water Resources Engineering in Karst, CRC Press, Boca Raton, FL, 312 pp., 2004.)
Mediterranean coast such as the one shown in Figure 2.80 testify to deep karstification
below the current sea level. Similarly, along the coast of Florida in the United States and
Yucatan in Mexico there are numerous submarine springs and submerged cave passages
testifying that the sea level in the past was also lower (Figure 2.81).
An additional complicating factor in analyzing karst aquifer characteristics is the pos-
sible presence of paleokarst. This term usually describes karst features, now buried below
noncarbonate rocks, which were developed during periods when carbonates were exposed
at the surface. The same general area with carbonate rocks may have also been subject to
multiple periods of karstification, depending on the depositional and tectonic history and
fluctuations of sea level. In all these cases, it is possible to find very transmissive zones,
together with karst conduits, at varying aquifer depths and/or below overlying noncar-
bonates. It is therefore not advisable to make generalizations, based on some karst litera-
ture examples, as to the expected depth of karstification at any particular site.
The ill-defined term epikarst has recently gained popularity, bringing more confusion
into the already complicated task of developing hydrogeological CSMs in karst. For many
karst hydrogeologists involved in solving practical engineering problems throughout the
world, this term simply refers to the shallow, more weathered (more karstified) portion
of the subsurface. Higher karstification is a result of a higher density of fractures at and
near the land surface (which is typical for any solidified rock) and an abundant supply of
carbon dioxide from the atmosphere and the soil layer, where present (dissolution of CO2
in water creates carbonic acid, the main agent of carbonate rock dissolution and karstifica-
tion). Figure 2.82 illustrates one such highly weathered zone, equivalent to epikarst, with
an abundance of unconsolidated residuum and large infilled dissolutional features.
Detailed description of the epikarst concept is given by Ford and Williams (2007), among
others. Common to this and other descriptions is that epikarst is viewed as a perched
aquifer sitting well above the main saturated zone of the karst aquifer (the main, per-
manent water table). From this perched aquifer (epikarst), water percolates to the main,
FIGURE 2.79
Top: Deep jamas (potholes) on the Velebit Mountain, Croatia. As of August 2010, the depth of Lukina Jama, the
deepest Croatian pothole, is 1421 m including 40 m of the submerged passage at the bottom. This part of the
Dinaric karst is drained by a number of coastal and submarine springs in the Adriatic Sea, such as Jablanac
(shown schematically on the left). (Courtesy of Croatian Speleological Server, www.speleologija.hr, drawing
by Darko Bakšić.) Bottom: The entrance to the 1421-m-deep Lukina Jama in the Velebit National Park, Croatia.
(Courtesy of Vlado Božić.)
deeper aquifer via isolated vertical drains (shafts). Because of the contrast in permeability
between the epikarst (permeability is higher in the shallower, more weathered rock) and
the underlying competent rock, as well as the limited transfer capacity of the isolated drains
(which also narrow with depth), the epikarst is underdrained. The result is the formation of
a permanent or semipermanent perched saturated zone (perched aquifer), the characteristics
of which can vary based on the recharge pattern (climate of the site), the degree of
weathering, and the presence of soil/residuum.
FIGURE 2.80
Submarine spring in the Adriatic Sea near Brela, Croatia.
FIGURE 2.81
Cave divers in submerged passages of the Nohoch Nah Chich in the Yucatan Peninsula. The abundant speleo-
thems, stalactites, stalagmites, flowstone, and columns were formed prior to cave submergence. (Courtesy of
David Rhea, Global Underwater Explorers.)
FIGURE 2.82
Highly weathered and karstified carbonates (epikarst) exposed in a road cut near Knoxville, TN. (Courtesy of
Francis Sowers; photograph by George Sowers.)
FIGURE 2.83
Red Lake near Imotski, Croatia. Total depth of this deepest sinkhole in the world is 518 m, and the depth of
water in the sinkhole during average low conditions is about 250 m. The lake water level, which is also the level
of the water table in the karst aquifer, fluctuates approximately 22 m. (Courtesy of the Croatian National Tourist
Board, www.croatia.hr.)
FIGURE 2.84
Stone Forest in Shilin County, 90 km from Kunming, China. (Courtesy of Francis Sowers; photograph by George
Sowers.)
FIGURE 2.85
Completely dry railroad cut in limestone, Dominican Republic, showing no signs of perched epikarst zone.
(Photograph by George Sowers; printed with kind permission of Francis Sowers.)
As discussed in detail by Milanović (1979, 1981, 2004, 2006) and Kresic (2007, 2009, 2010,
2012), groundwater level measurements in monitoring wells (piezometers) in karst must be
designed, performed, and interpreted with great care. The most common conceptual error
in karst hydrogeology is to draw maps of the potentiometric surface (water table, piezomet-
ric surface, and hydraulic head map are all interchangeably used terms) and use them to
estimate groundwater flow directions and hydraulic gradients (Figure 2.86; see also Figure
4.32 and Figure 4.33). Preferential flow paths create local troughs (linear depressions) in
the potentiometric surface that may be detected only if a sufficient number of piezometers
(monitoring wells) screened at the right depths and on either side of the conduit are avail-
able. This, however, is not the case at many sites because of the costs and difficulties associ-
ated with drilling in karst. In addition, the preferential flow path(s) of greatest importance
to the specific project may never be found because of the many uncertainties discussed
previously. In any case, extensive drilling in karst should be performed only after some
preliminary investigations using noninvasive techniques such as geophysics, remote sens-
ing, field mapping, and dye tracing are conducted to locate possible preferential flow paths
(see Figures 2.87, 2.88, 2.89a, and 2.89b).
Karst conduits filled with water partially or fully (under pressure) may act as strong
hydraulic head–dependent sources of water (so-called equipotential boundaries), just like
FIGURE 2.86
Groundwater flow and its map presentation in a karst (left) and an intergranular (right) aquifer. (1) Preferential
flow path, such as a fracture, fault zone, or karst conduit/channel; (2) fracture/fault; (3) local flow direction;
(4) general flow direction; (5) position of hydraulic head (water table); (6) hydraulic head contour line; (7)
groundwater divide. (Modified from Kresic, N., Kvantitativna hidrogeologija karsta sa elementima zastite podzemnih
voda (Quantitative Karst Hydrogeology with Elements of Groundwater Protection; in Serbo-Croatian), Naucna
Knjiga, Belgrade, 192 pp., 1991.)
FIGURE 2.87
Alignment of sinkholes marked by arrows may indicate a preferential flow path in the underlying karst aqui-
fer. This color-shaded surface map of an area near Ocala, FL, was created from high-resolution LiDAR digital
elevation data.
FIGURE 2.88
Example of electrical resistivity imaging in karst. Two parallel resistivity inversion sections (20-ft line separa-
tion) show large low resistivity zones (blue), indicating the location, depth, and approximate dimensions of the
dissolution and large cavity structures. (Courtesy of Finn Michelsen, AMEC.)
(a)
FIGURE 2.89
(a) This 3D apparent resistivity model shows high-resistivity zones (orange-red) representing competent and
weathered dolomitic limestone. The low-resistivity zones (blue) represent the locations of cavities and solution
zones. (Courtesy of Finn Michelsen, AMEC.)
(b)
surface streams do when hydraulically connected with the underlying aquifer. Conduits
can also be connected with surface streams, providing for an even stronger equipotential
boundary as illustrated by the following example. Figures 2.90 and 2.91 show the general
setup and time-drawdown data for a karst aquifer test performed in the Sabana Seca/Vega
Baja area in Puerto Rico (Torres and Diaz 1984). There is little doubt that the drawdown
recorded at the pumping Ceiba well was almost immediately influenced by a strong equipoten-
tial boundary (after less than 10 minutes) and then remained unchanged for the remainder of
the test. Just a few hours after the test was initiated, freshwater shrimp and fish started flowing
out of the test well, clearly indicating a cavernous hydraulic connection to Rio Cibuco located
approximately 500 ft southwest of the test well (Arturo Torres, personal communication).
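The boundary behavior described above can be sketched with the classic Theis solution plus an image recharge well, the standard idealization of a fully penetrating constant-head boundary. All parameter values below are hypothetical, chosen only to reproduce the qualitative pattern, and are not taken from the actual Cibuco test analysis:

```python
import numpy as np
from scipy.special import exp1  # Theis well function W(u) = exp1(u)

def theis_drawdown(Q, T, S, r, t):
    """Theis drawdown (ft) at distance r (ft) and time t (days).
    Q in ft^3/day, T in ft^2/day, S dimensionless."""
    u = r**2 * S / (4 * T * t)
    return Q / (4 * np.pi * T) * exp1(u)

# Hypothetical parameters (illustrative only, not from the Cibuco test)
Q = 1800 * 192.5        # 1,800 gpm converted to ft^3/day (1 gpm ~ 192.5 ft^3/day)
T, S = 50000.0, 0.001   # transmissivity (ft^2/day) and storativity
r_obs = 104.0           # observation well distance, ft
r_image = 1000.0        # assumed distance to the image recharge well, ft

t = np.logspace(-3, 1, 50)  # days
s_no_boundary = theis_drawdown(Q, T, S, r_obs, t)
# Image recharge well (superposed with negative sign) represents the
# constant-head boundary, e.g., a hydraulically connected stream:
s_with_boundary = s_no_boundary - theis_drawdown(Q, T, S, r_image, t)

print(f"late-time drawdown without boundary: {s_no_boundary[-1]:.2f} ft")
print(f"late-time drawdown with boundary:    {s_with_boundary[-1]:.2f} ft")
```

With the image well included, the simulated drawdown stabilizes early in the test and then changes very little, which is the signature of a strong equipotential boundary such as that recorded at the Ceiba well.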
Karst conduits and voids need not be large enough to allow passage of, and provide
habitat for, fish or other creatures, small or large (Figure 2.92). Regardless of their size,
however, they are the main reason for the unique characteristics of karst aquifers, that is,
groundwater flow under pressure akin to pipe flow in conventional fluid mechanics. This
pressure can be propagated quickly and to great distances (Figure 2.93) following major
recharge episodes. The hydraulic head can sometimes rise tens of meters, resulting in a
complete flooding of all interconnected voids (channels, conduits) in a matter of hours
(Figure 2.94). This buildup of pressure in the hydraulic system is typical for karst aquifers
developed in compacted older carbonates of Paleozoic and Mesozoic age having a low
matrix porosity of a few percent.
FIGURE 2.90
Vertical cross section of observation wells and a pumping well of the Cibuco aquifer test near Vega Baja, Puerto
Rico. Depths are shown in feet above sea level, from 0 to −250 ft; the limestone is cavernous at the site. (From
Torres, A. G., and Diaz, J. R., Water Resources of the Sabana Seca to Vega Baja Area, Puerto Rico, U.S.
Geological Survey Water Resources Investigations Report 82-4115, 53 pp., 1984.)
A monitoring well located near and hydraulically connected to the conduit system
(preferential flow paths) and/or land surface will have a fast response time to recharge
episodes as illustrated in Figure 2.95. At the same time, a well
completed in less fractured or solid rock may show very little response or a lack of it. In
contrast, monitoring wells in young carbonates with high matrix porosity (see Figure 2.96)
will often have much less pronounced water-level fluctuations regardless of their location
in the aquifer. This is because groundwater storage and flow rates are equally significant
in both the rock matrix and the conduits.
FIGURE 2.91
Time-drawdown curves at the pumping Ceiba well (Q = 1,800 gpm) and four observation wells (OW-1A,
OW-1, OW-2, and OW-3 at 80, 104, 230, and 860 ft from the pumping well, respectively) recorded during the
Cibuco aquifer test near Vega Baja, Puerto Rico. (From Torres, A. G., and Diaz, J. R., Water Resources of the
Sabana Seca to Vega Baja Area, Puerto Rico, U.S. Geological Survey Water Resources Investigations Report
82-4115, 53 pp., 1984.)
FIGURE 2.92
Karst underground is inhabited and visited by various creatures that may have competing interests regarding
development of a CSM. (Courtesy of Marin Kresic.)
As discussed by Kresic (2007, 2012), Darcy’s law for calculation of groundwater velocity
and flow rate in porous media is not valid for karst. For example, the total energy line of
the flow can only decrease from the upgradient cross section toward the downgradient
FIGURE 2.93
Formation and movement of a groundwater wave caused by a localized recharge event. Velocity of the wave is
C0 at time t0, C1 at time t1, and C2 at time t2, where C0 > C1 > C2 because of decreasing hydraulic gradients. A is the
volume of old water stored in the aquifer and discharged under pressure at the spring because of the recharge
event. The newly infiltrated water will start discharging at the spring with delay. A hypothetical conduit sys-
tem representing the flow under pressure is depicted in yellow. (Modified from Yevjevich, V. M., Karst Water
Research Needs, Water Resources Publications, Littleton, CO, 1981.)
cross section of the same pipe (conduit) because of energy losses. On the other hand,
the hydraulic head may go up and down along the same pipe as the cross-sectional area
increases or decreases, respectively. The total energy of the flow, which includes the flow
velocity component, can be directly measured only by the Pitot device, the installation of
which is not feasible in most field conditions. Monitoring wells and piezometers, on the
other hand, only record the hydraulic head, which does not include the flow velocity com-
ponent. It is therefore conceivable that two piezometers in or near the same karst conduit
with rapid flow may not provide useful information for calculation of the real flow velocity
and flow rate between them and may even falsely indicate the opposite flow direction. In
fact, as discussed by Bögli (1980, p. 87), it is possible that water rising through a tube in an
enlargement passage can flow backward over the main flow conduit and into another tube
that begins at a narrow passage in the same main conduit.
In addition, even when one attempts to apply the Bernoulli equation for real viscous fluids,
the many uncertainties associated with the geometry of karst conduits make calculations
of groundwater velocity in the conduit network infeasible.
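The effect of conduit geometry on the hydraulic head can be illustrated with a minimal, frictionless Bernoulli calculation (all numbers hypothetical): where the conduit widens, velocity head converts to pressure head, so a piezometer downstream can record a higher hydraulic head than one upstream even though the total energy never increases.

```python
# Minimal sketch of the Bernoulli effect described above. In an idealized,
# frictionless conduit the hydraulic head can RISE downstream where the
# cross section widens, while total energy stays constant.
g = 9.81           # gravitational acceleration, m/s^2
Q = 2.0            # conduit discharge, m^3/s (hypothetical)
A1, A2 = 0.5, 2.0  # cross-sectional areas, m^2 (conduit widens downstream)

v1, v2 = Q / A1, Q / A2        # continuity: v = Q/A
h1 = 10.0                      # hydraulic head at section 1, m
E = h1 + v1**2 / (2 * g)       # total energy head (losses neglected)
h2 = E - v2**2 / (2 * g)       # hydraulic head at section 2

print(f"v1 = {v1:.2f} m/s, v2 = {v2:.2f} m/s")
print(f"hydraulic head: {h1:.2f} m -> {h2:.2f} m (rises downstream)")
```

A pair of piezometers at these two sections, interpreted in the usual Darcian way, would falsely suggest upstream-directed flow.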
There are additional complicating factors when attempting to calculate flow through
natural karst conduits using the pipe approach, even when Pitot tubes are correctly used:
1. Flow through the same conduit may be under pressure (i.e., full) or as an open
channel (i.e., with a free surface) at different locations.
2. Because pipe/conduit walls are more or less irregular (rough), the related coeffi-
cient of roughness has to be estimated and inserted into the general flow equation.
3. Conduit cross section may vary significantly over short distances and in the same
general area.
4. The flow may be both laminar and turbulent in the same conduit, depending on
the flow velocity, cross-sectional area, and wall roughness.
FIGURE 2.94
Encounter with a summer storm in Bat River Cave, Minnesota Cave Preserve. (1) Ladder extending from the
new drilled vertical entrance to the cave; visible is normal, typically low and calm water flow; (2) looking up
from the cave; (3) after sudden intense storm on August 18, 2007, a waterfall occurred where dry ceiling existed
before (4:03 pm); (4) the water begins to rise and becomes turbulent (4:17 pm); (5) cavers round the bend and
struggle against the current to retreat (4:22 pm); (6) seconds after this photo was taken, it was almost impossible
to stand in the passage without getting swept away (4:59 pm). (Courtesy of John Ackerman; available at http://
www.karstpreserve.com/index.html.)
5. More than one conduit is usually responsible for transferring groundwater in the
aquifer from one general area to another. The difficulties described earlier mul-
tiply when attempting to calculate groundwater flow rates through a network of
conduits. While appropriate fluid mechanics solutions exist for pipe networks,
the major challenge is accurately identifying and characterizing all the disparate
branches in the field.
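As a sketch of the pipe-flow framework the list above refers to, the following applies the Darcy–Weisbach equation with a Reynolds-number check for the laminar/turbulent regime and the Swamee–Jain approximation of the friction factor for rough walls. Every input is hypothetical, and real karst conduits violate the assumptions of a circular, constant cross section and known roughness, which is exactly the chapter's point:

```python
import math

def head_loss(Q, D, L, eps, nu=1.0e-6):
    """Darcy-Weisbach head loss (m) and Reynolds number for discharge Q
    (m^3/s) through an idealized circular conduit of diameter D (m),
    length L (m), and wall roughness eps (m); nu is kinematic viscosity."""
    A = math.pi * D**2 / 4
    v = Q / A
    Re = v * D / nu                        # Reynolds number
    if Re < 2000:                          # laminar regime
        f = 64 / Re
    else:                                  # turbulent (Swamee-Jain approximation)
        f = 0.25 / math.log10(eps / (3.7 * D) + 5.74 / Re**0.9) ** 2
    return f * (L / D) * v**2 / (2 * 9.81), Re

# Hypothetical conduit: 1 m diameter, 500 m long, very rough walls (5 cm)
hL, Re = head_loss(Q=0.5, D=1.0, L=500.0, eps=0.05)
print(f"Re = {Re:.0f} (turbulent), head loss = {hL:.2f} m over 500 m")
```

Even this single-conduit calculation requires a roughness value that can rarely be measured in the field; for a branching network, each branch would need its own geometry and roughness.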
FIGURE 2.95
Response of the hydraulic head in monitoring wells to different types of flow in karst aquifers. (a) Rapid
conduit flow after major recharge events and no significant storage in the matrix; (b) delayed and dampened
response of the aquifer matrix. Flow dominated by fractures may include any combination of these two
extremes, whereas monitoring wells completed in solid rock may show no response at all. (From Kresic, N.,
Hydrogeology and Groundwater Modeling, Second Edition, CRC Press, Taylor & Francis Group, Boca Raton, FL,
807 pp., 2007.)
Whatever the method of estimating groundwater velocity and flow rate in a karst aquifer
may be, one should be very careful when making a (surprisingly common) statement such
as “groundwater velocity in karst is generally very high.” Although this may be true for
fast confined flow taking place in karst conduits, a disproportionately larger volume of any
karst aquifer has relatively low groundwater velocities (laminar flow) through small frac-
tures or fissures and the rock matrix. One should therefore always have in mind that karst
aquifers, unlike any other aquifer types, have three types of porosity (matrix, fracture, and
solutional or conduit/channel), and the groundwater velocity can vary over many orders
of magnitude in the same aquifer.
One common method for determining groundwater flow directions and apparent flow
velocities in karst is dye tracing (Figure 2.97). However, most dye tracing tests in karst
are designed to analyze possible connections between known (or suspect) locations of
FIGURE 2.96
Miami oolitic limestone of the Biscayne aquifer, Florida, with hydraulic conductivity >1000 ft/day. High pri-
mary porosity of 40%–50% is further increased by karstification. (Courtesy of Francis Sowers; photograph by
George Sowers.)
surface water sinking and locations of groundwater discharge (springs). Because such
connections involve some kind of preferential flow paths (sink-spring type), the appar-
ent velocities calculated from the dye tracing data are usually biased toward the high
end. Based on results of 43 tracing tests in karst regions of West Virginia (Jones 1997), the
median groundwater velocity is 716 m/day, while 50% of the tests show values between
429 and 2655 m/day (25th and 75th percentile of the experimental distribution, respec-
tively). It is interesting that, based on 281 dye tracing tests, the most frequent velocity
(14% of all cases) in the classic Dinaric karst of Herzegovina (Europe), as reported by
Milanović (1979), is quite similar: between 864 and 1,728 m/day. Twenty-five percent of
the results show groundwater velocity greater than 2655 m/day in West Virginia and
greater than 5184 m/day in Herzegovina. The West Virginia data do not show any obvi-
ous relationship between the apparent groundwater velocity and the hydraulic gradient,
FIGURE 2.97
Dye tracing test with uranine in the karst system of the Alta Cadena range near Malaga, Spain. (Courtesy of
Dr. Bartolome Andre Navarro, CEHIUMA.)
again disproving the notion that Darcian equivalent porous medium (EPM) techniques can
be used to characterize or quantify flow.
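The statistics quoted above can be reproduced from raw trace data in a few lines. The distances and peak-arrival times below are invented for illustration, but the method (straight-line sink-to-spring distance divided by time to peak dye arrival, then percentiles across tests) is standard practice:

```python
import numpy as np

# Hypothetical dye-trace results: sink-to-spring distances and days to
# peak dye arrival at the spring (invented values, for illustration only)
distances_m = np.array([1200.0, 3500.0, 800.0, 5600.0, 2100.0])
peak_times_d = np.array([1.8, 2.9, 2.0, 2.1, 1.5])

v_apparent = distances_m / peak_times_d            # apparent velocity, m/day
p25, p50, p75 = np.percentile(v_apparent, [25, 50, 75])
print(f"median apparent velocity: {p50:.0f} m/day")
print(f"interquartile range: {p25:.0f}-{p75:.0f} m/day")
```

Because traces are run only along suspected preferential flow paths, such statistics characterize the conduit network, not the far slower flow through fractures and matrix.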
Confined karst aquifers, which do not have major concentrated discharge points in the
form of large springs, generally have significantly lower groundwater flow velocities. This
is regardless of the predominant porosity type because the whole system is under pres-
sure, and the actual displacement of old aquifer water with the newly recharged one is
rather slow. For example, Hanshaw and Back (1974) discuss the results of groundwater flow
velocity estimates using carbon-14 isotope dating for the confined portion of the Floridan
aquifer in central Florida. The average groundwater velocity based on 40 measurement
points is 6.9 m/year or 0.019 m/day.
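A minimal sketch of the carbon-14 approach follows, assuming hypothetical activity ratios and well spacing and ignoring the geochemical corrections (e.g., for carbonate dissolution itself) that real studies, including Hanshaw and Back's, must apply:

```python
import math

T_HALF = 5730.0  # carbon-14 half-life, years

def c14_age(activity_fraction):
    """Apparent age (years) from the measured/initial 14C activity ratio."""
    return (T_HALF / math.log(2)) * math.log(1.0 / activity_fraction)

# Hypothetical 14C activity fractions at two wells along a flow path
a_up, a_down = 0.60, 0.40
dx = 10000.0                           # distance between the wells, m

dt = c14_age(a_down) - c14_age(a_up)   # apparent travel time, years
v = dx / dt                            # apparent flow velocity, m/year
print(f"travel time = {dt:.0f} years, velocity = {v:.2f} m/year")
```

The resulting velocities are on the order of meters per year, thousands of times slower than the conduit velocities obtained from dye tracing.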
One very important consequence of karstification is that the conduit/channel network
continues to expand both laterally and vertically. As a result, karst aquifers can often
develop across topographic drainage divides where deposits are sufficiently thick and
extensive. The two most obvious results of this inconsistency between topographic and
groundwater divides in karst terrains are sinking streams and large springs (Cvijić 1893,
1918, 1924). In fact, karst aquifers give rise to the largest springs in the world (Kresic and
Stevanović 2010; see also Figure 2.98) and are, in general, drained by only one or several
springs because of the self-organized conduit network (Figure 2.99). Dye tracing, the only
reliable method for determining drainage areas of karst springs and karst groundwater
drainage areas in general, often results in unexpected connections between surface water
and groundwater (Figure 2.100). These connections also often vary in time depending on
FIGURE 2.98
Buna Spring near Mostar, Herzegovina, the largest in the Dinaric karst and one of the largest in the world, with
a maximum flow rate >300 m³/s. This ascending spring issues from a siphonal cave explored by cave divers
to a depth of 68 m. Note the oblique fault and the former, higher spring cave, now dry.
FIGURE 2.99
Karst spring in the Cirque de Consolation in the French Jura Mountains during high-flow conditions. The water
emerges from conduits in Jurassic karst limestone and flows into a cave-like hall located in a steep cliff. Shortly
after emerging at the spring, the water forms an impressive waterfall. (Courtesy of Dr. Nico Goldscheider.)
FIGURE 2.100
Results of dye-tracing investigations that successfully identified a hydrologic connection between a sinkhole
or a losing stream reach and a major spring in south-central Missouri. Dye traces are in pink; different color
polygons represent various land uses. (Modified from Imes, J. L. et al., Recharge Area, Base-Flow and Quick-
Flow Discharge Rates and Ages, and General Water Quality of Big Spring in Carter County, Missouri, 2000–04,
U.S. Geological Survey Scientific Investigations Report 2007-5049, 80 pp., 2007.)
the season, so dye tracing tests must be repeated with different dyes and for different
hydrologic conditions to accurately delineate surface and subsurface areas contributing
water to a spring or a stream. Assessing temporal variation is a critical step at any site for
which a CSM is being developed.
More detail on various challenging aspects of karst hydrogeology and hydrology can be
found in Milanović (1979, 1981, 2004), Ford and Williams (2007), and Kresic (2012).
The Snake River Plain regional aquifer system in southern Idaho and southeastern
Oregon is a large graben-like structure that is filled with basalt of Miocene and younger
age. The basalt consists of a large number of flows, the youngest of which was extruded
about 2000 years ago. The maximum thickness of the basalt, as estimated by using electri-
cal resistivity surveys, is about 5500 ft (Miller 1999). The permeability of basaltic rocks is
highly variable and depends largely on the following factors: the cooling rate of the basal-
tic lava flow, the number and character of interflow zones, and the thickness of the flow.
The cooling rate is most rapid when a basaltic lava flow enters water. The rapid cooling
results in pillow basalt, in which ball-shaped masses of basalt form with numerous inter-
connected open spaces at the tops and bottoms of the balls. Large springs that discharge
thousands of gallons per minute issue from pillow basalt along the walls of the Snake
River Canyon in the general area of Twin Falls, ID.
In general, highly permeable but relatively thin rubbly or fractured lavas act as excellent
preferential flow paths but have only limited storage (Figure 2.101). Overlying and under-
lying, thick, porous but poorly permeable volcanic ash may act as the storage medium for
this dual system. In glaciated terrains, permeable gravels and sands of outwash plains
may be found interbedded with multiple lava flows and volcanic ash deposits resulting in
thick, prolific, heterogeneous aquifer systems (Figure 2.102).
As discussed by Whitehead (1994), Pliocene and younger basaltic rocks are mainly flows,
but in many places in the Cascade Range, the rocks contain thick interbeds of basaltic ash
as well as sand-and-gravel beds deposited by streams and glaciers. Most of the Pliocene
and younger basaltic rocks were extruded as lava flows from numerous vents and fissures
FIGURE 2.101
Columnar basalt near Gardiner, MT. Inset shows individual columns, approximately 2 ft wide.
FIGURE 2.102
Columnar basalt topping Yellowstone River canyon in Wyoming. Glacial gravels and volcanic ash are beneath
the basalt. Bottom photo shows contact between the basalt and highly permeable glacial gravels.
concentrated along rift or major fault zones in the Snake River Plain. The lava flows spread
for as much as 50 mi from some vents and fissures. Overlapping shield volcanoes that
formed around major vents extruded a complex of basaltic lava flows in some places.
Thick soil, much of which is loess, covers the flows in many places. Where exposed at
the land surface, the top of a flow typically is undulating and nearly barren of vegetation.
The barrenness of such flows contrasts markedly with those covered by thick soil where
agricultural development is intensive. The thickness of the individual flows is variable; the
thickness of flows of Holocene and Pleistocene age averages about 25 ft, whereas that of
Pliocene-age flows averages about 40 ft.
In some shield-volcano eruptions, basaltic lava pours out quietly from long fissures
instead of central vents and floods the surrounding countryside with lava flow upon lava
flow, forming broad plateaus. Lava plateaus of this type can be seen in Iceland, southeast-
ern Washington, eastern Oregon, and southern Idaho. Along the Snake River in Idaho and
the Columbia River in Washington and Oregon, these lava flows are beautifully exposed
and measure more than a mile in total thickness. This geologic environment has, in many
ways, acted hydrogeologically as karst because of the existence of sinking streams and the
development of integrated networks of overlapping and intersecting lava flows that drain
at some of the most spectacular large springs in the United States (Figure 2.103). Often
present in lava flows are interconnected lava tubes at various depths below the water table,
which may act similarly to karst conduits, thus feeding springs of variable discharge rates
that react quickly to rainfall events.

FIGURE 2.103
Niagara Falls Springs issuing from basalts in the Snake River valley near Twin Falls, ID. (Courtesy of Clear Foods, Inc.)

For this reason, some practitioners describe such an
environment as pseudokarst.
Silicic volcanic rocks in the United States are present chiefly in southwestern Idaho
and southeastern Oregon where they consist of thick flows interspersed with unconsoli-
dated deposits of volcanic ash and sand. Silicic volcanic rocks also are the host rock for
much of the geothermal water in Idaho and Oregon. Big Springs in Fremont County, ID, is
the source of the South Fork of the Henrys Fork River. Designated as a National Natural
Landmark in 1980, it is the only first magnitude spring in the United States that emanates
from rhyolitic lava flows of the Madison Plateau.
2.3.6 Aquitards
Although aquitards play a very important role in groundwater systems, in many cases, they
are still evaluated qualitatively rather than quantitatively. Field and laboratory research
studies have only recently started including the role of aquitards in the fate and transport
of various contaminants in the subsurface. A similar effort has yet to materialize in the
evaluation of the role of aquitards in the storage of groundwater available for water sup-
ply. Aquitards can release significant volumes of water to adjacent aquifers that are being
stressed by pumping; they can also transfer water from one aquifer to another, both under
natural conditions and as a result of artificial groundwater withdrawal. Understanding
various roles aquitards can play in a hydraulically stressed groundwater system is espe-
cially important when designing artificial aquifer recharge systems and predicting long-
term exploitable reserves of groundwater (Kresic 2010).
One usually thinks of an aquitard, when continuous and thick and when overlying a
highly productive confined aquifer, as a perfect protector of the valuable groundwater
resource. Various sedimentary, magmatic, and metamorphic rocks can act as aquita-
rds, depending on their mineral composition and effective porosity. Clay-rich rocks are
always good candidates provided they are relatively extensive (Figure 2.104). However,
some professionals would argue that every aquitard leaks, and it is only a matter of time
before existing shallow groundwater contamination could enter the confined aquifer and
threaten the source. Of course, it does not help anyone (e.g., interested stakeholders) if
such professionals rely only on their best professional judgment and are much less specific
in terms of the reasonable amount of time after which the contamination would break
through the aquitard. If confronted with some field-based data, such as the thickness and
the hydraulic conductivity of the aquitard porous material, they may have the best answer
ready in hand: “But the measurements did not include flow through the fractures, and
we all know that all rocks and sediments comprising an aquitard, including clay, do have
some fractures, somewhere.”
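Where field-based values of aquitard thickness and hydraulic conductivity are available, a first-order estimate of the advective travel time through the aquitard can frame this debate quantitatively. The following is a minimal sketch of that calculation; all parameter values are illustrative assumptions, not data from the text.

```python
# Sketch: order-of-magnitude advective travel time through an aquitard,
# using Darcy's law. All parameter values are illustrative assumptions.

def breakthrough_time_years(K_m_per_s, head_diff_m, thickness_m, n_eff):
    """Advective travel time t = b * n_eff / (K * i), where i = dh / b."""
    i = head_diff_m / thickness_m          # vertical hydraulic gradient (dimensionless)
    v = K_m_per_s * i / n_eff              # average linear (seepage) velocity, m/s
    seconds = thickness_m / v              # time to cross the full thickness
    return seconds / (365.25 * 24 * 3600)  # convert seconds to years

# A hypothetical 10-m-thick clay aquitard (K ~ 1e-10 m/s), 5 m of head
# difference across it, and an effective porosity of 0.3:
t = breakthrough_time_years(1e-10, 5.0, 10.0, 0.3)
print(f"Estimated breakthrough time: {t:,.0f} years")
```

With these assumed values the estimate is on the order of 2000 years, illustrating why a competent, unfractured clay aquitard can protect an underlying aquifer over very long time frames; fracture flow, where present, can shorten such estimates dramatically.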
The truth is, as always, somewhere in between. There are perfectly protective, competent,
thick aquitards of high integrity, which would not allow migration of shallow contamina-
tion to the underlying aquifer for thousands of years or more. Good examples are regional
aquitards stretching for hundreds of miles along and off the Atlantic coast of the eastern
United States. They prevent downward vertical migration of seawater into the freshwater
aquifers utilized for more than 150 years as main sources of water for millions of people. If
viewed in terms of hundreds of thousands of years or more, this fact may not entirely con-
tradict statements made by some that any aquitard will leak given enough time. However,
such statements, often made because of some underlying agenda such as a lawsuit, should
be ignored by working hydrogeologists. Instead, when applicable to the CSM, full atten-
tion should be given to site-specific aquitard evaluations.
FIGURE 2.104
Outcrop in eastern Alaska illustrating interbedded lithified submarine fine sands (light color) and clay (darker
color). The sediments comprising the rock were deposited by successive turbidity flows, which form repeti-
tive graded sequences often capped by substantial clay layers. Individual stratigraphic horizons are laterally
continuous over tens to hundreds of meters. Continuous layers like these can often form effective aquitards for
circulating fluids. (Courtesy of Jeff Manuszak.)
For example, Hendry and Wassenaar (2010) used high-resolution, one-dimensional pro-
files of naturally occurring environmental tracers (3H, δD, δ18O, 14C-DIC, 14C-DOC, 36Cl,
4He, and major ions) and hydraulic data to study residence times, transport mechanisms,
and sources of pore water and solutes in an aquitard system in Saskatchewan, Canada.
The aquitard system consisted of 80 m of plastic, clay-rich Battleford till unconformably
FIGURE 2.105
Contents of 14C-DOC versus depth in an aquitard system in Saskatchewan, Canada. Best-fit simulated diffusive transport + radioactive decay profiles are presented as lines. The red lines represent a transport time of 11,000 years, and the blue line represents a transport time of 9000 years. pmC denotes percent modern carbon. (Figure axes: depth, 0–40 m, through oxidized and unoxidized till, versus 14C-DOC, 0–100 pmC.) (From Hendry, M. J., and Wassenaar, L. I., Water Resources Research, 41, W02021, doi:10.1029/2004WR003157, 2005. Reproduced with permission of the American Geophysical Union.)

FIGURE 2.106
Results of a 3D particle-tracking model showing the effects of a high-conductivity zone in an aquitard on particle flowpaths. (Modified from Chiang, W. H. et al., 3D Master—A Computer Program for 3D Visualization and Real-Time Animation of Environmental Data, Excel Info Tech, Inc., 146 pp., 2002.)
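The profiles fitted in Figure 2.105 combine one-dimensional diffusion with radioactive decay of 14C. A simplified, illustrative form of such a model is the semi-infinite erfc diffusion solution multiplied by first-order decay; the effective diffusion coefficient and transport time below are assumed values for illustration, not the fitted parameters of the study.

```python
import math

# Sketch of a diffusion-plus-decay profile of the kind fitted in Figure 2.105.
# Simplified model: C/C0 = erfc(z / (2*sqrt(De*t))) * exp(-lambda*t), i.e.,
# 1D diffusion into a semi-infinite aquitard combined with first-order
# radioactive decay of 14C. De and t values are illustrative assumptions.

HALF_LIFE_14C = 5730.0                    # years
LAM = math.log(2) / HALF_LIFE_14C         # 14C decay constant (1/yr)

def relative_14c(z_m, t_yr, De_m2_per_yr=0.006):  # De ~ 2e-10 m2/s ~ 0.006 m2/yr
    diffusion = math.erfc(z_m / (2.0 * math.sqrt(De_m2_per_yr * t_yr)))
    decay = math.exp(-LAM * t_yr)
    return 100.0 * diffusion * decay      # pmC, assuming a 100-pmC source at z = 0

for z in (0, 5, 10, 20, 30):
    print(f"depth {z:2d} m: {relative_14c(z, 10000):5.1f} pmC")
```

The resulting profile decreases smoothly with depth, which is the qualitative behavior shown by the fitted curves in Figure 2.105.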
be used to reasonably accurately assess the rates of groundwater movement through it.
However, caution should be used when relying on hydraulic head data collected from
monitoring wells that are not completed in the aquitard itself. A difference in the hydraulic
heads measured in the overlying and underlying aquifers does not necessarily mean that
groundwater is moving between them at any appreciable rate. The existence of the actual
flow can be confirmed indirectly only by hydraulically stressing (pumping) one of the
aquifers and observing the related hydraulic head change in the other two units
(i.e., including the aquitard itself). When interpreting the hydraulic head changes (fluctua-
tions) caused by pumping, all possible natural causes such as barometric pressure changes
or tidal influences should be accounted for.
Figure 2.107 illustrates how possibly misleading conclusions can result from measuring
the hydraulic heads at only one depth in the surficial aquifer (say, at MP-4A, where the
head is 180.07 ft) and only one depth in the confined aquifer (at MP-4F, the head is 61.77 ft).
The vertical difference between these two hydraulic heads is 118.3 ft, which may lead one
to believe that there must be a significant vertical flow downward through the aquitard
caused by such a strong vertical hydraulic gradient (incidentally, the confined aquifer is
being pumped for water supply).

FIGURE 2.107
Measurements of the hydraulic head at multiport monitoring wells screened above and below an aquitard. The confined aquifer is being pumped for water supply with an extraction well located approximately 4600 ft from MP-7. Hydraulic heads (ft) at ports A through F of wells MP-1, MP-4, and MP-7 (companion wells MW-3, MW-5, and MW-9 are also shown in the figure):

Port     MP-1       MP-4       MP-7
A        184.12     180.07     173.84
B        179.47     174.12     170.21     Unconfined Aquifer (ports A-D)
C        171.03     169.34     167.80
D        170.98     169.31     167.82
         --------  Aquitard  --------
E         76.61      72.54      67.38     Confined Aquifer (ports E-F)
F         65.39      61.77      56.22

(From Kresic, N., Hydrogeology and Groundwater Modeling, Second Edition, CRC Press, Taylor & Francis Group, Boca Raton, FL, 807 pp., 2007. With permission.)

However, the head difference between the last two ports
in the aquifer above the aquitard, at all multiport wells, is absent for all practical purposes:
it is within a few hundredths of a foot, upward or downward. The flow is strictly horizontal, indicating an
absence of advective flow (free gravity flow) of groundwater from the unconfined aquifer
into the underlying aquitard. The higher downward vertical gradients at shallow depths
in the unconfined aquifer may be the result of recharge, possibly combined with the influ-
ence of some lateral pumping (boundary) in the unconfined aquifer. When measurements
of the hydraulic head are available at various depths within an aquitard, a more defini-
tive conclusion as to the probable rates and velocities of groundwater flow through it can
be made, including the presence of possibly varying hydraulic head inside the aquitard
caused by heterogeneities.
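The port-by-port comparison described above can be made explicit with the head values from Figure 2.107 (the assignment of individual values to wells MP-1, MP-4, and MP-7 is inferred from the figure layout):

```python
# Port heads (ft) from Figure 2.107, ports A (shallow) through F (deep).
# Ports A-D are in the unconfined aquifer; E-F are below the aquitard.
heads = {
    "MP-1": {"A": 184.12, "B": 179.47, "C": 171.03, "D": 170.98, "E": 76.61, "F": 65.39},
    "MP-4": {"A": 180.07, "B": 174.12, "C": 169.34, "D": 169.31, "E": 72.54, "F": 61.77},
    "MP-7": {"A": 173.84, "B": 170.21, "C": 167.80, "D": 167.82, "E": 67.38, "F": 56.22},
}

for well, h in heads.items():
    dh_cd = h["C"] - h["D"]   # between the last two ports above the aquitard
    dh_de = h["D"] - h["E"]   # across the aquitard itself
    print(f"{well}: C-D = {dh_cd:+.2f} ft, D-E (across aquitard) = {dh_de:+.2f} ft")
```

The C-D differences are a few hundredths of a foot at every well, while the head drop across the aquitard is on the order of 100 ft, which is the contrast the text uses to argue against significant downward advective flow from the unconfined aquifer.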
A detailed discussion on the hydrogeologic role of aquitards including various methods of
their characterization is given in the works of Cherry et al. (2006) and Bradbury et al. (2006).
References
ASTM International, 2008. Standard Guide for Developing Conceptual Site Models for Contaminated
Sites. E 1689-95, West Conshohocken, PA, 8 p.
Ator, S. W., Denver, J. M., Krantz, D. E., Newell, W. L., and Martucci, S. K., 2005. A Surficial
Hydrogeologic Framework for the Mid-Atlantic Coastal Plain. U.S. Geological Survey
Professional Paper 1680, 44 pp.
Bense, V. F., Van den Berg, E. H., and Van Balen, R. T., 2003. Deformation mechanisms and hydraulic
properties of fault zones in unconsolidated sediments; the Roer Valley Rift System, The
Netherlands. Hydrogeol. J., 11, 319–332.
Božičević, S., 1971. Primjena speleologije pri injektiranjima u krsu (Application of speleology in
grouting of karst terranes; in Croatian). First Yugoslav Symposium on Hydrogeology and
Engineering Geology, Herceg Novi.
Bögli, A., 1980. Karst Hydrology and Physical Speleology. Springer-Verlag, New York.
Böhlke, J. K., 2002. Groundwater recharge and agricultural contamination. Hydrogeol. J., 10(1),
153–179.
Bradbury, K. R., Gotkowitz, M. B., Hart, D. J., Eaton, T. T., Cherry, J. A., Parker, B. L., and Borchardt,
M. A., 2006. Contaminant Transport through Aquitards: Technical Guidance for Aquitard
Assessment. American Water Works Association Research Foundation (AwwaRF), Denver,
Colorado, 144 pp.
California Department of Water Resources (CADWR), 1962. Bulletin 104: Planned Utilization of the
Ground Water Basins of Coastal Plain of Los Angeles County. Appendix A, Ground Water
Geology.
Cederstrom, D. J., 1972. Evaluation of Yields of Wells in Consolidated Rocks, Virginia to Maine. U.S.
Geological Survey Water-Supply Paper 2021, 38 pp.
Cherry, J. A., Parker, B. L., Bradbury, K. R., Eaton, T. T., Gotkowitz, M. B., Hart, D. J., and Borchardt,
M. A., 2006. Contaminant transport through aquitards: A state of the science review. American
Water Works Association Research Foundation (AwwaRF), Denver, Colorado, 126 pp.
Chiang, W. H., Chen, J., and Lin, J., 2002. 3D Master—A Computer Program for 3D Visualization and
Real-Time Animation of Environmental Data. Excel Info Tech, Inc., 146 pp.
Cressler, C. W., Thurmond, C. J., and Hester, W. G., 1983. Ground Water in the Greater Atlanta Region,
Georgia. Georgia Department of Natural Resources, U.S. Environmental Protection Agency,
and The Georgia Geologic Survey, in cooperation with U.S. Geological Survey, Information
Circular 63, 143 pp.
Cvijić, J., 1893. Das Karstphänomen. Versuch einer morphologischen Monographie. Geographische
Abhandlungen herausgegeben von Prof. Dr A. Penck, Wien, Bd. V. Heft. 3, pp. 1–114.
Cvijić, J., 1918. Hydrographie souterraine et évolution morphologique du karst. Recueil des Travaux
de l’Institut de Géographie alpine, Grenoble, t. VI, fasc. 4, pp. 1–56.
Cvijić, J., 1924. Geomorfologija (Morphologie Terrestre). Knjiga druga (Tome Second). Beograd, 506 pp.
Daniel, C. C., III, and Sharpless, N. B., 1983. Ground-Water Supply Potential and Procedures for
Well-Site Selection Upper Cape Fear River Basin. Cape Fear River Basin Study 1981–83, North
Carolina Department of Natural Resources and Community Development and U.S. Water
Resources Council in cooperation with U.S. Geological Survey, 73 pp.
Daniel, C. C., III, Smith, D. G., and Eimers, J. L., 1997. Hydrogeology and Simulation of Ground-Water
Flow in the Thick Regolith-Fractured Crystalline Rock Aquifer System of Indian Creek Basin,
North Carolina, Chapter C, Ground-Water Resources of The Piedmont-Blue Ridge Provinces of
North Carolina. U.S. Geological Survey Water-Supply Paper 2341, pp. C1–C137.
Daniel, C. C., III, and Dahlen, P. R., 2002. Preliminary Hydrogeologic Assessment and Study Plan for
a Regional Ground-Water Resource Investigation of the Blue Ridge and Piedmont Provinces of
North Carolina. U.S. Geological Survey Water-Resources Investigations Report 02-4105, 60 pp.
Danskin, W. R., McPherson, K. R., and Woolfenden, L. R., 2006. Hydrology, Description of Computer
Models, and Evaluation of Selected Water-Management Alternatives in the San Bernardino
Area, California. U.S. Geological Survey Open-File Report 2005-1278, Reston, VA, 178 pp.
Dimitrijević, M., 1978. Geolosko kartiranje (Geological Mapping, in Serbian). ICS, Beograd, 486 pp.
Ford, D., and Williams, P., 2007. Karst Hydrogeology and Geomorphology. John Wiley & Sons, Chichester,
West Sussex, England, 562 pp.
Halford, K. J., and Mayer, G. C., 2000. Problems associated with estimating ground-water discharge
and recharge from stream-discharge records. Ground Water, 38(3), 331–342.
Haneberg, W., Mozley, P., Moore, J., and Goodwin, L. (Eds.), 1999. Faults and Subsurface Fluid Flow
in the Shallow Crust. American Geophysical Union Monograph, 113, 51–68.
Hanshaw, B. B., and Back, W., 1974. Determination of Regional Hydraulic Conductivity through Use
of 14C Dating of Groundwater. Memoires, Tome X, 1. Communications. 10th Congress of the
International Association of Hydrogeologists, Montpellier, France, pp. 195–198.
Harned, D. A. 1989. The Hydrogeologic Framework and a Reconnaissance of Ground-Water Quality
in the Piedmont Province of North Carolina, with a Design for Future Study. U.S. Geological
Survey Water-Resources Investigations Report 88-4130, 55 pp.
Hendry, M. J., and Wassenaar, L. I., 2005. Origin and migration of dissolved organic carbon fractions
in a clay-rich aquitard: 14C and δ13C evidence. Water Resources Research, 41, W02021,
doi:10.1029/2004WR003157.
Hendry, M. J., and Wassenaar, L. I., 2010. Millennial-scale diffusive migration of solutes in thick
clay-rich aquitards: evidence from multiple environmental tracers. Hydrogeol. J., DOI 10.1007/
s10040-010-0647-4.
Imes, J. L., Plummer, L. N., Kleeschulte, M. J., and Schumacher, J. G., 2007. Recharge Area, Base-Flow
and Quick-Flow Discharge Rates and Ages, and General Water Quality of Big Spring in Carter
County, Missouri, 2000–04. U.S. Geological Survey Scientific Investigations Report 2007-5049,
80 pp.
Johnson, C. D., Haeni, F. P., Lane, Jr. J. W., and White, E. A., 2002. Borehole-Geophysical Investigation
of the University of Connecticut Landfill, Storrs, Connecticut. U.S. Geological Survey Water-
Resources Investigations Report 01-4033, 42 pp.
Jones, W. K., 1997. Karst Hydrology Atlas of West Virginia. Karst Waters Institute, Special Publication
4, Charles Town, WV, 111 pp.
Kendy, E., 2001. Magnitude, Extent, and Potential Sources of Nitrate in Ground Water in the Gallatin
Local Water Quality District, Southwestern Montana, 1997–98. U.S. Geological Survey Water-
Resources Investigations Report 01-4037.
Kresic, N., 1991. Kvantitativna hidrogeologija karsta sa elementima zastite podzemnih voda (Quantitative
Karst Hydrogeology with Elements of Groundwater Protection; in Serbo-Croatian). Naucna Knjiga,
Belgrade, 192 pp.
Kresic, N., 2007. Hydrogeology and Groundwater Modeling, Second Edition. CRC Press, Taylor & Francis
Group, Boca Raton, FL, 807 pp.
Kresic, N., 2009. Groundwater Resources: Sustainability, Management, and Restoration. McGraw Hill,
New York, 852 pp.
Kresic, N., 2010. Chapter 2, Types and Classification of Springs. In: Kresic, N., and Stevanovic, Z.,
eds., Groundwater Hydrology of Springs; Engineering, Theory, Management, and Sustainability.
Elsevier, Butterworth-Heinemann, Amsterdam, pp. 31–85.
Kresic, N., 2012. Water in Karst: Management, Vulnerability, and Restoration. McGraw Hill, New York,
in press.
Kresic, N., and Mikszewski, A., 2009. Chapter 3, Groundwater Recharge. In: Kresic, N., Groundwater
Resources. Sustainability, Management, and Restoration. McGraw Hill, New York, pp. 235–292.
Kresic, N., and Stevanovic, Z., eds., 2010. Groundwater Hydrology of Springs; Engineering, Theory,
Management, and Sustainability. Elsevier, Butterworth-Heinemann, Amsterdam, 573 pp.
LeGrand, H. E., 1954. Geology and Ground Water in the Statesville Area, North Carolina. North
Nutter, L. J., and Otton, E. G., 1969. Ground-Water Occurrence in the Maryland Piedmont. Maryland
Geological Survey Report of Investigations no. 10, 56 pp.
Pettyjohn, W. A., and Henning, R., 1979. Preliminary Estimate of Regional Effective Ground-Water
Recharge Rates, Related Streamflow, and Water Quality in Ohio. Ohio State University Water
Resources Center Project Completion Report No. 552, Columbus, OH, 333 pp.
Risser, D. W., Conger, R. W., Ulrich, J. E., and Asmussen, M. P., 2005. Estimates of Ground-Water
Recharge Based on Streamflow-Hydrograph Methods: Pennsylvania. U.S. Geological Survey
Open File Report 2005-1333, Reston, VA, 30 pp.
Rorabaugh, M. I., 1964. Estimating Changes in Bank Storage and Ground-Water Contribution to
Streamflow. Intl. Assoc. Sci. Hydrol. Publ. 63, 432–441.
Rutledge, A. T., 1992. Methods of Using Streamflow Records for Estimating Total and Effective
Recharge in the Appalachian Valley and Ridge, Piedmont, and Blue Ridge Physiographic
Provinces. In: Hotchkiss, W. R., and Johnson, A. I., eds., Regional Aquifer Systems of the
United States—Aquifers of Southern and Eastern States. American Water Resources Association
3.1 Introduction
The computer age has revolutionized the fields of hydrogeology and environmental engi-
neering. Advanced mapping and numerical modeling software are readily available to
modern-day hydrogeologists, complete with user-friendly preprocessing and postprocess-
ing interfaces. The practicing professional is no longer burdened by hard-copy data storage,
hand calculation, and hand-mapping, or even numerical method formulation and com-
puter coding. The implications of this transformation are far-reaching. Most importantly,
hydrogeological data, which are inherently spatial in nature, can easily be stored and visu-
alized through geographic information systems (GIS). The United States Geological Survey
(USGS) defines GIS as “computer system(s) capable of capturing, storing, analyzing, and
displaying geographically referenced information; that is, data identified according to
location” (USGS 2007). In a sense, GIS and associated software create an environment for
the hydrogeologist’s data that is a simulation of the real world in three dimensions: lon-
gitude, latitude, and elevation. All data in the field of hydrogeology possess these three
defining dimensional features, which can be accurately represented in GIS. However, the
real-world representation offered by GIS is temporally discrete in nature; the data stored
and visualized in GIS represent snapshots in time. Whether a GIS map is being used to
visualize a water table surface, the structure of a river channel, or land surface elevations
in a watershed, the hydrogeologist is depicting a discrete representation of these data. In
other words, the hydrogeologist can show the average potentiometric surface of the High
Plains Aquifer in 1930, the river channel geometry of the Mississippi River in 2005, or the
predevelopment elevation surface of the watershed 100 years ago.
The addition of computer-assisted numeric models to the GIS environment completes
the four-dimensional simulation of the natural world, adding the critical element of time.
Numeric models can simulate the transient (time-dependent) physical, chemical, and
biological processes of Earth, all of which can be integrated with the three-dimensional
structure provided by GIS. The hydrogeologist can now represent the fluctuation of a
water-table surface, the evolving geomorphology of a watershed, or the creation or erosion
of mountain ranges. To continue with the examples previously listed, the hydrogeologist
can use numeric models and GIS to demonstrate the dewatering of the High Plains Aquifer
in the 20th century, predict changes in the Mississippi River caused by dam decommis-
sioning, or determine the rate of erosion in the Appalachian Mountains. The role of GIS
and numeric models is to provide the hydrogeologist with the means to store and visual-
ize four-dimensional data collected in the real world and to simulate the behavior of fun-
damental transient processes, which dictate the creation of data collected in the past and
of data to be collected in the future. Through GIS and numerical modeling software, the
hydrogeologist can quantitatively and visually represent the conceptual site model (CSM)
and better communicate results and recommendations to others.
Figures 3.1 through 3.3 illustrate the concepts of discrete and transient data visualization.
Figures 3.1 and 3.2 are discrete visualizations of depth-to-water measurements across the
High Plains Aquifer predevelopment (Figure 3.1) and in 2007 (Figure 3.2). Data collected
from the two data sets are treated separately and form two distinct snapshots in time
of hydrogeological conditions. Figure 3.3 depicts the change in depth-to-water measure-
ments between the two years, illustrating the effects of intensive aquifer pumping for irri-
gation purposes. Figure 3.3 connects the predevelopment and 2007 data sets and illustrates
the transient processes at work during the years in between when significant groundwater
pumping occurred.

FIGURE 3.1
Contour map of water-table depth in feet below ground surface measured prior to groundwater development between the 1930s and 1980s. (Data from USGS, USGS High Plains Aquifer WLMS: Water-Level Data by Water Year (October 1 to September 30). US Department of the Interior, US Geological Survey, 2010. Available at http://ne.water.usgs.gov/ogw/hpwlms/data.html, accessed October 2, 2010.)

FIGURE 3.2
Contour map of water-table depth in feet below ground surface measured in 2007. (Data from USGS, USGS High Plains Aquifer WLMS: Water-Level Data by Water Year (October 1 to September 30). US Department of the Interior, US Geological Survey, 2010. Available at http://ne.water.usgs.gov/ogw/hpwlms/data.html, accessed October 2, 2010.)

The ability to quantify transient processes through monitoring data is
limited by the frequency of measurement. Conversely, numerical models can represent data
that are nearly continuous in time, filling in the gaps and, most importantly, allowing predic-
tion of future conditions. Discrete, disjointed measurements can thus be turned into smooth
data animations using any available video creation software such as Windows Movie Maker.
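The arithmetic behind a change map such as Figure 3.3 is simply a cell-by-cell difference between two discrete snapshots of the same surface. A minimal sketch, using made-up depth-to-water grids rather than the actual USGS data:

```python
# Sketch: computing a change (drawdown) grid from two discrete snapshots of
# depth-to-water, as in Figures 3.1-3.3. All grid values are hypothetical.

predevelopment = [   # depth to water (ft below ground surface), coarse grid
    [50, 60, 55],
    [45, 70, 65],
]
year_2007 = [
    [55, 95, 60],
    [50, 160, 80],
]

# Cell-by-cell difference; positive values mean the water table deepened.
drawdown = [
    [b - a for a, b in zip(row_pre, row_07)]
    for row_pre, row_07 in zip(predevelopment, year_2007)
]
print(drawdown)   # -> [[5, 35, 5], [5, 90, 15]]
```

In practice the same operation is performed on full raster grids (e.g., with raster calculator tools in GIS software), but the underlying computation is exactly this difference of two time-stamped surfaces.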
With the widespread availability of GIS software and numeric models integrated in a
GIS environment, the practicing professional faces increasing demands from clients and a
higher quality standard for technical deliverables. Hand-drawn figures and calculations
are no longer appropriate or feasible in most cases, and the expectation is that, with an
equivalent amount of financial resources, the hydrogeologist must deliver a superior prod-
uct. Use of three-dimensional visualizations of site data, animations of modeling output,
color-rich graphics at myriad scales, and other products are now considered to be standard
practice.

FIGURE 3.3
Contour map of the drawdown in water-table depth (feet) between predevelopment and 2007 conditions.

Unfortunately, there are few professional standards or official technical guidance
documents regarding the use of GIS and numeric models in hydrogeology. The ones that
do exist, such as those produced by ASTM International, are often too broad to offer much
help on specific projects. Hence, the professional is faced with a surplus of computer tools
without sufficient direction for how they should be used. The aim of this chapter is to
identify tools and methods for using databases, GIS, and spatial data analysis programs
(termed GIS modules) to produce high-quality deliverables for projects in hydrogeology
and environmental engineering. This section is not a tutorial in GIS software; rather, this
is a guide for developing the structure of geodatabase and GIS systems, producing GIS
graphics for hydrogeological applications, and integrating GIS with GIS modules.
While the hydrogeologist may not always have enough of the right type of data, often he or she is completely inundated
by physical, chemical, and biological sampling data associated with site investigations.
Further complicating the issue, data in different formats (e.g., units, coordinate systems)
are often received from previous investigations conducted by different consultants and
public government agencies, such as the USGS.
To avoid costly and embarrassing mistakes, the hydrogeologist must have a transparent,
well-organized data management system in place for each and every project. Fatal errors
such as working with the wrong units can be avoided with appropriate data management.
Case studies of data management failures and suggestions for their avoidance are pre-
sented in the next section.
The data are used to calibrate the model, and all required simulations are executed. During
deposition, the hydrogeologist is questioned about the model, and it is ultimately exposed
that the data used in the model calibration were in the wrong units. The hydrogeologist had
not cross-checked the database against raw laboratory reports and, consequentially, made
major errors in the model calibration, which led to erroneous future simulations.
Figures 3.4 and 3.5 demonstrate how the confusion in units led to flawed modeling and
an incorrect conclusion by the hydrogeologist. Figure 3.4 displays the modeling output of
the hydrogeologist, in which contaminant concentrations for calibration were assumed
to be in milligrams per kilogram of soil (mg/kg). Figure 3.5 depicts what the modeling
output would look like in the correct units, micrograms per kilogram of soil (µg/kg). As
shown by the figures, the use of incorrect units results in the erroneous prediction that a
private water-supply well (Johnson well) and a surface water feature (Little River) will be
impacted by the resultant groundwater plume.

FIGURE 3.4
Output from modeling software when wrong units are used to represent the contaminant source term.

FIGURE 3.5
Output from modeling software when correct units are used to represent the contaminant source term.
To avoid this mistake, databases should clearly denote both the units and the source of
the data, commonly accomplished by listing the analytical laboratory and an associated
sample delivery group number. When provided with a foreign database, the hydrogeolo-
gist should always check the accuracy of the data against the original sources. While check-
ing every single piece of data may not be practical, a more efficient quality assurance/
quality control (QA/QC) program, such as spot-checking 1%–10% of all data, can be quite
effective in catching global errors, such as using incorrect units.
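A spot-check of this kind can be partially automated. The sketch below, with hypothetical records and field names, samples 5% of a database and compares the stored results against the lab-reported values; a consistent 1000× offset is the signature of a µg/kg versus mg/kg mix-up:

```python
import random

# Sketch of a QA/QC spot check: compare a random ~5% sample of database
# results against original laboratory values. Records and field names are
# hypothetical; real checks would read the lab's sample delivery groups.

database = [{"sample": f"SB-{i}", "result": 1500.0 * i} for i in range(1, 101)]
lab_reports = {f"SB-{i}": 1.5 * i for i in range(1, 101)}   # lab reported mg/kg

random.seed(0)
subset = random.sample(database, k=max(1, len(database) // 20))   # ~5% of records

ratios = [rec["result"] / lab_reports[rec["sample"]] for rec in subset]
if all(abs(r - 1000.0) < 1e-6 for r in ratios):
    print("Consistent 1000x offset: database likely in ug/kg, lab reports in mg/kg.")
```

A global error such as wrong units shows up in every sampled record, which is why even a small random subset is effective at catching it.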
A hydrogeologist is provided with the existing geodatabase for a newly inherited site.
After making initial plots of the data with GIS software, it becomes apparent that some
of the data are not being displayed in the correct location. After locating these misplaced
data in the database, the hydrogeologist realizes that they are most likely in a different
coordinate system from the rest of the data. Unfortunately, there is no field in the database
identifying the coordinate system of these data, and it is not feasible to contact the origi-
nal architect of the database. As a result, the hydrogeologist spends many trial-and-error
hours reprojecting the data until they align properly.
To avoid this problem, a coordinate system field/label should be provided with all
location coordinates in a database or spreadsheet table. Furthermore, while modern GIS
software, such as ArcMap by Esri, is capable of displaying layers in different coordinate
systems on the same map (termed projecting on the fly), external quantitative data analysis
programs may not have that capability (Ormsby et al. 2004). If at all possible, all data in a
database should be in the same coordinate system to facilitate data export to coordinate-
dependent GIS modules, such as contouring or groundwater modeling software. Even
ArcMap cannot display data properly when different coordinate systems are used within
the same layer as demonstrated by Figures 3.6 and 3.7.
FIGURE 3.6
Map of soil boring locations at a hypothetical site shown at the extent of the site limits. Without further exami-
nation, it appears that all boring locations are shown.
FIGURE 3.7
When zooming to the full extent of the soil boring layer, it is apparent that some of the boring locations are
in the wrong coordinate system as they are shown over a million feet away from the data correctly overlying
the site.
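Mixed coordinate systems of this kind can often be caught automatically by flagging locations implausibly far from the rest of the data set before any reprojection is attempted. A minimal sketch with hypothetical coordinates:

```python
import math
import statistics

# Sketch: flagging records whose coordinates are implausibly far from the
# site, a symptom of a mixed coordinate system (Figures 3.6-3.7).
# All boring names and coordinates are hypothetical (units: feet).

borings = [
    ("SB-1", 2101500.0, 695200.0),
    ("SB-2", 2101650.0, 695350.0),
    ("SB-3", 2101420.0, 695120.0),
    ("SB-4",  998200.0, 201500.0),   # suspect: likely a different coordinate system
]

# The median is robust to a few outliers, so it locates the true site center.
cx = statistics.median(x for _, x, _ in borings)
cy = statistics.median(y for _, _, y in borings)

for name, x, y in borings:
    dist = math.hypot(x - cx, y - cy)
    if dist > 100_000:   # far beyond any plausible site extent
        print(f"{name}: {dist:,.0f} ft from site median -- check coordinate system")
```

Flagged records can then be reprojected (or sent back to the data provider) before they silently corrupt contouring or modeling input.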
FIGURE 3.8
Contour map of contaminant concentrations in shallow soils with nondetect results erroneously excluded.
FIGURE 3.9
Contour map of contaminant concentrations in shallow soils with nondetect results included. Note significant
increase in interpolated area less than 1 mg/kg. This would have significant cost implications if the action level
for remediation were 1 mg/kg. The reporting limit of 0.5 mg/kg was substituted for nondetect results to create
contour maps.
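The substitution described in the caption above is easy to automate. A minimal sketch, assuming a simple record layout in which a nondetect carries its reporting limit as the stored value; the sample IDs are hypothetical:

```python
def substitute_nondetects(results, substitute_full_rl=True):
    """Replace nondetect results with a numeric surrogate before contouring.

    `results` maps sample ID to (value, detected) pairs, where a nondetect
    carries its reporting limit as the value. The full reporting limit is
    substituted here (as in the caption above); half the reporting limit
    is another common convention.
    """
    factor = 1.0 if substitute_full_rl else 0.5
    return {
        sample: value if detected else value * factor
        for sample, (value, detected) in results.items()
    }

# (value, detected): nondetects carry their reporting limit of 0.5 mg/kg.
raw = {"SS-1": (3.2, True), "SS-2": (0.5, False), "SS-3": (1.1, True)}
for_contouring = substitute_nondetects(raw)
half_rl = substitute_nondetects(raw, substitute_full_rl=False)
```

Whichever convention is chosen, it should be stated on the resulting figure, as in the caption above, so the reader knows how nondetects were handled.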
perform simple queries can greatly increase project delivery efficiency. At a minimum, increasing technical literacy on both sides of the equation will reduce the incidence of communication breakdowns in which the hydrogeologist does not understand what instruction the database manager needs to design a query, and the database manager does not understand what data the hydrogeologist needs to perform the analysis in question.
Data management is defined as "the development and execution of architectures, policies, practices, and procedures that properly manage the full data lifecycle needs of an enterprise" (DAMA 2007). This rigorous, comprehensive mindset is necessary because hydrogeologists, as technical professionals, base all their
decisions on raw data collected in the field or on quantitative inferences made with those
field data. In other words, data quality and accessibility are of the utmost importance to
any practicing hydrogeologist. Quite simply, data management can make or break a project.
It is somewhat tedious and of minimal value to the reader to offer technical standards
for a data management system. As noted above, the concept of a data management system
extends beyond a geodatabase and covers tasks that high-level professionals may associate with entry-level personnel. It is more useful to outline the steps for creating a successful
data management system for a typical hydrogeological project in chronological order. The
necessary steps are as follows:
The steps illustrate how data management is involved at each stage of project execution
and how data management, collection, and analysis are all synergistic processes.
FIGURE 3.10
Systematic planning table. (From United States Environmental Protection Agency (U.S. EPA). Guidance on Systematic
Planning Using the Data Quality Objectives Process, Office of Environmental Information, Washington DC, 2006.)
FIGURE 3.11
Site plan for hypothetical landfill site with proposed sediment sampling locations in the wetlands downgradi-
ent of the landfill, abutting a small river. USGS Color Ortho Imagery (2008/2009) downloaded from MassGIS
(http://www.mass.gov/mgis/colororthos2008.htm).
3.2.2.2 Determine the Quantity and Type of Field and Laboratory Data to Be Collected
This is arguably the most important step of any project and can be quite difficult given the budgetary constraints commonly faced. Too often, the scope of data collection efforts is sacrificed to lower costs and increase the likelihood of winning projects or to keep existing clients happy. Regardless of political and budgetary concerns, this step
will drive the data management system and lead to the formulation and answering of key
questions. In keeping with the cadmium example from step 1, the hydrogeologist might
develop the following questions (Q) and answers (A) to determine data requirements:
be differentially corrected and uploaded from the handheld device into the project
database to enable data linkage with GIS software for visual data presentation.
Data verification and validation occur after initial compilation of laboratory and field
data and before final import of the data into a geodatabase. This ensures that all data in
the geodatabase meet quality standards. Different projects will have different levels of
data verification and validation requirements, often driven by applicable regulations. For
example, the Massachusetts Department of Environmental Protection (MassDEP) requires that Representativeness Evaluations and Data Usability Assessments be conducted in support
of hazardous-waste site cleanups in Massachusetts (MassDEP 2007).
As stated in MassDEP (2007),
“The Representativeness Evaluation determines whether the data set in total suffi-
ciently characterizes conditions at the disposal site and supports a coherent Conceptual
Site Model. The Representativeness Evaluation determines whether there is enough
information from the right locations, both spatially and temporally, to support the (site
closure).”
Key to the above definition is the interconnection of the data collection and evaluation
process with the CSM as described in Chapter 2. Data usability is evaluated through the
results of QA/QC samples collected in the field, such as blanks, spiked samples, and/or
duplicates. This is a field-based QA/QC component, in which the accuracy and precision of
the sampling methods are assessed. Data usability is also evaluated through analysis of
raw laboratory data to ensure that reported sampling results are qualified appropriately
(MassDEP 2007). This is an analytical-based QA/QC component, in which the accuracy and
precision of the laboratory analysis are assessed through examination of parameters, such
as initial instrument calibration results, surrogate spike recoveries, matrix spike/matrix
spike duplicate recoveries, and laboratory control sample recovery (MassDEP 2007).
When performing data verification and validation, it is easy to get bogged down in labora-
tory analytical minutiae and lose sight of the big picture, forgetting why the process is being
conducted in the first place. The authors recommend that focus be placed on the following
areas to ensure that project decisions are made with reliable, accurately quantified data:
• Comparison between primary and duplicate samples to ensure that the sample
technique and matrix do not exhibit unacceptable variability.
• Evaluation of blank samples to ensure that cross-contamination is not occurring
between samples.
• Evaluation of the chain of custody to ensure that all samples that were supposed
to be analyzed were indeed analyzed.
• Evaluation of sample hold times to ensure that samples did not sit idle in a refrig-
erator too long prior to analysis.
• Evaluation of the attained reporting limits for individual constituents to ensure
that a sample is not listed as nondetect at a reporting limit above the applicable
regulatory standard. For example, if the cleanup level for a contaminant in soil is
5 mg/kg, a nondetect result at a reporting limit of 10 mg/kg is inadequate.
• Evaluation of surrogate recoveries to quantify potential bias. For example, if low
surrogate recoveries were observed for an individual constituent, it is likely that
the reported result is biased low. This may be significant if the reported concentra-
tion is just below the applicable regulatory standard. Qualifiers such as J, to repre-
sent estimated concentration, may be appropriate in these circumstances.
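Several of the checks above lend themselves to simple automated screening before manual review. The sketch below is illustrative only; the hold-time limit, cleanup level, recovery threshold, and record layout are hypothetical, and real validation follows the applicable analytical method and regulations:

```python
from datetime import datetime

# Hypothetical limits for illustration; actual values are method-specific.
HOLD_TIME_DAYS = 14
CLEANUP_LEVEL = 5.0   # mg/kg

def validate_record(rec):
    """Return a list of data-usability flags for one laboratory record."""
    flags = []
    held = (rec["analyzed"] - rec["collected"]).days
    if held > HOLD_TIME_DAYS:
        flags.append("hold time exceeded")
    # A nondetect is only usable if its reporting limit sits below the standard.
    if not rec["detected"] and rec["reporting_limit"] > CLEANUP_LEVEL:
        flags.append("reporting limit above cleanup level")
    # Low surrogate recovery suggests the reported result is biased low.
    if rec["surrogate_recovery_pct"] < 70:
        flags.append("J (estimated, potential low bias)")
    return flags

rec = {
    "collected": datetime(2011, 6, 1),
    "analyzed": datetime(2011, 6, 20),
    "detected": False,
    "reporting_limit": 10.0,
    "surrogate_recovery_pct": 55,
}
problems = validate_record(rec)   # all three flags raised for this record
```

Screening of this kind only triages records; flagged results still require professional judgment before qualifiers are assigned.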
It is often important to designate a more qualified person to perform this analysis and ensure that data
quality is acceptable. Regardless, the hydrogeologist must be literate in the data verifica-
tion and validation process so that he or she understands big picture concepts and how
data quality can impact project decisions.
[Map labels from Figure 3.12: sediment sampling locations SED-1 through SED-10; legend: Landfill Boundary; Measured Cadmium Concentrations, < 5 mg/kg and 10–20 mg/kg; scale bar 0–300 ft.]
FIGURE 3.12
Results of the hypothetical sediment investigation at the landfill site with sample identifications labeled and
results depicted through color coding. USGS Color Ortho Imagery (2008/2009) downloaded from MassGIS
(http://www.mass.gov/mgis/colororthos2008.htm).
Data from the Microsoft Access database can be imported into GIS software to assess the spatial correlation of data,
if necessary, and to produce high-quality figures illustrating sampling results. Figure 3.12
presents an example data visualization for the project that clearly displays the sampling
results and the study conclusion.
The four data management steps presented in this section are summarized in the flow
chart shown in Figure 3.13. The steps effectively demonstrate the life cycle of data manage-
ment issues in hydrogeology. All aspects of project execution, from data collection in the
field to graphics generation in the office, must be mindful of data characteristics, such as
sample ID nomenclature, coordinates and coordinate system selection, units, data precision and accuracy, and other important characteristics, such as the date/time of collection (and the format of the date/time). A successful data management system should define standard protocols for all of the above data fields to ensure seamless integration of field
and laboratory data with databases capable of performing queries and exporting results to
[Flow chart boxes from Figure 3.13: import data into geodatabase; query, sort, and export data as needed; perform data analysis and answer questions.]
FIGURE 3.13
Data management flow chart created by the authors.
A geodatabase provides the underlying data structure for visualization in ArcMap by
Esri. In other words, a GIS translates raw data in a geodatabase into visual data depicted on
maps. One can think of GIS as a visual processor for a geodatabase, and without the geodata-
base, GIS would not exist. Computer graphics would be no more than electronic renderings
of hand drawings. Because of the interdependence of GIS and the geodatabase, designing
functional geodatabases is of the utmost importance to practicing hydrogeologists.
For most hydrogeological applications, a personal Microsoft Access or Esri file geoda-
tabase is sufficient. An Esri file geodatabase has the same basic definition as a standard
geodatabase and is best thought of as a proprietary Esri geodatabase (Esri 2010b). Esri
file geodatabases are more efficient at storing spatial data than a personal geodatabase
and are, therefore, becoming the predominant geodatabase in professional practice for
applications with extensive mapping. However, in order to view or manipulate a file geo-
database, the user must have an ArcGIS license. Therefore, Microsoft Access will remain
an important database engine for many projects in hydrogeology. The practicing hydroge-
ologist should be literate in both programs and be able to design and operate geodatabases
for project work. Some very large projects with multiple database users may benefit from
more advanced (and complex) systems run through Oracle or Microsoft SQL Server. The
operation of these advanced programs is commonly beyond the training of the typical
hydrogeologist; hence, external help from database specialists would be required.
A geodatabase is used to store spatial data. The three primary data sets in a geodatabase
are
• Tables
• Feature classes
• Rasters
3.3.1 Tables
Tables in a geodatabase are similar to what one might find in a nonspatial database. A
table is simply a series of rows with unique data fields specified by a column header. An
example of a table of monitoring well-location information is provided in Figure 3.14.
In a geodatabase, tables may specify X, Y, and Z coordinates to identify the spatial posi-
tion of data in the real world. Esri (2010c) defines a table as a storage vehicle for attributes
based on the following relational concepts:
FIGURE 3.14
Table of monitoring well-location information. The aquifer field corresponds to the screened interval of the
monitoring wells.
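A table like that in Figure 3.14 can be prototyped outside Access. The sketch below uses Python's built-in sqlite3 module as a stand-in for a database engine; the field names echo the figure but are otherwise hypothetical, as are the coordinates:

```python
import sqlite3

# In-memory database standing in for a personal geodatabase table.
con = sqlite3.connect(":memory:")
con.execute(
    """CREATE TABLE well_locations (
           Well_ID   TEXT PRIMARY KEY,   -- unique well identifier
           Easting   REAL NOT NULL,      -- X coordinate, feet
           Northing  REAL NOT NULL,      -- Y coordinate, feet
           Aquifer   TEXT                -- screened interval
       )"""
)
con.executemany(
    "INSERT INTO well_locations VALUES (?, ?, ?, ?)",
    [
        ("MW-1", 752_100.0, 2_951_300.0, "Shallow Overburden"),
        ("MW-2", 752_480.0, 2_951_050.0, "Intermediate Bedrock"),
    ],
)
rows = con.execute(
    "SELECT Well_ID FROM well_locations WHERE Aquifer = 'Intermediate Bedrock'"
).fetchall()
```

Storing the X and Y coordinates as dedicated numeric fields is what later allows the table to be tied to spatial positions in a GIS.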
Nearly all hydrogeological data are stored in tables; it is therefore very important for the
hydrogeologist to develop standard table formats that are compatible with the following
data source examples:
FIGURE 3.15
Feature class types.
3.3.2 Feature Classes
A feature class is a collection of geographic features with common attributes (Esri 2010d). For example, a feature class may be a point file containing the locations and construction properties of monitoring wells, a polyline file specifying an isocontour line, or a polygon file detailing the orientation and properties of a
building. Feature classes are generally used to specify vector data or data sets defined by
discrete points in space. Features in a feature class have both shape and size (Ormsby et al.
2004). A slightly different definition of the term vector as applied in a GIS environment is
“data that are comprised of lines or arcs, defined by beginning and end points, which meet
at nodes. The locations of these nodes and the topological structure are usually stored
explicitly” (de Smith et al. 2006–2011). Figure 3.15 depicts the common feature class types.
Feature classes are the most widely used elements in hydrogeological GIS applications
as they enable visualization of data collected in the field or in a laboratory. Point, polygon,
and polyline feature classes are digital elements with numerically defined locations and
orientations. The digital structure of feature classes is the characteristic trait of a GIS as
opposed to a manually drawn map or a drawing in a different computer program.
A shapefile is a single feature class that exists outside of a geodatabase (Ormsby et al.
2004). As defined by Esri, “A shapefile is a simple, nontopological format for storing the
geometric location and attribute information of geographic features” (Esri 2010l). Shapefiles
are efficient ways to export and share data from a map or geodatabase as access to the
entire geodatabase file is not required. However, many GIS practitioners continue to use
shapefiles as a primary means of storing and manipulating spatial data. This practice is
outdated as Esri file geodatabases can efficiently store multiple feature classes in addition
to annotations and data tables, decreasing storage requirements and facilitating the query-
ing and display of spatial data in the ArcGIS environment (Ormsby et al. 2004). Therefore,
the use of a single file geodatabase is highly recommended over disparate shapefiles
for the mapping and analysis of spatial data. Shapefiles should only be used to transfer
data between users and between different computer programs (e.g., between ArcGIS and
groundwater models).
3.3.3 Rasters
Unlike vector data, rasters are used to represent continuous geographic data and are cre-
ated by dividing the spatial domain into gridded squares or rectangles (Esri 2010e). Rasters
do not have shape and instead have numeric values (Ormsby et al. 2004). They are matri-
ces of identically sized square cells and have at least one value associated with each cell
position (Ormsby et al. 2004; de Smith et al. 2006–2011). The most common raster data elements
are aerial images (photographs), such as that depicted in Figure 3.16. Rasters are therefore
FIGURE 3.16
Aerial photograph raster with 30-cm resolution (~1 ft). USGS Color Ortho Imagery (2008/2009) downloaded
from MassGIS (http://www.mass.gov/mgis/colororthos2008.htm).
often used as base maps, or underlying background layers, for all types of figures. They
are of particular importance to hydrogeology as they are required to generate various
contour maps. Further discussion is provided in Chapter 4. Rasters are often termed grids
in hydrogeological applications and in some contouring programs such as Surfer (Golden
Software 2004). They may also be called surfaces.
Prior to creating contour lines, data must be interpolated to a continuous raster across the
X, Y extents of the data. The result is a raster file with a discrete value corresponding to each
grid cell. Figure 3.17 is a raster interpolated from surface soil concentrations of pesticides,
in which each value, and thereby each 10 ft × 10 ft raster cell, has its own color. This form
of raster display cannot be readily used to demonstrate results as there are far too many
individual colors to label clearly. Therefore, raster values must be grouped to create mean-
ingful figures. The pesticide concentration raster is grouped in concentration intervals in
Figure 3.18; however, the display is still discrete such that the individual cells remain visible.
Finally, the individual raster grid cell values can be smoothly interpolated for display in con-
tour form as in Figure 3.19, creating the illusion of a truly continuous surface. It is important
to remember that the raster used in Figure 3.19 is exactly the same as that used in Figure
3.17; the only difference is the grouping and smoothness of the display. One can consider the
resolution of this raster to be 10 ft. While digital elevation models (e.g., topographic surfaces
or water-table surfaces) and imagery (e.g., aerial photos) are the primary data represented
through rasters, myriad other applications exist. As rasters are, by definition, digital data
with values stored in grid cells, they can be efficiently stored in geodatabases for easy trans-
mittal and linkage with visual GIS processors, such as ArcMap by Esri, Inc.
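The grouping that turns Figure 3.17 into Figure 3.18 is simply a classification of each cell value into a concentration interval. A minimal sketch with a toy raster; the concentrations and breakpoints are hypothetical:

```python
from bisect import bisect_right

def classify_raster(raster, breakpoints):
    """Group continuous raster cell values into class indices.

    `breakpoints` are the upper bounds of each class, mirroring the
    concentration intervals used to display a raster in grouped form.
    """
    return [[bisect_right(breakpoints, v) for v in row] for row in raster]

# 3 x 3 toy raster of pesticide concentrations (mg/kg), 10 ft cells.
raster = [
    [0.2, 0.8, 1.6],
    [0.4, 2.3, 5.1],
    [0.9, 1.2, 0.1],
]
# class 0: <= 0.5; class 1: <= 1.0; class 2: <= 2.0; class 3: > 2.0
classes = classify_raster(raster, breakpoints=[0.5, 1.0, 2.0])
```

The underlying cell values are untouched; only the display classification changes, which is exactly the relationship between Figures 3.17 and 3.18.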
The above description of data sets in a geodatabase is merely an introduction and does not cover the many variations and extensions made possible in a GIS environment.
FIGURE 3.17
Raster with unique value for each 10 ft × 10 ft cell.
FIGURE 3.18
Raster with grouped/classified values with unsmoothed display.
FIGURE 3.19
Raster with grouped/classified values with smoothed/interpolated display.
Separate tables should then be used to store data associated with those points, split again by
data type. For example, separate tables may be used to house well-construction information,
water-level elevation data, groundwater analytical data from laboratories (e.g., chemical
analyses results), soil analytical data from laboratories, and soil lithology data from boring
logs. When dealing with a project involving a large volume of laboratory analysis, it may
also be beneficial to have a table containing the description information for each sample,
including but not limited to, sample date/time; sample analysis parameters, such as volatile organic compounds and metals; sample type (primary, duplicate, equipment blank);
and sample depth or vertical interval.
Once tables are separated in a logical manner, it will be easier to import data from the
field or the laboratory as consistent table structures can be used. In Microsoft Access, a
database program included in the Microsoft Office package, data can be added to a table
from an exterior source (such as a spreadsheet or text file) as long as the column headers
are exactly the same. Once the table structure of a geodatabase is established, the hydro-
geologist can communicate with the laboratory and field personnel to ensure that all data
are formatted properly for easy upload into the geodatabase.
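Because Access-style imports require the incoming column headers to match the table exactly, a pre-import header check is a cheap safeguard against failed or silently misaligned uploads. A minimal sketch with hypothetical field names:

```python
def check_import_headers(table_headers, incoming_headers):
    """Compare an incoming file's column headers against a database table.

    Imports that require exact header matches fail on any discrepancy,
    so mismatches are reported before an upload is attempted.
    """
    expected, got = list(table_headers), list(incoming_headers)
    return {
        "missing": [h for h in expected if h not in got],
        "unexpected": [h for h in got if h not in expected],
        "ok": expected == got,
    }

table = ["Well_ID", "Date", "Water_Level_Elevation"]
lab_file = ["Well_ID", "Sample Date", "Water_Level_Elevation"]
report = check_import_headers(table, lab_file)   # flags "Sample Date"
```

Sharing the expected header list with the laboratory and field personnel up front prevents most of these mismatches from occurring at all.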
FIGURE 3.20
Table in correct database format.
FIGURE 3.21
Table in incorrect format for database applications.
Figure 3.20 is in correct database format. All possible data elements have their own unique
column. Conversely, Figure 3.21 is incorrectly formatted. Gauging dates are provided as col-
umn headers. While this may seem like a good idea, it precludes effective querying by sample
date. If a table is desired with gauging dates as column headers (with results below it in the
column itself), a cross-tab query should be used to make a new data export table. The basics of
querying in Microsoft Access are described in Sections 3.4.1.1 and 3.4.1.2.
The most critical element of geodatabase design is selecting the common data field that will
link tables in the geodatabase. Typically, this would be a field containing the name of the data
points in question, such as a well or soil boring identifier (e.g., MW-100 or SB-100). This identify-
ing field must appear in each of the tables in question to allow the hydrogeologist to establish
relationships and is often a primary key for at least one of the related tables. A primary key is a
field that uniquely identifies records (rows) in the table. In other words, values in the primary
key field cannot be repeated and cannot be null values (empty).
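The primary key rules described above (values cannot be repeated and cannot be null) can be demonstrated directly. Here sqlite3 stands in for the database engine, and the table and field names are hypothetical:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# "Well_ID" as primary key: values must be unique and non-null.
con.execute("CREATE TABLE wells (Well_ID TEXT PRIMARY KEY NOT NULL, Aquifer TEXT)")
con.execute("INSERT INTO wells VALUES ('MW-100', 'Shallow')")

def insert_ok(well_id, aquifer):
    """Return True if the row satisfies the primary key constraint."""
    try:
        con.execute("INSERT INTO wells VALUES (?, ?)", (well_id, aquifer))
        return True
    except sqlite3.IntegrityError:   # duplicate or null key rejected
        return False

duplicate_rejected = not insert_ok("MW-100", "Bedrock")  # repeated value
null_rejected = not insert_ok(None, "Bedrock")           # empty value
```

Letting the database engine enforce these rules, rather than relying on careful data entry, is what keeps the identifying field reliable enough to serve as the link between tables.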
The development of table relationships is the first step in performing data queries or
commands that select data from multiple tables and export the selection as a separate
table. A well-conceived relationship diagram for a real-life project geodatabase is pre-
sented in Figure 3.22.
A relationship is established by selecting the common field between two tables and then
joining these tables based on that field. Using these relationships, the hydrogeologist can
pull data from any table through a select or cross-tab query. Relationships are not required
elements of a geodatabase; however, without relationships, a database is no different from
a collection of flat files. A flat file is a single table containing all relevant information in a
series of records (Databasedev.co.uk 2007). Flat-file geodatabases have obvious limitations,
as information from different tables cannot be joined and queried and there is often gross
redundancy in data storage.
FIGURE 3.22
Example relationship diagram from a real-life project geodatabase. Fields used for the relationships include
“loc_id,” “samp_num,” and “con_id.” (Courtesy of Ted Chapin, Woodard & Curran.)
FIGURE 3.23
Table of monitoring well–location information. Aquifer field corresponds to the screened interval of the moni-
toring wells.
FIGURE 3.24
Table of water-level elevation data. Note that this table does not have well-coordinate information or spatial
attributes and, therefore, cannot be used independently in external spatial analysis programs.
FIGURE 3.25
Screen shot of select query design, establishing a relationship between the two tables through the “Well_ID”
field. Only data measured on October 12, 2007, from wells screening the intermediate bedrock will be returned.
FIGURE 3.26
Table of queried water-level elevation data with well coordinates. This table can be used in external spatial
analysis programs to create groundwater contour maps.
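The select query behind Figures 3.25 and 3.26 (a join on the shared well identifier, then filters on gauging date and screened aquifer) can be written out in SQL. A sketch using Python's sqlite3 in place of Access, with hypothetical table names and data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript(
    """
    CREATE TABLE locations (Well_ID TEXT PRIMARY KEY, Easting REAL,
                            Northing REAL, Aquifer TEXT);
    CREATE TABLE water_levels (Well_ID TEXT, Gauge_Date TEXT, Elevation REAL);
    INSERT INTO locations VALUES
        ('MW-1', 752100.0, 2951300.0, 'Intermediate Bedrock'),
        ('MW-2', 752480.0, 2951050.0, 'Shallow Overburden');
    INSERT INTO water_levels VALUES
        ('MW-1', '2007-10-12', 101.3),
        ('MW-2', '2007-10-12', 99.8),
        ('MW-1', '2007-11-15', 100.9);
    """
)
# Join on the shared Well_ID field, then filter by date and screened aquifer.
rows = con.execute(
    """SELECT l.Well_ID, l.Easting, l.Northing, w.Elevation
       FROM locations l JOIN water_levels w ON l.Well_ID = w.Well_ID
       WHERE w.Gauge_Date = '2007-10-12'
         AND l.Aquifer = 'Intermediate Bedrock'"""
).fetchall()
```

The result carries both the measurement and the well coordinates, which is what makes the export usable in external spatial analysis programs.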
FIGURE 3.27
Screen shot of cross-tab query table selection in Microsoft Access. Note that other queries can also be included
in the cross-tab.
FIGURE 3.28
Well IDs are selected as the row headings for the cross-tab query.
FIGURE 3.29
Dates are selected as the column headings for the cross-tab query.
As hydrogeological studies are often concerned with the maximum, minimum, and
average values of myriad data fields, cross-tab queries are valuable tools and can sig-
nificantly reduce time spent manually organizing data in spreadsheet programs such as
Microsoft Excel or in text files. Groundwater modeling, in particular, is a discipline requir-
ing data averaging for steady-state calibration and the establishment of boundary condi-
tions. Therefore, mastery of the cross-tab query will serve the hydrogeologist well in his
or her project work.
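The essence of a cross-tab (wells as rows, calendar years as columns, averaged values in the cells) can be reproduced in a few lines. A minimal sketch with hypothetical records:

```python
from collections import defaultdict
from statistics import mean

# (well, gauge date, water-level elevation) records, as exported from a table.
records = [
    ("MW-1", "2007-03-01", 101.2),
    ("MW-1", "2007-09-01", 100.6),
    ("MW-1", "2008-03-01", 101.8),
    ("MW-2", "2007-03-01", 99.4),
    ("MW-2", "2008-03-01", 99.0),
]

def crosstab_mean(rows):
    """Average elevations by well (rows) and calendar year (columns)."""
    cells = defaultdict(list)
    for well, date, elev in rows:
        cells[(well, date[:4])].append(elev)   # year = first 4 characters
    return {key: round(mean(vals), 2) for key, vals in cells.items()}

table = crosstab_mean(records)   # table[("MW-1", "2007")] -> 100.9
```

This is the same aggregation the Access cross-tab performs through its dialog boxes; whichever tool is used, the annual means can feed directly into steady-state model calibration.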
FIGURE 3.30
Dates will be grouped by year for the query such that all results within one calendar year will be consolidated.
FIGURE 3.31
Average water-level elevation at each well will be displayed for each calendar year. Row sums will be included
to provide an overall annual average water-level elevation.
The results of select and cross-tab queries can be easily exported into Excel tables or text
files for import into data analysis programs (or for incorporation into report tables and
figures). For the above listed example queries, the hydrogeologist may export the results
as Excel tables for import into a computer program used for generating surfaces and con-
tour maps. Select and cross-tab queries can also be converted into tables in a geodatabase
through a make-table query. This is necessary if one wishes to display the results of a
query in ArcMap, for example.
FIGURE 3.32
Table of cross-tab query results showing the average water-level elevations for 2007 and 2008 by well. The
“Total of Water_Level_Elevations” field represents the average water-level elevation for 2007–2008. The term
“Total” instead of “Average” is used in the column heading because the field is termed a row sum in cross-tab
nomenclature.
3.4.1.3 Forms
While often not essential to project work, forms can be created in Microsoft Access to help
nonexperts enter data into or export data from a personal geodatabase. On the data-entry
side, a one-page form with easy-to-understand instructions and input fields can be used to
automatically populate data into multiple different geodatabase tables. This is particularly
useful for field applications such as borehole logging or water-quality monitoring: data can be entered into the database automatically rather than written manually into a field book and then transcribed into multiple database tables in datasheet view, a far less transparent process. On the data export side, a one-page form
with simple buttons, selection boxes, or input fields can be used to execute complex queries
that would take considerable time and effort to perform individually. An example data
export form for a real-life geodatabase is presented in Figure 3.33.
Because of their complexity, geodatabase forms are often created by GIS/database pro-
fessionals for nonexperts to use. The initial investment in easy-to-use forms is well worth
it, considering well-designed forms save considerable time and can prevent crippling data
entry and export errors. Furthermore, forms can be linked with Microsoft Access reports
such that, when prompted by the user, the form automatically populates a formatted table
for inclusion in a formal report. The use of this approach saves even more time typically
spent manually adjusting cell formats in Excel or other spreadsheet or word processing
software.
FIGURE 3.33
Example Microsoft Access form used to simply execute complicated queries through lists, buttons, and pull-
down menus. (Courtesy of Ted Chapin, Woodard & Curran.)
mechanisms such as hand-drawn maps, manually edited site plans, or verbal discus-
sions. Once a geodatabase is set up, the display of data is the icing on the cake and makes
for rapid visualizations that save time and money and lead to engaging discussion and
informed decisions. Figure 3.34 presents a screen shot illustrating the inclusion of a geo-
database in ArcMap.
The relationship between the geodatabase and ArcMap is not a one-way street. Edits
can be made within ArcMap to the geodatabase itself. This is especially powerful when
plotting locations manually in ArcMap (often done during work-plan generation for pro-
posed sampling locations). These locations can be stored in the geodatabase for future use.
Queries can also be performed within ArcMap, which will be briefly discussed subse-
quently. In summation, all projects in hydrogeology would benefit tremendously from using
the geodatabase-GIS system. It is rapidly becoming (and arguably already is) the industry
standard for data management and visualization.
FIGURE 3.34
Screen shot of ArcMap table of contents showing the inclusion of the Microsoft Access database entitled
Database.mdb. Well Locations layer is displayed directly from the geodatabase. Esri® ArcGIS ArcMap graphical
user interface. Copyright © Esri. All rights reserved.
Storing numeric data in text-formatted fields leads to "number stored as text" errors and their associated frustrations. This error will also lead to a breakdown in numeric queries, such as cross-tabs. A good way to circumvent this problem is to double-check field formats in the design view of tables.
Sometimes a hydrogeologist may need to use a dummy field (populated by "1," for
example) to create a dummy header for the field to be averaged in the query. This would
be necessary if a table only had two columns, for example, Well_ID and Water_Level_
Elevation. To perform a cross-tab query, the hydrogeologist needs a column header, and
with only two fields, this is not possible. Inserting the dummy column enables Well ID to
be on the left-hand side reported in succeeding rows and the dummy value to be the col-
umn header for the average water-level elevations calculated for each well.
Query breakdowns also occur when a data field is stored in multiple different units (e.g., mg/L and µg/L); the query has no way of knowing this and will treat the mismatched values as comparable. The easy solution to this problem is to use consistent units throughout an entire table and to (please!) label units in each table.
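Where legacy tables do arrive with mixed units, normalizing everything to a single unit before querying is straightforward. A minimal sketch assuming a simple (sample, value, unit) record layout:

```python
# Conversion factors into mg/L; labeling units in every table avoids
# needing this step at all.
TO_MG_PER_L = {"mg/L": 1.0, "ug/L": 0.001, "µg/L": 0.001}

def normalize_results(rows):
    """Convert (sample, value, unit) results to a single unit before querying."""
    return [(sample, value * TO_MG_PER_L[unit], "mg/L")
            for sample, value, unit in rows]

mixed = [("GW-1", 0.8, "mg/L"), ("GW-2", 450.0, "ug/L")]
uniform = normalize_results(mixed)   # GW-2 becomes 0.45 mg/L
```

An unrecognized unit string raises a KeyError here, which is preferable to silently averaging mismatched values.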
FIGURE 3.35
Map showing boundary of a hypothetical groundwater flow model domain. At this scale, the accuracy of the
well location does not need to be to the nearest foot, for example.
A coordinate system is a framework of points, lines, and/or surfaces with an associated set of rules used to define spatial positions in two or three dimensions. There are two primary types of coordinate systems: geographic coordinate
systems and projected coordinate systems. A geographic coordinate system uses latitude
and longitude to define a position on a three-dimensional spherical surface (Esri 2010f).
Latitude and longitude are angles, not distances, and are therefore commonly presented
in units of degrees, minutes, and seconds (Ormsby et al. 2004). There are innumerable geo-
graphic coordinate systems used in different places around the world with varying levels
of accuracy, but most assume the Earth is a spheroid, and they all have these common ele-
ments (Ormsby et al. 2004; Esri 2010f):
FIGURE 3.36
Map showing soil boring locations in a hypothetical residential area. As the soil sampling results will be used
to determine which residential properties are part of the hazardous-waste site and require remediation, boring
locations need to be as accurate as possible. Locations will have significant implications with respect to residen-
tial property values—a common occurrence in site remediation.
Common geographic coordinate systems used in modern mapping include the NAD 83
coordinate system for North America and the WGS 84 coordinate system for the world.
One important thing to remember when working with geographic coordinate systems is
that latitude and longitude measurements can be negative. Latitude values are zero at the
equator, increase to 90° moving to the north pole, and decrease to –90° moving to the south
pole (i.e., the northern hemisphere is positive, whereas the southern hemisphere is nega-
tive). Longitude values are zero at the prime meridian (passing through Greenwich, U.K.),
increase to 180° moving eastward, and decrease to –180° moving westward (180° and –180°
are the same meridian; Ormsby et al. 2004). Any location in the continental United States
will therefore have a positive latitude and a negative longitude.
The second type of coordinate system, the projected coordinate system, defines positions
on a flat, two-dimensional projection of the earth’s surface, akin to a grid with uniform lengths
and angles across its extent. As defined by the USGS, “A map projection is a systematic rep-
resentation of all or part of the surface of a round body, especially the Earth, on a plane”
(USGS 1987). In a projected coordinate system, points in the grid system are represented by
X coordinates (eastings) along latitude lines (parallels) and Y coordinates (northings) along
longitude lines (meridians; USGS 1987). Projected coordinate systems are always based on
a geographic coordinate system (what is being projected), and therefore, knowledge of the
underlying geographic coordinate system and its datum is of critical importance (Esri 2010g).
Projected coordinate systems are more commonly used in hydrogeological practice than
raw geographic coordinate systems because of the advantages of the grid system. Data can
be projected onto a grid in distance units of meters or feet, which has major practical advan-
tages when making distance/area/volume calculations and performing geostatistical analy-
ses. Longitude and latitude are not uniform units of measure because the distance covered
by longitude lines gets smaller and smaller moving towards the poles. Projections make
advanced quantitative analysis of spatial data, such as finite-difference modeling, feasible.
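The shrinking ground distance covered by a degree of longitude can be sketched with simple spherical trigonometry (a simplification assuming a spherical Earth, used here only for illustration):

```python
import math

# Sketch illustrating why longitude is not a uniform unit of distance:
# the ground length of one degree of longitude shrinks with the cosine of
# latitude. A spherical Earth of radius 6,371,000 m is assumed.

EARTH_RADIUS_M = 6_371_000.0

def meters_per_degree_longitude(lat_deg):
    return (math.pi / 180.0) * EARTH_RADIUS_M * math.cos(math.radians(lat_deg))

for lat in (0, 45, 60, 89):
    print(f"{lat:>2} deg latitude: "
          f"{meters_per_degree_longitude(lat):,.0f} m per degree of longitude")
```

At 60° latitude, a degree of longitude covers only half the ground distance it does at the equator, which is why projected (linear) units are preferred for distance and area calculations.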
Unfortunately, distortion of shape, distance, and direction is inherent to any map projec-
tion because of the impossibility of perfectly flattening a spheroid surface. Esri uses this
common analogy to describe projection inaccuracy: “A spheroid can’t be flattened to a plane any
more easily than a piece of orange peel can be flattened—it will rip” (Esri 2010h). This phe-
nomenon also explains why Greenland may appear gigantic on flat maps of the world as in
the plate carrée projection in Figure 3.37. Similarly, the Mercator projection makes Greenland
appear much larger than Brazil, which is not the case in reality (Ormsby et al. 2004). The
Robinson projection as shown in Figure 3.38 does a better job projecting the world near
the poles and has been used by Rand McNally since the 1960s and by National Geographic
between 1988 and 1998 for general and thematic world maps (Esri 2010i).
In the authors’ experience, the most common projected coordinate systems used in the United
States are the Universal Transverse Mercator (UTM) grid and the State Plane Coordinate System
(SPCS). UTM zones for the United States are presented in Figure 3.39. In general, boundaries
between grid zones within a coordinate system are set such that distortion errors within each
distinct zone are held below some fixed threshold (USGS 1987). The SPCS further minimizes
errors by using different projections by state based on the shape of the state in question. For
example, the ellipsoidal transverse Mercator projection is used for states with predominant
north–south extents, such as Vermont, whereas the Lambert Conformal Conic projection is
predominantly used for other continental states (USGS 1987).
FIGURE 3.37
Map of the North Atlantic using the plate carrée projection. Greenland is unrealistically large because of distor-
tion near the poles. Data layers from Esri®, ArcWorld Supplement.
FIGURE 3.38
Map of the North Atlantic using the Robinson projection. While distortion still exists, the degree of error is more accept-
able, and this projection is widely used for general and thematic maps. Data layers from Esri®, ArcWorld Supplement.
To reinforce the above concepts and minimize confusion, it is helpful to dissect the spa-
tial reference properties of data layers typically used in common hydrogeological mapping.
As stated in the above paragraph, the UTM grid and SPCS are the two most common X, Y
coordinate systems used in mapping. For a hypothetical site in southeast New Hampshire,
the most common projected coordinate systems would therefore be the NAD 83 UTM zone
19N projection in meters or the NAD 1983 StatePlane New Hampshire FIPS 2800 projection
in feet or meters. One advantage of the state-plane projections is the option to use units of
either meters or feet. Both of these coordinate systems are transverse Mercator projections
of the NAD 83 geographic coordinate system.
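Because standard UTM zones run eastward from 180°W in 6° bands, the applicable zone can be computed directly from longitude; the sketch below (illustrative, and ignoring the special zones around Norway and Svalbard) confirms that a site near 71°W falls in zone 19:

```python
# Sketch: compute the standard UTM zone number for a given longitude.
# Zone 1 begins at 180 deg W; each zone spans 6 degrees of longitude.
# Regional exceptions (e.g., around Norway/Svalbard) are ignored here.

def utm_zone(lon_deg):
    if not -180 <= lon_deg < 180:
        raise ValueError("longitude must be in [-180, 180)")
    return int((lon_deg + 180) // 6) + 1

# Southeast New Hampshire lies near 71 deg W, which falls in UTM zone 19.
print(utm_zone(-71.0))  # 19
```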
FIGURE 3.39
Map of UTM zones in the United States. (From USGS, The Universal Transverse Mercator (UTM) Grid, USGS
Fact Sheet 077-01, US Department of the Interior, US Geological Survey, 2001.)
In summary, years and years of cartographic research and development have created
a system of projections that can be used to make maps and spatial calculations; however,
every map projection has some degree of embedded error. How does the hydrogeologist,
in many cases a relative novice cartographer, navigate the pitfalls of projection errors? As
stated by the USGS (1987), “The cartographer must choose the characteristic which is to
be shown accurately at the expense of others, or a compromise of several characteristics.”
In other words, accuracy should be maximized for those measurements most important
to the objectives of the analysis. In most modern-day applications conducted at relatively
small scales (e.g., watersheds for tributaries of major rivers), projection error is often
ignored. As with most forms of qualitative and quantitative analysis, distortions and inac-
curacies increase at larger scales. Common-sense rules therefore apply when mapping at
larger scales. For example, one should not depict multiple states on a map when using the
coordinate system for a single state. Similarly, one should not perform coordinate-based
calculations in the coordinates of a single UTM zone when data inputs to the calculation
are found across multiple UTM zones. Simply put, the selected coordinate system for
mapping and associated analysis should match the scale of that analysis.
The previous discussion on coordinate systems is applicable to positions on the surface of
the Earth with associated latitude (or Y coordinate) and longitude (or X coordinate) values.
Similar to horizontal coordinate systems, vertical coordinate systems are needed to con-
vey the elevation (or Z value) of data relative to the surface of the Earth. There are two
primary types of vertical coordinate systems: spheroidal and gravity-related (Esri 2010j). The
most widely used systems are gravity-related (geoidal) and use as a zero value a bench-
mark such as mean sea level. The most commonly seen vertical coordinate system is the
National Geodetic Vertical Datum of 1929.
The most important thing when working with coordinate systems is simply to know and
document the coordinate system associated with any X, Y, and Z data that one possesses.
It is hard to convey to nonprofessionals the frequency with which hydrogeologists obtain
data in an unknown/unspecified coordinate system. A decision must be made at the very
beginning of a project regarding which coordinate system will be used for every piece of
spatial data collected during project execution. When used properly, coordinate systems
endow the hydrogeologist with great power in terms of presenting and analyzing spatial
data. The hydrogeologist now has the ability to assign any number of time-dependent
hydrogeologic characteristics to X, Y, Z coordinates representing a position in the real
world. Such characteristic data include water-table elevations, contaminant concentrations,
well yields, and many others. To reiterate, all hydrogeological data are spatially oriented
with positions in the real world. Coordinate systems make the transition from the real
world to the numerical electronic world possible, which, in turn, creates the opportunity
for quantitative spatial analysis and advanced data visualization.
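The documentation practice recommended above can be sketched as a simple data record that never stores a coordinate without its reference system. The structure and the EPSG codes below are illustrative choices, not prescribed by the text:

```python
from dataclasses import dataclass

# Sketch: every X, Y, Z value travels with an explicit statement of the
# coordinate systems it refers to, so no data arrive "unknown/unspecified."

@dataclass(frozen=True)
class SpatialMeasurement:
    x: float                 # easting, in the units of horizontal_crs
    y: float                 # northing, in the units of horizontal_crs
    z: float                 # elevation, relative to vertical_datum
    horizontal_crs: str      # e.g., "EPSG:26919" (NAD83 / UTM zone 19N)
    vertical_datum: str      # e.g., "NGVD 29"
    attribute: str           # time-dependent hydrogeologic characteristic
    value: float

wl = SpatialMeasurement(x=330_500.0, y=4_770_250.0, z=145.2,
                        horizontal_crs="EPSG:26919", vertical_datum="NGVD 29",
                        attribute="water-table elevation (ft)", value=145.2)
print(wl.horizontal_crs)  # EPSG:26919
```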
invariably find ourselves plotting data by hand in a field book, on printouts of site plans,
or even on the back of an envelope. In order to truly understand the associations intrinsic
to spatial data, we must see a visualization of data as they exist in the real world.
Data visualizations are invaluable when developing CSMs because they enable render-
ings of field conditions at different times, scales, and levels of detail. Tasks, such as delin-
eating potentially productive aquifer materials or assessing a groundwater contaminant
plume, would simply be impossible without data maps. Text can help describe the rationale
for a decision, but it lacks the obvious truths inherent to a map clearly illustrating the data
justifying the decision. For the above-listed example of delineating potentially productive
aquifer materials, one could explain in text, “Geologic exploration data indicate that the
high-yield aquifer encompasses the stratified drift deposits around the river in question.”
Yet this description is far too general on its own and must be refined in a somewhat absurd
manner to be useful in a technical sense. For example,
“Geologic exploration data indicate that the high-yield aquifer is located along the river
east of the shopping mall and the sports complex, is approximately 1000 ft wide and
8000 ft long, is approximately bisected by Highway 99, and is approximately 4000 ft
west of the former landfill at its easternmost location near the unnamed pond.”
However, if the phrase “as demonstrated in the attached figure” is added to those sentences,
detailed results can be communicated to the target audience without long-winded, con-
fusing text that may confound even the most experienced hydrogeologist. Figure 3.40 is a
simple visualization that clearly and concisely demonstrates the delineation of potentially
productive aquifer materials. No confusing, long-winded text is necessary.
FIGURE 3.40
Simple visualization of productive aquifer materials in a watershed. USGS Color Ortho Imagery (2008/2009)
downloaded from MassGIS (http://www.mass.gov/mgis/colororthos2008.htm).
Maps help communicate spatial data intuitively to technical and nontech-
nical audiences alike, activating a part of the brain that cannot be reached through text
alone. Furthermore, relatively recent advances made in computer GIS technology have
vastly improved the quality and reliability of hydrogeological maps, making them even
more useful in formulating and presenting study conclusions. This section provides an
overview of GIS tools available to the hydrogeologist and offers some mapping-standard
guidelines in the field of hydrogeology.
machine-drawn map can be related to an analog model of real-world data in which the
medium for data transmission is quite simply ink and paper. We, as humans, translate
these ink-and-paper renderings of data through our visual and cognitive abilities. The
obvious limitation of analog maps is that they are extremely difficult to reproduce by a
third party or even by the original author (especially before the advent of copy machines).
Furthermore, the underlying data represented in the ink and paper cannot be easily trans-
mitted to another party without the broader analog model (e.g., the map is often the only
means of data transmission). These shortcomings are caused by the absence of an under-
lying numerical structure. GIS has resolved this problem by replacing the analog model
with a digital model based on the binary code of zeroes and ones. This universal language
cannot be erroneously translated, and visual renderings of digital data are easily repro-
duced with computer software. Digital data transmission is seamless, particularly because
the Internet has become ubiquitous and essential to daily world operations.
Most importantly, the transition from the analog data model to the digital data model
has greatly reduced the incidence and extent of data translation and transmission errors.
Whenever data exist in analog form, such as hard-copy laboratory reports, there are count-
less opportunities for human error in the use of those data. For example, errors commonly
occur in the manual entry of hard-copy data into spreadsheets or text files, where results
can be mistyped, duplicated, or omitted entirely. The data-entry errors are compounded
when a hydrogeologist uses the flawed data set for quantitative analyses such as contour-
ing or numerical modeling. At some point, the house-of-cards system built on analog data
will collapse (typically when the errors are discovered) with catastrophic consequences for
the affected project.
The use of analog data methods persists in modern environmental consulting practice
despite widespread availability of better alternatives. Engineering-design drawing pro-
grams, such as AutoCAD, a computer-aided design (CAD) program produced by Autodesk,
Inc., are often used as stand-alone visual processing programs for projects in hydrogeology
and environmental science. Without linkage to an underlying geodatabase or a referenced
coordinate system, CAD does not constitute a true GIS in the opinion of the authors, and
hydrogeological maps created in CAD are not too dissimilar from analog maps created in
hard copy. CAD is best suited to detailed engineering applications and should be reserved
for that purpose.
When using CAD for hydrogeological mapping, data such as monitoring-well locations
and groundwater-sampling results are typically manually placed into drawings, creat-
ing a whole separate layer of human-error potential. The failure to use a GIS for project
mapping also results in gross inefficiency as hydrogeologists must communicate back
and forth with CAD technicians for hours on end. Under this system, the hydrogeologist
would first hand-draw a sketch for the CAD technician to copy, including the data to be
manually placed and/or labeled in the CAD drawing. The manual placement of data and
labels obviously takes longer in CAD than in a GIS system where symbols and labels are
automatically created by accessing the geodatabase. After the first draft of the figure is
complete, the hydrogeologist must then review it for accuracy and inevitably return to the
CAD technician with additional edits and revisions. The do loop of edits and revisions
continues as CAD technicians do not understand principles of hydrogeological mapping,
and hydrogeologists do not understand how drawings are created in CAD. For the record,
however, the authors are well aware of similar experiences with endless do loops involv-
ing GIS operators that are not trained hydrogeologists and hydrogeologists that do not
understand GIS capabilities. The inefficiency of interactions between hydrogeologists and
their support professionals again speaks to the importance of educating hydrogeologists
FIGURE 3.41
Flow chart produced by the authors depicting relationships and processing tools in the ArcGIS environment.
FIGURE 3.42
Add Data button in ArcMap. Esri® ArcGIS ArcMap graphical user interface. Copyright © Esri. All rights
reserved. USGS Color Ortho Imagery (2008/2009) downloaded from MassGIS (http://www.mass.gov/mgis/
colororthos2008.htm).
FIGURE 3.43
Data sets and layers, such as shapefiles (.shp), can be added to a map from folders in a GIS project directory on
one’s hard drive or network drive.
FIGURE 3.44
Add Basemap button in ArcMap. Esri® ArcGIS ArcMap graphical user interface. Copyright © Esri. All rights
reserved. USGS Color Ortho Imagery (2008/2009) downloaded from MassGIS (http://www.mass.gov/mgis/
colororthos2008.htm).
FIGURE 3.45
Different types of basemaps that can be instantly added to maps in ArcMap. Esri® ArcGIS ArcMap graphical
user interface. Copyright © Esri. All rights reserved.
basemap data to a map, and Figures 3.46 through 3.48 illustrate the addition of Web data
to a map.
Zooming in and out on the screen changes the scale of the map in real time, akin to
changing the elevation of a bird’s-eye viewer. Layers added to the map can include feature
classes (e.g., well locations) and rasters (e.g., aerial photographs) from a geodatabase,
standalone shapefiles, and CAD features. Layers can be displayed using any number of
symbols supported by the given shape type (e.g., points, polylines, polygons). Example
symbols commonly used in hydrogeological maps are shown in Figure 3.49 for point data
types and Figure 3.50 for polygon data types. Hundreds of other symbols are available, all
of which can be selected and applied to data at the click of a button.
FIGURE 3.46
Add Data from ArcGIS Online button in ArcMap. Esri® ArcGIS ArcMap graphical user interface. Copyright
© Esri. All rights reserved. USGS Color Ortho Imagery (2008/2009) downloaded from MassGIS (http://www
.mass.gov/mgis/colororthos2008.htm).
FIGURE 3.47
A few of the many data layers available on ArcGIS Online after searching for “geology.” Esri® ArcGIS Online
graphical user interface. Copyright © Esri. All rights reserved.
FIGURE 3.48
Example bedrock geology layer (Kentucky Geology and Faults) obtained from ArcGIS Online. Source: Kentucky
Geological Survey.
Different feature class data types and example symbols are also shown in Figure 3.51
to demonstrate their incorporation into a groundwater contour map, one of the most com-
mon hydrogeological figures.
For each data layer in a map, features can be symbolized with unique colors, markers,
shapes, sizes, and other properties (Ormsby et al. 2004). This is performed through the
Symbology tab of the layer properties dialogue. Different symbols can be used for every
unique value, categories of unique values, or quantities in a field. This is advantageous when
displaying different types of wells on a map, for example. An illustration of this capability is
provided in the screen shots in Figures 3.52 and 3.53. An example application in site remedia-
tion would be the use of different symbols and/or colors to represent soil or groundwater
sampling locations based on the sampling results. In other words, wells could be displayed
as blue circles if the concentration of a relevant constituent were below the maximum con-
taminant level (MCL) or as red triangles if the concentration were above the MCL. This can
be used to prevent excessive labeling that renders maps illegible and is described further in
Chapter 7. Symbology settings such as these are automatic in the ArcMap environment; no
manual changes are required. Displaying hundreds of symbols with properties based on
concentration results in CAD may take many hours of manual manipulation.
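The MCL-based symbology rule described above can be sketched as a simple classification function. The field names and the MCL value below are hypothetical, chosen only for illustration:

```python
# Hypothetical sketch: choose a symbol class for each well from its sampling
# result relative to the maximum contaminant level (MCL). In ArcMap this is a
# symbology setting; here the rule is made explicit.

MCL_UG_PER_L = 5.0  # illustrative MCL, in micrograms per liter

def symbol_for(concentration_ug_per_l):
    if concentration_ug_per_l > MCL_UG_PER_L:
        return {"shape": "triangle", "color": "red"}   # MCL exceedance
    return {"shape": "circle", "color": "blue"}        # at or below MCL

wells = {"MW-1": 2.1, "MW-2": 7.8, "MW-3": 4.9}
for well_id, conc in wells.items():
    print(well_id, symbol_for(conc))
```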
Rasters can be displayed in many different ways using black-and-white or colored shading
techniques to depict elevation or any other feature modeled by a raster. The shading intervals
can be specified by the user and do not have to be uniformly incremented (a major advantage
compared to the raster display functionality of other computer programs). Figure 3.54 is an
example raster display using color shading to represent surface topography in a watershed.
FIGURE 3.49
Example of “Environmental” point symbols in ArcMap. Esri® ArcGIS ArcMap graphical user interface.
Copyright © Esri. All rights reserved.
FIGURE 3.50
Example of “Geology 24K” polygon symbols in ArcMap. Esri® ArcGIS ArcMap graphical user interface.
Copyright © Esri. All rights reserved.
FIGURE 3.51
Example data types and symbols used in a hypothetical groundwater contour map.
FIGURE 3.52
Symbology tab of the Layer Properties dialogue. Different symbols can be selected for values in the Name field,
which represents a well type in this example. Esri® ArcGIS ArcMap graphical user interface. Copyright © Esri.
All rights reserved.
FIGURE 3.53
Data view screen shot of the symbology assignments. Esri® ArcGIS ArcMap graphical user interface. Copyright
© Esri. All rights reserved. USGS Color Ortho Imagery (2008/2009) downloaded from MassGIS (http://www
.mass.gov/mgis/colororthos2008.htm).
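The user-specified, non-uniform shading intervals mentioned earlier amount to classifying each cell value by its position among a set of break values. A minimal sketch (break values and class names are illustrative):

```python
import bisect

# Sketch: assign a raster cell to a shading class using non-uniform,
# user-specified break values, as the raster display settings allow.

breaks = [100.0, 150.0, 300.0]          # class boundaries (elevation, ft)
classes = ["valley floor", "terrace", "hillslope", "ridge"]

def shade_class(elevation_ft):
    return classes[bisect.bisect_right(breaks, elevation_ft)]

print(shade_class(120.0))  # terrace
print(shade_class(450.0))  # ridge
```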
It is important to reiterate that in the current version of ArcMap (version 10), data in
multiple different coordinate systems can be shown in the correct positions on the same
map (projected on the fly) as long as the coordinate systems are specified in each indi-
vidual piece of data. This is an improvement over earlier versions, in which all data had
to be in the same coordinate system to exist in the same location on a map. If a data layer
does not have a defined coordinate system, ArcMap will warn the user and prompt him or
her to define a projection. As described in Section 3.4.3.5, this capability can lead to errors
in quantitative data analysis, such as contouring or modeling, if database managers are
not cognizant of the need to use one consistent coordinate system for these applications.
Therefore, it is best to select one coordinate system and use it for all data associated with a
specific project (note that many GIS operators have grown lax in this regard and instead
rely on the ability to project on the fly).
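A small guard implementing this advice can be sketched as follows; the layer structure is hypothetical, but the check mirrors what a careful database manager should do before contouring or modeling:

```python
# Sketch: before quantitative analysis, verify that every input layer carries
# the same, explicitly defined coordinate system rather than relying on
# on-the-fly projection.

def require_common_crs(layers):
    crs_values = {layer.get("crs") for layer in layers}
    if None in crs_values or "" in crs_values:
        raise ValueError("a layer has no defined coordinate system")
    if len(crs_values) != 1:
        raise ValueError(f"mixed coordinate systems: {sorted(crs_values)}")
    return crs_values.pop()

layers = [{"name": "wells", "crs": "EPSG:26919"},
          {"name": "topography", "crs": "EPSG:26919"}]
print(require_common_crs(layers))  # EPSG:26919
```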
Data from a shapefile or geodatabase can be added to the map through the Add Data
button or by dragging files into the layer list from ArcCatalog. ArcCatalog is a separate
Esri program used to manage spatial data that also functions as an efficient GIS
browser or explorer (Esri 2010m). File geodatabases and shapefiles can be easily created
in ArcCatalog. ArcCatalog can also be used to copy, move, delete, and preview (visualize)
data (Ormsby et al. 2004). More importantly, ArcCatalog allows hydrogeologists to create a
well-organized GIS folder for each project on their company network. Data layers can also
easily be converted between ArcGIS and CAD formats in ArcCatalog, most commonly
using the Export to CAD tool. A GIS directory with folder names is depicted in Figure 3.55
as viewed in ArcCatalog.
FIGURE 3.54
Map containing a raster representing surface topography in a watershed. Color-shaded topographic maps are
often easier for nontechnical audiences to understand than contour lines.
Standalone graphics (points, polylines, polygons, text, etc.) can be added to the data
view, where they inherit the scaled spatial properties of the map. In other words, if a
50-ft-long line is drawn in data view, that line will always be 50 ft in length regardless
of the chosen scale for viewing. Graphical additions are useful for marking up maps and
quickly adding elements that do not need permanent storage in a shapefile or geodatabase.
Additionally, any point, polyline, polygon, or annotation graphic created with the drawing
toolbar can be converted into a feature using the Convert Graphics to Features command.
This is a very powerful tool as the hydrogeologist can instantly convert hand-drawn
shapes into a shapefile or a geodatabase feature class. In other words, a simple drawing
can be converted into a digitally determined feature with prescribed spatial and other
attributes. A common use of this tool is to convert proposed monitoring-well locations
(drawn as point graphics) into a shapefile so that attributes such as well identifications,
construction information, and X, Y coordinates can be linked to that location.
FIGURE 3.55
Example GIS directory as viewed in ArcCatalog. ArcCatalog is a better GIS browser than Windows Explorer as
ArcGIS files are consolidated for display, and data can be created, edited, or previewed. Esri® ArcGIS ArcCatalog
graphical user interface. Copyright © Esri. All rights reserved.
A hypothetical example use of the Convert Graphics to Features command is illustrated
in Figures 3.56 through 3.58 to create a spatially referenced layer for a tailings pond at an
old industrial site.
FIGURE 3.56
As a first step, a red-hatched polygon is drawn around the tailings pond using the draw toolbar. Esri® ArcGIS
ArcMap graphical user interface. Copyright © Esri. All rights reserved. World Imagery Source: Esri, i-cubed,
USDA, USGS, AEX, GeoEye, Getmapping, Aerogrid, IGN, IGP, and the GIS User Community.
FIGURE 3.57
After selecting the polygon, the Convert Graphics to Features option can be selected. Esri® ArcGIS ArcMap
Graphical user interface. Copyright © Esri. All rights reserved. World Imagery Source: Esri, i-cubed, USDA,
USGS, AEX, GeoEye, Getmapping, Aerogrid, IGN, IGP, and the GIS User Community.
FIGURE 3.58
Screen shot of the Convert Graphics to Features option window, where the drawings will be converted into a
shapefile (.shp extension). Note that the shapefile may assume the coordinate system for the entire data frame
or for any individual layer within the data frame. Esri® ArcGIS ArcMap graphical user interface. Copyright ©
Esri. All rights reserved.
FIGURE 3.59
Example completed figure for a groundwater contour map at a hypothetical hazardous-waste site.
Access can be considered a front-end query, and an ArcMap query can be considered a
back-end query. Oftentimes they can accomplish the same thing, and it is left to the user
to decide where a query is most appropriate for the application at hand. The most com-
mon location for querying within ArcMap is in the Definition Query tab of the Layer
Property dialogue. In this window, the user can specify which data in the selected layer
should be displayed and which data should be hidden from the mapping. Complicated
queries involving math and logical operations can easily be created in the Query Builder
module. One highly beneficial feature of the Definition Query tool is that its queries
translate to any form of spatial data analysis conducted on the queried layer in the
future.
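The definition query used in the example that follows (shallow-bedrock wells only, excluding BRW-2U) would be written in ArcMap as a SQL expression such as "ZONE = 'BRS' AND WELL_ID <> 'BRW-2U'". The sketch below emulates the effect of that filter in plain Python; the field names are illustrative:

```python
# Sketch emulating a definition query: keep only shallow-bedrock (BRS) wells
# and exclude well BRW-2U from display and downstream analysis.

wells = [
    {"WELL_ID": "BRW-1", "ZONE": "BRS"},
    {"WELL_ID": "BRW-2U", "ZONE": "BRS"},
    {"WELL_ID": "OB-3", "ZONE": "OVB"},
]

visible = [w for w in wells
           if w["ZONE"] == "BRS" and w["WELL_ID"] != "BRW-2U"]
print([w["WELL_ID"] for w in visible])  # ['BRW-1']
```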
Figure 3.60 is a screen shot illustrating the use of the Definition Query and Query Builder
features in ArcMap. In this example, the only wells to be shown in the map are those in the
shallow bedrock (BRS). In addition, well BRW-2U will be excluded. If this layer is used to
generate a groundwater contour map in ArcMap, the only wells included in the contouring
algorithm will be those in the shallow bedrock. Complex queries using multiple data lay-
ers can be created through the Join and Relate tools, which connect the attributes of dif-
ferent tables (similar to a select query in Microsoft Access). A nonspatial data table can be
linked to the attribute table of a spatial data layer using the Join and Relate tools. The Join
tool combines the separate tables into one new, larger table, and the Relate tool links attri-
butes but keeps the tables separate (Ormsby et al. 2004).
FIGURE 3.60
Example use of the Definition Query tab of the Layer Properties dialogue. Note that buttons on the bottom of
the Query Builder box can be used to verify query language logic, to load queries from other layers, or to obtain
help regarding query language. Esri® ArcGIS ArcMap graphical user interface. Copyright © Esri. All rights
reserved.
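The contrast between Join and Relate can be sketched with two hypothetical attribute tables keyed on a well identifier: a Join produces one wider table, while a Relate keeps the tables separate and looks up linked records on demand:

```python
# Sketch (illustrative tables and field names): Join vs. Relate behavior.

wells = [{"WELL_ID": "MW-1", "X": 100.0, "Y": 200.0},
         {"WELL_ID": "MW-2", "X": 150.0, "Y": 260.0}]
results = {"MW-1": {"benzene_ug_l": 3.2},
           "MW-2": {"benzene_ug_l": 8.7}}

# Join: one combined, larger table.
joined = [{**w, **results.get(w["WELL_ID"], {})} for w in wells]

# Relate: the tables stay separate; linked attributes are fetched on demand.
def related_result(well_id):
    return results.get(well_id)

print(joined[0]["benzene_ug_l"])  # 3.2
print(related_result("MW-2"))     # {'benzene_ug_l': 8.7}
```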
custom expressions can be used to label individual data categories differently and with user-
specified notes or additions. Labels are all placed automatically, such that no manual data
entry is required. However, it may be advantageous to convert labels into graphics in instances
of label overflow or overlapping. Advanced GIS users may consider using the labeling tool-
bar or the Maplex extension for highly labeled maps. In general, it is recommended to avoid
the use of excessive labeling to keep maps simple and focus on the most important elements.
Figure 3.61 is a screen shot illustrating the use of the labeling toolbar.
FIGURE 3.61
Labels tab of the Layer Properties dialogue. Note that different labels can be used for different queried data sets
by changing the setting on the Method tab. In this example, the Well_ID field is labeled with the resulting labels
visible in the map view behind the dialogue box. Esri® ArcGIS ArcMap graphical user interface. Copyright
© Esri. All rights reserved. World Imagery Source: Esri, i-cubed, USDA, USGS, AEX, GeoEye, Getmapping,
Aerogrid, IGN, IGP, and the GIS User Community.
FIGURE 3.62
Screen shot illustrating the creation of a Water Supply Well feature using the editor toolbar (new feature
denoted by the small aqua circle). Esri® ArcGIS ArcMap graphical user interface. Copyright © Esri. All rights
reserved. USGS Color Ortho Imagery (2008/2009) downloaded from MassGIS (http://www.mass.gov/mgis/
colororthos2008.htm).
FIGURE 3.63
Attribute definition using the editor toolbar. The new Water Supply Well is named EW (for “extraction well”)
and assumes the display properties for that symbol class. Note that if the layer being edited is stored in a geo-
database, the geodatabase itself is being edited. Esri® ArcGIS ArcMap graphical user interface. Copyright ©
Esri. All rights reserved. USGS Color Ortho Imagery (2008/2009) downloaded from MassGIS (http://www.mass
.gov/mgis/colororthos2008.htm).
FIGURE 3.64
Dimensioning features can be created in editor for display purposes or to help place features in their correct locations.
Snapping tools can also help ensure that features are placed in exact locations, such as the intersection of two lines.
Esri® ArcGIS ArcMap graphical user interface. Copyright © Esri. All rights reserved. USGS Color Ortho Imagery
(2008/2009) downloaded from MassGIS (http://www.mass.gov/mgis/colororthos2008.htm).
FIGURE 3.65
Input features for the clip tool. The hydrogeologist needs to clip the topographic contour lines to the watershed
boundary for incorporation into a groundwater model.
FIGURE 3.66
After successful execution of the clip tool, a new layer is created with topographic contours limited to the extent
of the watershed.
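What the clip tool accomplishes can be sketched in a greatly simplified form: keep only the data falling inside a boundary. A real clip splits lines and polygons at the boundary; the sketch below merely tests points against a rectangular extent (all coordinates illustrative):

```python
# Simplified sketch of clipping: retain only features inside a boundary.
# Here, points are tested against a rectangular watershed extent.

watershed_extent = {"xmin": 0.0, "ymin": 0.0, "xmax": 5000.0, "ymax": 3000.0}

def clip_points(points, extent):
    return [(x, y) for (x, y) in points
            if extent["xmin"] <= x <= extent["xmax"]
            and extent["ymin"] <= y <= extent["ymax"]]

points = [(1200.0, 800.0), (6400.0, 900.0), (4800.0, 2950.0)]
print(clip_points(points, watershed_extent))
```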
Overlaying commands compute and create features representing the intersection, union,
or spatial join of separate layers. In hydrogeological practice, one example of overlaying
tool usage is calculating areas of soil impacted by multiple different contaminants. This
is demonstrated with the Intersection tool in Figure 3.67. Buffering tools calculate and
create polygon or polyline buffer rings around selected features. This is useful to the
FIGURE 3.67
Example use of the intersection tool to create a feature (red) representing the area where the two plumes (tan
and blue) overlap. Note that the intersect tool is considered an overlay tool, and the clip tool is considered
an extract tool. Esri® ArcGIS ArcMap and ArcToolbox graphical user interface. Copyright © Esri. All rights
reserved. USGS Color Ortho Imagery (2008/2009) downloaded from MassGIS (http://www.mass.gov/mgis/
colororthos2008.htm).
178 Hydrogeological Conceptual Site Models
Downloaded by [University of Auckland] at 23:40 09 April 2014
FIGURE 3.68
Example use of the buffer tool to create 1000 ft radii circles around two water-supply wells. Note that there is
also a multiple ring buffer tool, which can automatically generate multiple circles with different radii around
individual features. Esri® ArcGIS ArcMap and ArcToolbox graphical user interface. Copyright © Esri. All rights
reserved. USGS Color Ortho Imagery (2008/2009) downloaded from MassGIS (http://www.mass.gov/mgis/
colororthos2008.htm).
hydrogeologist when needing to show buffer zones around water-supply wells to protect
against contamination as illustrated in Figure 3.68.
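For circular buffers such as wellhead-protection zones, the underlying test can be sketched without building buffer polygons at all: in projected coordinates with consistent units, membership in the buffer is just a distance comparison. The well coordinates below are hypothetical.

```python
import math

def within_buffer(point, wells, radius_ft):
    """Return True if point lies inside the buffer of any well.

    A minimal stand-in for the GIS buffer tool: instead of generating
    buffer polygons, it tests Euclidean distance directly, which is
    valid for projected coordinates in consistent units (here, feet).
    """
    px, py = point
    return any(math.hypot(px - wx, py - wy) <= radius_ft
               for wx, wy in wells)

# Hypothetical well coordinates in a projected system (units: feet).
wells = [(1000.0, 1000.0), (5000.0, 3000.0)]
print(within_buffer((1500.0, 1800.0), wells, 1000.0))  # True
print(within_buffer((3000.0, 3000.0), wells, 1000.0))  # False
```

The first point lies about 943 ft from the first well and is therefore inside its 1000-ft protection zone; the second point is more than 1000 ft from both wells.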
FIGURE 3.69
Example use of the Calculate Geometry tool to calculate the area, perimeter, and centroid coordinates. The dialog box shown is for calculating the area, which can be done using either the coordinate system of the layer or that of the data frame (here the two are the same, as should always be the goal). Also note that numerous unit options are
available. Esri® ArcGIS ArcMap graphical user interface. Copyright © Esri. All rights reserved.
FIGURE 3.70
Example output of the measure features toolbar when drawing a polygon around the boundary of the swamp
(Middle Reach of Muddy River) to measure its area in acres. Note that the measurement is a good approxima-
tion of the exact calculation shown in Figure 3.69. Esri® ArcGIS ArcMap graphical user interface. Copyright ©
Esri. All rights reserved.
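The quantities reported by Calculate Geometry can be reproduced by hand with the shoelace formula for a simple polygon in planar (projected) coordinates. The sketch below is a minimal illustration, checked against a rectangle where the answers are obvious.

```python
import math

def polygon_geometry(coords):
    """Area (shoelace formula), perimeter, and centroid of a simple polygon.

    coords: list of (x, y) vertices in order; the polygon is closed
    automatically. Mirrors what the Calculate Geometry tool reports,
    assuming planar (projected) coordinates.
    """
    n = len(coords)
    a2 = 0.0          # twice the signed area
    cx = cy = 0.0     # centroid accumulators
    perim = 0.0
    for i in range(n):
        x0, y0 = coords[i]
        x1, y1 = coords[(i + 1) % n]
        cross = x0 * y1 - x1 * y0
        a2 += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
        perim += math.hypot(x1 - x0, y1 - y0)
    area = a2 / 2.0            # signed; sign handles vertex order
    cx /= 6.0 * area
    cy /= 6.0 * area
    return abs(area), perim, (cx, cy)

# A 100 x 50 rectangle as a check case.
area, perim, centroid = polygon_geometry([(0, 0), (100, 0), (100, 50), (0, 50)])
print(area, perim, centroid)  # 5000.0 300.0 (50.0, 25.0)
```

Dividing by the signed (not absolute) area in the centroid step makes the result independent of whether vertices are listed clockwise or counterclockwise.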
FIGURE 3.71
Attribute table of monitoring-well feature class that does not have X, Y coordinates. Esri® ArcGIS ArcMap
graphical user interface. Copyright © Esri. All rights reserved. USGS Color Ortho Imagery (2008/2009) down-
loaded from MassGIS (http://www.mass.gov/mgis/colororthos2008.htm).
FIGURE 3.72
Selection of the add X, Y coordinates tool. Esri® ArcGIS ArcMap and ArcToolbox graphical user interface.
Copyright © Esri. All rights reserved.
FIGURE 3.73
Following execution of the tool, X (POINT_X) and Y (POINT_Y) coordinates have been added to the attri-
bute table in the feature class’s projected coordinate system. Esri® ArcGIS ArcMap graphical user interface.
Copyright © Esri. All rights reserved.
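The effect of the Add XY Coordinates tool can be sketched as a simple table operation: read each feature's geometry and write it into POINT_X and POINT_Y fields (the ArcGIS field-name convention). The records below are hypothetical stand-ins for an attribute table.

```python
def add_xy(records):
    """Append POINT_X / POINT_Y fields to each attribute record.

    A sketch of the Add XY Coordinates tool: each record is a dict with
    a 'geometry' key holding an (x, y) tuple in the layer's projected
    coordinate system.
    """
    for rec in records:
        x, y = rec["geometry"]
        rec["POINT_X"] = x
        rec["POINT_Y"] = y
    return records

# Hypothetical monitoring wells (coordinates illustrative only).
mw_records = [{"WellID": "MW-1", "geometry": (231500.2, 894210.7)},
              {"WellID": "MW-2", "geometry": (231620.9, 894155.3)}]
for rec in add_xy(mw_records):
    print(rec["WellID"], rec["POINT_X"], rec["POINT_Y"])
```

Because the values are copied from the geometry in its current coordinate system, rerunning this step after a reprojection (as in Figure 3.75) overwrites the fields with the new coordinates.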
FIGURE 3.74
Screen shot of input parameters for the project tool. This tool can be used to define a projection for features
that do not have an associated projection or to reproject features from one coordinate system to another. In this
example, monitoring-well features in Massachusetts are being reprojected from the SPCS to the UTM coor-
dinate system. In addition, a geographic transformation is being used to convert the underlying geographic
coordinate system from NAD 83 to NAD 27. The reprojected feature class can be saved as a different file so
the original data layer is preserved. Esri® ArcGIS ArcMap and ArcToolbox graphical user interface. Copyright
© Esri. All rights reserved. USGS Color Ortho Imagery (2008/2009) downloaded from MassGIS (http://www
.mass.gov/mgis/colororthos2008.htm).
FIGURE 3.75
Attribute table of the reprojected wells layer Wells_Reproject after using the add X, Y coordinates tool to popu-
late the table with the new UTM coordinates. Note the significant differences between the old coordinates
(SPCS) and the new UTM coordinates (POINT_X and POINT_Y). Esri® ArcGIS ArcMap graphical user interface.
Copyright © Esri. All rights reserved.
• Converting feature types from one format to another, such as converting a poly-
line to a polygon
• Miscellaneous data management tasks for tables, feature classes, and rasters,
including creating, copying, exporting, and compressing files
FIGURE 3.76
Example hypothetical sampling design output from VSP. Green dots are historic shallow soil sampling loca-
tions, and yellow dots are additional sampling locations proposed by VSP to satisfy the decision error thresh-
olds prescribed by the user.
ate random or grid-based sampling designs and is easily linked with Microsoft Access
and ArcMap to import basemaps and export data. RAT software can also be used to
perform simple statistical calculations, generate histograms and trend plots, and produce
data contour maps from rasters created with natural neighbor interpolation methods (U.S.
EPA 2009). RAT software goes beyond VSP in its utility as a data collection interface in
the field. The program can receive GPS and field-monitoring data and immediately store
them in a Microsoft Access database for presentation in its visual processor as depicted
in Figure 3.77. For this reason, FIELDS promotes RAT software’s use in real-time continu-
ous mapping applications during large-scale site investigations. The ability to efficiently
store, visualize, and analyze data in real time is particularly useful in contaminant delin-
eation or excavation projects where decisions are made continuously regarding where
next to sample or dig. Figure 3.78 shows an example visualization screen in RAT. The
use of data visualizations as a real-time decision tool is described further in Chapters 7
through 9.
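The real-time storage workflow described above can be sketched with a lightweight database. Here sqlite3 stands in for the Microsoft Access database that RAT writes to, and the table, field names, and readings are all hypothetical.

```python
import sqlite3

# Sketch of RAT-style real-time storage: each incoming GPS/field reading
# is written to a database immediately so it can be mapped as collected.
# sqlite3 stands in for the Access database RAT actually uses; the table
# and field names below are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE field_data (
    location_id TEXT, x REAL, y REAL, analyte TEXT, result REAL)""")

def store_reading(location_id, x, y, analyte, result):
    """Insert one field reading as soon as the instrument reports it."""
    conn.execute("INSERT INTO field_data VALUES (?, ?, ?, ?, ?)",
                 (location_id, x, y, analyte, result))
    conn.commit()

store_reading("XRF-001", 702310.5, 911870.2, "Lead", 420.0)  # ppm
store_reading("XRF-002", 702355.1, 911902.8, "Lead", 95.0)

# Query back locations exceeding a hypothetical 400 ppm screening level,
# e.g., to decide where to sample or dig next.
hits = conn.execute(
    "SELECT location_id FROM field_data WHERE result > 400").fetchall()
print(hits)  # [('XRF-001',)]
```

Because every reading is committed as it arrives, the screening query can be rerun continuously during the investigation, which is the decision loop the text describes.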
FIGURE 3.77
RAT attempts to streamline the transmission of field data to the project database by interacting directly with field
equipment such as an X-ray fluorescence instrument used to screen soils for heavy metals, a GPS unit used to obtain
coordinates for sampling locations, and a multigas meter used to screen the breathing zone of the sampling area
(shown from top to bottom). (Modified from U.S. EPA, 2009b, Introduction to RAT Presentation. “RAT Introduction.
ppt.” Available at http://www.epaosc.org/site/doc_list.aspx?site_id=5208, accessed February 10, 2011.)
FIGURE 3.78
RAT visualization screen with a sampling grid. (From U.S. EPA, Rapid Assessment Tool (RAT) User Guide
Version 3.02.07. SOP NO. C-ERT-O-004, Revision No. 0, 95 pp., 2009a.)
FIGURE 3.79
Image of contour map produced in SADA. (From University of Tennessee, Spatial Analysis and Decision
Assistance Documentation, 2008, www.tiem.utk.edu/~sada/documentation.shtml, accessed February 14, 2011.)
FIGURE 3.80
Example three-dimensional visualization of contamination over bedrock produced in SADA. (From University
of Tennessee, Spatial Analysis and Decision Assistance Software–Visualization, 2007a, www.tiem.utk
.edu/~sada/visualization.shtml, accessed February 14, 2011.)
full life-cycle statistical evaluation of data, incorporating the following steps (University
of Tennessee 2008):
The holistic, self-contained approach of SADA presents great opportunities for the hydrogeologist to efficiently manage, analyze, and present data. Often in professional practice, a different computer program is used for each of the above steps, which can create confusion and inefficiency. The GIS professional appreciates SADA because results and
conclusions from each site assessment stage can be seamlessly visualized. For example,
risk assessment conclusions are rarely visualized on a map as the calculations are typically
performed in external spreadsheets devoid of location data. An example risk assessment
visualization in SADA is presented in Figure 3.81. Clear presentation of risk assessment
results is particularly important because the majority of decisions made in the remediation
of hazardous waste sites are based on risk-assessment results.
The integrated, comprehensive nature of SADA also enables the hydrogeologist to effec-
tively blend the assessment stages such that decisions can be made in an iterative fash-
ion after revisiting earlier analyses. For example, SADA can be used to create sampling
designs integrating the results of earlier risk-assessment calculations.
FIGURE 3.81
Image of contour map of cumulative risk produced in SADA. (From University of Tennessee, Spatial Analysis
and Decision Assistance Software–Risk Assessment, 2007b, www.tiem.utk.edu/~sada/humanhealth_risk
.shtml, accessed February 14, 2011.)
3.7.1.4 ArcToolbox
Within ArcToolbox, the Statistics toolset is used to calculate summary statistics data from
map or geodatabase elements. ArcToolbox also contains a Spatial Statistics toolbox that
performs more complex calculations on spatial data, including geographic distribution,
pattern, and cluster analysis as shown in Figure 3.82. These tools may be used to calculate
the mean center of data (one example of which would be calculating the center of mass of
a groundwater contaminant plume) or to perform hot spot, cluster, and outlier analysis
on hydrogeological data. The Spatial Analyst extension to ArcGIS also contains functions
to calculate statistical parameters of rasters and then assign these values to cells in a new
raster layer (Esri 2010n).
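As a minimal sketch of one calculation mentioned above, the concentration-weighted mean center of sample points approximates a plume's center of mass. The coordinates and concentrations below are illustrative only.

```python
def mean_center(points, weights=None):
    """Mean center of a set of XY points, optionally weighted.

    With contaminant concentrations as weights, this approximates the
    center of mass of a groundwater plume, one use of the Spatial
    Statistics tools noted in the text.
    """
    if weights is None:
        weights = [1.0] * len(points)
    total = sum(weights)
    cx = sum(w * x for (x, _), w in zip(points, weights)) / total
    cy = sum(w * y for (_, y), w in zip(points, weights)) / total
    return cx, cy

# Hypothetical sample locations and concentrations (e.g., ug/L).
samples = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
concs = [10.0, 10.0, 20.0]
print(mean_center(samples, concs))  # (25.0, 50.0)
```

The weighted center is pulled toward the most contaminated sample, which is why tracking it over time reveals plume migration.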
3.7.1.5 ProUCL
More complicated statistical methods are often required for hydrogeological investiga-
tions. These methods are not limited to spatial data alone but are often used in hydrogeol-
ogy for analyses of spatial data stored in a GIS environment. Common advanced statistical
calculations performed in hydrogeological practice include confidence interval estimation,
hypothesis testing, distribution testing, and trend analysis. ProUCL is a public-domain statistical module that can be used to perform hypothesis testing (for example, comparing the mean contaminant concentration in soil to a regulatory standard), to perform distribution testing, and to calculate upper confidence limits (UCLs) of a mean estimate (e.g., a 95% UCL; U.S. EPA 2007). UCLs are of particular interest for risk assessments, for which
95% UCLs often represent exposure point concentrations.
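The 95% UCL calculation can be illustrated with the simplest of the methods ProUCL offers, the Student's t UCL (appropriate for roughly normal data; ProUCL also provides gamma, bootstrap, and nonparametric UCLs). The soil data below are hypothetical, and the t quantile is hard-coded because the Python standard library provides no t distribution.

```python
import math
import statistics

def t_ucl_95(data, t_crit):
    """One-sided Student's t 95% UCL of the mean: xbar + t * s / sqrt(n).

    t_crit is the 0.95 quantile of the t distribution with n - 1
    degrees of freedom, supplied by the caller (taken from a t table
    below, since the standard library has no t quantile function).
    """
    n = len(data)
    mean = statistics.mean(data)
    sd = statistics.stdev(data)
    return mean + t_crit * sd / math.sqrt(n)

# Hypothetical soil concentrations (mg/kg); n = 10, so df = 9 and
# t(0.95, 9) = 1.833 from a t table.
soil = [12.0, 15.0, 9.0, 22.0, 18.0, 11.0, 14.0, 25.0, 16.0, 13.0]
ucl = t_ucl_95(soil, t_crit=1.833)
print(round(ucl, 2))  # 18.38, versus a sample mean of 15.5
```

The UCL exceeds the sample mean by design: using it as the exposure point concentration builds a margin of safety against underestimating the true mean.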
FIGURE 3.82
Screen shot of available spatial statistics tools in ArcToolbox. Esri® ArcGIS ArcToolbox graphical user interface.
Copyright © Esri. All rights reserved.
FIGURE 3.83
Example Mann–Kendall trend analysis summary table produced in MAROS for benzene concentrations in
groundwater at a hypothetical hazardous-waste site. Note that MAROS gives the user a qualitative judgment
that can be referenced when assessing trends (e.g., decreasing).
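The qualitative direction call in a MAROS summary such as Figure 3.83 rests on the Mann–Kendall S statistic, which can be sketched in a few lines. This sketch reports only the sign of S; MAROS additionally applies a significance test and coefficient-of-variation checks before labeling a trend. The concentrations below are hypothetical.

```python
def mann_kendall(series):
    """Mann-Kendall S statistic and a qualitative trend call.

    S sums the signs of all pairwise later-minus-earlier differences;
    S > 0 suggests an increasing trend, S < 0 a decreasing one. No
    significance test is applied in this simplified sketch.
    """
    s = 0
    n = len(series)
    for i in range(n - 1):
        for j in range(i + 1, n):
            diff = series[j] - series[i]
            s += (diff > 0) - (diff < 0)
    if s > 0:
        return s, "increasing"
    if s < 0:
        return s, "decreasing"
    return s, "no trend"

# Hypothetical benzene concentrations (ug/L) over eight sampling rounds.
benzene = [120.0, 95.0, 101.0, 80.0, 60.0, 55.0, 41.0, 30.0]
print(mann_kendall(benzene))  # (-26, 'decreasing')
```

Because only the signs of the differences matter, the test is nonparametric and insensitive to outliers, which is why it is favored for noisy groundwater monitoring data.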
FIGURE 3.84
Example plot of the center of mass (first moment) of a hypothetical groundwater plume at a hazardous-waste
site. Locations of the center of mass are shown for 10 sequential years to demonstrate how the plume has
changed over time.
performed through Mann–Kendall trend analysis, which can be used to help estimate
when cleanup will be achieved. MAROS evaluates spatial changes in groundwater con-
tamination by calculating the zeroth, first, and second spatial moments of groundwater
data. The zeroth moment is simply the total mass of a contaminant in groundwater, the
first moment is the center of mass of the groundwater plume (X, Y position), and the sec-
ond moment is the spread of the plume about the center of mass (Aziz et al. 2006).
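The three moments defined above can be computed directly once the plume is discretized into cells with a mass assigned to each. The sketch below assigns hypothetical masses and omits the conversion from concentration to mass (concentration times cell volume times porosity) for brevity.

```python
def plume_moments(cells):
    """Zeroth, first, and second spatial moments of a plume.

    cells: list of (x, y, mass) tuples, where mass is the contaminant
    mass attributed to each grid cell. Returns total mass, center of
    mass (x, y), and variances about the center - the three quantities
    the text attributes to MAROS.
    """
    m0 = sum(m for _, _, m in cells)
    xc = sum(x * m for x, _, m in cells) / m0
    yc = sum(y * m for _, y, m in cells) / m0
    sxx = sum(m * (x - xc) ** 2 for x, _, m in cells) / m0
    syy = sum(m * (y - yc) ** 2 for _, y, m in cells) / m0
    return m0, (xc, yc), (sxx, syy)

# Hypothetical plume discretized into three cells (mass in kg).
cells = [(0.0, 0.0, 2.0), (10.0, 0.0, 2.0), (5.0, 10.0, 4.0)]
m0, center, spread = plume_moments(cells)
print(m0, center)  # 8.0 (5.0, 5.0)
```

Tracking these three numbers through time shows at a glance whether total mass is declining, the center of mass is migrating, and the plume is spreading or contracting.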
Spatial moment analysis can demonstrate successful application of monitored natural
attenuation of groundwater contamination or can be used to justify removing redun-
dant monitoring wells from a sampling network. The results of MAROS analyses can be
exported into a visual processor (e.g., ArcMap) to create powerful graphics demonstrating
changes in the spatial orientation of groundwater plumes. Central to MAROS is the use of
X, Y coordinates, which cements its place as an important GIS module.
used to collect lithology data in the subsurface, to install monitoring or production wells,
and to collect geophysical data. All these data must be presented in a professional, easy-
to-understand manner. Typically, preliminary data collected in the field are recorded on
paper in field manuals. These data can then be entered into a geodatabase or into spe-
cific programs created to facilitate boring log production. The use of handheld electronic
devices or laptop computers in the field to enter data, such as soil classifications, directly
into a geodatabase (real-time data entry) is described further in Chapter 7.
In the authors’ experience, the most widely used module for boring log production is gINT,
an Access-based relational database program. Within gINT, the hydrogeologist creates tables
of data collected in the field and automatically places the data in a customized boring log
report with user-defined symbology. Example data fields stored and displayed by gINT
include soil type with USCS classification and symbology, cone penetrometer test data, pho-
toionization screening data and graphs, well-construction data, and/or geophysical data.
gINT can be considered a GIS module because it uses a database to store and query data
and because the addition of coordinates enhances its capabilities. When supplied with
coordinates for the individual boring logs, gINT can be used to create fence diagrams and
the shell of a geological cross section.
Another commercial software product commonly used to generate boring logs is
RockWorks by RockWare. RockWorks is a program specifically designed for subsurface
data visualization and has numerous applications in the geotechnical, environmental, and
oil/gas industries. Subsurface data can be imported into RockWorks in many different
formats, and the data can be analyzed and visualized to produce the following maps and
diagrams (RockWare 2011a):
• Boring logs
• Cross sections
• Fence diagrams
• Contour maps (using both geostatistical and deterministic grid interpolation
methods; see Chapter 4)
• 3D models
• Geochemical plots (Piper, Stiff, and Durov diagrams)
• Structural diagrams (rose, lineation, and arrow maps)
RockWorks has both 2D and 3D visual processors and provides a platform for organizing
and gridding subsurface geologic, hydrogeologic, and geochemical data. The user can also
create composite images and animations. An example 3D composite image that illustrates
many of RockWorks’ capabilities is presented as Figure 3.85. Additionally, data, graphics,
and models can be imported from and exported to ArcGIS, AutoCAD, Surfer, and other
software environments. The companion product RockWorks GIS Link allows the user to
create cross sections, fence diagrams, and other graphics within ArcMap (RockWare 2011b).
Geologic cross sections are developed for nearly all hydrogeological applications in pro-
fessional practice. Cross sections are critical in both developing and visualizing the CSM
(Chapter 2) as myriad data can be displayed, including, but not limited to, stratigraphy,
lithology, geotechnical data, water-level data, field-screening data, and laboratory-analytical
data. Full computer automation of cross-section generation is very difficult as extensive pro-
fessional interpretation is needed to develop layers and annotate figures. While programs,
such as RockWorks, may create an excellent skeleton for a geologic cross section, it is likely
FIGURE 3.85
Example 3D composite image produced in RockWorks showing boring logs, a fence diagram, and fate and trans-
port pathways. (From RockWare, RockWorks Features, 2011, www.rockware.com/product/featureCategories
.php?id=165, accessed May 19, 2011.) Property of RockWare, published with permission.
that manual edits will be necessary—often done in independent computer programs such as
Canvas, AutoCAD, or ArcMap. Many hydrogeologists prefer to develop cross-section drafts
entirely by hand on graph paper and to then have these drafts digitized.
FIGURE 3.86
Example data visualization in HydroDaVE with topographic base map and groundwater elevation contours.
environmental-specific relational database with user-friendly reporting tools and the abil-
ity to use AutoCAD as a visual processor.
The primary drawback of proprietary database systems, as opposed to databases cre-
ated manually by the user through an Access/SQL platform, is the loss of flexibility to
create customized, project-specific solutions for data management problems. However,
proprietary systems are becoming increasingly robust and mindful of project needs. Web-
enabled programs, such as HydroDaVE, are becoming adept at incorporating data beyond
numeric values, including critical CSM data elements, such as Google maps, text, images/
photographs, animations, and scans of historical documents. Proprietary systems are also
becoming better at integrating into other programs, such as ArcGIS, for data analysis and
visualization. Therefore, it is likely that the role of these systems in professional practice
will expand in the coming years.
input data, the quality of the required assumptions, and the quality of the technical review
of the output. The familiar maxim garbage in = garbage out certainly applies. A nonhydrogeologist can produce a contour map with a computer program that appears reason-
able on paper but has major conceptual errors. An example of button-pressing contouring
gone wrong is provided in Figure 3.87.
In general, GIS modules are tools to help the hydrogeologist perform and display com-
plex analyses efficiently. The hydrogeologist must always rigorously review the input and
output and make revisions as needed to maintain consistency with the CSM. A computer
module is not a substitute hydrogeologist. Rather than asking to see what the computer
says, project managers and clients should ask to see what the hydrogeologist produces
through use of computer modules.
Because of the button-pressing perception of computer modules, many environmental
professionals are wary of their application and prefer to rely solely on conceptual infer-
ences that can be made on raw data alone or on hand calculations made using that raw
data (e.g., hand contouring, analytical modeling). However, this mind-set ignores the
many benefits of computer modules in hydrogeology. The following list explains why GIS modules are invaluable to the field of hydrogeology and offers responses to the question: “Why should we use these computer programs anyway?”
GIS modules are efficient. Any hydrogeologist who has experienced the frustration of per-
forming manual analysis and then having that analysis scanned and traced into a com-
puter visual processing program can appreciate the efficiency gains of GIS modules. The
GIS language of these modules enables seamless integration of output with visual proces-
sors. Furthermore, the computational efficiency of these modules saves hours of time. A
contour map that may take an hour or more to draw manually can be created in minutes.
Even more powerful is that many different versions of the same contour map can be cre-
ated in minutes with different input parameters to provide interpretive options to the
FIGURE 3.87
Example of groundwater contour map produced in a computer program that makes no conceptual sense.
Instead of interacting with the river in a realistic manner, water is shown to flow toward an unknown sink for
which there is no explanation (i.e., no pumping wells or other groundwater extraction mechanisms). There is a problem either with the data quality or with the way the data were contoured in the computer program.
hydrogeologist. Even when a contour map created by a GIS module has to be adjusted
manually by the hydrogeologist, which is often the case, the time involved is significantly
shorter compared to drawing the entire map by hand, from scratch.
Computer modules are nonsubjective. Any form of hand contouring or hand interpolation
will be entirely subjective by definition. Linear interpolation will often be used without
any justification for doing so. Conversely, computer modules produce output based on
scientific quantitative analysis (statistics, geostatistics). This nonsubjective output can then
be assessed by the hydrogeologist and adjusted based on fundamental concepts in hydro-
geology (e.g., surface water–groundwater interaction). As a result, the analysis process
becomes more transparent and focuses attention on the key conceptual assumptions made
in the analysis. However, it cannot be stressed enough that no GIS module or computer
program can replace the final professional interpretation of a hydrogeologist. There will
be real-world situations where underlying hydrogeology, heterogeneity, and other factors
References
Aziz, J., Vanderford, M., Newell, C. J., Ling, M., Rifai, H. S., and Gonzales, J. R. 2006. Monitoring
and Remediation Optimization System (MAROS) Software Version 2.2 User’s Guide. Air Force
Center for Environmental Excellence, GSI Job No. 2236, 309 pp.
Data Management Association (DAMA), 2007. Home–DAMA International. Available at www.dama
.org/, accessed February 13, 2011.
Databasedev.co.uk, 2007. Database Solutions for Microsoft Access. Available at www.databasedev.co
.uk, accessed February 14, 2011.
de Smith, M. J., Goodchild, M. F., and Longley, P. A. 2006–2011. Geospatial Analysis - A Comprehensive
Guide to Principles, Techniques and Software Tools, 3rd Edition. The Winchelsea Press, 560 pp. Web
Edition. Available at http://www.spatialanalysisonline.com/output/. Accessed February 14,
2011.
Esri, Inc., 2010a. What Is a Geodatabase? ArcGIS 10 Help Library.
Esri, Inc., 2010b. What Is a File Geodatabase? ArcGIS 10 Help Library.
Esri, Inc., 2010c. Table Basics. ArcGIS 10 Help Library.
Esri, Inc., 2010d. Feature Class Basics. ArcGIS 10 Help Library.
Esri, Inc., 2010e. Raster Basics. ArcGIS 10 Help Library.
Esri, Inc., 2010f. What Are Geographic Coordinate Systems? ArcGIS 10 Help Library.
Esri, Inc., 2010g. What Are Projected Coordinate Systems? ArcGIS 10 Help Library.
Esri, Inc., 2010h. About Map Projections. ArcGIS 10 Help Library.
Esri, Inc., 2010i. Robinson. ArcGIS 10 Help Library.
Esri, Inc., 2010j. Vertical Datums. ArcGIS 10 Help Library.
Esri, Inc., 2010k. Displaying Maps in Data View and Layout View. ArcGIS 10 Help Library.
Esri, Inc., 2010l. What Is a Shapefile? ArcGIS 10 Help Library.
Esri, Inc., 2010m. What Is ArcCatalog? ArcGIS 10 Help Library.
Esri, Inc., 2010n. Raster Dataset Statistics. ArcGIS 10 Help Library.
Golden Software, Inc., 2004. Gridding Overview. Surfer Version 8.05 Help.
Massachusetts Department of Environmental Protection (MassDEP), 2007. MCP Representativeness
Evaluations and Data Usability Assessments. Commonwealth of Massachusetts, Executive
Office of Energy & Environmental Affairs, Policy #WSC-07-350.
Matzke, B. D., Nuffer, L. L., Hathaway, J. E., Sego, L. H., Pulsipher, B. A., McKenna, S., Wilson, J. E.,
Dowson, S. T., Hassig, N. L., Murray, C. J., and Roberts, B., 2010. Visual Sample Plan Version
6.0 User’s Guide. United States Department of Energy, PNNL-19915, 255 pp. Available at http://
vsp.pnnl.gov/docs/PNNL%2019915.pdf.
Ormsby, T., Napoleon, E., Burke, R., Groessl, C., and Feaster, L., 2004. Getting to Know ArcGIS Desktop,
Second Edition. Esri Press, Redlands, CA.
RockWare, 2011a. RockWorks Features. Available at www.rockware.com/product/featureCategories
.php?id=165, accessed May 19, 2011.
U.S. EPA, 2009b. Introduction to RAT Presentation. “RAT Introduction.ppt.” Available at http://
www.epaosc.org/site/doc_list.aspx?site_id=5208, accessed February 10, 2011.
United States Geological Survey (USGS), 1987. Map Projections—A Working Manual. USGS
Professional Paper 1395, US Department of the Interior, US Geological Survey, 394 pp.
USGS, 2001. The Universal Transverse Mercator (UTM) Grid. USGS Fact Sheet 077-01, US Department
of the Interior, US Geological Survey, 2 pp.
USGS, 2007. Geographic Information Systems. Available at http://egsc.usgs.gov/isb/pubs/gis_
poster/, accessed January 23, 2012.
USGS, 2010. USGS High Plains Aquifer WLMS: Water-Level Data by Water Year (October 1 to
September 30). US Department of the Interior, US Geological Survey. Available at http://ne.water
.usgs.gov/ogw/hpwlms/data.html, accessed October 2, 2010.
4
Contouring
4.1 Introduction
Contouring is one of the most important tasks performed during all phases of conceptual site
model (CSM) development. It is a two-dimensional visual presentation of a spatially distrib-
uted CSM element. Because most such elements are below land surface and not directly vis-
ible, their spatial distribution, or shape, is estimated from data collected in the subsurface at
discrete locations, the so-called data points. Locations of unique data points are defined with
three spatial coordinates, X, Y, and Z. Lines that connect locations with the same value of a
parameter of interest are called contours. As explained later in more detail, the contours can
be drawn using various methods but are always the result of interpolation between known
discrete values and therefore do not represent a true spatial distribution of the parameter. In
other words, depending on the number of field data and distances between data points, as
well as the applied contouring method, contours are more or less accurate.
Mathematically, a contour line (or isoline) represents a function of two spatial coordi-
nates along which it has a constant value. The best-known example is a two-dimensional
map view of the topographic (land) surface where the land surface elevation is a function
of X and Y coordinates. A topographic contour line connects all points of equal elevation
above a given level (called the datum, usually mean sea level). Successive contour lines on a topographic map are drawn at an equal difference in elevation between them; this difference is called the contour interval. Figure 4.1 illustrates the concept of contour lines applicable
to any surface. In general, contour lines can be thought of as intersections of stacked hori-
zontal planes with a real or hypothetical surface. Common examples of surfaces of hydro-
geological interest include the water table of an unconfined aquifer (real physical surface),
the potentiometric surface of a confined aquifer (imaginary surface), and the top surface of
a confining layer (real physical surface).
In addition to surfaces, where the main element of interest is elevation, contour lines
are routinely used to represent equal values of other spatially varying parameters, such
as contaminant concentration in the vadose and saturated zones, hydraulic conductivity,
percentage of clay in the sediment, and many others. However, because these parameters
are distributed within certain volumes of porous media, contour lines are of limited use
because they, by definition, can represent values of a parameter only in planar views as
shown schematically in Figure 4.2. Quite a few sets of contours in both horizontal (map
view) and vertical (cross-sectional view) planes would have to be constructed to repre-
sent the true three-dimensional nature of the contaminant distribution in this case. More
appropriate would be to represent contaminant distribution using a mathematical function
of three spatial coordinates, X, Y, and Z, that is, the so-called isosurfaces along which the concentration values are constant. One common reason why this visualization method is utilized far less than two-dimensional contouring is the limited availability of data at different depths (i.e., along the vertical or Z coordinate) and the cost of obtaining such data at a sufficient vertical resolution. Three-dimensional contouring with
(a)
(b)
FIGURE 4.1
(a) Two views of a terrain with horizontal geologic layers dissected by surface drainage. (b) Contour map of the
topographic (terrain) surface. Individual contour lines are intersections of horizontal planes with the terrain
surface. Contour interval is the vertical distance between the successive horizontal planes (contour lines).
FIGURE 4.2
Vertical cross section through groundwater contaminant plume showing contours of equal contaminant con-
centration. Contours are interpolated from concentrations measured at discrete vertical locations in multiport
monitoring wells.
FIGURE 4.3
Left: Finding the position of the water table in three dimensions using data from three monitoring wells (num-
bers are water levels in meters or feet above sea level). Right: Construction of water table contour lines by trian-
gulation with linear interpolation.
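The three-well construction in Figure 4.3 amounts to fitting a plane z = ax + by + c through the three measured heads; within the triangle, heads (and therefore contour positions) follow from the plane coefficients. A minimal sketch, with hypothetical well data:

```python
def plane_from_wells(p1, p2, p3):
    """Fit the plane z = a*x + b*y + c through three (x, y, head) points.

    Reproduces the triangulation-with-linear-interpolation step of
    Figure 4.3: within the triangle the water table is treated as
    planar, so any intermediate head is read off the plane.
    """
    (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = p1, p2, p3
    det = (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)
    a = ((z2 - z1) * (y3 - y1) - (z3 - z1) * (y2 - y1)) / det
    b = ((x2 - x1) * (z3 - z1) - (x3 - x1) * (z2 - z1)) / det
    c = z1 - a * x1 - b * y1
    return a, b, c

# Hypothetical wells: (x, y, water level in m above sea level).
a, b, c = plane_from_wells((0, 0, 10.0), (100, 0, 9.0), (0, 100, 8.0))
head = a * 50 + b * 50 + c   # interpolated head at an interior point
print(head)  # 8.5
```

The coefficients (a, b) also give the direction of steepest water-table decline, so the same three-point construction yields the local hydraulic gradient.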
interpreter. The first draft manual map is not necessarily an exact linear interpolation
between data points. Rather, it is an interpretation of the hydrogeologic conditions with
contours that roughly follow available numeric data. An important but often ignored fact
when manually drawing contour maps is that most, if not all, parameters that are con-
toured do not change linearly from one location to another. In other words, natural and
anthropogenic processes that shape spatial distribution of a CSM element (parameter) can
rarely be described with linear equations. Some notable exceptions include smooth, undis-
turbed contacts between sediment layers deposited during slow, long-lasting, regional
transgressions and the hydraulic gradient in a homogeneous confined aquifer of uniform
thickness from which there is no addition (recharge) or withdrawal (pumping) of ground-
water at the scale of interest.
An obvious limitation of manual contouring is that there is no possible way for the
hydrogeologist to quantify the uncertainty associated with his or her interpolated surface.
Additionally, the inferred spatial relationship between the data is entirely subjective and
is not based on quantitative analysis of spatial dependence with distance or direction.
Therefore, the slope of manually drawn contour lines between data points is arbitrary.
If the available computer program cannot produce a satisfactory contour map (for
example, there is a complicated mixture of impermeable and equipotential boundaries of
groundwater flow), and it cannot be forced to do so by the interpreter, the solution is to
digitize a manually drawn map (or draw the contours manually in a computer program
such as ArcMap). This, however, may be a lengthy process, and it is better to acquire an
appropriate software package for contouring. Quite a few computer programs today offer
a wide range of contouring methods, allow the interpreter to adjust the generated con-
tours, and can display contour maps in a variety of formats. Some of the most powerful
and widely used commercial programs include Surfer (Golden Software 2002) and the
Geostatistical Analyst extension (and to a lesser extent the Spatial Analyst extension) to
ArcGIS (Esri 2003). There are also several versatile programs in the public domain, such
as Visual Sample Plan (VSP; Matzke et al. 2010) and SADA (Institute of Environmental
Modeling, University of Tennessee 2008), which include both contouring and GIS capa-
bilities. Graphical User Interface (GUI) software packages developed to support popular
groundwater modeling programs, such as Modflow, can also be used to create contour
maps from field data and export them to other applications. Examples of commercial
software include Groundwater Vistas by Environmental Simulations, Inc. and Processing
Modflow by Simcore Software, Inc. More detail on GUIs for groundwater modeling is
provided in Chapter 5.
Contouring programs require that individual data points be presented with two spatial
coordinates and the value of the parameter to be contoured (more detail on data prepara-
tion and coordinate systems is provided in Chapter 3). Common to all programs is divi-
sion of the two-dimensional space of interest into equally spaced vertical and horizontal
lines, that is, the creation of a contouring grid. In Surfer, the user can either specify the
grid spacing or let the program automatically determine it from the range of distances
between individual data points. The two basic requirements common to all contouring
methods, namely, data organization and creation of the grid, are shown schematically
in Figure 4.4. What separates different contouring methods is the mathematical equa-
tions (i.e., the model) used to calculate the parameter values (e.g., water table elevation or
contaminant concentration) at locations where it was not directly measured. During this
process, called spatial interpolation, the calculated values of the parameter are assigned
to the grid either at intersections of the grid lines or at the centers of the cells (squares)
formed by the grid lines. This means the basis for any contour map that will be eventu-
ally drawn by a program is a numeric matrix of equally spaced rows and columns called
a raster file (see also Chapter 3). Contour lines, for any contour interval specified by the
user, connect identical numeric values in the grid. Some programs, such as Surfer, include
options for smoothing the initial contours to give them a more natural look. Selecting a
finer contouring resolution (i.e., smaller cell size or grid spacing) will generally also result
in smoother contours.
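The two-step workflow common to all contouring programs (organizing data as coordinate-value triplets, then interpolating onto a regular grid) can be sketched in Python. The coordinates, head values, and grid spacing below are hypothetical, and SciPy's general-purpose griddata function stands in for the gridding engine of a commercial package:

```python
# Sketch of the contouring workflow: organize point data as
# (x, y, value) triplets, then interpolate onto a regular grid
# (raster). Values here are hypothetical water-table elevations.
import numpy as np
from scipy.interpolate import griddata

# Hypothetical field measurements: x, y coordinates and heads
points = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0],
                   [100.0, 100.0], [50.0, 50.0]])
values = np.array([10.0, 12.0, 14.0, 16.0, 13.0])

# Create an equally spaced contouring grid (here 25 m spacing)
xi = np.arange(0.0, 101.0, 25.0)
yi = np.arange(0.0, 101.0, 25.0)
gx, gy = np.meshgrid(xi, yi)

# Interpolate values at the grid nodes; the result is the numeric
# matrix (raster file) from which contour lines are then drawn
grid = griddata(points, values, (gx, gy), method="linear")
print(grid.shape)  # (5, 5)
```

Contour lines for any user-specified interval would then be traced through this raster by connecting identical numeric values.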
One of many advantages of computer-based contouring using software programs is that
the interpolated grid, or raster file, can be displayed in a variety of ways as illustrated in
Figures 4.5 and 4.6. Observing the interpolated surface with and without contours super-
imposed, rotating and viewing it from different vertical angles, or using different colors
and resolution enables the user to better interpret various features of the surface and their
significance for the ongoing project. The most important benefit of having raster files (i.e.,
numeric values) is that they can be effortlessly transferred between various programs and
used for quantitative analyses. This includes calculation of volumes between surfaces,
areas between contours, or surface gradients (slopes), for example. These calculations are
commonly referred to as grid math or grid calculus and have numerous applications in
professional hydrogeology.
FIGURE 4.4
Portion of a contour map created using a computer program. Top: Eight discrete values measured in the field are
shown with blue circles and black numbers with no decimal digits. The model calculates interpolated values and
assigns them to all grid nodes. Several interpolated values (red numbers with five decimal digits) are shown with
small red circles at intersections of dashed grid lines. Contour lines, with a contour interval of five, connect the
same parameter values in the grid. Bottom: Same map with a contour interval of two; it is advisable to create sev-
eral maps with different contour intervals as they may better reveal local variations in the parameter values.
FIGURE 4.5
Left: Contour map of a water table influenced by three pumping wells near a river. Dashes on contour lines
indicate groundwater flow (gradient) direction. Orange circles are locations of wells with field measurements.
Right: The same map shown as a colored raster. The inset reveals the visual advantage of color raster maps in
emphasizing details (differences).
The use of grid math to support documentation of monitored natural attenuation (MNA)
processes at hazardous-waste sites is described in Chapter 8.
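Grid math itself amounts to element-by-element arithmetic on co-registered rasters. A minimal sketch, using hypothetical surfaces and cell size, computes the volume between two surfaces and the slope of one of them:

```python
# A sketch of "grid math" on two rasters sharing the same grid:
# the difference between a water-table surface and an aquitard-top
# surface gives saturated thickness, and summing cell thickness
# times cell area gives a volume. All values are hypothetical.
import numpy as np

cell = 10.0  # grid spacing in meters (uniform in x and y)
water_table = np.full((4, 4), 25.0)    # elevation raster, m
aquitard_top = np.full((4, 4), 20.0)   # elevation raster, m

thickness = water_table - aquitard_top  # cell-by-cell subtraction
volume = thickness.sum() * cell * cell  # m^3 between the surfaces
print(volume)  # 8000.0

# Surface gradient (slope) of the water table by finite differences
dy, dx = np.gradient(water_table, cell)
print(float(np.abs(dx).max()))  # 0.0 for this flat surface
```

The same cell-by-cell logic underlies volume, area, and gradient tools in commercial gridding packages.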
Figure 4.7 shows contours created by different interpolation methods using the same
data set of the top surface of an actual clay aquitard. It is obvious that some of the maps
look very different from others, and some are quite similar. While, in most cases, it would
be rather easy to decide which maps (contouring methods) do not make much sense, it
will be more challenging to select the right one based solely on the first visual impres-
sion even if a degree of professional judgment is involved. Therefore, it is desirable to use
spatial interpolation models that also statistically describe the accuracy of the interpolated
surface. Spatial interpolation models and uncertainty analysis in contouring are described
in detail in Section 4.2.3.
FIGURE 4.6
(a, b) Two 3D views of the water table surface map shown in Figure 4.5.
FIGURE 4.7
Contours of the top of a clay aquitard created by various contouring methods available in Surfer.
Deterministic interpolation methods do not measure the uncertainty of interpolated predictions, with the notable
exception of local polynomial interpolation (LPI; Krivoruchko 2011). Despite this obvi-
ous limitation, deterministic models may be useful where spatial correlation cannot be
discerned within a data set. For example, soil or sediment concentrations of hydrophobic
organic contaminants, such as polychlorinated biphenyls, can have extremely high local
variability and disperse disposal locations that make data appear random and ill-suited
for geostatistical modeling. However, quite frequently inadequate data exploration is con-
ducted before choosing a spatial interpolation model, and the hydrogeologist may not
recognize that data transformations and/or detrending may satisfy statistical assump-
tions and justify the use of geostatistical methods (see Section 4.5).
Deterministic models are commonly used in professional practice, and it is likely they
will persist in contouring software indefinitely because of their speed and simplicity.
Consultants often prefer a model that is easier to understand and explain/document to
stakeholders than one that requires more technical rigor, thereby avoiding a costly (or per-
ceptively costly) “science project.” Deterministic models commonly applied in hydrogeol-
ogy and earth sciences are briefly discussed below.
Minimum Curvature
Contouring with minimum curvature is widely used in the earth sciences because this
method generates the smoothest possible surface while still closely honoring the original
data points. Minimum curvature is not an exact interpolator, however, and often does
not show expected breaks in contours that may be the result of natural influences. Some
programs, such as Surfer, alleviate this shortcoming by allowing the user to introduce so-
called breaklines and fault lines into the gridding process (note that these features can also
be used with most other contouring methods in Surfer; Golden Software 2002).
Inverse Distance Weighting
In inverse distance weighting (IDW) interpolation, the influence of one data point relative
to another decreases with the power of distance. The greater the weighting power is, the
less effect points far from the grid node have during interpolation. As the power increases,
the grid node value approaches the value of the nearest point (i.e., less averaging). For
a smaller power, the weights are more evenly distributed among the neighboring data
points (i.e., more averaging).
Normally, inverse distance to a power behaves as an exact interpolator. When calculat-
ing a grid node, the weights assigned to the data points are fractions, and the sum of all the
weights is equal to 1.0. When a particular observation is coincident with a grid node, the
distance between that observation and the grid node is 0.0, and that observation is given
a weight of 1.0, while all other observations are given weights of 0.0. Thus, the grid node
is assigned the value of the coincident observation. Although using a smoothing param-
eter somewhat buffers this behavior, the resulting maps often have a bull’s-eye pattern
around the positions of data points as shown in Figure 4.8 and also previously in Figure 4.7.
Software smoothing of contours can only slightly reduce this effect but does not eliminate
unnatural depressions and mounds (Golden Software 2002).
As there is no rational basis for choosing the weighting power for IDW interpolation, the
contouring parameters should be selected based on minimizing cross-validation error (see
Section 4.2.3.4). When data are dense and spatially correlated, the optimal power will be
close to 3, whereas when data are spatially independent, the optimal power will be close
to 1 (Krivoruchko 2011). If data are truly independent, spatial interpolation cannot be used
to make conclusions as there is no justification for basing new predictions on existing data
(in this case, interpolation should be used for display purposes only). As explained in the
work of Krivoruchko (2011), Geostatistical Analyst does not allow specification of a weighting
power less than 1 because this can cause data points far away from prediction locations
to influence predictions more than those that are close. This contradicts the fundamental
assumption of spatial correlation and is nonsensical. Surfer does allow specification of
powers between 0 and 1, and the user is thus cautioned against this practice.
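The weighting behavior described above can be sketched in a few lines. This is a minimal, hypothetical IDW illustration (including the exact-interpolation handling at zero distance), not the algorithm of any particular program:

```python
# Minimal IDW interpolation at one grid node, illustrating the
# effect of the weighting power. All values are hypothetical.
import numpy as np

def idw(point, points, values, power=2.0):
    d = np.linalg.norm(points - point, axis=1)
    if np.any(d == 0.0):               # coincident observation:
        return values[np.argmin(d)]    # weight 1.0, all others 0.0
    w = 1.0 / d**power
    w /= w.sum()                       # weights sum to 1.0
    return float(np.sum(w * values))

pts = np.array([[0.0, 0.0], [10.0, 0.0]])
vals = np.array([100.0, 0.0])
node = np.array([2.0, 0.0])

# A higher power pushes the estimate toward the nearest measurement
print(idw(node, pts, vals, power=1.0))
print(idw(node, pts, vals, power=4.0))
```

Running the sketch shows the grid-node value approaching the nearest observation (100) as the power increases, which is the averaging behavior described in the text.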
FIGURE 4.8
Contours of the top of an aquitard created using an IDW interpolation with default settings in Geostatistical
Analyst. Measurement locations indicated by blue circles.
Radial Basis Functions
Radial basis function (RBF) methods are all exact interpolators and closely preserve original data. However, unlike IDW inter-
polation, RBFs can predict values higher and lower than the maximum and minimum
measured values, respectively. RBFs are well suited for gently varying data sets, such as
hydraulic head, but do not produce good results for highly variable data sets (Krivoruchko
2011). In terms of the ability to fit data and produce a smooth surface, the multiquadric
method is considered by many to be the best. A smoothing factor can be introduced to all
the methods in an attempt to produce a smoother surface (Golden Software 2002). More
detail on contouring with RBFs can be found in the work of Carlson and Foley (1991).
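Readers who wish to experiment can reach for SciPy's RBFInterpolator, which includes a multiquadric kernel (SciPy 1.7 or later). The data and shape parameter epsilon below are hypothetical; the sketch simply demonstrates the exact-interpolation property noted above:

```python
# A sketch of multiquadric RBF interpolation with SciPy.
# Data (heads at four wells) are hypothetical.
import numpy as np
from scipy.interpolate import RBFInterpolator

pts = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
heads = np.array([10.0, 12.0, 12.0, 10.0])

rbf = RBFInterpolator(pts, heads, kernel="multiquadric", epsilon=1.0)

# Exact interpolator: predictions at the data points honor the data
print(rbf(pts))  # ~[10., 12., 12., 10.]

# Prediction at an unsampled location
print(float(rbf(np.array([[5.0, 5.0]]))[0]))
```

A nonzero smoothing argument would relax the exactness, analogous to the smoothing factor mentioned for Surfer.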
In geostatistical models, rather than being computed from an arbitrary function of distance
(as is the case in deterministic models), weights are determined from the observed data
through the semivariogram model (see Section 4.3.1). Interpolation weights dictate how
each measured value contributes to the prediction at an unsampled location (Webster and
Oliver 2001). Geostatistical models produce predictions and prediction error estimates at
all unsampled locations.
The primary geostatistical model in existence is kriging, which is one of the most robust
and widely used methods for interpolation and contouring in many scientific fields. Figure
4.9 shows a contour map created by Geostatistical Analyst using default kriging and the
same data set of the top-of-clay aquitard shown in Figure 4.7. Kriging is known as the
optimal interpolation method because it minimizes the mean square error of predictions
and is statistically unbiased—predicted values and measured values coincide on average
(Webster and Oliver 2001). The statistical assumptions behind kriging and the overall geo-
statistical modeling process are described in detail in Section 4.3.
FIGURE 4.9
Contours of the aquitard created using kriging with default settings in Geostatistical Analyst. Note significant
differences versus the IDW surface depicted in Figure 4.8.
Both Geostatistical Analyst and Surfer can detrend data before or after variography and contouring using LPI.
Natural physical processes can also have preferred orientations. For example, coarse ero-
sion material brought from land settles out fastest near the shoreline, while the finer mate-
rial takes longer to settle and travels farther. Thus, the closer one is to the shoreline, the
coarser the sediments (higher hydraulic conductivity), and the farther from the
shoreline, the finer the sediments (lower hydraulic conductivity). When interpolating at a
point, an observation 100 m away but in a direction parallel to the shoreline is more likely
to be similar to the value at the interpolation point than is an equidistant observation in a
direction perpendicular to the shoreline (Golden Software 2002). The property exhibited by
this example, in which spatial dependence varies in different directions at small to moderate
distances between points, is termed anisotropy. Data sets in which the range of spatial corre-
lation changes with direction exhibit geometric anisotropy, and data sets in which variance
changes with direction exhibit zonal anisotropy. As kriging assumes a constant variance
(see Section 4.3), kriging models cannot account for zonal anisotropy (Krivoruchko 2011).
However, any robust contouring program can account for geometric anisotropy by enabling
the user to change the range (distance) of correlation with direction.
The kriging algorithm accounts for anisotropy by converting the searching neighbor-
hood into an ellipse. The ellipse is specified by the lengths of its two orthogonal axes and
by an orientation angle. The elliptical shape of the searching neighborhood is depicted in
the lower left corner of the images in Figure 4.10. Different contouring programs may use
different approaches in defining and specifying the orientation and shape of the ellipse.
In Surfer, the lengths of the axes are called Radius 1 and Radius 2, and in Geostatistical
Analyst, they are termed the Minor Range and the Major Range. The orientation angle is
defined as the counterclockwise angle between the positive X axis and Radius 1. The rela-
tive interpolation weighting is defined by the anisotropy ratio, which is the maximum axis
of the ellipse divided by the minimum axis of the ellipse (Radius 2 divided by Radius 1 in
Surfer). An anisotropy ratio less than 2 is considered mild, and an anisotropy ratio greater
than 4 is considered severe. Typically, when the anisotropy ratio is greater than 3, its effect
is clearly visible on grid-based maps (Golden Software 2002).
It is important to note that contouring programs also enable incorporation of anisotropy in
deterministic models such as IDW. This may seem counterintuitive as deterministic models
are not based on spatial correlation, and therefore, they should not be able to account for direc-
tional changes in spatial correlation. Geostatistical Analyst and Surfer simulate anisotropy for
deterministic models by warping the coordinate system such that the distance in one direction
changes faster than the distance in another. This transformation creates an ellipse similar to
the correct use of anisotropy in geostatistical interpolation (Krivoruchko 2011).
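The coordinate warping described above can be sketched directly. The ellipse orientation angle and anisotropy ratio below are hypothetical; the point is that a separation along the major axis is scaled down, so observations in that direction behave as if they were closer:

```python
# Sketch of simulating anisotropy by warping the coordinate system:
# rotate into the ellipse frame, then scale the major-axis component
# by the anisotropy ratio. Angle and ratio are hypothetical.
import numpy as np

def anisotropic_distance(a, b, angle_deg=30.0, ratio=3.0):
    """Distance after rotating by the ellipse orientation and
    shrinking the major-axis separation by the anisotropy ratio."""
    t = np.radians(angle_deg)
    rot = np.array([[np.cos(t), np.sin(t)],
                    [-np.sin(t), np.cos(t)]])
    d = rot @ (np.asarray(b, float) - np.asarray(a, float))
    d[0] /= ratio  # major-axis separation counts for less
    return float(np.hypot(d[0], d[1]))

# A point 90 m away along the major axis behaves as if 30 m away
b = [90 * np.cos(np.radians(30.0)), 90 * np.sin(np.radians(30.0))]
print(anisotropic_distance([0.0, 0.0], b))
```

Feeding such warped distances into an IDW weight function reproduces the elliptical searching neighborhood without any spatial-correlation model, which is exactly why the practice is described as a simulation of anisotropy.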
FIGURE 4.10
Example directional semivariograms for water-level data in an aquifer. The top image shows a search angle of
140°, and the bottom image shows a search angle of 50°. Note that the separation distance on the X axis has been
adjusted by a factor of 10^−2. The Y axis shows the calculated semivariance (γ) for each data pair, which changes
significantly based on search direction. Esri® ArcGIS Geostatistical Analyst graphical user interface. Copyright
© Esri. All rights reserved.
Groundwater flow systems are controlled by recharge and discharge boundaries like surface
water features. These boundary conditions create systematic changes in groundwater
elevation as water moves from high head to low head (i.e., water
flows downhill). Regional potentiometric head measurements, in particular, demonstrate
significant trend (Kitanidis 1997). Therefore, it is appropriate to consider the strong direc-
tional dependence of the data an example of trend rather than anisotropy and the data
should be detrended prior to kriging. If directional dependence is still observed in the
semivariogram after detrending, it is likely that anisotropy also exists. A major cause of
anisotropy in groundwater elevations is variations in hydraulic conductivity that follow
preferred depositional orientations. To summarize, the large-scale variation (i.e., trend)
observed in this example is caused by far-field hydraulic boundary conditions, and the
small-scale variation is caused by preferred depositional patterns (i.e., anisotropy).
A more empirical approach to distinguishing trend from anisotropy on a semivario-
gram is related to the concept of stationarity, which is a required condition for kriging
estimates to be valid. Data are stationary if there is a constant mean and variance across
the data domain (with a small variance compared to the size of the domain), and data cor-
relation depends only on separation distance and direction rather than absolute location.
The omnidirectional semivariogram shown in Figure 4.11 (top) is indicative of nonstation-
arity as the semivariogram increases exponentially and does not approach an asymptote
that represents the variance of the data (termed the sill, see Section 4.3; Kitanidis 1997).
This implies that there is an infinite range of correlation within the data, which is likely
caused by a large-scale, deterministic trend. As trend is causing data to be nonstationary
in this case, it must be removed prior to kriging to establish a separation distance at which
data values are no longer correlated. Figure 4.11 (bottom) depicts the semivariogram of
residuals after removing a second-order trend. A finite range of correlation and a sill are
now observed, indicative of stationary behavior.
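The detrend-then-variography workflow can be sketched as follows, using synthetic data with a deliberately strong quadratic trend. The semivariogram function here is a bare-bones empirical estimator under stated assumptions, not the fitting routine of Surfer or Geostatistical Analyst:

```python
# Second-order (quadratic) detrending by least squares, followed by
# an empirical semivariogram of the residuals. Data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 200
xy = rng.uniform(0, 1000, size=(n, 2))
trend = 50 + 0.02 * xy[:, 0] + 1e-5 * xy[:, 0]**2
z = trend + rng.normal(0, 0.5, n)      # trend plus stationary noise

# Fit a second-order polynomial trend surface
X = np.column_stack([np.ones(n), xy[:, 0], xy[:, 1],
                     xy[:, 0]**2, xy[:, 0] * xy[:, 1], xy[:, 1]**2])
coef, *_ = np.linalg.lstsq(X, z, rcond=None)
resid = z - X @ coef                    # residuals to be kriged

# Empirical semivariogram: mean of 0.5*(z_i - z_j)^2 per lag bin
def semivariogram(xy, v, lags):
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    sq = 0.5 * (v[:, None] - v[None, :])**2
    i, j = np.triu_indices(len(v), k=1)
    gamma = []
    for lo, hi in zip(lags[:-1], lags[1:]):
        m = (d[i, j] >= lo) & (d[i, j] < hi)
        gamma.append(sq[i, j][m].mean() if m.any() else np.nan)
    return np.array(gamma)

lags = np.arange(0, 600, 100)
print(semivariogram(xy, resid, lags))  # levels off near var(resid)
```

Run on the raw values z instead of the residuals, the same estimator climbs without reaching a sill, which is the diagnostic signature of nonstationarity described above.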
The Geostatistical Analyst extension to ArcGIS has a very useful trend analysis tool
that can also assist in the decision-making process. A screen shot of the trend analysis
tool analyzing the water-level data presented in the Figures 4.10–4.11 semivariograms is
presented in Figure 4.12 and demonstrates the presence of a strong second-order trend.
The benefits of detrending these data to create a stationary variable are clear when com-
paring Figure 4.13, created without detrending, to Figure 4.14, created with second-order
detrending. Both figures are kriged surfaces of the water-level data presented on the semi-
variograms in this section. The kriged surface in Figure 4.14 better captures the heteroge-
neity of the true groundwater contours and produces much better estimates in unsampled
locations. This occurs because detrending eliminates large-scale bias and focuses the krig-
ing model on local variation.
FIGURE 4.11
Omnidirectional semivariograms for the water-level data presented in Figure 4.10. The top semivariogram shows
infinite correlation, indicating the presence of trend, causing a variable mean and nonstationarity. The bottom semi-
variogram is produced after second-order detrending and the residuals have a definitive sill and are stationary. Esri®
ArcGIS Geostatistical Analyst graphical user interface. Copyright © Esri. All rights reserved.
FIGURE 4.12
Trend analysis tool in Geostatistical Analyst with data points plotted along X, Y, and Z axes and a second-
order regression line successfully fitted to the data. Esri® ArcGIS Geostatistical Analyst graphical user interface.
Copyright © Esri. All rights reserved.
FIGURE 4.13
Kriged potentiometric surface without detrending. Black lines and labels represent the actual water-level eleva-
tion contours (created synthetically using a numeric groundwater model), and the colored filled contours repre-
sent the kriged surface interpolated with point measurements displayed and labeled in purple.
FIGURE 4.14
Kriged potentiometric surface after removing a second-order trend from the data. Detrending significantly
improves the match between simulated (observed) and kriged contours, most notably in areas with low sam-
pling density.
Cross-validation provides a quantitative assessment of the relative quality of the grid by computing and investigating the gridding errors, also
referred to as residuals. The gridding errors are calculated by removing the first observa-
tion from the data set of N values and using the remaining data and the specified algo-
rithm to interpolate a value at the first observation location. Using the known observation
value at this location, the interpolation error is computed as the difference between the
interpolated and the measured value:

ε₁ = ẑ(x₁) − z(x₁)

where ẑ(x₁) is the value interpolated at the first observation location and z(x₁) is the value
measured there.
Then, the first observation is returned into the data set, and the second observation is
removed from the data set. Using the remaining data (including the first observation)
and the specified algorithm, a value is interpolated at the second observation location.
Using the known observation value at this location, the interpolation error is computed as
before. The second observation is returned into the data set, and the process is continued
in this fashion for the third, fourth, fifth observations, etc.—all the way up to and includ-
ing observation N. This process generates N interpolation errors (Golden Software 2002).
After computing the cross-validation errors, the mean error (same units as the data,
measuring the prediction bias) and the root-mean-square error (measuring prediction
accuracy) can be calculated for all spatial interpolation models. As deterministic mod-
els (e.g., IDW) cannot estimate prediction uncertainty, these are the only two statistical
metrics that can be calculated. Because geostatistical models also provide the prediction
standard error or prediction standard deviation (same units as data) of each prediction
location, three additional metrics can be calculated: mean standardized error (dimension-
less), average standard error (analogous to root mean square error), and root-mean-square
standardized error (measuring the assessment of prediction variability). The concept of
prediction standard error in kriging is described further in Section 4.3.2.
In a sense, two separate sets of model diagnostic statistics are available. The first, com-
posed of the mean error and root-mean-square error, reflects the absolute ability of the
model to make accurate predictions. The second set, composed of the standardized sta-
tistics only available in kriging, reflects the accuracy of the model considering the vari-
ability of the data itself. A good model will have similar statistics between these two sets
of metrics, indicating that the model makes good predictions that accurately reflect the
variability of the data. In summation, cross-validation diagnostics provide a quantitative,
objective measure of the quality of an interpolation model and allow its comparison with
other models.
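The leave-one-out procedure and the two metrics available to any interpolation model (mean error for bias, root-mean-square error for accuracy) can be sketched as follows. A bare-bones IDW predictor stands in for the interpolation algorithm, and all data are hypothetical:

```python
# Leave-one-out cross-validation with mean error and RMSE.
# A simple IDW predictor stands in for the gridding algorithm.
import numpy as np

def idw_predict(point, points, values, power=2.0):
    d = np.linalg.norm(points - point, axis=1)
    w = 1.0 / np.maximum(d, 1e-12)**power
    return float(np.sum(w * values) / w.sum())

pts = np.array([[0., 0.], [10., 0.], [0., 10.], [10., 10.], [5., 5.]])
vals = np.array([10.0, 12.0, 12.0, 10.0, 11.0])

errors = []
for k in range(len(vals)):
    mask = np.arange(len(vals)) != k        # remove observation k
    pred = idw_predict(pts[k], pts[mask], vals[mask])
    errors.append(pred - vals[k])           # interpolation error

errors = np.array(errors)
mean_error = errors.mean()                  # prediction bias
rmse = np.sqrt((errors**2).mean())          # prediction accuracy
print(mean_error, rmse)
```

The standardized metrics discussed next cannot be computed here because a deterministic predictor supplies no prediction standard errors; that is precisely the limitation noted in the text.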
Figure 4.15 is a screen shot comparison of cross-validation results displayed in
Geostatistical Analyst for the top of aquitard grids created with the default IDW and krig-
ing models, displayed in Figures 4.8 and 4.9, respectively. When interpreting the cross-
validation results, one should have in mind the following general rules (Esri 2003):
• Predictions should be as close to the measurement values as possible, that is, the
scatter along the straight line on the graphs in Figure 4.15 should be minimal;
the smaller the root-mean-square prediction error, the better. The default kriging
model performs better than the default IDW model in this case.
• Predictions should be unbiased (centered on the measurement values). If the pre-
diction errors are unbiased, the mean prediction error should be near zero, and
the slope of the regression lines in Figure 4.15 should be 1:1. However, this value
depends on the scale and units of the data, so it is better to look at standardized
prediction errors, which are given as prediction errors divided by their prediction
FIGURE 4.15
Cross-validation comparison between default IDW and kriging models. Note that statistics involving predic-
tion standard error are only available for the kriging model. Esri® ArcGIS Geostatistical Analyst graphical user
interface. Copyright © Esri. All rights reserved.
standard errors (available only in the kriging model). The mean of these should
also be near zero.
• It is important to assess the variability of predictions, that is, the validity of predic-
tion standard errors. If the average standard error is close to the root-mean-square
prediction error, the variability in prediction is correctly assessed, and the root-
mean-square standardized error should be close to one. If the average standard
error is greater than the root-mean-square prediction error or if the root-mean-
square standardized error is less than one, then the variability of predictions is
overestimated. If the average standard error is less than the root-mean-square
prediction error or if the root-mean-square standardized error is greater than one,
then the variability of predictions is underestimated. Based on these guidelines,
the default kriging model in Figure 4.15 is underestimating variability and needs
a higher partial sill and/or nugget.
A good geostatistical model has a standardized mean near zero, a small root-mean-square
prediction error, an average standard error approximately equal to the root-mean-square
prediction error, and a standardized root-mean-square prediction error close to one (Esri
2003).
Another form of model diagnostic analysis that is less used in professional practice is
validation. Validation goes further than cross-validation in testing the model as an entire
subset of data is removed from the input. Interpolated predictions at these locations can
then be compared to the actual measured values, and statistical analysis of the errors can
be conducted. Validation is similar to verification of a groundwater model as it tests how
well the model can predict values in unknown areas as opposed to individual locations as
is the case with cross-validation. Geostatistical Analyst has a validation toolset to assist in
removing data subsets and analyzing the associated errors.
Both unadjusted cross-validation or validation errors (residuals) and the prediction
standard errors (for kriging only) can be displayed in the form of a map, which enables
analysis of possible reasons for a specific spatial distribution of errors. Contours of cross-
validation errors for the IDW and kriging models of the aquitard surface are presented
in Figures 4.16 and 4.17, respectively. Together with the original contour maps and error
statistics, the error maps are used to compare different gridding methods and aid in
selecting the most appropriate one for the project at hand. In terms of both the visual
appearance of the contours (Figures 4.8 and 4.9) and the statistical performance of the
interpolation (Figures 4.15 through 4.17), kriging is superior to IDW in this particular
case. This is not coincidental because kriging is also superior to all other methods when
applied appropriately (i.e., it is the optimal interpolator). Statistical model comparison is
additionally explained in Section 4.5. However, for various reasons, including perceived
complexity, many hydrogeologists still do not use kriging, which is described in detail in
Section 4.3.
It is also important to note that selection of the appropriate spatial interpolation model
should not solely rely on statistical diagnostics. Cross-validation and validation are highly
sensitive to measurement error and can poorly reflect stable features of the data. Statistics
can also give the impression that a model is more accurate than it truly is. Krivoruchko
(2011) illustrates this phenomenon through a synthetic problem. First, 150 data points are
randomly generated using a synthetic kriging model. Then two kriging models are fit to
FIGURE 4.16
Cross-validation error (residual) map for the default IDW model. The greatest error is approximately 40 ft.
FIGURE 4.17
Cross-validation error (residual) map for the default kriging model. The greatest error is approximately 26 ft.
the data set—the first with the same parameters as the synthetic model, and the second
with estimated model parameters. Using both cross-validation and validation, the esti-
mated model performs better statistically than the synthetic model (i.e., better than the
model that was used to create the data to begin with).
Faced with this uncertainty, the hydrogeologist must take into consideration his or her
knowledge of the processes that contributed to the current spatial distribution of the data
(i.e., the CSM). Additionally, cross-validation statistics may be more important in one part
of the data set than others; therefore, the overall statistics may not be as important as the
errors in specific locations (e.g., around potential receptors in groundwater contamination
studies). Beyond statistics, the qualitative appearance of the interpolated surface is impor-
tant in its own right, especially as it pertains to the CSM. Therefore, it is acceptable to select
a geostatistical model that is less accurate statistically but better reflects the physical and
chemical aspects of the CSM.
4.3 Kriging
As described earlier, kriging is a geostatistical method that takes into consideration spatial
variance, location, and sample distribution. While often erroneously ignored in professional
practice, kriging has several underlying assumptions that should be assessed to
ensure proper application and evaluate model limitations (Krivoruchko 2011):
• The data to be kriged are stationary, which means that there is a constant mean
and variance across the data domain, and data correlation depends only on sepa-
ration distance and direction rather than absolute location.
• The semivariogram or covariance model (see Section 4.3.1) describing spatial cor-
relation is known exactly and is consistent throughout the data domain.
• Sample locations are independent of the data values (i.e., the sample locations are
random).
• While not specifically stated in the classical derivation of kriging, an assumption
generally accepted by statisticians is that kriging is a Gaussian predictor, meaning
that its predictions are optimal when the data follow a normal distribution.
It is acknowledged that, in practice, real data will never follow all of the above assump-
tions; however, some practical recommendations are listed below:
There are various interpolation (gridding) models within the kriging method, and the
most applicable one to the existing data set can be chosen after determining a theoreti-
cal semivariogram or statistical function (model) that best describes the underlying field
data. Although all major contouring programs on the market include kriging, few allow
for user-friendly, visual generation of a semivariogram by interactively adjusting all its
key parameters. Therefore, in order to select an appropriate kriging method, it may be
necessary to generate the semivariogram using some external program. Both Surfer and
Geostatistical Analyst include very powerful and simple-to-use options for generating
experimental and theoretical semivariograms and creating contour maps with various
kriging methods.
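The ordinary kriging system itself is compact enough to sketch directly. The exponential semivariogram model and its sill and range below are hypothetical assumptions for illustration; in practice these parameters come from fitting the experimental semivariogram:

```python
# Minimal ordinary-kriging sketch: weights come from solving the
# kriging system built from a semivariogram model (here an assumed
# exponential model with hypothetical sill and range), not from an
# arbitrary function of distance as in deterministic methods.
import numpy as np

def gamma(h, sill=4.0, effective_range=300.0):
    # Exponential semivariogram model (zero nugget assumed)
    return sill * (1.0 - np.exp(-3.0 * h / effective_range))

pts = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])
vals = np.array([20.0, 22.0, 24.0])
x0 = np.array([40.0, 40.0])          # unsampled prediction location

n = len(vals)
# Kriging matrix: semivariances between data points, plus the
# Lagrange row/column enforcing that weights sum to 1 (unbiasedness)
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
A = np.ones((n + 1, n + 1))
A[:n, :n] = gamma(D)
A[n, n] = 0.0
b = np.ones(n + 1)
b[:n] = gamma(np.linalg.norm(pts - x0, axis=1))

sol = np.linalg.solve(A, b)
weights = sol[:n]                              # kriging weights
estimate = float(weights @ vals)               # prediction at x0
variance = float(b[:n] @ weights + sol[n])     # kriging variance
print(estimate, variance)
```

The kriging variance produced alongside the estimate is what deterministic methods lack, and it is the quantity behind the prediction standard errors discussed in Section 4.2.3.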
Figure 4.18 shows the top of a clay aquitard map created with default kriging in Surfer
using the same data set applied with default kriging in Geostatistical Analyst to produce
the map shown in Figure 4.9. As expected, both programs produce similar maps because
the applied geostatistical theory is the same. However, minor discrepancies exist as the
default semivariogram and kriging choices in these two programs are different, and the
user should make every effort to understand kriging methods implemented in either pro-
gram and learn basic principles of creating semivariograms interactively. For example, the
default model in Surfer does not incorporate a nugget effect, which makes the model an
exact interpolator and produces a surface (Figure 4.18) that is less smooth than that shown
in Figure 4.9. This has important conceptual and statistical implications (see Section 4.3.2
for additional discussion regarding the nugget effect).
In Surfer, the default option for kriging is a linear semivariogram model, with the program automatically fitting all experimental data using least squares. This automated
fitting may or may not result in a nugget effect, but in any case, the user is left in the dark as
to what is being implemented. Consequently, the created map may sometimes make very
little sense, and the user may blame kriging as an inappropriate method without examin-
ing the experimental variogram of the data first. An example of one such automated map
of dissolved contaminant concentration in groundwater created by Geostatistical Analyst
FIGURE 4.18
Contours of the aquitard created using kriging with default settings in Surfer. Note minor differences versus
the Geostatistical Analyst surface depicted in Figure 4.9.
FIGURE 4.19
Nonsensical contaminant concentration contour map created using default kriging in Geostatistical Analyst.
Black lines and concentration labels represent the actual concentration contours in micrograms per liter (created
synthetically using a numeric groundwater model), and the colored filled contours represent the kriged surface
interpolated with the point measurements displayed and labeled in purple.
is shown in Figure 4.19. Comparison of this colored plume map with the theoretical con-
taminant plume contours shown on the same figure clearly demonstrates why default
choices in any contouring program (i.e., mindless button pressing) are not recommended.
In contrast, when hydrogeologists use the full power of interactive variography and apply
their knowledge of the underlying physical processes, the results are much more favorable
as demonstrated in Section 4.5 for the same data set.
4.3.1 Variography
Variography is the term often used to describe the process of evaluating spatial correlation
between data measured in the field. If such correlation is evident, it includes determin-
ing which geostatistical model is the most appropriate to describe it quantitatively. The
selected model is then used to predict (interpolate) the variable in question at the unsampled locations, which is part of another process called kriging. Variography includes the steps described below.
The covariance (SXY) between two variables X and Y that are not regionalized variables
(i.e., not spatially correlated) is
S_XY = [1/(n − 2)] · Σ_{i=1}^{n} (x_i − x_av) · (y_i − y_av)    (4.1)
where n is the number of paired data, x_i and y_i are individual values of each variable, and
x_av and y_av are the average values of each variable.
Semivariance (γ_i) for each pair of observation points of the spatially correlated (regionalized) variable is (see Figure 4.20)
γ_i = (1/2) · (z_i,head − z_i,tail)²    (4.2)
Semivariance is calculated for all possible data pairs, which is often a fairly large number:
total number of pairs = n(n − 1)/2    (4.3)
FIGURE 4.20
Semivariance is calculated for each pair of observations, separated by distance h.
where n is the number of measured data points. This information is then presented in two
ways (see Figure 4.21):
• As a semivariogram cloud, where all semivariances for all pairs are plotted against
their respective individual separation distances. This graph contains a large num-
ber of data and does not reveal much in terms of spatial correlation between the
data.
• As an experimental semivariogram, where all semivariances within one separa-
tion distance or lag (also called bin) are averaged and plotted against the average
separation distance between the data pairs within that lag.
Figure 4.22 illustrates the process of calculating semivariances within one lag and then
moving to the next lag. The average variance for one lag is
2γ*(h) = (1/n) · Σ_{i=1}^{n} [g(x_i) − g(x_i + h)]²    (4.4)
where
γ*(h) is the experimental semivariance for given distance (lag) h
h is the separation distance between the two data points
g is the value of the sample (data point)
x is the position of one sample in the pair
x + h is the position of the other sample in the pair
n is the number of pairs in the calculation
FIGURE 4.21
Semivariogram in Geostatistical Analyst where red circles represent semivariances that have been binned but
not averaged (similar but not exactly equal to the cloud), and blue crosses represent the binned, averaged
values where the spatial correlation is readily apparent. Note that the semivariance (γ) labels on the Y axis are
scaled by a factor of 10⁻¹. Esri® ArcGIS Geostatistical Analyst graphical user interface. Copyright © Esri. All
rights reserved.
FIGURE 4.22
Calculation pattern for an omnidirectional (isotropic) variogram, looking at one data point at a time. Left:
Separation distance between the black data point and the other data points inside the shaded circle is between 0 and 50 ft,
that is, the lag width is 50 ft. This smallest separation distance (chosen by the user arbitrarily) is called Lag 1
(Bin 1). As the calculation window moves throughout the sampled area from one point to another, each new
step produces a number of pairs, which are all added to Lag 1. Right: Separation distance between the black data point
and the data points inside the shaded area is between 50 and 100 ft, that is, the lag width is also 50 ft. This separation
distance is called Lag 2 (Bin 2). As the calculation window moves throughout the sampled area from one point
to another, each new step produces a number of pairs, which are all added to Lag 2.
The calculated value (divided by two) is the measure of the difference between the two
data points, which are the distance h apart. The semivariogram measure plotted on the
vertical graph axis is in squared units of data (in the case of the hydraulic head, it is
squared distance or ft²). The lag is usually given a tolerance (say, 50 ft ± 10 ft) so that
more spatial information is included; note that most groundwater information in the
field is collected from irregularly spaced sampling points, so the exact lag distance (say,
50 ft) may lead to fewer calculated values for plotting the experimental semivariogram
graph. This tolerance should not be greater than one-half the basic distance. For example,
if the basic distance h is 50 ft, the tolerance should be about 20 ft. This means all pairs
that fall between 30 and 70 ft will be included in the calculation of the semivariogram.
The process is then repeated for as many new lags as possible. Each new distance (lag)
is increased by the basic interval (say, h = 50 ft + 50 ft = 100 ft; h = 150 ft; h = 200 ft, etc.),
and the results are plotted on the same graph. The maximum separation distance should
not be larger than one-half the distance between two points in the field data set that are
farthest apart.
Figure 4.23 shows one experimental semivariogram with the number of data pairs per
one lag (also called bin) and the variance of all data in the sample. The sample variance
(s²) is given as

s² = [1/(n − 1)] · Σ_{i=1}^{n} (g_i − g_av)²    (4.5)
FIGURE 4.23
Example of experimental semivariogram.
where g_i are the values of the individual data, and g_av is the sample mean given as

g_av = (1/n) · Σ_{i=1}^{n} g_i    (4.6)
Note that the x coordinates of the individual plotted points are not regularly spaced
because each coordinate is the average separation distance for all pairs included in the
individual bin; these average numbers vary from bin to bin by default because the data
are irregularly spaced. Semivariograms can be plotted for various directions (so-called
directional semivariograms), which is recommended, especially if a certain degree of
anisotropy in data is expected. Directions that produce noticeably different (while still
meaningful) semivariograms may indicate actual anisotropy in the data. Note, however,
that there is a substantial degree of subjectivity in interpreting semivariograms. For a
statistically valid semivariogram, there should be at least 30 pairs of data, which is often
not the case in groundwater studies. It is up to the user to know his or her data well and
make an evaluation as to the reasonable lower limit of data per bin. When the data set is
small, the directional calculation (e.g., all pairs falling within a 30° window ± x degrees
of tolerance) may not produce a meaningful semivariogram even if there is an underlying
anisotropy. In such cases, the calculation is, by default, performed for all possible direc-
tions and for all sample pairs that fall within the calculation interval. This will produce a
global, omnidirectional experimental semivariogram.
FIGURE 4.24
Some of the most common theoretical semivariograms. (From Golden Software, Inc., Surfer 8 User’s Guide:
Contouring and 3D Surface Mapping for Scientists and Engineers. Golden Software, Inc., Golden, CO, 2002.)
Models with true ranges (data are not spatially correlated beyond the specified range):
• Circular
• Spherical
• Tetraspherical
• Pentaspherical
Models that approach the sill asymptotically rather than at a true range:
• Gaussian
• Exponential
• Stable (a hybrid between Gaussian and exponential available in Geostatistical
Analyst)
• K-Bessel and Rational Quadratic (similar models)
• J-Bessel (wave/hole effect)
The shape of the experimental semivariogram should indicate which theoretical model
is appropriate for the given data (with data at small separation distances, i.e., close to
the origin, being most important). Note, however, that multiple models can be used to fit
the same experimental semivariogram. After selecting the appropriate model, the user must determine the values of the three important parameters of the semivariogram (see Figure 4.25):
a    range of influence
C    sill
C0   nugget effect
The range of influence (a) is the distance at which data become independent of one another.
After this point, the graph is horizontal, and the corresponding value of γ is called the sill
of the semivariogram (C). In Surfer, the difference between the sill and the nugget is called
the scale, and in Geostatistical Analyst, it is called the partial sill.
The scale (or partial sill), range, and nugget (where applicable) should be selected such
that the sill of the theoretical function is close to the sample variance. Figure 4.25 shows
a semivariogram with the nugget effect, which is the graph’s intercept at the vertical axis.
Its presence may indicate measurement error or some other component of variability in the data at a scale smaller than the separation distances between the field data. The nugget effect is a constant that raises a theoretical semivariogram C0 units along the vertical axis:

γ(h) = C0 + γ′(h)

where γ′(h) is one of the common theoretical semivariograms shown in Figure 4.24.
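Written as code, a spherical model raised by a nugget looks like the following sketch (the parameter values are illustrative only, not taken from any real data set):

```python
def spherical_with_nugget(h, c0=0.5, c=2.0, a=100.0):
    """Theoretical semivariogram gamma(h) = C0 + gamma'(h), where gamma'
    is a spherical model with partial sill c (Surfer's "scale",
    Geostatistical Analyst's "partial sill") and range of influence a."""
    if h == 0.0:
        return 0.0  # gamma(0) = 0 by definition; the nugget is the jump as h -> 0+
    if h >= a:
        return c0 + c  # the sill: data are uncorrelated beyond the range
    r = h / a
    return c0 + c * (1.5 * r - 0.5 * r ** 3)
```

Note the discontinuity at the origin: gamma is zero at h = 0 but jumps to just above c0 for any positive separation, and the sill equals c0 + c.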
In Surfer, the nugget effect is the sum of the error variance and the micro variance. The
error variance is a measure of the direct repeatability of the data measurements. A good
example is the collection of duplicate samples when analyzing contaminant concentrations
in groundwater. Experience shows that the duplicate sample result is often not exactly the same as
the first measurement. The error variance takes this measurement variability into
account. A nonzero error variance means a particular observed value is not necessarily the
FIGURE 4.25
Example of theoretical function fitted to experimental data, showing three main elements: range, sill, and
nugget.
exact value of the location. Consequently, kriging tends to smooth the surface and does not
behave as an exact interpolator when using a nugget effect.
The micro variance is a measure of variation that occurs at separation distances of less
than the typical nearest neighbor sample spacing. For example, a parameter of interest
may show two spatial structures that can be described with a nested variogram in which
both models are spherical. The range of one of the structures is 100 m, and the range of the
second structure is 5 m. If the closest sample spacing were 10 m, it would not be possible
to see the second structure (5-m structure). The micro variance allows for specifying the
variance of the small-scale structure. To identify possible small-scale variations of the parameter of interest, some field sampling plans include the collection of a number of colocated samples, that is, samples that are very close to each other.
In general, specifying a nugget effect causes kriging to become more of a smoothing
interpolator, implying less confidence in individual data points versus the overall trend of
the data. The higher the nugget effect, the smoother the resulting grid (Golden Software
2002). This means that kriging tends to underpredict large values and overpredict small
values. The nugget effect has important implications with respect to both the appearance of
the interpolated surface (kriging prediction) and the statistical performance of the model
(kriging prediction standard error), a case study of which is presented in Section 4.3.2.
Note that curves can also be fit to a covariance model as opposed to a semivariogram
model. Covariance is the expected product of the deviations of two random variables from
their mean. A covariance model looks like an upside-down semivariogram, and a simple
mathematical relationship can convert covariance functions to semivariogram functions.
In professional practice, the semivariogram model is used much more often because it
does not require a specified mean, and it is better for estimating data correlation at small
distances (Krivoruchko 2011). Covariance models are required for cokriging applications,
described in Sections 4.3.3.2 and 4.5.
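The simple mathematical relationship mentioned above is, for a second-order stationary process whose covariance at zero separation, C(0), equals the sill:

```latex
\gamma(h) = C(0) - C(h)
```

which is why the covariance model looks like an upside-down semivariogram: as the semivariogram rises from zero toward the sill, the covariance falls from the sill toward zero.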
parameter rather than specifying sectors and the number of included data values. This is a
more user-friendly way to change the search neighborhood and see how the resulting sur-
face changes. Note that this is not the same as contour smoothing in Surfer, which simply
adjusts how contours are displayed; it is also not smoothing in the sense of using a nugget
effect to increase averaging and perform inexact interpolation.
• Change the lag size and the number of lags so the range of data correlation (i.e.,
the part of the graph with a positive slope before the asymptote) occupies three
Both Surfer and Geostatistical Analyst have an Optimize button that determines the best
semivariogram model parameters through regression analysis. The user is cautioned
against using the tool because, in the experience of the authors, it tends to weight all semi-
variogram pairs equally and does not give priority to data at small separation distances
near the origin of the graph. Therefore, using this button can result in an erroneously high
nugget and introduce too much variability into the model.
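A simple alternative that preserves the near-origin priority is to fit the model with weights that favor short separation distances. The brute-force search below is our own sketch, not a feature of either program; the weighting w = N(h)/h², which emphasizes well-populated, short-distance bins, is one common choice among several:

```python
def fit_spherical(exp_points, counts, ranges, sills, nuggets):
    """Grid-search (nugget, partial sill, range) for a spherical model by
    minimizing a weighted sum of squared errors against the experimental
    semivariogram points (h_k, gamma_k). Weights N(h)/h^2 give priority
    to bins with many pairs at small separation distances (bins are
    assumed to start above h = 0)."""
    def model(h, c0, c, a):
        if h >= a:
            return c0 + c
        r = h / a
        return c0 + c * (1.5 * r - 0.5 * r ** 3)

    best, best_err = None, float("inf")
    for a in ranges:
        for c in sills:
            for c0 in nuggets:
                err = sum(n / h ** 2 * (g - model(h, c0, c, a)) ** 2
                          for (h, g), n in zip(exp_points, counts))
                if err < best_err:
                    best, best_err = (c0, c, a), err
    return best
```

Here exp_points are the (average distance, average semivariance) pairs of the experimental semivariogram, counts holds the number of pairs in each bin, and the candidate parameter lists can be read off the experimental plot.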
FIGURE 4.26
Prediction standard error map for the surface presented in Figure 4.14. Units are the same as the measurement values (feet).
also be lower in areas with small changes in data values (low variability) and higher in
areas with large changes in data values, such as a contaminant hot spot. However, when
using conventional ordinary kriging (the most commonly used model in professional
practice), only the error resulting from the configuration of the measurement data (i.e., the
relative position of the measurement locations) is reflected (Krivoruchko 2011). In other
words, the standard error at a prediction location is the same regardless of the actual data
values immediately surrounding that location. All that affects standard error are the mea-
surement locations and the semivariogram model (i.e., the variance of the entire data set).
This is a little-appreciated fact and has far-reaching implications because the majority of
users of contouring programs are likely misinterpreting uncertainty in their models.
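This configuration-only behavior can be verified directly from the ordinary kriging equations. The pure-Python sketch below is our own minimal illustration (synthetic locations and values, not the implementation of any commercial package): it predicts at one location, then shuffles the measured values among the same locations; the prediction changes, but the kriging variance does not.

```python
import math

def gamma(h, c0=0.1, c=1.0, a=300.0):
    """Spherical semivariogram with a nugget (illustrative parameters)."""
    if h == 0.0:
        return 0.0
    if h >= a:
        return c0 + c
    r = h / a
    return c0 + c * (1.5 * r - 0.5 * r ** 3)

def solve(A, b):
    """Gaussian elimination with partial pivoting for the small kriging system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= f * M[col][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def ordinary_kriging(pts, vals, x0):
    """Ordinary kriging prediction and variance at location x0.

    System: sum_j w_j*gamma(xi, xj) + mu = gamma(xi, x0), with sum_j w_j = 1.
    Variance: sum_i w_i*gamma(xi, x0) + mu -- the data values never enter it.
    """
    n = len(pts)
    A = [[gamma(math.dist(pts[i], pts[j])) for j in range(n)] + [1.0]
         for i in range(n)]
    A.append([1.0] * n + [0.0])
    b = [gamma(math.dist(p, x0)) for p in pts] + [1.0]
    sol = solve(A, b)
    w, mu = sol[:n], sol[n]
    pred = sum(wi * v for wi, v in zip(w, vals))
    var = sum(wi * g0 for wi, g0 in zip(w, b[:n])) + mu
    return pred, var

pts = [(0.0, 0.0), (50.0, 0.0), (0.0, 50.0), (50.0, 50.0), (200.0, 200.0)]
vals = [290.0, 301.0, 309.0, 320.0, 331.0]
p1, v1 = ordinary_kriging(pts, vals, (10.0, 10.0))
p2, v2 = ordinary_kriging(pts, list(reversed(vals)), (10.0, 10.0))
# p1 differs from p2, but v1 equals v2: the standard error ignores the values
```

Because the data values appear only in the prediction and never in the variance expression, swapping values among fixed locations is a direct, if small-scale, analog of the Figure 4.27/4.28 experiment.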
This phenomenon is illustrated by the prediction standard error maps presented in
Figures 4.27 and 4.28. The two surfaces are created using the exact same ordinary krig-
ing model without any transformation or detrending with the same set of data values
FIGURE 4.27
Prediction standard error map for a surface created using ordinary kriging without data transformation or
detrending. Error is lowest around measurement locations and highest in areas with sparse coverage.
FIGURE 4.28
Prediction standard error map for the same data set as that of Figure 4.27 but with values switched between
locations. The map is exactly the same, proving error is only related to measurement density and not the relative
locations of the data values.
using a global semivariogram for the entire data set. Moving-window kriging is an option
in Geostatistical Analyst, and for more information, the reader is referred to the work of
Krivoruchko (2011). The other, more practical solution is to perform data transformation
(and, if necessary, detrending) prior to kriging. This works because transformed data fol-
low a theoretical distribution (e.g., normal, lognormal), and the variance of transformed
data depends on local data values (Krivoruchko 2011). A graphical example of this solu-
tion is presented in Section 4.5 with a contaminant concentration data set with high local
variability.
There are several other cases where a standard deviation (prediction standard error)
grid is incorrect or meaningless. If the variogram model is not truly representative of the
data, the standard deviation grid is not helpful to data analysis. In addition, the krig-
ing standard deviation grid generated when using a variogram model estimated with the
Standardized Variogram estimator or the Autocorrelation estimator in Surfer is not cor-
rect. These two variogram estimators generate dimensionless variograms, so the kriging
standard deviation grids are incorrectly scaled. Similarly, while the default linear vario-
gram model will generate useful contour plots of the data, the associated kriging standard
deviation grid is incorrectly scaled and should not be used. The default linear model slope
is one, and because the kriging standard deviation grid is a function of slope, the resulting
grid is meaningless (Golden Software 2002).
As stated by Krivoruchko (2011): “A new prediction at the data location is more accurate than the original noisy measurement.”
This is counterintuitive, and people unfamiliar with geostatistics may ask, How can this
be? The answer is that kriging uses information from spatially correlated nearby measure-
ments to improve knowledge about measured values. Failing to understand this concept
leads to excessive concern in professional practice with honoring data that can have sig-
nificant measurement and microscale error. As a result, many project managers and clients
will reject contour maps that do not exactly interpolate between measurements (i.e., contour
lines do not exactly go through the same measured values) even when duplicate analysis
shows significant error. Additionally, laboratory analytical data of constituents, such as vola-
tile organic compounds, are subject to considerable uncertainty as results may be accepted
without qualification with surrogate recoveries as low as 50%. Even water-level data, which
are seemingly reliable, can exhibit significant variation over small distances because of soil
heterogeneity and/or temporal variations that are not apparent to the hydrogeologist. It is
hard to find any example in professional hydrogeology where data are entirely precise.
When using a nugget effect, kriging becomes an inexact interpolator and is empowered
to predict more accurate values at measurement locations, taking error into consideration.
In a sense, rather than honoring each individual measurement, kriging with a nugget
effect honors the entire data set. The nugget effect also has significant implications with
respect to the prediction standard error. When performing exact interpolation with no
nugget, there will be discontinuities in the prediction standard error surface, where the
error jumps to zero at measurement locations (Kitanidis 1997). This is nonsensical, and
forcing this condition can lead to a worse overall model by increasing prediction standard
error away from measurement locations.
This phenomenon is demonstrated in Figures 4.29 and 4.30, which are prediction stan-
dard error maps for two different kriged surfaces of the same data set. Figure 4.29 is the
prediction standard error map for kriging with default variogram parameters and the
default nugget (inexact interpolation). Figure 4.30 is the prediction standard error map
for default kriging without a nugget effect (exact interpolation). The discontinuities in the
prediction standard error are visible in Figure 4.30 as the error jumps to zero at prediction
locations. Note that some of the jumps at the measurement locations are so small that the
purple color is not visible, hidden behind the white sample location symbol. Conversely,
the prediction standard error surface is much smoother in Figure 4.29, and errors away
from prediction locations are smaller than those of the exact kriging scenario.
The model predictions and prediction standard errors at the monitoring locations
depicted on Figures 4.29 and 4.30 are presented in Table 4.1 and confirm the above
FIGURE 4.29
Prediction standard error map of a kriging model created in Geostatistical Analyst using default parameters
and the default nugget effect. Small white circles represent data values used in the kriging, and the labeled
crosshair symbols are target locations for prediction and prediction standard error analysis. MW-1 through
MW-11 have known values that are used in the contouring, and actual values at MW-12 through MW-18 are
unknown.
FIGURE 4.30
Prediction standard error map of the same data set using default kriging without a nugget effect. This is an
exact interpolation model, and as a result, prediction standard error jumps to zero at measurement locations.
TABLE 4.1
Summary Table of Predictions and Associated Standard Errors at Target Locations Depicted in
Figures 4.29 and 4.30
Location | Measured Value | Predicted Value (No Nugget) | Std Error (No Nugget) | Predicted Value (Default Nugget) | Std Error (Default Nugget)
MW-1 | 290.00 | 290.00 | 0.00 | 297.88 | 2.16
MW-2 | 301.00 | 301.00 | 0.00 | 305.34 | 2.03
MW-3 | 309.00 | 309.00 | 0.00 | 314.95 | 1.95
MW-4 | 320.00 | 320.00 | 0.00 | 319.25 | 2.09
MW-5 | 331.00 | 331.00 | 0.00 | 320.86 | 2.07
MW-6 | 339.16 | 339.16 | 0.00 | 339.80 | 1.49
MW-7 | 344.86 | 344.86 | 0.00 | 342.55 | 1.83
observations. The model without the nugget predicted measured values exactly at all loca-
tions with measurements (MW-1 through MW-11) with zero standard error. Conversely,
enabling the nugget allowed the model to create new estimates at MW-1 through MW-11,
taking into consideration the overall data variability. At the pure predicted locations where
no measurements exist (MW-12 through MW-18), the model with the nugget has less stan-
dard error and performs better. Unless there is compelling evidence that measurement error is truly zero (which is unlikely), the hydrogeologist should use a nugget (even if it is a very small
value) to perform kriging.
The difference between these kriging types (simple, ordinary, and universal) is how they characterize the mean of the data values used in the interpolation.
In general, ordinary kriging should be used for most applications as it is rare that the true
mean of the data in question is precisely known. However, if the mean is definitively known,
then simple kriging will have the lowest mean-square error of the three methods and will
truly be the optimal interpolator (Krivoruchko 2011). Universal kriging is similar to ordinary
or simple kriging when using detrending, although the underlying concept is different. Trend
as described in Section 4.2.3.3 represents an external, long-range deterministic process acting
on the data, whereas the trend modeled in ordinary kriging is more of an internal variation
over short distances. Short-range trend is also termed drift (Webster and Oliver 2001).
Each of the above three kriging methods will generally produce similar prediction
maps; however, the prediction standard error maps may be significantly different. Simple
kriging may produce the most accurate prediction standard error map at times, but it also
tends to underpredict standard error. Universal kriging has the paradoxical problem of
requiring information about the drift to estimate the semivariogram model while also
requiring information about the semivariogram to characterize drift. These uncertainties
highlight the benefits of using the more simplistic ordinary kriging model unless com-
pelling technical justification exists to choose otherwise. In part for this reason but also
because of high data variability and the high cost of obtaining measurements, ordinary
kriging is generally the preferred method in the geosciences. Conversely, simple kriging is
more often used in meteorology (Krivoruchko 2011).
4.3.3.2 Cokriging
Oftentimes in hydrogeological applications, more data exist for one variable than another.
For example, 100 monitoring wells may be gauged on a quarterly basis for water-level
elevation, but only 50 of those wells may be sampled on an annual basis for laboratory
chemical analysis. There are significantly more water-level data because they are easier
and less expensive to collect than chemical data. However, chemical data and water-level
data are often correlated because chemical transport is determined by the groundwater
flow field. Therefore, the greater spatial and temporal density of water-level data can be
used to improve chemical concentration predictions at unsampled locations. This can be
accomplished through cokriging.
Cokriging is a form of kriging where predictions for a variable of interest (the pri-
mary variable, equal to the chemical concentration in the above example) are supple-
mented with data from a subsidiary, correlated variable (water-level data in the above
example—although most often, in professional practice, a surrogate chemical would
be used; Webster and Oliver 2001). Multiple subsidiary variables may be used to fur-
ther improve predictions, although each addition further complicates the model. For
the instance where one primary and one secondary variable are used, a semivariogram
or covariance model can be applied and fitted to each variable independently while
a covariance model is used to correlate between the two variables. Each variable can
the probability that groundwater concentrations exceed the MCL. A probability map of
contaminant concentrations in sediment created using indicator kriging is presented in
Figure 4.31.
To make indicator kriging even more useful, multiple threshold values can be speci-
fied to create multiple indicator variables (Webster and Oliver 2001). Cross-indicator vario-
grams can then be modeled to evaluate relationships between the different variables in a
process similar to cokriging.
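The indicator transformation itself is simple; it is the kriging of the resulting 0/1 variables that produces the probability surfaces. A minimal sketch of the multi-threshold transformation, with invented concentrations and thresholds:

```python
def indicator_transform(values, thresholds):
    """Return one 0/1 indicator series per threshold: 1 where the measured
    value exceeds the threshold, else 0. Each series can then be kriged
    to map the probability of exceeding that threshold."""
    return {t: [1 if v > t else 0 for v in values] for t in thresholds}

# Example: dissolved concentrations (ug/L) against two action levels
conc = [570.0, 8300.0, 3.8, 0.3, 73.0, 1.5]
ind = indicator_transform(conc, thresholds=[5.0, 100.0])
# ind[5.0]   -> [1, 1, 0, 0, 1, 0]
# ind[100.0] -> [1, 1, 0, 0, 0, 0]
```

Cross-indicator variograms would then be modeled between the series for the two thresholds.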
Indicator kriging is attractive to many professionals because exceedance probability
is often a significant element of risk assessment. Environmental consultants, for exam-
ple, may see indicator kriging as an excellent means of demonstrating to regulators that
there is a low risk of exceeding action levels at exposure point locations, such as private
drinking-water wells. However, indicator kriging has numerous limitations that cast
FIGURE 4.31
Example probability map created with indicator kriging (left) compared with map of contaminant concentra-
tions for the same data set (right).
serious doubt on its use in formal decision-making processes (Krivoruchko and Bivand
2009):
• Data detrending and transformation are not possible, which means that indicator
variables can significantly deviate from stationarity, and that prediction uncer-
tainty depends solely on measurement density. As a result, probability estimates
may be inaccurate, and prediction uncertainty will definitely be inaccurate.
• Kriging may not be the optimal interpolator for indicator variables.
• The semivariogram model may be inappropriate for discrete data.
• A nugget effect should not be used in indicator kriging as it contradicts the fun-
damental assumption of indicator kriging—namely, values are known exactly so
that comparisons with thresholds are exact. The absence of the nugget can result
Considering the above uncertainties, it is not justifiable to use indicator kriging as the pre-
dominant decision tool for a hydrogeological application, especially where risk is involved.
Indicator kriging is better suited as a data exploration tool similar to histogram or trend
analysis (Krivoruchko 2011).
A better, albeit more complicated, alternative to indicator kriging is disjunctive kriging.
Disjunctive kriging is the simple kriging of data transformed to a standard normal distri-
bution using Hermite polynomials. The estimated values can then be compared with the
normal distribution to create an accurate probability map that does not lose information
regarding the extent to which data deviate from threshold values (Webster and Oliver
2001). The most common form of disjunctive kriging is Gaussian disjunctive kriging, which
assumes that the data in question (or the detrended, transformed data) follow a bivariate
normal distribution. Data are bivariate normal if linear combinations of paired values are
normally distributed with correlation coefficients that depend solely on separation distance
rather than absolute position. This assumption must be satisfied for disjunctive kriging to
produce reliable predictions. Geostatistical Analyst in ArcGIS supports disjunctive kriging
and has an Examine Bivariate Distribution tool to help determine if disjunctive kriging is
justified (Krivoruchko 2011). When assumptions are satisfied, disjunctive kriging is a power-
ful tool that reliably informs the hydrogeologist about exceedance probability.
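The first step of this approach can be illustrated with a rank-based normal-score transform, a simplified stand-in for the Hermite-polynomial transformation (this is not the Geostatistical Analyst implementation, and the concentration values below are hypothetical):

```python
# Rank-based normal-score transform: maps skewed data onto a
# standard normal distribution, the kind of transformation that
# disjunctive kriging requires before simple kriging is applied.
from statistics import NormalDist

def normal_scores(values):
    """Map each value to the standard-normal quantile of its rank."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    scores = [0.0] * n
    std_normal = NormalDist()  # mean 0, standard deviation 1
    for rank, i in enumerate(order):
        p = (rank + 0.5) / n   # plotting position in (0, 1)
        scores[i] = std_normal.inv_cdf(p)
    return scores

conc = [0.05, 0.3, 3.8, 34.0, 570.0, 8300.0]  # strongly skewed, like plume data
z_scores = normal_scores(conc)  # symmetric values on the standard normal scale
```

Estimates made on the transformed scale can then be compared against the standard normal distribution to derive exceedance probabilities.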
• Determining groundwater flow direction, volumetric flow rate, and linear velocity
• Modeling contaminant fate and transport
• Evaluating the impacts of groundwater extraction and discharge for water supply
purposes
FIGURE 4.32
Relationship of river water level and piezometric levels at the hydrogeological cross section near Kocela, the
Trebisnjica River, eastern Herzegovina. (From Milanović, P., Karst istocne Hercegovine i dubrovackog priobalja
(Karst of Eastern Herzegovina and Dubrovnik Litoral). ASOS, Belgrade, 2006. With permission.)
recharge dynamics). On the other hand, a well about 10 m away may show hydraulic
head fluctuations of several meters or more. Using only hydraulic head information
to assess representative groundwater flow directions (hydraulic gradients) would
clearly be insufficient in this case. Another example illustrating
the complexity of groundwater flow in fractured rock and karst aquifers is presented in
the complexity of groundwater flow in fractured rock and karst aquifers is presented in
Figure 4.33. By looking at the hydraulic heads measured in piezometers P4, P3, and P2,
and not knowing the lengths and positions of the screen intervals, one could erroneously
conclude that groundwater flows away from the spring (note that P3 is screened in all
three karst conduits and, therefore, has the same water level as P2, which is screened in
the shallowest conduit; P1 is screened in the deepest conduit, and P4 is screened in the
middle conduit). In conclusion, the interpretation of hydraulic head measurements in
these types of aquifers should be combined with hydrogeologic mapping, dye tracing,
and, certainly, a thorough understanding of groundwater flow through fractures and
conduits (Kresic 2007).
FIGURE 4.33
Example of how closely spaced monitoring wells in karst may register very different hydraulic heads depend-
ing on the depth and length of well screens. Based on actual investigations near a large spring in Dinaric karst.
(Modified from Kupusović, T., Nas Krs, XV, 26–27, 21–30, 1989.)
Contouring 245
Potentiometric surface contour maps are traditionally used to determine hydraulic gra-
dients and groundwater flow directions. However, one should always remember that a
contour map is a two-dimensional representation of a three-dimensional flow field, and as
such, it has limitations. If the area (aquifer) of interest is known to have significant vertical
gradients, and enough field information is available, it is always recommended to create at
least two contour maps: one for the shallow and one for the deep portion of the aquifer. As with
geologic and hydrogeologic maps in general, a contour map should be accompanied by
several cross sections showing the locations and vertical positions of the hydraulic head
measurements, with the data posted. Probably the most misleading practice is when data
from monitoring wells screened at different depths are lumped together and contoured as
one average data package. A perfect example would be a karst aquifer with thick residuum
(regolith) deposits and monitoring wells screened in the residuum and at various depths
in the bedrock. If data from all the wells were lumped together and contoured as one data
set, it would be impossible to interpret where the groundwater is actually flowing for the
following reasons:
The flow in two distinct media (the residuum and the bedrock) may therefore be in two
different general directions at a particular site, including vertical gradients from the resid-
uum toward the underlying bedrock. Creating one average contour map for such a system
would not make any hydrogeologic sense.
A contour map of the hydraulic head is one of the two parts of a flow net, which is a
set of streamlines and equipotential lines, as shown in Figure 4.34 (top). A streamline (or
flow line) is an imaginary line representing the path of a groundwater particle as it flows
through an aquifer. Two streamlines bound a flow segment of the flow field and never
intersect, that is, they are roughly parallel when observed in a relatively small portion of
the aquifer. An equipotential line is the intersection of a horizontal plane and the
equipotential surface; everywhere on that surface the hydraulic head has a constant value, as
shown in Figure 4.34 (bottom). Note that the equipotential surface is curved; it does not
have to be, and usually is not, vertical. Two adjacent equipotential lines (surfaces) never
intersect and can also be considered parallel within a small aquifer portion. These
characteristics are the main reasons why a flow net in a homogeneous, isotropic aquifer is
sometimes called a net of small (curvilinear) squares. However, as explained in Section
4.4.2, the inevitable heterogeneity and anisotropy of porous media at realistic field scales
create various distortions of this ideal flow net.
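The net-of-squares property is what makes flow nets useful for quick discharge estimates. A minimal sketch of the standard flow-net formula, with purely illustrative values:

```python
# Flow-net discharge per unit width of aquifer: for a net of
# curvilinear squares, Q = K * H * (nf / nd), where H is the total
# head drop across the net, nf is the number of flow tubes, and nd
# is the number of equipotential (head) drops.
def flow_net_discharge(k, total_head_drop, n_flow_tubes, n_head_drops):
    return k * total_head_drop * n_flow_tubes / n_head_drops

# 4 flow tubes, 12 head drops, 6 m of total head loss, K = 2 m/d
q = flow_net_discharge(k=2.0, total_head_drop=6.0,
                       n_flow_tubes=4, n_head_drops=12)  # m3/d per m of width
```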
A flow net does not change over time under steady-state flow conditions. Under transient
conditions, a flow net represents the instantaneous flow field at a particular time and will
differ at any other time. At least several data sets collected in different hydrologic seasons
should be used to draw groundwater contour maps for the area of interest. In addition to
recordings from piezometers, monitoring wells, and other water wells, every effort should
be made to record elevations of water surfaces in the nearby surface streams, lakes, ponds,
and other surface water bodies. One should also gather information on hydrometeoro-
logic conditions in the area for preceding weeks to months, paying special attention to
FIGURE 4.34
Example flow nets. Top: Map view. Bottom: Three-dimensional view. Flow in the segment between two flow-
lines, ΔQ, remains constant.
storm events (recharge episodes) and extended wet or dry periods. All of this information
is essential for creating correct contour maps and assessing the transient nature of the
flow net.
to discontinuities in rocks, such as fissures, fractures, faults, fault zones, folds, and bed-
ding planes (layering). Without elaborating further on the geologic portion of hydrogeol-
ogy, it is appropriate to state that groundwater professionals lacking a thorough geologic
knowledge (i.e., nongeologists) would likely have various difficulties in understanding the
many important aspects of heterogeneity and anisotropy.
One such aspect of heterogeneity is that groundwater flow directions change at bound-
aries between rocks (sediments) of notably different hydraulic conductivity, such as the
ones shown in Figure 4.35. An analogy would be refraction of light rays when they enter
a medium with different density, for example, from air to water. The refraction causes the
incoming angle, or angle of incidence, and the outgoing angle, or angle of refraction, to
be different (angle of incidence is the angle between the orthogonal line at the boundary
and the incoming streamline; angle of refraction is the angle between the orthogonal at
the boundary and the outgoing streamline). The only exception is when the streamline is
perpendicular to the boundary, in which case both angles are zero (measured from the
normal to the boundary) and the streamline crosses without bending. The situation
shown in Figure 4.35 applies to both map and cross-sectional views as long as there is
a clearly defined boundary between the two porous media.
Tangent law of streamline refraction illustrated in Figure 4.35: K1/K2 = tan α1/tan α2. Top panel: K2 > K1 and α2 > α1; bottom panel: K2 < K1 and α2 < α1.
FIGURE 4.35
Refraction of groundwater flowlines (streamlines) at a boundary of higher hydraulic conductivity (top) and a
boundary of lower hydraulic conductivity (bottom). Angle of incidence and angle of refraction are denoted with
α1 and α2, respectively. Hydraulic conductivity is denoted with K.
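The tangent law shown in Figure 4.35 can be applied directly to estimate how much a streamline bends at a conductivity contrast. A small sketch with hypothetical conductivities:

```python
# Streamline refraction at a conductivity boundary, using the
# tangent law tan(a1) / tan(a2) = K1 / K2 from Figure 4.35.
import math

def refraction_angle(alpha1_deg, k1, k2):
    """Angle of refraction (degrees from the normal) for a streamline
    passing from a medium with conductivity k1 into one with k2."""
    tan_a2 = (k2 / k1) * math.tan(math.radians(alpha1_deg))
    return math.degrees(math.atan(tan_a2))

# Entering a medium ten times more conductive, a streamline at 30
# degrees from the normal bends away from it to about 80 degrees.
a2 = refraction_angle(30.0, 1e-5, 1e-4)
```

A streamline perpendicular to the boundary (alpha1 = 0) passes through without bending, consistent with the exception noted in the text.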
One key parameter for various calculations of groundwater flow rates is the transmis-
sivity of porous media. For practical purposes, it is defined as the product of the aquifer
thickness (b) and the hydraulic conductivity (K):
T = b × K. (4.8)
It follows that an aquifer is more transmissive (more water can flow through it) when it has
higher hydraulic conductivity and when it is thicker. Knowledge of this relationship helps
in interpreting hydraulic head data, identifying possible reasons for changes in the hydraulic
gradient (see Figure 4.36), and creating potentiometric contour maps. An example of a
heterogeneous flow field simulated by a groundwater flow model, and the resulting contours of the
potentiometric surface, is shown in Figure 4.37. The refraction of contour lines and changes
of the groundwater flow directions and the hydraulic gradients are caused primarily by the
variations in hydraulic conductivity. These changes happen over short distances and illustrate
how relatively small differences in hydraulic conductivity may have a very significant
effect, one that would not have been apparent if the aquifer were interpreted as homogeneous.
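Equation 4.8 combines with Darcy's law to give quick flow-rate estimates across an aquifer section of width W under gradient i, as Q = T × i × W. A sketch with illustrative values only:

```python
# Transmissivity (Equation 4.8) and the resulting flow rate through
# an aquifer section: Q = T * i * W. All values are illustrative.
def transmissivity(b, k):
    """T = aquifer thickness b (m) times hydraulic conductivity k (m/d)."""
    return b * k

b = 20.0    # m, saturated thickness
k = 5.0     # m/d, hydraulic conductivity
i = 0.002   # hydraulic gradient (dimensionless)
w = 1000.0  # m, width of the flow section
t = transmissivity(b, k)  # 100 m2/d
q = t * i * w             # about 200 m3/d through the section
```

Doubling either the thickness or the conductivity doubles the transmissivity, and hence the flow, which is why contour spacing responds so strongly to both (Figures 4.36 and 4.37).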
In hydrogeology, anisotropy specifically refers to groundwater velocity vectors. If the
groundwater velocity is the same in all spatial directions, the porous medium is isotropic. If
the velocity varies with direction, the porous medium is anisotropic. If the groundwater
velocity is anisotropic, the hydraulic conductivity is anisotropic as well. In fact,
when talking about anisotropy in hydrogeology, one usually refers to anisotropy of the
hydraulic conductivity rather than the groundwater velocity. Figure 4.38 illustrates, in two
dimensions, just some of the many possible causes of anisotropy. It is important to under-
stand that some degree of anisotropy can (and usually does) exist in all spatial directions.
It is for reasons of simplification and/or computational feasibility that hydrogeologists
consider only the three main perpendicular directions of anisotropy: two in the horizontal
plane and one in the vertical direction. In the Cartesian coordinate system, these three
directions are represented with the X, Y, and Z axes. Unfortunately, when the analyzed
groundwater system is quite complex, anisotropic, and influenced by various hydraulic
boundaries, it may be impossible to draw a correct contour map and determine likely
FIGURE 4.36
Maps (top) and cross sections (bottom) showing how changes in aquifer transmissivity affect potentiomet-
ric contours. In general, lower transmissivities are associated with steeper hydraulic gradients (more closely
spaced contours), and higher transmissivities are associated with more widely spaced contours.
FIGURE 4.37
Heterogeneous groundwater flow field created in a numeric groundwater model through extensive variation of
hydraulic conductivity (aquifer thickness is uniform). Lower hydraulic conductivity results in steeper hydraulic
gradients (example areas a and b); higher hydraulic conductivity results in more widely spaced contours, such
as in area c.
FIGURE 4.38
Some possible reasons for anisotropy of hydraulic conductivity. (a) Sedimentary layers of varying permeability;
(b) orientation of gravel grains in alluvial deposit; (c) two sets of fractures in massive bedrock.
Left panel: isotropic aquifer (KX = KY). Right panel: anisotropic aquifer (KX = 4 KY).
FIGURE 4.39
Modeling output demonstrates how anisotropy affects particle transport in an aquifer. (From Kresic, N.,
Groundwater Resources. Sustainability, Management, and Restoration, McGraw Hill, New York, 2009. With
permission.)
groundwater flow directions based on a limited data set of hydraulic head measurements.
Ultimately, constructing a numeric groundwater model and testing assumptions about
various factors influencing the groundwater flow may be the only reasonable approach in
such a case. Figure 4.39 shows output from a portion of a model used to test the influence of
anisotropy on tracks of particles released at certain locations in the aquifer.
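The rotation of flow direction seen in Figure 4.39 can be illustrated with a two-dimensional Darcy calculation. This sketch uses hypothetical values and assumes the principal directions of the conductivity tensor align with the coordinate axes:

```python
# With anisotropic conductivity, the Darcy velocity q = -K * grad(h)
# is generally not parallel to the hydraulic gradient: flow rotates
# toward the direction of higher conductivity.
import math

def darcy_velocity(kx, ky, grad_h):
    """Darcy velocity for a diagonal 2-D conductivity tensor."""
    gx, gy = grad_h
    return (-kx * gx, -ky * gy)

def direction_deg(v):
    """Direction of a vector in degrees from the +x axis."""
    return math.degrees(math.atan2(v[1], v[0]))

grad_h = (-0.001, -0.001)   # head decreasing toward +x and +y (45 degrees)
iso = darcy_velocity(5.0, 5.0, grad_h)     # isotropic: flow at 45 degrees
aniso = darcy_velocity(20.0, 5.0, grad_h)  # KX = 4 KY, as in Figure 4.39
# the anisotropic velocity vector rotates toward the high-K x direction
```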
One of the most important aspects of creating contour maps in alluvial aquifers is to
determine the relationship between groundwater and surface water features. In hydraulic
terms, the contact between an aquifer and a surface water body is an equipotential bound-
ary. In case of lakes and wetlands, this contact can be approximated with the same hydrau-
lic head. In case of flowing streams, the hydraulic head along the contact decreases in the
downgradient direction (both the surface water and groundwater flow downgradient). If
enough measurements of a stream stage are available, it is relatively easy to draw the water
table contours near the river and to finish them along the river–aquifer contact. However,
often little or no precise river-stage data are available, and the stage has to be estimated,
at the expense of accuracy, from a topographic map or from monitoring well data by
extrapolating the hydraulic gradients until they intersect the river. Figure 4.40 shows some of the
FIGURE 4.40
Basic hydraulic relationships between groundwater and surface water shown in cross-sectional views (top)
and map views using hydraulic head contour lines. (a) Perennial gaining stream; (b) perennial losing stream;
(c) perennial stream gaining water on one side and losing water on the other side; (d) losing stream discon-
nected from the underlying water table, also called ephemeral stream; (e) contour lines following recent rise
(case #2) and then drop (case #3) in a river stage. (From Kresic, N., Hydrogeology and Groundwater Modeling, Second
Edition, CRC Press/Taylor & Francis, Boca Raton, FL, 2007. With permission.)
FIGURE 4.41
Top: Hydraulic head contour lines in a basin-fill aquifer, assuming no influence of the surface stream flowing
through it. Arrows indicate general directions of groundwater flow. Bottom: Influence of two surface streams
(A and B) flowing into the basin from the surrounding bedrock areas and losing all water to the underlying
aquifer a short distance from the contact. Note the more widely spaced contour lines in the central portion of the basin, where
it is thicker and more transmissive. The main stream is hydraulically connected with the underlying aquifer;
the blue line indicates the gaining section of the stream; the dashed red line indicates the losing section of the
stream. (From Kresic, N., Groundwater Resources. Sustainability, Management, and Restoration, McGraw Hill, New
York, 2009. With permission.)
other basin boundaries are assumed to be impermeable). Figure 4.41 (bottom) shows the
influence of two streams (A and B) entering the basin and losing water to the aquifer a
short distance from the boundary. It also shows a hydraulic connection between the aqui-
fer and the river flowing through the basin, including river reaches that lose water to or
gain water from the aquifer.
The most common mistake when creating potentiometric maps of unconfined aqui-
fers is to let a computer program ignore the presence of surface water features and
accept the results as is. Figure 4.42 shows a comparison between the theoretical (true)
potentiometric contours generated by a numeric model and the contours created from
a limited data set in which the model solution at certain model cells represents the
FIGURE 4.42
Comparison of the theoretical contour map of the water table (blue lines; elevation in feet above mean sea level)
and interpolated contours (brown lines) using default linear kriging in Surfer. The small brown circles simulate
monitoring wells and water-table elevations recorded in the field. The shaded area is, by default, defined by the
extent of X and Y coordinates of the data.
hydraulic head recorded in the field at monitoring wells. Figure 4.42 was created in
Surfer using default linear kriging and ignoring the river. In comparison, the map in
Figure 4.43 takes the river stage into account by utilizing a breakline option in Surfer. It
is apparent that considering the river creates a much better map, even though all field
data points are relatively far from it, highlighting the importance of installing staff
gauges at field sites.
A breakline is a three-dimensional boundary file that defines a line with X, Y, and Z
values at each vertex. When the gridding algorithm sees a breakline, it calculates the Z
value of the nearest point along the breakline and uses that value in combination with
nearby data points to calculate the grid node value. Surfer uses linear interpolation to
FIGURE 4.43
Contour map created in Surfer when using the breakline function to incorporate the river, yielding much better
results. The theoretical contour map of the water table is shown with blue contours; the interpolated contours
using default linear kriging in Surfer are shown with orange-brown lines. The format of the breakline is shown
in the box. The brown dots simulate field data.
determine the values between breakline vertices when gridding. Breaklines are not barri-
ers to information flow, and the gridding algorithm can cross the breakline to use a point
on the other side of the breakline. If a point lies on the breakline, the value of the break-
line takes precedence over the point. Breakline applications include defining streamlines,
ridges, and other breaks in the slope. Breaklines can be created in any text editor or
directly within Surfer using the Digitizer tool. The format of the breakline file is shown
in Figure 4.43.
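The breakline behavior described above can be sketched as follows. This is an illustration of the concept rather than Surfer's actual gridding code; the coordinates and stage values are hypothetical:

```python
# Given a breakline as a polyline of (x, y, z) vertices (e.g., a
# river with its stage), find the z value at the point on the line
# nearest to a grid node, interpolating linearly between vertices.
def breakline_z(vertices, px, py):
    best = None
    for (x1, y1, z1), (x2, y2, z2) in zip(vertices, vertices[1:]):
        dx, dy = x2 - x1, y2 - y1
        seg2 = dx * dx + dy * dy
        # parametric position of the nearest point on this segment
        t = 0.0 if seg2 == 0 else max(0.0, min(1.0,
            ((px - x1) * dx + (py - y1) * dy) / seg2))
        cx, cy = x1 + t * dx, y1 + t * dy
        d2 = (px - cx) ** 2 + (py - cy) ** 2
        z = z1 + t * (z2 - z1)   # linear interpolation along the line
        if best is None or d2 < best[0]:
            best = (d2, z)
    return best[1]

# river stage falling from 123 to 121 m over 200 m of channel
river = [(0.0, 0.0, 123.0), (100.0, 0.0, 122.0), (200.0, 0.0, 121.0)]
stage = breakline_z(river, 150.0, 30.0)  # nearest point is mid-segment
```

The gridding algorithm would then combine this stage value with nearby data points when computing the grid node value, which is how the river constrains the interpolated water table.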
Another example of using breaklines is illustrated in Figure 4.44, which shows contours
of a potentiometric surface created by three pumping wells located in the floodplain of a
slow-moving perennial river. The wells are pumping from the unconfined alluvial aqui-
fer. The left map shows contours where the program does not include the river stage (i.e.,
the interpreter ignores the hydraulic connection between the river and the aquifer), and
the right map shows contours created by Surfer where a breakline simulating the river
is included. Figure 4.45 shows the final map, where nonsensical contours in the areas
without data points are not displayed (Surfer blanks them automatically once the user
defines blanking polygons).
FIGURE 4.44
Left: Contour map of a water table influenced by three pumping wells near a river when the hydraulic connec-
tion between the aquifer and the river is not accounted for. Dashes on contour lines indicate groundwater flow
(gradient) direction. Orange circles are locations of wells with field measurements. Right: The same map when
the river elevation is accounted for by using breakline in Surfer. Note nonsensical contours created by the pro-
gram in the areas without data points, including on the other side of the river.
FIGURE 4.45
Final contour map of alluvial aquifer after postprocessing of the results using grid blanking in Surfer (nonsensi-
cal contours in the area without data points are blanked, i.e., not displayed).
• Evaluate risk to receptors, such as water-supply wells, and receiving surface water
features
• Estimate the mass of contaminants in the subsurface
• Evaluate groundwater remediation effectiveness, including monitored natural
attenuation processes, by assessing changes in plume architecture, composition,
and mass over time
Unfortunately, it is much more difficult to generate accurate and useful contaminant con-
tour maps than potentiometric surface maps, primarily because of significantly higher
data variability and cost of data collection. Faced with this difficulty, many hydrogeolo-
gists create plume maps that look completely unrealistic or are so coarse in resolution that
they more accurately resemble an area of contaminant detections rather than a spatially
interpolated surface. Hydrogeologists may simply say there is too much variability to per-
form kriging on contaminant concentrations and resort to a manual map devoid of statisti-
cal interpolation and uncertainty assessment. This section illustrates how to successfully
create geostatistical groundwater plume maps through a synthetic example, highlighting
important concepts and demonstrating the kriging techniques described throughout this
chapter.
content of soil. It is important for the hydrogeologist to possess reliable reference materials
for the chemical properties of common environmental contaminants.
Finally, after characterizing the release and the involved chemicals, the hydrogeolo-
gist must characterize the resulting groundwater plume based on existing data. This
involves evaluating the vertical and lateral extent of migration in the context of site-spe-
cific hydrogeologic data, such as stratigraphy, lithology, bedrock features, hydraulic con-
ductivity, hydraulic gradients, and the locations of groundwater recharge and discharge,
such as surface water features and wells. Heterogeneity plays an important role in this
assessment as it can substantially influence contaminant migration both vertically and
laterally. Figure 4.46 illustrates a modeled example where a low-permeability clay layer
in the saturated zone causes vertical plume bifurcation. Hydrogeologists should be cog-
nizant of the potential for these effects in real field conditions and design drilling and
sampling programs to evaluate heterogeneity. Whenever sufficient information exists,
the hydrogeologist should also classify the groundwater plume as being increasing, sta-
ble, or decreasing as demonstrated in Figure 4.47. This informs the hydrogeologist as to
the strength of the remaining source and the extent of contaminant transformation and
FIGURE 4.46
Top: Simulated vertical plume bifurcation in the saturated zone as a result of heterogeneity (presence of a
clay lens). Effects such as this may cause contamination of water-supply aquifers at different depths. Bottom:
Development of the same plume assuming a homogeneous aquifer. Both views are cross-sectional.
(Figure 4.47 diagram: EXPANDING PLUME: sorption capacity exhausted; no biodegradation; increase in loading. STABLE PLUME: great sorption capacity; biodegradation rate equals loading rate. SHRINKING PLUME: biodegradation rate greater than loading rate; decrease in loading; source removal. DETACHED PLUME: intermittent source; decrease in loading.)
FIGURE 4.47
Influence of various fate and transport processes on plume development. While most fate and transport pro-
cesses may be present in any given case, the bullets list only those with the greatest possible net effect. (Modified
from United States Environmental Protection Agency, The Report to Congress: Waste Disposal Practices and
Their Effects on Ground-Water, EPA 570977001, 1977.)
decay (i.e., biodegradation) in the subsurface, which are the predominant determinants
of plume longevity. Figure 4.48 shows a commonly applied rule of thumb for assessing
the possible presence of NAPL phase in the subsurface, which greatly complicates the
overall characterization and prediction of contaminant fate and transport (from Kresic
2009).
FIGURE 4.48
Delineation of potential aquifer zones with DNAPL trichloroethene (TCE) based on the 1% to 10% solubility
rule of thumb. TCE has aqueous solubility approximately between 1100 and 1400 mg/L. The aquifer area that
may contain residual DNAPL is assumed to be within the 100 mg/L concentration contour or approximately
8% of the pure phase solubility. Note that in the case of a DNAPL mixture, the effective solubility of TCE would
be less than the pure phase solubility. (From Kresic, N., Groundwater Resources. Sustainability, Management, and
Restoration. McGraw Hill, New York, 2009. With permission.)
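The rule of thumb in Figure 4.48 reduces to a simple screening comparison. A sketch using the TCE solubility range given in the caption (the function and constant names are ours):

```python
# 1% solubility rule of thumb: dissolved concentrations above
# roughly 1% of a compound's aqueous solubility suggest that NAPL
# may be present nearby.
TCE_SOLUBILITY_MG_L = (1100.0, 1400.0)  # aqueous solubility range (caption)

def may_indicate_napl(conc_mg_l, solubility_mg_l, fraction=0.01):
    """Screen a dissolved concentration against a fraction of solubility."""
    return conc_mg_l >= fraction * solubility_mg_l

# 100 mg/L is roughly 8% of TCE solubility, well above the 1% screen
flag = may_indicate_napl(100.0, sum(TCE_SOLUBILITY_MG_L) / 2)
```

As the caption notes, for a DNAPL mixture the effective solubility of TCE would be lower than the pure-phase value, which would lower the screening threshold accordingly.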
In summary, the hydrogeologist should be able to tell a story, describing the origins
of the contamination and explaining how current conditions came to be and how condi-
tions will potentially change in the future. Many complicated stories occur in real-world
conditions, involving commingled plumes, chain decay of contaminants, diffusion into
low-permeability clay or bedrock (see the technical impracticability discussion in Chapter 8),
or plumes that move in different directions at different vertical intervals as illustrated in
Figure 4.49. Clear, information-rich data visualizations are essential to support difficult
contaminant fate and transport concepts, the most common and useful form of which are
contaminant concentration contour maps.
(Figure 4.49 legend: perchlorate contours at 10, 100, and 1,000 µg/L; posted contour labels are hydraulic head elevations.)
FIGURE 4.49
Example graphic demonstrating change in planar transport direction with depth. Solid black contours repre-
sent the potentiometric surface of shallow groundwater, and dashed red contours represent the potentiometric
surface of deep groundwater. Solid colored, filled contours represent the shallow perchlorate plume, and the
hatched filled contours represent the deep perchlorate plume. In this case, the direction of perchlorate transport
rotates approximately 60° at depth. Introduction of perchlorate to deep groundwater is caused by downward
vertical gradients. A conceptual justification must exist for this behavior, such as the presence of deep pumping
wells and/or a thin or absent low-permeability layer between the shallow and deep water-bearing zones.
these contours should look; therefore, statistical evaluation of the spatial interpolation model
(kriging) will be important. A secondary objective is to use the spatial interpolation model to
estimate concentrations at five unsampled locations and to see how the estimates compare to
the real values calculated by the model. This is a form of model verification. Prediction loca-
tions are presented in Figure 4.52 in addition to contours of contaminant concentration created
by the groundwater model. Geostatistical analysis and contaminant contouring presented in
this section are performed using the Geostatistical Analyst extension for ArcGIS.
For all of the presented kriging models, a result equal to one-half the reporting limit was
substituted for nondetect observations. In other words, all near-zero values simulated by
the model are replaced with a value of 0.05 μg/L for the purposes of contouring. Censored
environmental data pose many challenges to the hydrogeologist, and proper methods for
accounting for nondetections do exist in descriptive (e.g., calculating means and standard
deviations) and inferential statistics (e.g., performing hypothesis testing). As ProUCL (see
Chapter 3) and other commercial statistical software packages incorporate these methods,
there is no justification for using data substitution methods (which can introduce significant
FIGURE 4.50
Distribution of hydraulic conductivity and the resulting simulated potentiometric surface used to generate the
synthetic contaminant concentration map. Note that transport vectors will be nonlinear.
FIGURE 4.51
Simulated concentration contours and point values extracted from the model for the contouring example. Point
values represent measurements from monitoring wells in real-life applications.
FIGURE 4.52
Polyline contours of contaminant concentration produced by the groundwater model, point measurements, and
unsampled locations (TW-1 through TW-5) where concentration estimates are required.
bias) for descriptive and inferential statistics. The reader is referred to the work of Helsel
(2005) for further information regarding statistics for censored data. However, there are cur-
rently no computer programs available to the general public that incorporate censored data
for geostatistical applications (Krivoruchko 2011). Therefore, substitution is the only available
option, and the hydrogeologist must assess the possible impacts of these substitutions on the
resulting interpolated surface and uncertainty estimates.
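The substitution used to prepare the kriging input for this example can be sketched in a few lines, assuming a 0.1 µg/L reporting limit (consistent with the 0.05 µg/L substitution value stated above):

```python
# Half-reporting-limit substitution for nondetects, as used for the
# kriging input in this example. Nondetects (reported here as the
# string "ND") become one-half the reporting limit.
REPORTING_LIMIT_UG_L = 0.1  # assumed reporting limit

def substitute_nondetects(results, rl=REPORTING_LIMIT_UG_L):
    """Replace 'ND' entries with half the reporting limit."""
    return [rl / 2 if r == "ND" else r for r in results]

data = [8300.0, 570.0, "ND", 34.0, "ND", 3.8]  # illustrative results, ug/L
prepared = substitute_nondetects(data)         # NDs become 0.05 ug/L
```

As the text cautions, this substitution is a pragmatic choice forced by current geostatistical software, and its influence on the interpolated surface should be assessed.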
Ordinary kriging was used for each spatial interpolation scenario presented in this sec-
tion as the true mean concentration is unknown.
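For readers unfamiliar with the mechanics, the ordinary kriging weights for a prediction location come from a semivariogram-based linear system with a Lagrange multiplier that forces the weights to sum to one (precisely because the mean is unknown). A bare-bones sketch, not the Geostatistical Analyst implementation; the exponential semivariogram parameters and well coordinates are hypothetical:

```python
# Ordinary kriging weights for one prediction location, solved from
# the semivariogram system with a sum-to-one constraint.
import math

def gamma(h, sill=1.0, rng=300.0):
    """Exponential semivariogram model (illustrative parameters)."""
    return sill * (1.0 - math.exp(-3.0 * h / rng))

def solve(a, b):
    """Gauss-Jordan elimination with partial pivoting."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(m[r][c]))
        m[c], m[p] = m[p], m[c]
        for r in range(n):
            if r != c and m[r][c]:
                f = m[r][c] / m[c][c]
                m[r] = [v - f * w for v, w in zip(m[r], m[c])]
    return [m[i][n] / m[i][i] for i in range(n)]

def ok_weights(pts, target):
    """Kriging weights; last unknown (Lagrange multiplier) is dropped."""
    n = len(pts)
    dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    a = [[gamma(dist(pts[i], pts[j])) for j in range(n)] + [1.0]
         for i in range(n)]
    a.append([1.0] * n + [0.0])       # sum-to-one constraint row
    b = [gamma(dist(p, target)) for p in pts] + [1.0]
    return solve(a, b)[:n]

wells = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
w = ok_weights(wells, (25.0, 25.0))   # nearest well gets the largest weight
```

The prediction is then the weighted sum of the observed values; the same weights also yield the kriging variance that underlies the standard error maps discussed below.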
FIGURE 4.53
Default semivariogram for the contaminant concentration data. Spatial correlation is unrecognizable, and the
default lag coverage (product of lag size and number of lags) encompasses the entire data set. Note that the
semivariance (γ) labels on the Y axis are scaled by a factor of 10⁻⁶. Esri® ArcGIS Geostatistical Analyst graphical
user interface. Copyright © Esri. All rights reserved.
is an extreme amount of variability in the data set as the highest calculated semivariance
is close to 10 × 10⁶ (ten million) (μg/L)². The semivariogram map, or surface, presented
in the lower left corner of Figure 4.53, plots the semivariogram values in polar coordi-
nates with a common center using the distance and angle geometries of the data pairs. An
expanded version of the semivariogram map is presented in Figure 4.54 and shows that
areas of extreme variability are aligned along the NW–SE axis (the direction of ground-
water flow—note that north is up on all figures in Section 4.5) because of the precipitous
decline in concentration moving downgradient of the source area. The default semivario-
gram and search neighborhood properties are accepted, and the resulting default kriging
contour map is shown in Figure 4.55. The default surface looks absurd as excessive averag-
ing was performed such that the contaminant source area (represented by the 5000 μg/L
contour line) is grossly underpredicted, and nondetect areas are grossly overpredicted.
The associated default prediction standard error map depicted in Figure 4.56 reflects this
overaveraging, as the standard error exhibits relatively small variation across the
interpolated extent. Also note that the prediction standard error is clearly independent of
the data values: it decreases in areas of greater sampling density, regardless of
concentration.
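The empirical semivariogram underlying Figure 4.53 is built from half the average squared difference of data pairs binned by separation distance. A minimal sketch for a single lag bin, with hypothetical sample points:

```python
# Empirical semivariance for one lag bin: half the mean squared
# difference of all data pairs whose separation falls within the bin.
import math

def semivariance(samples, lag, tol):
    """samples: list of (x, y, value). Returns gamma for pairs with
    separation within lag +/- tol, or None if no pairs qualify."""
    sq, count = 0.0, 0
    for i in range(len(samples)):
        for j in range(i + 1, len(samples)):
            xi, yi, vi = samples[i]
            xj, yj, vj = samples[j]
            h = math.hypot(xi - xj, yi - yj)
            if abs(h - lag) <= tol:
                sq += (vi - vj) ** 2
                count += 1
    return None if count == 0 else sq / (2 * count)

pts = [(0, 0, 10.0), (50, 0, 12.0), (100, 0, 20.0), (0, 50, 11.0)]
g = semivariance(pts, lag=50.0, tol=5.0)  # gamma at ~50 m separation
```

Repeating this for a sequence of lags (and, for the semivariogram map, binning by direction as well) produces the cloud of points to which the semivariogram model is fitted.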
The default kriging model was blindly accepted in this case, representing a situation
where the computer user is completely ignorant of the kriging process. However, if the
FIGURE 4.54
Default semivariogram map for the contaminant concentration data. Note that the greatest variance is located
along the axis of groundwater flow at large separation distances, where the contaminant source area (or hot
spot) is paired with nondetections.
FIGURE 4.55
Concentration contour map created using default kriging options. Filled, colored contours represent the kriging
output, and black polyline contours represent the real contours created by the groundwater model. The interpo-
lated surface is obviously unusable for any application.
computer user is a novice hydrogeologist or someone with a little knowledge about krig-
ing, it is entirely possible that the model could be rejected—although for the wrong reason.
The novice hydrogeologist may look at the noisy semivariogram in Figure 4.53 and con-
clude that spatial correlation does not exist and that a deterministic method such as IDW
must be used instead of kriging. This interpretation is wrong on many levels. For one, spa-
tial correlation clearly exists in the case of contaminant fate and transport in groundwater
with concentrations at closely located points being much more similar than concentrations
at points located farther away from each other. Second, even the use of a deterministic
model requires spatial correlation in the data, which is a fact not understood by the novice.
Third, no data exploration was conducted to determine if detrending and/or transforma-
tions were required to eliminate the extreme variability that is masking the spatial corre-
lation in the semivariogram. The first step in any successful geostatistical analysis is data
exploration, which is illustrated in the next section.
[Figure 4.56 legend: prediction standard error classes spanning 400 to 700 μg/L; posted sample concentrations omitted.]
FIGURE 4.56
Prediction standard error map for the default kriging model. The low variability in prediction standard error
reflects the overaveraging of the data set and the fact that local error is solely a function of sampling density.
Data exploration typically consists of three components:
• Semivariogram analysis
• Distributional assessment
• Trend analysis
FIGURE 4.57
Exploratory semivariogram for the contaminant concentration data. Six pairs with extremely high variability
at relatively low separation distances have been selected (highlighted in aqua) for identification. Esri® ArcGIS
Geostatistical Analyst graphical user interface. Copyright © Esri. All rights reserved.
Pairs with extremely high variability at low separation distances may represent erroneous values or outliers, which, in certain cases, may be removed from the model. However, when a few of these pairs are selected on the semivariogram (highlighted in aqua color in Figure 4.57),
one can see that they are all related to the contamination hot spot or source area as shown in
Figure 4.58. Each of the selected pairs includes either the 8300 μg/L result or the 7500 μg/L
result. As these two data points are critical to the CSM, they cannot be removed from the data
set. It is likely that prediction error will be higher in the hot spot because of the outlier behavior
of these data points. Anisotropy can also be assessed with the semivariogram, as directional searching can be used to see whether the range of correlation changes with direction. At this stage, however, it may
be difficult to discern large-scale trend from short-scale anisotropy.
As stated in Section 4.3, kriging performs best when input data follow a normal distri-
bution. Distributional assessment consists of analyzing histograms and quantile–quantile
plots to determine if the data are normally distributed and, if not, to determine if the data
can be transformed so that the transformed data follow a normal distribution. A histo-
gram is a bar graph where the bar width shows the range of values within a group, and the
bar height shows the frequency with which values fall in that range. A quantile–quantile
plot charts the quantiles of the data against those of the standard normal distribution. The
most common data transformations are logarithmic (log), Box-Cox, arcsine, and normal
score. Normal score is the only method that will always produce normally distributed
transformed data. However, normal score transformation can only be used with simple
kriging because it assumes a known mean, and it performs poorly when the data have many tied values (such as nondetects).
FIGURE 4.58
Graphic showing the individual measurements involved in the high-variability pairs.
FIGURE 4.59
Histogram for unadjusted concentration data, which are skewed because of a high frequency of nondetect
results and a low frequency of samples with high concentrations (representing the source area/hot spot). Esri®
ArcGIS Geostatistical Analyst graphical user interface. Copyright © Esri. All rights reserved.
Rather than following the classical bell-curve shape, the data are predominantly grouped at low values, with the exception of a few high-value outliers, creating a long right tail. The mean and median
deviate significantly, which is another indicator of nonnormality. Figure 4.60 presents the
histogram of the data after they have been log-transformed. With the exception of an abnormally high frequency of values within the first (lowest-value) bin, which contains the nondetect values, the chart bears more resemblance to the normal distribution, with a mean and median that are much closer than they previously were. While not ideal (the
data are still skewed), we can expect that kriging with log-transformed data, or lognormal
kriging, will perform significantly better than the default model. Analysis of the quantile–
quantile plot for log-transformed data, presented in Figure 4.61, confirms the histogram
result. If the data follow a lognormal distribution, the data points will fall exactly on a 45°
straight line. With the exception of the nondetect data, which are all assigned an arbitrary
value of 0.05 μg/L, the data closely follow the 1:1 line. It is also likely that the abnormally high
frequency of values within the lower bin of the histogram is a result of the nondetections.
From this data exploration exercise, one can see how censored data can affect distribu-
tional assumptions and potentially influence kriging outcomes.
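The same distributional checks can be scripted outside the GUI. The sketch below, using SciPy on synthetic, roughly lognormal data with censored values set to 0.05 μg/L as in the text, compares skewness and normal Q-Q correlation before and after log transformation (the function name and synthetic data are illustrative):

```python
import numpy as np
from scipy import stats

def assess_lognormality(conc):
    """Compare raw vs. log10-transformed data: skewness and the correlation
    coefficient of a normal Q-Q plot (closer to 1 means closer to normal)."""
    results = {}
    for name, data in (("raw", np.asarray(conc, dtype=float)),
                       ("log", np.log10(conc))):
        (_osm, _osr), (_slope, _intercept, r) = stats.probplot(data, dist="norm")
        results[name] = {"skew": float(stats.skew(data)), "qq_r": float(r)}
    return results

# Synthetic, roughly lognormal concentrations; nondetects assigned 0.05 ug/L
rng = np.random.default_rng(42)
conc = rng.lognormal(mean=1.0, sigma=2.0, size=300)
conc[conc < 0.05] = 0.05  # censored values, as in the text
summary = assess_lognormality(conc)
```

The log-transformed data should show much lower skewness and a Q-Q correlation closer to 1, mirroring the histogram comparison in Figures 4.59 and 4.60.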
The final step in the data exploration process is determining whether deterministic
trends exist in the data. Figure 4.62 (top and bottom) presents visualizations from Geostatistical Analyst's Trend Analysis tool, showing the contaminant concentration data (Z plane) in
the X and Y directions. First-, second-, or third-order polynomials can be fit to the data if
trend is observed. For the contaminant concentration data, there is a trend in both the X
and Y planes. The top graphic is a plot of the unadjusted concentration data, and while the
trend is visually apparent, the polynomial equation cannot accurately fit the data because
FIGURE 4.60
Histogram for log-transformed data, which more closely resembles the normal distribution, but is still skewed
because of numerous nondetect results. Esri® ArcGIS Geostatistical Analyst graphical user interface. Copyright
© Esri. All rights reserved.
FIGURE 4.61
Quantile–quantile plot for the log-transformed data, which can be used to provide an additional line of evi-
dence in distribution assessment beyond histogram analysis. Alternatively, external programs such as ProUCL
can be used to perform statistical hypothesis tests to classify data distributions. Esri® ArcGIS Geostatistical
Analyst graphical user interface. Copyright © Esri. All rights reserved.
of the outlying behavior of the source area concentrations. The bottom graphic is a plot of
the log-transformed concentration data and demonstrates that log transformation before
detrending facilitates polynomial regression and enables a much better curve fit. This
will substantially improve detrending accuracy and reinforces the primary drawback of
simple kriging with normal score transformation—transformation can only be performed
after detrending.
Conceptually, a second-order trend is the best fit for the data based on the typical dis-
tribution of groundwater plumes. Moving from upgradient and side-gradient locations
through the contaminant source to the other side, concentrations will start low, increase,
and then decrease again, creating a hill shape akin to a second-order polynomial. The
deterministic process of groundwater flow creates this second-order trend that should be
removed from the data prior to contouring.
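Fitting and removing a second-order trend is an ordinary least-squares problem in six coefficients; a minimal sketch (our own helper names, not the Esri routine):

```python
import numpy as np

def fit_quadratic_trend(x, y, z):
    """Fit z = b0 + b1*x + b2*y + b3*x^2 + b4*x*y + b5*y^2 by least squares."""
    A = np.column_stack([np.ones_like(x), x, y, x ** 2, x * y, y ** 2])
    beta, *_ = np.linalg.lstsq(A, z, rcond=None)
    return beta

def detrend(x, y, z, beta):
    """Residuals after removing the fitted second-order trend surface."""
    trend = (beta[0] + beta[1] * x + beta[2] * y
             + beta[3] * x ** 2 + beta[4] * x * y + beta[5] * y ** 2)
    return z - trend
```

The residuals returned by `detrend` are what would be kriged; the fitted trend is added back after interpolation.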
The following conclusions are reached based on this data exploration process:
• The high variance in the data set is caused by the contaminant hot spot, where
model uncertainty will potentially be high.
• A log transformation should be used prior to kriging so the transformed data
more closely follow a normal distribution.
• Nondetections are predominately responsible for deviation from the lognormal
distribution.
• A second-order trend should be removed from the data prior to kriging (but after
data transformation) so emphasis is placed on local variation.
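Taken together, these conclusions imply a pipeline: log-transform, remove the trend, krige the residuals, and back-transform. The kriging solve at the core of that pipeline can be sketched as a single ordinary kriging system (a toy Gaussian-covariance version for one prediction point; function names and parameterization are ours, and real implementations add search neighborhoods and conditioning safeguards):

```python
import numpy as np

def gaussian_cov(h, psill, a, nugget=0.0):
    """Covariance form of a Gaussian model: C(h) = psill * exp(-(h/a)^2),
    with the nugget contributing only at zero separation."""
    return psill * np.exp(-(h / a) ** 2) + nugget * (h == 0)

def ordinary_krige(coords, values, target, psill=1.0, a=1.0, nugget=0.0):
    """Solve the ordinary kriging system [[C, 1], [1, 0]] [w, mu] = [c0, 1]
    for a single target location and return the weighted prediction."""
    coords = np.asarray(coords, dtype=float)
    values = np.asarray(values, dtype=float)
    target = np.asarray(target, dtype=float)
    n = len(values)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    K = np.ones((n + 1, n + 1))
    K[:n, :n] = gaussian_cov(d, psill, a, nugget)
    K[n, n] = 0.0
    rhs = np.ones(n + 1)
    rhs[:n] = gaussian_cov(np.linalg.norm(coords - target, axis=1), psill, a, nugget)
    w = np.linalg.solve(K, rhs)
    return float(w[:n] @ values)  # weights sum to 1 by construction
```

With a zero nugget, the system returns the datum exactly when the target coincides with a sampled location, reflecting kriging's exact-interpolation property.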
FIGURE 4.62
Visualizations of Geostatistical Analyst’s trend analysis tool using the contaminant concentration data. The top
graphic represents unadjusted data, and the bottom graphic represents log-transformed data. Log transforma-
tion improves the polynomial regression performance in areas of high contaminant concentration. In ordinary
kriging, log transformation can be conducted before detrending so the bottom graphic represents the poly-
nomial fit. Esri® ArcGIS Geostatistical Analyst graphical user interface. Copyright © Esri. All rights reserved.
FIGURE 4.63
Semivariogram with manually fit Gaussian model for the log-transformed data. Log transformation results
in a finite range of correlation and a partial sill that is clearly visible using the blue averaged semivariance
crosses. Anisotropy is used to account for the trend. Esri® ArcGIS Geostatistical Analyst graphical user inter-
face. Copyright © Esri. All rights reserved.
[Color scale for Figure 4.64: High = 31.57, Low = 0]
FIGURE 4.64
Semivariogram map for the log-transformed data. While the scale of variance has been reduced significantly
versus Figure 4.54, the relative distribution of variance is similar because of the presence of trend.
As shown by the figures, the log-transformation significantly reduces the variance of the
data. The highest calculated semivariance is now approximately 40 (μg/L)² for the lag coverage displayed on the semivariogram (Figure 4.63), which is orders of magnitude below the unadjusted maximum semivariance of approximately 10 × 10⁶ (ten million) (μg/L)².
For the semivariogram surface (Figure 4.64), the lag coverage was increased to include
the maximum distance between points such that the surface can be directly compared to
Figure 4.54. This is why the maximum semivariance shown in the surface does not exactly
match that of the semivariogram graph. Note that the relative distribution of variance
between Figures 4.64 and 4.54 is very similar as variability is still predominately related to
the contaminant hot spot or source area.
As a result of the log transformation, the spatial correlation of the data is now clearly
visible, and a Gaussian variogram model with a nugget effect can be fit to the data. Both
the semivariogram graph and surface show anisotropy because of the prevailing NW–SE
direction of groundwater flow (once again, this is more likely an example of trend than
anisotropy). Therefore, anisotropy is added to the kriging model, and each of the blue
curves on the semivariogram corresponds to a particular direction. The search neighbor-
hood is smoothed, and the resulting interpolated surface is presented in Figure 4.65. The
[Figure 4.65 legend: kriged concentration classes spanning 0.05 to 8,300 μg/L; posted sample concentrations omitted.]
FIGURE 4.65
Concentration contour map for lognormal kriging without detrending (colored, filled contours). The surface
is substantially improved over the default model, although concentrations are uniformly overpredicted and
instabilities exist in unsampled areas.
lognormal kriging with anisotropy results are clearly better than the default model. The
overall shape of the plume is reasonably represented, and the contaminant hot spot is
not excessively averaged over a large area. However, areas with measured concentrations
between 1 and 100 μg/L are still overpredicted (in part because of the nugget effect), and
there are significant instabilities where the contours are unconstrained (see the discon-
nected 50–100 μg/L area adjacent to the simulated 5.0 μg/L contour line label, for example).
Therefore, this surface is still inadequate for most professional applications.
In ordinary kriging, log transformation can be performed before detrending, which greatly assists the LPI in this example. The default second-order trend surface
is depicted in Figure 4.66 and represents the classical hill shape. Note that the default trend
surface is produced by global polynomial interpolation, where all data are used together
in the same polynomial expression.
The trend surface can be improved and converted to a local model through regression
analysis by utilizing the optimization tool. Remember that blindly hitting the Optimize
button is not guaranteed to produce good results; however, it generally works better for
the deterministic LPI model (which is used for the detrending) than for kriging. The opti-
mization process varies the polynomial interpolation bandwidth, spatial condition num-
ber, and search neighborhood to minimize cross-validation error. The resulting optimized
trend surface is presented in Figure 4.67 and resembles the classical groundwater plume
FIGURE 4.66
Screen shot of concentration data fit with a second-order global polynomial equation. The use of all data (hence
“global”) in the polynomial equation results in an overly averaged surface that does not accurately reflect the
plume shape. Esri® ArcGIS Geostatistical Analyst graphical user interface. Copyright © Esri. All rights reserved.
FIGURE 4.67
Screen shot of optimized LPI model for use in detrending. The surface resembles a classical jet engine plume
shape and is overly smooth. Kriging the residuals of the detrended data will add local variability back into the
interpolation model. Esri® ArcGIS Geostatistical Analyst graphical user interface. Copyright © Esri. All rights
reserved.
shape. This smooth, large-scale trend will be removed from the data prior to kriging so
that emphasis will be placed on local variability.
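Local polynomial interpolation amounts to refitting a distance-weighted polynomial at every prediction location; a first-order sketch (the Gaussian weight function and bandwidth handling are illustrative simplifications, not Esri's exact formulation):

```python
import numpy as np

def local_poly_predict(coords, values, target, bandwidth, order=1):
    """Distance-weighted polynomial fit centered on the target location; the
    intercept of the centered fit is the prediction at the target."""
    coords = np.asarray(coords, dtype=float)
    values = np.asarray(values, dtype=float)
    dx = coords[:, 0] - target[0]
    dy = coords[:, 1] - target[1]
    w = np.exp(-(np.hypot(dx, dy) / bandwidth) ** 2)  # Gaussian decay, one common choice
    cols = [np.ones_like(dx), dx, dy]
    if order >= 2:
        cols += [dx ** 2, dx * dy, dy ** 2]
    A = np.column_stack(cols)
    # solve the weighted least-squares problem
    beta, *_ = np.linalg.lstsq(A * w[:, None], values * w, rcond=None)
    return float(beta[0])
```

Because the weights shrink with distance, each prediction is dominated by nearby data, which is what produces the smooth, locally adaptive trend surface used for detrending.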
The log-transformed, detrended semivariogram and semivariogram map are displayed
in Figures 4.68 and 4.69, respectively. The detrending further reduces data variance to a
maximum value on the semivariogram graph (Figure 4.68) of less than 10 (μg/L)². When increasing the lag coverage to create the semivariogram surface (Figure 4.69), the maximum semivariance drops to approximately 1.5 (μg/L)². More importantly, the detrending
changes the orientation of the semivariogram surface such that the maximum variance is
now found at small separation distances (i.e., close to the center of the map). This demon-
strates that the kriging model will now be based on local variability and will not be overly
influenced by the contaminant hot spot. Note also that the degree of anisotropy decreases
significantly as indicated by the narrower spread of curves in Figure 4.68. A Gaussian
model with a small nugget effect still provides the best fit to the semivariogram graph.
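The fitted curve follows the standard Gaussian semivariogram form; the sketch below fits nugget, partial sill, and range to binned semivariance values with scipy.optimize.curve_fit (the lag values are synthetic, and parameterizations of the range differ between packages):

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_model(h, nugget, psill, a):
    """Gaussian semivariogram: gamma(h) = nugget + psill * (1 - exp(-(h/a)^2)).
    Note: some packages use exp(-3h^2/a^2) so that `a` is the practical range."""
    return nugget + psill * (1.0 - np.exp(-(h / a) ** 2))

# Fit to (lag, semivariance) pairs, e.g. the blue averaged crosses in Figure 4.63
lags = np.linspace(0.1, 5.0, 30)
gamma = gaussian_model(lags, 0.2, 2.0, 1.5)  # synthetic "observed" semivariances
popt, _ = curve_fit(gaussian_model, lags, gamma, p0=[0.1, 1.0, 1.0])
```

In practice the fit would use the binned empirical semivariances rather than synthetic values, and directional fits (one curve per angle class) capture the anisotropy.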
The resulting interpolated surface, after search neighborhood smoothing, is presented
in Figure 4.70. This model is clearly superior to the previous two iterations as interpolated
contours match simulated contours at both high concentrations and low concentrations.
The greatest discrepancy is now found around the 10 μg/L contour line, which overlies an
area of hydraulic conductivity and potentiometric surface transition (see Figure 4.50). The
spatial interpolation model is also unable to replicate the protrusion in the 1 and 2 μg/L
simulated contours in the right center of the map between the 0.3 and 1.5 μg/L values. This
FIGURE 4.68
Semivariogram with manually fit Gaussian model for the log-transformed, detrended data. The partial sill,
nugget, and degree of anisotropy have been significantly reduced compared to the default and log-transformed
(without detrending) models. Esri® ArcGIS Geostatistical Analyst graphical user interface. Copyright © Esri.
All rights reserved.
[Color scale for Figure 4.69: High = 1.4956, Low = 2.03 × 10⁻⁵]
FIGURE 4.69
Semivariogram map for the log-transformed, detrended data. Variability is now concentrated at small separa-
tion distances.
[Figure 4.70 legend: kriged concentration classes spanning 0.05 to 8,300 μg/L; posted sample concentrations omitted.]
FIGURE 4.70
Concentration contour map for lognormal kriging with detrending (colored, filled contours). The surface is
usable for predicting concentrations at unsampled locations, generating graphics for inclusion in reports or
presentations, and calculating contaminant mass.
demonstrates that high localized heterogeneity is very difficult to represent with even the best
kriging model. Thorough conceptual analysis of the groundwater flow field is necessary to determine where additional sampling may be needed to evaluate the effects of heterogeneity.
FIGURE 4.71
Cross-validation comparison between the default kriging model and the log-transformed, detrended Gaussian
kriging model. Esri® ArcGIS Geostatistical Analyst graphical user interface. Copyright © Esri. All rights
reserved.
The predicted-versus-measured graph for the default model reflects the fact that excessive averaging is used. Because of this averaging,
there is less underpredicting bias for the default model, and the root-mean-square stan-
dardized error is closer to 1. Therefore, if one were solely looking at the cross-validation
statistical results (i.e., ignoring the predicted versus observed graph and the surface itself),
it is possible that the default model could be selected over the advanced model with the
justification that it more accurately reflects the variability of the data. In this manner, the
cross-validation statistics are misleading as the two hot spot monitoring locations are outli-
ers that strongly influence the cross-validation statistics of the advanced model. The default
model merely averages these values away, correcting the bias by introducing more error.
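Cross-validation statistics of this kind can be computed for any interpolator; the generic leave-one-out sketch below takes a `predict` callback that stands in for whatever model is being evaluated (the callback shown is a deliberately naive placeholder):

```python
import numpy as np

def loo_cross_validation(coords, values, predict):
    """Leave-one-out CV. `predict(train_coords, train_values, target)` must
    return (prediction, standard_error) from any interpolation model."""
    coords = np.asarray(coords, dtype=float)
    values = np.asarray(values, dtype=float)
    preds = np.empty(len(values))
    ses = np.empty(len(values))
    for k in range(len(values)):
        keep = np.arange(len(values)) != k
        preds[k], ses[k] = predict(coords[keep], values[keep], coords[k])
    err = preds - values
    return {
        "mean_error": float(err.mean()),
        "rmse": float(np.sqrt((err ** 2).mean())),
        # near 1.0 when the reported standard errors are honest
        "rms_standardized": float(np.sqrt(((err / ses) ** 2).mean())),
    }

# Toy predictor: the mean of the remaining data, with a placeholder unit error
def mean_predictor(train_xy, train_z, target_xy):
    return train_z.mean(), 1.0
```

As the text cautions, a root-mean-square standardized error near 1 is no guarantee of a good surface; outliers such as the hot spot wells can dominate these statistics.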
While the default model is clearly erroneous in this case, the decision regarding which
model is best becomes more difficult when two reasonable models are created, where one
is superior statistically and the other is superior visually. An example of this situation is
presented on the companion DVD related to cokriging of the contaminant concentration
data. In general, the decision as to which model is better depends on the overall purpose
of the spatial interpolation exercise. If the contouring is being used to make predictions
that must be rigorously defended to regulators and/or litigators, it is beneficial to have a
strong statistical performance with reliable estimates of uncertainty. Alternatively, if the
contouring is being used predominately for graphical display purposes or for input into a
groundwater model (situations where smooth interpolation surfaces are desirable), visual
appearance may be more important. The CSM must also be considered in the evaluation
as the model with worse statistical performance may more accurately depict conceptual
processes and therefore better represent reality.
In addition to cross-validation statistics, the prediction standard error of the kriging
models should be compared. The prediction standard error surface for the detrended, log-
transformed model is presented in Figure 4.72. This map is dramatically different from
FIGURE 4.72
Prediction standard error map for the detrended, log-transformed model. Note that while the kriged prediction
surface (Figure 4.70) cannot replicate the contour bend between the 0.3 and 1.5 µg/L values on the right-center
of the map, the prediction standard error map does capture variability in this area.
the nonsensical default prediction standard error map presented in Figure 4.56. After
detrending and transforming the data, prediction standard error now depends on both the
sampling density and the locations of the data values, which is why error is significantly
higher in the contaminant hot spot than in the downgradient plume area. Pockets of rela-
tively high prediction standard error between 5 and 18 μg/L are caused by a sampling
density that insufficiently captures the observed heterogeneity of the data.
The data dependency of the prediction standard error estimates is also illustrated
through the histograms presented in Figure 4.73. The histogram at the top is for prediction
standard error estimates of the detrended, log-transformed data and shows that the error
estimates reasonably resemble a lognormal distribution, just like the data themselves.
Conversely, the default prediction standard error histogram at the bottom does not fol-
low any discernable distribution. In general, prediction standard error estimates are data
dependent when they follow the same distribution as the input data. While one should
be very happy that error is now data dependent, the novice user may once again be mis-
led by the fact that the advanced model has a significantly higher maximum prediction
standard error than the default model. It is hoped that hydrogeologists performing spatial
FIGURE 4.73
Histograms of log-transformed prediction standard error estimates at sampling locations for the detrended,
log-transformed kriging model (top), and the default kriging model (bottom). Esri® ArcGIS Geostatistical
Analyst graphical user interface. Copyright © Esri. All rights reserved.
TABLE 4.2
Summary Table of Predictions at Unsampled Locations for Three Kriging Models

                                Predicted Concentration (μg/L)
Test     Concentration   Detrended,        Log
Well     (μg/L)          Log Transformed   Transformed   Default
TW-1      1,436           2,981             4,034         1,074
TW-2     12,677           6,594             8,538         1,134
TW-3          8.10           11.7              25.0         513
TW-4          2.90            1.10              3.16          3.43
TW-5          0.52            0.94              3.25          1.80
As shown in Table 4.2, for the default model the peak concentration is grossly underpredicted, and lower concentrations are overpredicted. The log-transformed model without detrending uniformly overpredicts, with the
exception of TW-2, as the hot spot exerts too much influence on the rest of the interpola-
tion surface. The detrended, log-transformed model performs best across the full range of
concentrations but still has difficulty capturing the peak at TW-2. It is unlikely that any
kriging model will accurately predict the TW-2 concentration because it is significantly
higher than the greatest measured values used in the contouring (8300 μg/L). For regula-
tory purposes, accurately predicting the extent of low concentrations in single-digit or tens
of micrograms per liter is often most important as drinking water standards are generally
found in that concentration range. The advanced model clearly performs best in this range
as demonstrated by the results at TW-3.
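The relative performance visible in Table 4.2 can be summarized with a scale-free metric; mean absolute log10 error is one reasonable choice (the metric is our addition, the numbers are from the table):

```python
import numpy as np

# Observed and predicted concentrations (ug/L) at the test wells, from Table 4.2
observed = np.array([1436.0, 12677.0, 8.10, 2.90, 0.52])
predicted = {
    "detrended_log": np.array([2981.0, 6594.0, 11.7, 1.10, 0.94]),
    "log":           np.array([4034.0, 8538.0, 25.0, 3.16, 3.25]),
    "default":       np.array([1074.0, 1134.0, 513.0, 3.43, 1.80]),
}

def mean_abs_log_error(obs, pred):
    """Mean |log10(pred / obs)|, a scale-free error suited to concentration data."""
    return float(np.mean(np.abs(np.log10(pred / obs))))

scores = {name: mean_abs_log_error(observed, p) for name, p in predicted.items()}
```

The ordering of the scores matches the text: the detrended, log-transformed model performs best across the concentration range, and the default model performs worst.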
FIGURE 4.74
Covariance plot between the contaminant concentration data (Variable 1) and the water level data (Variable 2),
demonstrating correlation between the two variables. Esri® ArcGIS Geostatistical Analyst graphical user inter-
face. Copyright © Esri. All rights reserved.
FIGURE 4.75
Concentration contour map for a cokriging model using additional water-level monitoring points (white circles)
to supplement the existing contaminant concentration data set (purple circles).
4.5.3 Summary
The above contouring exercise demonstrates the extensive geostatistical capabilities that
are now readily available to the professional hydrogeologist. GUIs in programs such as
Surfer and Geostatistical Analyst greatly simplify advanced quantitative analysis, and
the use of advanced geostatistical methods is no longer limited to academia. Obviously,
there is danger inherent in introducing these methods to nonexperts because of the
potential for software misuse (see the discussion in this chapter regarding the use and
interpretation of prediction standard error and indicator kriging, for example). However,
in the opinion of the authors, the far bigger problem is the widespread use of default
models in professional hydrogeology. This practice is not scientific and completely
ignores the advancements that have been made in commercial and public-domain soft-
ware in recent years.
One potential reason for the persistence of default models is the failure to appreciate the
benefits of geostatistical spatial interpolation or, more precisely, the failure to appreciate
the consequences of not using geostatistical spatial interpolation. Without defensible, sci-
ence-based interpolation, one is left with an endless cycle of data collection. If uncertainty
cannot be quantified, then no argument can be made that enough data have been collected
to support the CSM and the data quality objectives of the sampling program. For site reme-
diation applications, this translates to the familiar process of delineating to nondetect.
Excessive data collection during site investigation and remediation often leads to the gen-
eration of absurd figures and data tables such as those in Figures 4.76 and 4.77. While the
extremely high density of monitoring wells in Figure 4.76 may be necessary for research
applications, it makes little sense in professional practice.
It goes without saying that the cycle of endless data collection results in unacceptable
costs, consuming financial resources that could be better allocated elsewhere. Unfortunately,
environmental consultants will often acquiesce to regulatory demands for additional
sampling without even proposing a geostatistical alternative that expands knowledge
about the data that have already been collected. Redundancy in data collection can be
both spatial and temporal, and geostatistical methods can document this redundancy and
prove that additional data collection has no value. For example, VSP (Visual Sample Plan), a statistical analysis
program available in the public domain, has specific modules designed to help the user
identify spatial and temporal redundancy in sampling networks (Matzke et al. 2010). As
advanced capabilities, such as single well/temporal variogram analysis, are now available
in the public domain in programs such as VSP, there is no excuse for the professional com-
munity to ignore these methods any longer.
To avoid wasteful data collection and to advance informed decision-making processes
that quantify uncertainty, it is necessary for both the consulting and regulatory commu-
nities to become more educated in geostatistics and more willing to apply and embrace
geostatistical concepts.
FIGURE 4.76
Photograph of a well field at the Massachusetts Military Reservation. Appropriate application of kriging may
help avoid sampling at this density in professional (nonacademic) settings. (From United States Geological
Survey, Carbon and Nitrogen Cycling in Groundwater: Cape Cod Study Site. Biogeochemistry of Carbon and
Nitrogen in Aquatic Environments, 2010. Available at www.brr.cr.usgs.gov/projects/EC_biogeochemistry/
Cape.htm.)
FIGURE 4.77
Data table demonstrating the consequences of excessive data collection to satisfy regulatory demands.
Geostatistical methods tell us more about our data than ill-advised extraneous sampling and help ensure that
each additional sample has a substantive conceptual justification. In this case, the cost of each single piece of
data (i.e., each cell) on the table is estimated at $14.
FIGURE 4.78
Dialogue box for the Topo to Raster tool in ArcGIS. Note the allowable input formats include hydrology-specific
data fields. Esri® ArcGIS Spatial Analyst graphical user interface. Copyright © Esri. All rights reserved.
FIGURE 4.79
Contour lines used as input to the Topo to Raster tool.
FIGURE 4.80
Execution of the tool produces the colored raster shown below the contour lines.
FIGURE 4.81
Dialogue box for the Raster to Point conversion tool in ArcGIS. Esri® ArcGIS ArcToolbox graphical user inter-
face. Copyright © Esri. All rights reserved.
FIGURE 4.82
Execution of the tool results in the creation of a point file with a feature placed at the center of each grid cell with
the value of the raster cell added to the point file’s attribute table. This point file can now be used to recreate the
raster in any format using exact interpolation.
Running the Raster to Point tool on the raster created for Figure 4.80 results in the point file presented in Figure 4.82. We have
now converted the originally obtained contour lines into a point file with the attributes of
an interpolated surface.
In order to convert these points to a Surfer grid file, one only needs to bring the point
file into Surfer and perform any exact interpolation method (such as default linear kriging)
specifying the same grid size as the original Esri grid file. The same process can be per-
formed in reverse using a point file from a Surfer grid and then interpolating the points
into a raster using Spatial Analyst or Geostatistical Analyst in ArcGIS.
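The Raster to Point step amounts to writing one XYZ record per cell center; a format-neutral sketch (grid origin and row-order conventions differ between Esri and Surfer, so the ones assumed here are stated explicitly):

```python
import numpy as np

def raster_to_points(grid, x0, y0, cell):
    """Emit one (x, y, z) record per cell center of a regular grid.
    Assumes (x0, y0) is the lower-left corner and row 0 is the southern edge."""
    ny, nx = grid.shape
    xs = x0 + (np.arange(nx) + 0.5) * cell
    ys = y0 + (np.arange(ny) + 0.5) * cell
    X, Y = np.meshgrid(xs, ys)
    pts = np.column_stack([X.ravel(), Y.ravel(), grid.ravel()])
    return pts[~np.isnan(pts[:, 2])]  # drop NoData cells
```

The resulting points can then be gridded with any exact interpolator, reproducing the original raster in the other package's format.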
FIGURE 4.83
Dialogue box for the Extract Values to Points tool. Raster values at the corresponding X, Y locations will be
added to the attribute table of the black points depicted in the figure. Esri® ArcGIS ArcToolbox graphical user
interface. Copyright © Esri. All rights reserved.
FIGURE 4.84
Dialogue box used to specify parameters for kriging with Spatial Analyst. Variography cannot be performed in
Spatial Analyst, and if Spatial Analyst were meant to be used for kriging, Geostatistical Analyst would not exist.
Esri® ArcGIS ArcToolbox and Spatial Analyst graphical user interface. Copyright © Esri. All rights reserved.
FIGURE 4.85
Contours of the aquitard created using kriging with default settings in Spatial Analyst. Note areas of instability
with jagged transitions between contour lines.
References
Carlson, R. E., and Foley, T. A., 1991. Radial Basis Interpolation on Track Data. Lawrence Livermore
National Laboratory, UCRL-JC-1074238.
Clark, I., 1979. Practical Geostatistics. Applied Science Publishers, London, 129 pp.
Cressie, N., 1991. Statistics for Spatial Data. John Wiley & Sons, Inc., New York, 900 pp.
Esri, Inc., 2003. ArcGIS® 9: Using ArcGIS® Geostatistical Analyst. Redlands, CA, 300 pp.
Esri, Inc., 2010a. How Local Polynomial Interpolation Works. ArcGIS 10 Help Library.
Esri, Inc., 2010b. How Semivariogram Sensitivity Works. ArcGIS 10 Help Library.
Esri, Inc., 2010c. Key Concepts of Geostatistical Simulation. ArcGIS 10 Help Library.
Golden Software, Inc., 2002. Surfer 8 User’s Guide: Contouring and 3D Surface Mapping for Scientists and
Engineers. Golden Software, Inc., Golden, CO, 640 pp.
Helsel, D. R., 2005. Nondetects and Data Analysis. John Wiley & Sons, Hoboken, NJ, 250 pp.
Institute of Environmental Modeling, University of Tennessee, 2008. Spatial Analysis and Decision
Assistance (SADA) Software Home Page. University of Tennessee Research Corporation.
Available at www.tiem.utk.edu/~sada/index.shtml.
Isaaks, E., and Srivastava, M., 1989. An Introduction to Applied Geostatistics. Oxford University Press,
New York, 561 pp.
Kitanidis, P. K., 1997. Introduction to Geostatistics. Applications in Hydrogeology. Cambridge University
Press, Cambridge, UK, 249 pp.
Kresic, N., 2007. Hydrogeology and Groundwater Modeling, Second Edition. CRC Press/Taylor & Francis,
Boca Raton, FL, 807 pp.
Kresic, N., 2009. Groundwater Resources. Sustainability, Management, and Restoration. McGraw Hill,
New York, 852 pp.
Krivoruchko, K., 2011. Spatial Statistical Data Analysis for GIS Users. Esri Press, Redlands, CA, 928 pp.
Krivoruchko, K., and Bivand, R., 2009. GIS, Users, Developers, and Spatial Statistics: On Monarchs
and Their Clothing. In Interfacing Geostatistics and GIS, Springer, pp. 209–228. Paper presented
at the StatGIS 2003, International Workshop on Interfacing Geostatistics, GIS and Spatial
Databases, Pörtschach, Austria, September 29–October 1, 2003.
Krivoruchko, K., Gribov, A., and Ver Hoef, J., 2000. A New Method for Handling the Nugget Effect
in Kriging. Available from Esri online at http://training.esri.com/bibliography/index.cfm?event=general.recorddetail&id=30570. Accessed March 23, 2011.
Kupusović, T., 1989. Measurements of piezometric pressures along deep boreholes in karst area and
their assessment. Nas Krs XV(26–27), 21–30.
Matzke, B. D., Nuffer, L. L., Hathaway, J. E., Sego, L. H., Pulsipher, B. A., McKenna, S., Wilson, J. E.,
Dowson, S. T., Hassig, N. L., Murray, C. J., and Roberts, B., 2010. Visual Sample Plan Version 6.0
User’s Guide. United States Department of Energy, PNNL-19915, 255 pp. Available at http://vsp.pnnl.gov/docs/PNNL%2019915.pdf.
Milanović, P., 2006. Karst istocne Hercegovine i dubrovackog priobalja (Karst of Eastern Herzegovina and
Dubrovnik Litoral). ASOS, Belgrade, 362 pp.
United States Environmental Protection Agency, 1977. The Report to Congress: Waste Disposal
Practices and Their Effects on Ground-Water. EPA 570977001, 531 pp.
United States Geological Survey, 2010. Carbon and Nitrogen Cycling in Groundwater: Cape Cod
Study Site. Biogeochemistry of Carbon and Nitrogen in Aquatic Environments. Available at
http://www.brr.cr.usgs.gov/projects/EC_biogeochemistry/Cape.htm.
Webster, R., and Oliver, M. A., 2001. Geostatistics for Environmental Scientists. John Wiley & Sons, Ltd.,
Chichester, England, 271 pp.
5
Groundwater Modeling
5.1 Introduction
Groundwater models are utilized during all phases of conceptual site model (CSM) develop-
ment. They vary in complexity from simple analytical models used as screening tools to three-
dimensional (3D) numerical models that serve as a CSM’s focal point. Advanced numerical
models with a graphical user interface (GUI) share many of the advantages of a geographic information system (GIS) and embody various quantitative relationships between the CSM elements.
Models enable hydrogeologists to make educated predictions about the state of the system and
its response to changes. These changes may be an increased demand of groundwater use in a
watershed, more frequent occurrence of droughts, or impacts of a groundwater remediation
technology on contaminant concentrations some distance downgradient of the site. Whatever
the case may be, a predictive model—analytical or numerical—is often used to help guide
development of the CSM, test scenarios, and support or refute conclusions.
Unfortunately, there exists a distrust of groundwater models in the environmental indus-
try. Some professionals and regulators believe groundwater model results to be entirely
nonunique, which means that results can easily be tweaked to fulfill a preordained pur-
pose while still satisfying a calibration standard. To a certain extent, the critics are correct.
The hydrogeologist has tremendous influence over model results and can sway things one
way or another by modifying model input parameters that are difficult to trace. For this
and other reasons, some professionals hired by parties with high-stakes agendas, such as
winning lawsuits, may bend the limits of professional ethics and develop groundwater
models that cannot be comprehended even by some practicing hydrogeologists.
The questions that inevitably arise regarding modeling for any purpose are “Why do we
need to do groundwater modeling in the first place?” and “Can groundwater modeling be
performed in a transparent manner that is easy to review and understand?” To answer the
first question, hydrogeologists need groundwater models for two primary reasons: One is to
predict future conditions, and the other is to understand how current conditions came to be.
Nearly all projects in hydrogeology require future projections, and the hydrogeologist is relied
upon for expert opinion in this matter. Most commonly, the hydrogeologist must evaluate the
efficacy of selected interventions in water-supply development or hazardous-waste remedia-
tion (particularly groundwater remediation). Examples include the installation and operation
of new water-supply wells or extraction wells associated with pump-and-treat remediation, the
injection of chemical reagents for groundwater remediation (in situ chemical oxidation), or the
stimulation of contaminant biodegradation. Groundwater models are the best available means
of simulating these interventions and, most simply, determining if they will work. Modeling
can also fill in the gaps between data that have been or will be collected at discrete time inter-
vals, thus helping the hydrogeologist better understand and simulate transient processes.
The answer by the authors and many professional hydrogeologists to the second ques-
tion (Can groundwater modeling be performed in a transparent manner that is easy to
review and understand?) is a resolute yes, as demonstrated further in this chapter.
and hydraulic and chemical parameters. Yet the hydrogeologist will inevitably discover
something new about the site through the execution of the groundwater model and will
likely alter the CSM accordingly. The explanation for this iterative behavior is that the
initial CSM is usually developed with coarse resolution and focuses on macro concepts
such as general soil (rock) type, stratigraphic relationships, general groundwater chem-
istry, and aquifer production potential. When these broad categorizations are translated
into a transient, site-specific numerical model, the many details involved in groundwater
flow and contaminant transport become significant at a more refined scale. Changes in
model layering or key parameters such as groundwater recharge or hydraulic conductiv-
ity become necessary, and often a conceptual justification for these changes exists that
initially escaped the coarse resolution of the CSM. The groundwater model and the CSM
work hand-in-hand to create a 3D simulation of the real world, which the hydrogeologist
can use to predict future conditions and explain how current conditions came to be.
[Figure 5.1 legend: hydraulic conductivity of 4.3 × 10−5, 0.01, 4.32, and 25 ft/d; available boring log data; scale bar of 4040 ft.]
FIGURE 5.1
Spatial distributions of four sediment types at two different depths below ground surface, 30 ft apart. These
probabilistic realizations were created by an expert witness in a lawsuit using a technique developed by the
expert witness. Available boring logs for the two depths are shown with yellow circles. Each sediment type
was assigned arbitrary values of hydraulic conductivity. Regional lacustrine clay deposits are depicted in red.
future by leaks of gasoline at certain gas stations. Incidentally, the sites in question are
situated in an area with thick regional lacustrine clay deposits, which act as a regional
aquitard. This aquitard separates surficial sediments from the underlying aquifer where
the production wells are installed to extract groundwater for water supply. Apparently
because the nongeologist (an expert witness for the plaintiff) opined that “every aquitard
leaks,” a computer program was used to create an objective interpretation of the subsurface
based on well drillers’ logs and a statistical algorithm to fill in geologic data gaps based on
probability. The expert witness and the assistant applied this concept three times, creat-
ing three different 3D realizations of the probable subsurface geology. Two depth-discrete
horizontal maps (slices) of one of the 3D realizations, together with the locations of field
information (from borings/wells) available for these depths, are shown in Figure 5.1. By
utilizing a technique developed by the expert witness and the assistant, they “proved” that every
aquitard should leak because the probabilistic model created many holes (small, large, and
everything in between) in the thick lacustrine clay deposits, including in wide areas without any field information. The three probabilistic realizations of the 3D subsurface geology
created by the expert witness and the assistant looked similar but did have fundamental
differences because of the randomness of the model.
After creating the three probabilistic interpretations of the subsurface geology, the
expert witness and the assistant then simulated the fate and transport of the constituent of
concern (COC) in groundwater. The final deliverable of their groundwater modeling was
three different applicable (to the issue in question) predictions of the contaminant concen-
trations in the subsurface. At the same field locations (same wells) and at the same times
of prediction, concentrations ranged between nondetect and hundreds of micrograms per
liter (parts per billion) between the three scenarios. When asked by the judge which of the
predicted concentrations at certain locations was more probable, the expert witness could
not decide. As a result, the “most sophisticated groundwater model ever created” was
dismissed by the judge.
As described further in Section 5.5, statistically based algorithms to assess model uncertainty generally require hundreds or even thousands of model runs so that the probability of a certain outcome can be quantified. The expert witness may have been able to
quantify the likelihood of a certain contaminant distribution if a more robust analysis were
conducted, and doing so may have prevented the model from being discarded. However,
the possibility does exist that the expert witness performed additional analyses yet only
presented the three realizations that showed a breach in the aquitard solely to illustrate
that such an occurrence was possible. This practice of cherry-picking only the random
realizations that suit one’s conceptual objectives defeats the entire purpose of probabilis-
tic modeling. Anything is theoretically possible, and the real challenge is characterizing
uncertainty qualitatively through professional judgment and the CSM, and quantitatively
wide and tens of miles long were based on a number of factors, including major potentio-
metric surface troughs in the aquifer, the presence of sinking streams, geochemical infor-
mation, and geologic structures (Lindgren et al. 2004).
Figure 5.2a shows a portion of the model domain with the simulated steady-state poten-
tiometric surface of the Edwards Aquifer. The model’s no-flow boundary is emphasized by
annotation added to the original figure from Lindgren et al. (2004). This no-flow boundary
FIGURE 5.2
Portion of the model domain showing results of a numerical groundwater flow model developed for karstic
Edwards Aquifer in Texas. (a) Calibrated steady-state model with the model-predicted potentiometric surface
contours; annotation added for emphasis. (b) Calibrated transient model for drought conditions of August
1956; blue arrows and question marks added for emphasis. (From Lindgren, R. J. et al., Conceptualization and
Simulation of the Edwards Aquifer, San Antonio Region, Texas. U.S. Geological Survey Scientific Investigations
Report 2004-5277, 143 pp., 2004.)
Figures 5.3a and 5.3b show the same steady-state and transient conditions depicted in
Figures 5.2a and 5.2b, respectively, when the virtual conduits are in the model (the con-
duits are represented by the jagged, thick red lines). Notably, the simulated potentiometric
contour lines for the conduit conditions are omitted from the two figures. Instead, the
authors of the model have chosen to schematically show groundwater directions using
hand-drawn black arrows (note that all arrows in Figure 5.3 are original to the USGS pub-
lication). The USGS arrows showing groundwater flow directions are perpendicular to the
no-flow boundary, as emphasized with added question marks in Figure 5.3b, indicating
that flow across the no-flow boundary is occurring.
The authors of this book are not aware of any plausible explanation for the strange,
hydraulically impossible behavior of the no-flow boundary in this one-layer numerical
model of the Edwards Aquifer. In addition to the USGS, multiple public and private profes-
sionals are listed as authors of the model or members of the Ground Water Model Advisory
Panel that endorsed it. Consequently, it may appear that both the numerical groundwater
model and the conceptual hydrogeological model it is supposed to simulate must be cor-
rect because they were endorsed by numerous institutions and consulting companies.
The above example illustrates that one should never accept previously published, peer-
reviewed information at face value without independent, critical analysis. While the USGS
produces many useful and accurate reports, the institution is not infallible. It is often
desirable for a hydrogeologist to directly cite the USGS or United States Environmental
Protection Agency (U.S. EPA) as there will generally be less resistance from regulators in
accepting the related concepts and conclusions. However, if the assumptions and results of
the study in question are wrong, it can lead to the rapid propagation of conceptual errors
that can become entrenched in professional practice.
It is acknowledged that the circumstances of the model review are not known to the
authors, including the visualizations of model output made available to the reviewers.
However, the widespread endorsement of a model with apparent conceptual and other
problems could be construed as an example of groupthink, when group pressures lead to
a breakdown in independent thought and result in flawed decision making (Janis 1972).
Groupthink favors anecdotal assumptions over scientific evidence, avoids criticism and
controversy to achieve “consensus,” and rationalizes bad decisions made in the past rather
than exploring new solutions. To avoid groupthink, group members should remain as
impartial as possible and consult independent, evidence-based opinion from third parties
removed from the impacts and political pressures of the decision to be made.
Selecting and designing a remedial process may be the central decision a team of envi-
ronmental professionals is making for a site. As suggested by the U.S. EPA, it is difficult
FIGURE 5.3
Portion of the model domain showing results of a numerical groundwater flow model developed for karstic
Edwards Aquifer in Texas when virtual conduits (jagged red lines) are included in the model. Hand-drawn
black arrows showing groundwater flow directions are original to this USGS report. Note the absence of model-
predicted potentiometric surface contour lines. (a) Calibrated steady-state model. (b) Calibrated transient model
for drought conditions of August 1956; question marks are added for emphasis. (From Lindgren, R. J. et al.,
Conceptualization and Simulation of the Edwards Aquifer, San Antonio Region, Texas. U.S. Geological Survey
Scientific Investigations Report 2004-5277, 143 pp., 2004.)
for any software to incorporate the myriad input data and parameters that are required to
select and customize a remedial process to a site’s unique characteristics. The agency also
points out that new insight and experience are constantly reshaping design and applica-
tion of a remedial process, even for the most successful and widely used technologies.
For these reasons, many past attempts at developing decision support tools that include
groundwater models to support remedial process selection have been abandoned or are no
longer supported by the U.S. EPA (2005). Instead, hydrogeologists are relying on individ-
ual, site-specific groundwater models and their combinations to provide conceptual and
water modeler; the two professions are inseparable today, and groundwater modeling has
become an industry standard practice in all aspects of hydrogeology.
Groundwater models can be divided into three general groups:
• Empirical (experimental)
• Probabilistic
• Deterministic
Empirical models are derived from experimental data that are fitted to some mathematical
function. Although empirical models are limited in scope and are usually site- or problem-
specific, they can be an important part of a more complex numerical modeling effort. For
example, the behavior of a certain contaminant in porous media can be studied in the
laboratory or in controlled field experiments, and the derived experimental parameters
can then be used for developing numerical models of groundwater transport.
Probabilistic models are based on laws of probability and statistics. They can have vari-
ous forms and complexity starting with a simple probability distribution of a hydrogeo-
logical property of interest and ending with complicated, time-series stochastic models.
The main limitations to wider use of probabilistic (stochastic) models in hydrogeology
are that they require large data sets for parameter identification and that they cannot
be used to answer (predict) many of the most common questions of hydrogeologic practice, such as the effects of future pumping on an aquifer (Kresic 2007).
Deterministic models describe the state or future reactions of the groundwater system
using the physical laws governing groundwater flow. An example is the flow of ground-
water toward a fully penetrating well in a confined aquifer as described with the Theis
equation. Most problems in traditional hydrogeology are solved using deterministic mod-
els, which can be as simple as the Theis equation or as complicated as a numerical model
of multiphase flow through a multilayered, heterogeneous, anisotropic aquifer system.
Regardless of the type, for any groundwater model to be interpreted and used properly,
its limitations should be clearly understood and described. In addition to strictly technical
limitations, such as accuracy of computations (hardware/software), the following is true
A model will therefore have a varying degree of reliability, and it cannot
be misused as long as all its limitations are clearly stated, the modeling process follows
industry-established procedures and standards (see Section 5.6), and the modeling documentation and reports are transparent, also following industry standards.
The two most widely applied groups of deterministic models are analytical and numerical, as described in the following sections.
1. Solute transport with or without first-order decay for any dissolved constituent.
2. Solute transport with biotransformation modeled as a sequential first-order decay
process primarily for simulating the sequential reductive dechlorination of chlo-
rinated ethanes (TCA) and ethenes (PCE, TCE).
3. Solute transport with biotransformation modeled as a sequential first-order decay
process with two different reaction zones (i.e., each zone has a different set of deg-
radation rate coefficient values).
4. Decaying source. To model a decaying source in BIOCHLOR, the Domenico (1987)
semianalytical solution for reactive transport with first-order biological decay
was modified to incorporate a decaying source (boundary) condition. The revised
model assumes that the source decays exponentially via a first-order expression:
C(t) = C0 exp(−ks t). The source decay constant ks must be determined by the user prior to
using BIOCHLOR. The model includes an option for a continuous (nondecaying)
source term as well.
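The decaying source term reduces to a few lines of Python. This sketch illustrates only the exponential source expression (not BIOCHLOR itself), using the source values that appear later in this chapter: an initial TCE concentration of 14,000 µg/L, ks = 0.035 per year, and a 15-year run time:

```python
import math

def source_concentration(c0, ks, t):
    """First-order decaying source term: C(t) = C0 * exp(-ks * t)."""
    return c0 * math.exp(-ks * t)

# TCE source concentration after 15 years of first-order decay.
c_15yr = source_concentration(14000.0, 0.035, 15.0)  # roughly 8300 ug/L
```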
Because BIOCHLOR and BIOSCREEN have been developed primarily as quick screening
tools, they should not be applied where pumping systems create a complicated flow field.
In addition, the models should not be applied where vertical flow gradients affect contami-
nant transport or where recharge from the land surface or laterally from adjacent aquifers
plays a significant role. However, either model can be used for simulating the fate and transport of any dissolved constituent, organic or inorganic, including those that do not sorb
onto porous media or do not degrade. An example screen shot of the BIOSCREEN-AT
input menu is shown in Figure 5.4. In this case, the modeled constituent is a metal with a
retardation factor, R, of 132. The metal does not degrade as specified by a decay rate of zero.
Note that the fields for all degradation coefficients or half-lives of BTEX constituents are
displayed by default but are not active in this case because the modeled constituent is not
BTEX. The model output screen showing the constituent’s concentration along the plume’s
centerline versus distance from the source after 100 years of travel is provided in Figure
5.5. The mechanisms that decrease the contaminant’s concentration as it moves away from
the source are the assumed mechanical dispersion and the retardation resulting from
FIGURE 5.4
BIOSCREEN-AT input menu. The modeled constituent is a metal with the retardation factor (retardation coeffi-
cient), R, of 132 and the source zone concentration of 1.23 mg/L. The simulation time is 100 years. Biodegradation
option is not active.
FIGURE 5.5
BIOSCREEN-AT results screen for the input parameters shown in Figure 5.4.
sorption (again, note that this simple analytical model does not simulate dilution from
recharge or lateral influx of clean groundwater).
natural attenuation is most likely to be protective of human health and the environment. It
also allows regulators to carry out an independent assessment of treatability studies and
remedial investigations that propose the use of natural attenuation (Aziz et al. 2000).
Advective transport. The representative seepage velocity (Vs) of groundwater flow through
the interstitial space of the porous media (aquifer matrix) is calculated by multiplying
hydraulic conductivity (K) by hydraulic gradient (i) and dividing by effective porosity (n).
As emphasized by the BIOCHLOR manual, it is strongly recommended that actual site
data be used for the hydraulic conductivity and hydraulic gradient parameters, whereas
effective porosity can be estimated based on the predominant soil type.
The site-specific hydraulic conductivity is 0.00175 cm/s, the geometric mean of slug test
values from the four monitoring wells, MW-1, MW-2, MW-3, and MW-4: 0.01067, 0.00039,
0.00106, and 0.00214 cm/s, respectively. The geometric mean, rather than the arithmetic
mean, is selected as the most probable value, knowing that the hydraulic conductivity of
an aquifer typically follows a log-normal probability distribution (Kresic 2007).
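The geometric mean quoted above is straightforward to verify; this short Python check uses the four slug-test values from the text:

```python
import math

def geometric_mean(values):
    """Geometric mean: the exponential of the arithmetic mean of the natural logs."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Slug-test hydraulic conductivities (cm/s) for MW-1 through MW-4.
k_values = [0.01067, 0.00039, 0.00106, 0.00214]
k_gm = geometric_mean(k_values)  # approximately 0.00175 cm/s
```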
The hydraulic gradient is 0.00179, calculated as the average change in the hydraulic head
between MW-1 and MW-4 (recorded between September 2003 and June 2011) divided by
the distance between the two wells: (246.01 – 244.31)/950 = 0.00179. Because of the minor
presence of silt and clay, the effective porosity is estimated at 20% or slightly less than the
typically used value of 25% for sands. The schematic of the industrial site showing the
contaminant source zone and the four monitoring wells selected for the modeling is pre-
sented in Figure 5.6. The BIOCHLOR input screen showing input fields for the advective
transport and other model parameters discussed further is given in Figure 5.7.
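Combining the parameters above, the seepage velocity can be checked with a short script; the unit conversion from cm/s to ft/yr is the only nontrivial step, and the result agrees with the approximately 16 ft/year velocity cited later in the calibration discussion:

```python
def seepage_velocity_ft_per_yr(k_cm_per_s, gradient, effective_porosity):
    """Vs = K * i / n, with K converted from cm/s to ft/yr (1 ft = 30.48 cm)."""
    k_ft_per_yr = k_cm_per_s * 86400.0 * 365.25 / 30.48
    return k_ft_per_yr * gradient / effective_porosity

k = 0.00175                        # cm/s, geometric mean of the slug tests
i = (246.01 - 244.31) / 950.0      # hydraulic gradient between MW-1 and MW-4
vs = seepage_velocity_ft_per_yr(k, i, 0.20)  # about 16 ft/yr
```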
It is important to note that there is ongoing confusion in professional practice between
the total and effective porosity of groundwater media. Total porosity is a measure of the
total volume of voids in the soil matrix, whereas effective porosity is a measure of the
interconnected volume of voids in the soil matrix. Therefore, the effective porosity is always
lower than the total porosity. It is not uncommon for the authors to hear people say that
clay has a low hydraulic conductivity because of its low porosity, which is completely false.
In reality, clay has a total porosity much higher than that of sand, gravel, or silt. However,
the interconnected fraction of porosity is very low in clay, minimizing transmission of
groundwater.
Dispersion. The process by which a dissolved solute is spatially distributed longitudinally (along the direction of groundwater flow), transversely (perpendicular to
groundwater flow), and vertically (downward) because of mechanical mixing and chemical diffusion in the aquifer is called dispersion. Selection of dispersivity values is a difficult process, given the impracticability of measuring dispersion in the field. Nevertheless,
[Figure 5.6 schematic: source zone, plume centerline, groundwater flow direction, and monitoring wells MW-1 through MW-4 at 2, 150, 350, and 950 ft from the source (see Table 5.1).]
FIGURE 5.6
Schematic of monitoring wells and source zone used for BIOCHLOR modeling at the industrial site.
FIGURE 5.7
BIOCHLOR input screen.
the U.S. EPA lists simple estimation techniques based on the length of the plume (Aziz
et al. 2000). Additional discussion regarding the mathematical significance of dispersiv-
ity and various related uncertainties is provided in Section 5.4.2. From the 2011 field data,
the length of the longest COC plume (cis-1,2-DCE) at the industrial site is estimated to
be approximately 450 ft downgradient of the source (note that the farthest monitoring
well with any COC detection is MW-3). The longitudinal dispersivity (Alpha x) of 45 ft is
selected based on one of the three default options in BIOCHLOR. Option 1 assumes that
Alpha x is 10% of the estimated plume length. By the model’s default, the transverse dis-
persivity, Alpha y, is estimated to be one-tenth of Alpha x. To yield a conservative estimate
of vertical dispersion, Alpha z, the default value used in BIOCHLOR is set to a very low
number (1E-99) as suggested by the U.S. EPA. This means that the contaminant concentra-
tion will not decrease as a result of vertical dispersion.
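The default estimates described above reduce to simple arithmetic. The sketch below mirrors BIOCHLOR's Option 1 rules as stated in the text: longitudinal dispersivity of 10% of plume length, transverse dispersivity of one-tenth of longitudinal, and a negligible vertical value:

```python
def default_dispersivities(plume_length_ft):
    """Dispersivity estimates following BIOCHLOR's Option 1 defaults."""
    alpha_x = 0.10 * plume_length_ft   # longitudinal: 10% of plume length
    alpha_y = alpha_x / 10.0           # transverse: one-tenth of longitudinal
    alpha_z = 1e-99                    # vertical: effectively zero (conservative)
    return alpha_x, alpha_y, alpha_z

# The 450-ft cis-1,2-DCE plume at the industrial site.
alpha_x, alpha_y, alpha_z = default_dispersivities(450.0)  # 45 ft, 4.5 ft, ~0
```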
Adsorption. Adsorption to the soil matrix slows down the migration of contaminants
R = 1 + (Kdρb)/n

where Kd is the distribution coefficient, ρb is the bulk density of the aquifer matrix, and n is the effective porosity.
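The retardation relationship R = 1 + Kdρb/n is easy to evaluate numerically. In the sketch below, the bulk density and Kd are assumed values chosen for illustration (they are not given in the text); together with the site's effective porosity of 0.20, they reproduce the R of 132 used in the BIOSCREEN-AT example:

```python
def retardation_factor(kd_ml_per_g, bulk_density_g_per_cm3, effective_porosity):
    """R = 1 + (Kd * rho_b) / n; Kd in mL/g (= cm3/g), rho_b in g/cm3."""
    return 1.0 + (kd_ml_per_g * bulk_density_g_per_cm3) / effective_porosity

# Assumed (hypothetical) values: Kd = 15.4 mL/g, bulk density = 1.7 g/cm3.
r = retardation_factor(15.4, 1.7, 0.20)  # close to the R = 132 of Figure 5.4
```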
than the detection limit, one-half of the constituent’s detection limit is used in the model
as required by most regulatory agencies. Note, however, that this practice can lead to erro-
neous conclusions and serious conceptual errors as described in detail in Chapter 8. Note
also that because of the inclusion of dispersion, the rate constant calculated in the Buschek
and Alcantar approach should be considered a bulk attenuation constant rather than a
pure biodegradation constant. These two terms are often confused.
Generally, the more highly chlorinated the compound is, the more rapidly it is reduced
by reductive dechlorination (Vogel and McCarty 1985, 1987). Therefore, it is possible for
daughter products to increase in concentration downgradient of the source zone before
they decrease (Aziz et al. 2000). This is illustrated with the site-specific examples for cis-
1,2-DCE and VC. Table 5.2 shows the biotransformation rate constants for the three COCs
TABLE 5.1
Monitoring Wells and Site-Specific Information Used for Model Calibration
Well ID   Distance from Source (ft)   TCE (μg/L)   DCE (μg/L)   VC (μg/L)
MW-1 2 3200 280 2.3
MW-2 150 2100 450 87
MW-3 350 67 75 <1 (0.5)
MW-4 950 <1 (0.5) <1 (0.5) <1 (0.5)
Note: Concentration of one-half detection limit assumed for nondetect values is given in parentheses.
TABLE 5.2
Biotransformation Rate Constants in One per Year Calculated by BIOCHLOR from
the Field Data versus Final Calibrated Values and Their Equivalent Constituent
Half-Life (in Years)
Constituent   Buschek-Alcantar Method   Final Model-Calibrated   Equivalent Half-Life
TCE 0.223 0.224 3.09
DCE 0.157 0.157 4.41
VC 0.059 0.277 2.50
calculated by the model from the field observations together with the final calibrated val-
ues. During the model calibration, the initial rate constants for TCE and DCE were not
changed, whereas the rate constant for VC was adjusted to better match the nondetect field
concentration observed further downgradient from the source (i.e., MW-3 is nondetect for
VC). Note that the biotransformation rate constant, ks, and the equivalent constituent half-
life, t1/2, are related as follows: ks = ln 2/t1/2.
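The rate-constant/half-life relationship can be used to verify the equivalent half-lives reported in Table 5.2 from the final calibrated rate constants:

```python
import math

def half_life_years(ks_per_year):
    """Equivalent constituent half-life: t1/2 = ln(2) / ks."""
    return math.log(2.0) / ks_per_year

# Final model-calibrated rate constants from Table 5.2 (per year).
calibrated = {"TCE": 0.224, "DCE": 0.157, "VC": 0.277}
half_lives = {coc: round(half_life_years(ks), 2) for coc, ks in calibrated.items()}
# half_lives reproduces the table: TCE 3.09, DCE 4.41, VC 2.50 years
```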
The industrial site aquifer porous media consist primarily of fine to medium sands
with spatially varying content of silt and clay. The prevalent geochemical conditions that
drive natural biotransformation processes and, therefore, the estimated rate constants
may change spatially as the COCs migrate with the groundwater. The same is true with
sorption of COCs onto porous media solids. This variability is especially evident in the case
of VC, as seen in Table 5.1 (see also Figure 5.7—BIOCHLOR input screen, Field Data for
Comparison). It appears that VC is still degrading faster and/or retarding more than what
FIGURE 5.8
Determination of the source decay constant. Explanation in text.
coefficient (better fit). However, because of the model restrictions and the 20% safety factor,
neither value can be used for the given set of input parameters (the message displayed by
the model states that ks must be less than 0.036). Therefore, the value of 0.035 is selected for
the model simulations. This results in an apparent overprediction of the simulated year-
2011 concentration of TCE at MW-1 (see Figure 5.9).
Initial source concentrations. In order to calibrate the model to the field-observed concen-
trations of COCs in 2011, the concentrations of three COCs for the source area represented
by MW-1 had to be assumed for some time prior to 2011 because of the transient (time-
dependent) nature of both the source strength and the fate and transport of dissolved
constituents. The initial source concentrations and the model run time were estimated
based on the calculated groundwater velocity of approximately 16 ft/year, the attenuating
effects, and the extrapolated trend lines of the TCE concentration at MW-1 (Figure 5.8). The
model run time of 15 years prior to 2011 and the initial source concentrations for TCE, cis-
1,2-DCE, and VC of 14,000, 790, and 5 µg/L (or ppb), respectively, were ultimately selected
during model calibration. Interestingly, the initial TCE concentration is approximately the
average of the two values shown in Figure 5.8 for the two different slopes. Possibly because
of the high historic detection limits for VC (10 ppb), none of the historic samples at MW-1
FIGURE 5.9
Model-calculated concentrations of COCs versus the field-observed concentrations in 2011 (calibration run,
thick blue lines) and the predicted future concentrations of COCs at the industrial site for years 2046, 2096, and
2196. Note log-scale and that all constituents are nondetect 950 ft from the source, but are plotted at one-half
the detection limit.
had detectable concentrations of VC. For estimation purposes, the VC concentration
was therefore represented as 5 µg/L, one-half of the historically reported detection limit.
Model results. Figure 5.9 shows the model-calculated concentrations of COCs versus the
field-observed concentrations in 2011 (calibration run, thick blue lines) and the predicted
future concentrations of COCs at the industrial site for years 2046, 2096, and 2196. Note,
however, that the U.S. EPA states that, “The longer the time frame simulated, the greater
the uncertainty associated with the modeling result. While the time to reach remedial
Groundwater Modeling 311
objectives at all points in the Joint Site groundwater will likely be on the order of 100 years,
simulations greater than the order of 50 years into the future are generally not reliable or
useful” (U.S. EPA 1999). Discussion on reasonable timeframes for groundwater remedia-
tion and supported modeling is provided in Chapter 8.
In general, because of the source decay constant restrictions (including the added safety
factor of 20%), in most cases, BIOCHLOR overpredicts by default the dissolved concentra-
tions at monitoring wells closest to the source zones. At the industrial site, this is evident
when comparing the field data from 2011 and the model-calculated concentrations of TCE
at MW-1: 3200 µg/L versus approximately 8000 µg/L (see Figure 5.9). This also means that
the model predictions for the future fate and transport of COCs at the industrial site are
conservative because the source decay rate is simulated to be lower than that currently
observed.
As mentioned earlier, the prevalent physical and geochemical conditions that drive
natural fate and transport processes may change spatially as the COCs migrate with the
groundwater. Currently, BIOCHLOR does not provide for simulation of spatially variable
fate-and-transport parameters. Biotransformation rate constants can vary in two zones but
cannot be simulated together (at the same time) with a decaying source. Nevertheless, the
calibrated model for the industrial site shows a reasonable degree of accuracy in simulat-
ing the overall field-observed distribution of all three COCs.
Sensitivity analysis. The BIOCHLOR model has several built-in restrictions: most notably
the use of one common retardation factor for all COCs and a lower-than-observed source
decay rate. Combined, these restrictions contribute to overall conservative model predic-
tions of constituent concentrations when compared to the field-observed data. This is also
one of the reasons why certain parameters have different impacts on individual COC con-
centrations. For example, the common baseline value of the calibrated retardation factor,
R = 2.10, is used for all three COCs, but it is significantly lower than the more reasonable
3.21 initially calculated by the model for TCE (see Figure 5.7). This means that using a
higher R would likely result in a better match for TCE, but the match for DCE and VC may
be less favorable.
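Although the site-specific sorption parameters are not given in the text, the retardation factor follows the standard linear-sorption relation R = 1 + (ρb/n)·Kd with Kd = Koc·foc. A minimal sketch; the bulk density, porosity, Koc, and foc values below are assumptions for illustration, not the site data:

```python
def retardation_factor(bulk_density_kg_L: float, porosity: float,
                       koc_L_kg: float, foc: float) -> float:
    """Linear-sorption retardation: R = 1 + (rho_b / n) * Kd, with Kd = Koc * foc."""
    kd = koc_L_kg * foc                        # distribution coefficient, L/kg
    return 1.0 + (bulk_density_kg_L / porosity) * kd

# Illustrative (assumed) values: rho_b = 1.7 kg/L, n = 0.30,
# Koc(TCE) ~ 130 L/kg, foc = 0.003
print(round(retardation_factor(1.7, 0.30, 130.0, 0.003), 2))  # 3.21
```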
Because of its simplicity, BIOCHLOR can be used very efficiently to test the sensitiv-
ity of all input parameters. This is achieved by changing the values of one parameter at
a time while keeping the calibrated values of all other parameters constant. Figure 5.10
FIGURE 5.10
Model sensitivity analysis for dispersivity. [Plot: predicted VC concentration (µg/L, log scale) versus distance
from source (0 to 1600 ft) for longitudinal dispersivity αx = 4.5, 45, and 90 ft; the MCL of 2 µg/L is marked for
reference.]
illustrates this process for the dispersivity and the model-predicted concentrations of VC
in year 2046. The initially assumed longitudinal dispersivity (Alpha x) of 45 ft, which was
also accepted as the calibrated value (baseline) was changed to 4.5 and 90 ft. The predicted
length of the 2 ppb [maximum contaminant level (MCL) for VC] plume edge is quite dif-
ferent for the three values of Alpha x, indicating that the dispersivity is a sensitive param-
eter (a detailed discussion on dispersion is provided in Section 5.4.2). Interestingly, the
predicted (high) concentrations for the first 400 ft are almost the same in all three cases,
illustrating the general importance and sensitivity of predicting low concentrations typi-
cal of MCLs at greater distances downgradient of the contaminant source.
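The one-parameter-at-a-time procedure can be sketched as follows. The `run_model` function here is only a toy stand-in (steady-state 1-D advection with first-order decay, ignoring dispersion, sorption, and source decay), not BIOCHLOR itself; the point is the structure of the sensitivity loop:

```python
import math

# Calibrated baseline values taken from the text.
base = {"ks_per_yr": 0.035, "v_ft_yr": 16.0, "c0_ug_L": 14000.0}

def run_model(p: dict, x_ft: float = 400.0) -> float:
    """Toy stand-in for a model run: C(x) = C0 * exp(-ks * x / v).
    Replace with an actual BIOCHLOR evaluation in practice."""
    return p["c0_ug_L"] * math.exp(-p["ks_per_yr"] * x_ft / p["v_ft_yr"])

def one_at_a_time(param: str, multipliers=(0.1, 1.0, 2.0)) -> dict:
    """Vary one parameter while keeping all other calibrated values constant."""
    results = {}
    for m in multipliers:
        trial = dict(base)       # copy the calibrated parameter set
        trial[param] *= m        # perturb only the tested parameter
        results[m] = run_model(trial)
    return results

print(one_at_a_time("ks_per_yr"))
```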
For the benefit of the reader, the BIOCHLOR model file used in this example is pro-
vided on the companion DVD. For the file to run, the user must have a working version of
Microsoft Excel and enable macros. Note that the program has an online manual describ-
ing all input parameters and offering common ranges of their literature values.
FIGURE 5.11
Three-dimensional model domain in MODFLOW divided into layers and cells.
to program, require less data, are friendlier for data input, and generally exhibit better
mass conservation than finite-element alternatives.
Numerical models can be one-, two-, and three-dimensional; can model the vadose zone, the
saturated zone, or both (variably-saturated models); and can model one-phase or multiple-
phase flow (e.g., groundwater, air/vapor, and DNAPLs). They are divided into two main
groups: (1) models of groundwater flow and (2) models of contaminant fate and transport. The
latter cannot be developed without first solving the groundwater flow field; that is, they
use the solution of the groundwater flow model as the base for fate-and-transport calculations.
Some of the more common questions that fully developed and calibrated groundwater flow
and fate-and-transport models may help answer are (Kresic 2009) as follows:
• What is the safe (sustainable) yield of the aquifer portion targeted for groundwater
development?
• How many wells, and at what locations, are needed to provide a desired flow rate?
• What is the impact of current or planned groundwater extraction on the environ-
ment (e.g., on surface stream flows, wetlands)?
• Is there a potential for saltwater intrusion from increased groundwater
pumpage?
• Where is the contaminant flowing to, and/or where is it coming from?
• How long will it take the contaminant to reach the water table or potential
receptors?
• What will the contaminant concentration be once it reaches a receptor?
• How would a remedial intervention affect contaminant concentrations in the
source area and in the downgradient plume?
Once these questions are addressed by the model(s), many new ones may arise, which is
exactly the purpose a well-documented and calibrated groundwater model should serve:
answering all manner of questions related to groundwater flow and the fate and transport
of contaminants. Here are just two common and important questions, often involving a
multimillion-dollar price tag and the possibility of a protracted lawsuit: Who is responsible
for the groundwater contamination? And what is the most feasible groundwater
remediation option?
MODFLOW (a modular three-dimensional finite-difference ground-water flow model)
developed at the USGS by McDonald and Harbaugh (1988), Harbaugh and McDonald (1996),
Harbaugh et al. (2000), and Harbaugh (2005) is considered by many to be the most reliable,
verified, and utilized groundwater flow computer program today. There are several inte-
grated, user-friendly preprocessing and postprocessing graphical software packages (GUIs)
for MODFLOW that greatly facilitate data input and visualization of modeling output (results).
Groundwater Vistas (Rumbaugh and Rumbaugh 2007; http://www.groundwatermodels.com/),
Groundwater Modeling System (GMS; http://www.ems-i.com/), Visual
MODFLOW (http://www.waterloohydrogeologic.com/), and Processing MODFLOW
(Chiang and Kinzelbach 2001; http://www.pmwin.net/pmwin5.htm) are the four com-
mercial (proprietary) modeling processors most widely used in modern hydrogeology.
They are all very similar in capabilities but have intricacies in operation that distinguish
them from one another. Additionally, different processors support different code alterna-
tives or add-ons to standard MODFLOW. In general, Processing MODFLOW, Groundwater
Vistas, and Visual MODFLOW stick with MODFLOW and its finite-difference companion
modules developed by the USGS and for government agencies. This includes MT3D and
RT3D for three-dimensional saturated-zone fate-and-transport modeling and particle
tracking codes. Conversely, GMS offers alternative software, including two- and three-
dimensional finite-element models (SEEP2D and FEMWATER, respectively, the latter of
which supports variably saturated applications). All four GUIs also offer versatile 3D visu-
alization of model input and output and animation of results.
A new GUI program for MODFLOW, called ModelMuse, has been recently released by
the USGS. This free public-domain program, available for download at http://water.usgs
.gov/software/lists/groundwater, will likely dramatically change the commercial ground-
water modeling software market. In addition to MODFLOW, this version of ModelMuse
also supports PHAST–A Program for Simulating Groundwater Flow, Solute Transport, and
Multicomponent Geochemical Reactions. The USGS also provides a number of ModelMuse
training videos on its website.
The primary differences between various GUI programs are visual in nature, and the
question of which one is best is subjective and more a matter of personal preference. Some
hydrogeologists may prefer Processing MODFLOW and Groundwater Vistas because
of their transparency, flexibility, and affordable pricing, while others may prefer GMS
because of its finite-element capabilities or Groundwater Vistas because it has the best
interface with GIS. There is no correct answer to the question of which program is supe-
rior. The authors recommend that the hydrogeologist explores each of them to determine
which suits her or him best and meets the needs of the project. This includes the availabil-
ity and cost of technical support provided by the vendor as all these programs will inevita-
bly have bugs and various options not readily explained in their manuals and user guides.
FIGURE 5.12
All layers in classic MODFLOW must be continuous throughout the model domain. [Diagram: model cross
section showing layers 1 through 7; vertical scale bar 25 feet, horizontal scale bar 250 feet.]
Figure 5.13, the other three are more serious when trying to accurately simulate complex
3D geologic relationships and discontinuities and water fluxes between cells.
Fortunately, at the time of this writing, a true breakthrough in groundwater modeling
has taken place in the form of a fundamentally new program called MODFLOW-USG
(see Section 5.7 for a detailed description). MODFLOW-USG was developed by Dr. Sorab
Panday of AMEC and retains full compatibility with previous versions of MODFLOW
FIGURE 5.13
Principle of modeling discontinuous layers. [Diagram: three layers with hydraulic conductivities K1, K2, and
K3; pinch-out cells annotated K = K1, K = K3, and K = (K1 + K3)/2.] The area where the actual layer is missing
is still modeled as having a certain thickness, usually similar to the adjacent cells where this layer is present.
However, the hydraulic conductivity (K) of the missing cells is adjusted to create the effect of discontinuity. For
example, the hydraulic conductivity of discontinuous cells in layer 1 may be assigned the same value as that of
the underlying cells in layer 2 to which the cells more properly belong. Note that, following the rule of thumb,
the thickness of successive cells should not increase by more than 1.5 times in order to avoid possible model
instability. (Modified from Kresic, N., Hydrogeology and Groundwater Modeling, Second Edition. CRC/Taylor &
Francis Group, Boca Raton, FL, 807 pp., 2007.)
while taking advantage of unstructured grids (USG) and finite-volume numerical solu-
tions. The program is released in the public domain and is supported exclusively by the
latest version of Groundwater Vistas. It enables hydrogeologists to accurately translate
even the most complex CSMs into a numerical environment, thus eliminating the need
for various surrogate modeling solutions. This includes flow in fractured rock and karst
aquifers.
For a detailed technical description of groundwater modeling principles, the reader is
referred to the work of Kresic (2007). The following discussion addresses some of the most
important concepts in groundwater modeling, emphasizing translation of the CSM and
topics of interest to a wide range of stakeholders.
In the world of groundwater modeling, the term initial conditions refers to the three-
dimensional distribution of observed hydraulic heads within the groundwater system,
which is the starting point for transient (time-dependent) modeling simulations. These
hydraulic heads (the water table of unconfined aquifers and potentiometric surface of
confined aquifers) are the result of various boundary conditions acting upon the system
during a certain time period. The initial distribution of the hydraulic heads for transient
modeling can also be the calibrated solution of a steady-state model. The steady-state solu-
tion is defined as the closest match to the field-observed hydraulic heads when assuming
constant boundary conditions and no change in storage. In general, any set of field-
measured or calibrated hydraulic heads can serve as the starting point for further analy-
sis, including for transient groundwater modeling. Ideally, the initial conditions should
be as close as possible to the state of a long-term equilibrium between all natural water
inputs and outputs from the system or with as little anthropogenic (artificial) influence as
possible, the so-called predevelopment conditions (Figure 5.14). However, in many cases,
there are insufficient hydraulic head data for assessment of such natural conditions, which
causes various difficulties with data interpolation and extrapolation, including uncertain-
ties associated with any assumed predevelopment boundary conditions (Kresic 2009).
Whatever the case may be regarding the selection of initial conditions, contouring of the
hydraulic head data is the first important step (see Chapter 4).
It has become standard practice in hydrogeology and groundwater modeling to describe
the inflow and outflow of water from the model domain with three general boundary con-
ditions: (1) known flux, (2) head-dependent flux, and (3) known head, where head refers to
the hydraulic head. These conditions are assigned to both external and internal boundaries
or all locations and surfaces where water is entering or leaving the model. One example of
an external boundary, sometimes overlooked as such, is the water table of an unconfined
aquifer that receives recharge coming from the vadose (unsaturated) zone. This estimated
or measured vertical flux of water into the model is applied as a recharge rate over certain
surface areas. It is expressed in a model-consistent unit of length (e.g., feet or meters) per
unit of time (e.g., days), which, when multiplied by the area, gives the flux of water as
volume per time. A large spring draining an aquifer is another example of an external
boundary with a known flux. An example of an internal boundary with a known flux,
where water is also leaving the model, is a water well with the pumping rate expressed in
model-consistent units (e.g., cubic feet per day or cubic meters per day).
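The recharge calculation described above reduces to a unit conversion and a multiplication. A minimal sketch with hypothetical values (the recharge rate and model area below are not from the text):

```python
def recharge_volumetric_flux(rate_ft_per_day: float, area_ft2: float) -> float:
    """Recharge rate (length/time) multiplied by the area it is applied over
    gives the volumetric flux of water entering the model (volume/time)."""
    return rate_ft_per_day * area_ft2

# Hypothetical example: 10 in/yr of recharge over a 500 ft x 500 ft area.
rate_ft_day = (10.0 / 12.0) / 365.0      # in/yr -> ft/yr -> ft/day
q = recharge_volumetric_flux(rate_ft_day, 500.0 * 500.0)
print(round(q, 1))  # about 571 ft^3/day
```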
It is obvious that water can enter or leave the model in a variety of natural and artificial
ways, depending upon hydrogeologic, hydrologic, climatic, and anthropogenic conditions
specific to the system of interest. In many cases, these water fluxes cannot be measured
FIGURE 5.14
Potentiometric surface of the Upper Floridan aquifer in the area of Savannah, Georgia, and Hilton Head Island,
South Carolina. Top: predevelopment conditions. Bottom: recorded in May 1998, showing the influence of major
groundwater withdrawal for water supply in Savannah. Contour interval is 10 ft, contour lines dashed where
approximate; arrows show general directions of groundwater flow. (Modified from Provost et al. 2006.)
directly and have to be estimated or calculated externally to the model. The simplest
boundary condition is one that can be assigned to a contact between an aquifer and a
low-permeability or impermeable porous medium, such as an aquiclude. Assuming there
is no groundwater flow across this contact, it is called a zero-flux or no-flow boundary.
Although this no-flow boundary condition may exist in reality, it is very important not
to assign it indiscriminately just because it is convenient. For example, contact between
unconsolidated alluvial sediments and surrounding bedrock is often modeled as a zero-
flux boundary, even though there may be some flow across this boundary in either direc-
tion. Without site-specific information on the underlying hydrogeologic conditions, a
zero-flux assumption may lead to erroneous interpretations of groundwater flow or fate
and transport of contaminants.
Recording hydraulic heads at external or internal boundaries and using them to deter-
mine water fluxes indirectly, rather than assigning them directly, is a very common mod-
eling practice. The hydraulic heads provide for determination of the hydraulic gradients,
which, together with the hydraulic conductivity and the cross-sectional area of the bound-
ary, give the groundwater flow entering or leaving the model across that boundary. This
boundary condition, expressed by the hydraulic heads on either side of the boundary and
the hydraulic conductance of the boundary, is called head-dependent flux. One example of
a head-dependent flux boundary would be a river with riverbed sediments of a hydraulic
conductivity different than that of the underlying aquifer (see Figure 2.24).
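A minimal sketch of the head-dependent flux calculation, assuming the lumped-conductance form used by MODFLOW's river package (C = K·L·W/b, where L and W are the reach length and width and b is the riverbed thickness); the numeric values are hypothetical:

```python
def head_dependent_flux(h_boundary_ft: float, h_aquifer_ft: float,
                        conductance_ft2_day: float) -> float:
    """Q = C * (h_boundary - h_aquifer); positive Q is flow into the aquifer.
    The conductance C lumps the riverbed hydraulic conductivity, the wetted
    area of the reach, and the riverbed thickness."""
    return conductance_ft2_day * (h_boundary_ft - h_aquifer_ft)

# Hypothetical reach: river stage 100 ft, aquifer head 97 ft, C = 200 ft^2/day.
print(head_dependent_flux(100.0, 97.0, 200.0))  # 600.0 ft^3/day into the aquifer
```

Note that the sign convention makes a gaining-stream condition (aquifer head above river stage) come out negative, i.e., flow out of the aquifer.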
When not much is known about the real physical characteristics of a boundary, or for
reasons of simplification, the boundary may be represented only by its hydraulic head:
the so-called known-head, fixed-head, or equipotential boundary. River or lake stages,
without considering riverbed (lakebed) conductance, are examples of such a boundary.
The flux of water across the boundary (Q) is calculated using Darcy’s equation: Q = AKi,
in which A is the cross-sectional area of the boundary, i is the hydraulic gradient between
the boundary (river or lake) and the aquifer, and K is the hydraulic conductivity of the
aquifer porous media. The main conceptual problem with these boundaries is that, as the
hydraulic head in the aquifer decreases, the flow entering the system from the boundary
erroneously increases as a result of the increased hydraulic gradient caused by the fixed-
head condition. This is of particular concern when performing transient modeling, which
takes into account time-dependent changes that can affect the system. It is important to
note that this problem can also occur with a head-dependent flux boundary, such as the
MODFLOW river boundary. If one prefers to model a certain boundary with a fixed head
condition, the boundary hydraulic head should be adjusted in the model for different time
periods based on available field information. This option is available in MODFLOW by
using the variable-head boundary module.
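The conceptual pitfall can be illustrated numerically with Darcy's equation: holding the boundary head fixed while the aquifer head declines makes the computed inflow grow without any physical limit. All values below are hypothetical:

```python
def fixed_head_inflow(h_fixed_ft: float, h_aquifer_ft: float, distance_ft: float,
                      K_ft_day: float, area_ft2: float) -> float:
    """Darcy's equation Q = A * K * i, with the gradient i computed between a
    fixed boundary head and the aquifer head a given distance away."""
    i = (h_fixed_ft - h_aquifer_ft) / distance_ft
    return area_ft2 * K_ft_day * i

# As the aquifer head is drawn down (e.g., by pumping), the computed inflow
# from the fixed-head boundary keeps growing -- the erroneous behavior
# described in the text:
for h_aq in (99.0, 95.0, 90.0):
    print(h_aq, fixed_head_inflow(100.0, h_aq, 1000.0, 10.0, 5000.0))
```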
When deciding on boundary conditions, it is essential to work with as many hydraulic
head observations as possible, in both space and time, because fluctuations in the shape
and elevations of the hydraulic head contour lines directly reflect various water inputs
and outputs along the boundaries (see Figure 4.41). Additionally, it is desirable to have
hydraulic head observations from periods of groundwater stress caused by pumping or
infiltration as these can better inform the influence of hydraulic boundaries such as sur-
face water features.
Accurate representation of surface water–groundwater interactions is often the most
critical when selecting boundary conditions in alluvial basins and floodplains of surface
streams. A river may be intermittent or perennial, it may lose water to the underlying aqui-
fer in some reaches and gain water in others, and the same reaches may behave differently
depending on the season (e.g., see Figures 2.19, 2.23, and 4.40). The hydraulic connection
between a river and its associated aquifer may be complete without any interfering influ-
ence of riverbed sediments. In some cases, however, a well pumping close to a river may
receive little water from it because of a thick layer of fine silt along the river channel or
simply because there is a low-permeability sediment layer separating the aquifer and
the river. In these situations, it would be completely erroneous to represent the river as a
constant-head (equipotential) boundary directly connected to the aquifer. Such a bound-
ary in a numerical model would essentially act as an inexhaustible source of water to the
aquifer (or a water well) regardless of the actual conditions as long as the hydraulic head
in the aquifer is lower than the fixed river stage.
The ultimate reason for selecting any of the three general boundary types is the deter-
mination of the overall water budget of the modeled groundwater system (see Figure 2.33).
The sum of all water fluxes entering and leaving the model through its boundaries has to
be equal to the change in groundwater storage. When utilizing groundwater models for
aquifer evaluation or management, the user has to determine (measure, calculate) flux
to be assigned to the known-flux boundaries. In case of the other two boundary types
(head-dependent flux and fixed-head), the model calculates the flux across the boundaries
using other assigned parameters—hydraulic heads at the boundary and inside the model,
boundary conductance, and hydraulic conductivity of the porous media. As discussed
later, the match of the water budget is the ultimate measure of the groundwater model’s
calibration and success.
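A common closure check on the water budget, patterned after the percent discrepancy reported in MODFLOW-style volumetric budget summaries, can be sketched as follows (the flux values are hypothetical; storage change is counted on the appropriate side of the budget):

```python
def percent_discrepancy(total_in: float, total_out: float) -> float:
    """Mass-balance closure: 100 * (IN - OUT) / ((IN + OUT) / 2)."""
    return 100.0 * (total_in - total_out) / ((total_in + total_out) / 2.0)

# Hypothetical fluxes in model-consistent units (e.g., ft^3/day):
inflows = {"recharge": 570.8, "river_leakage": 600.0}
outflows = {"wells": 1100.0, "spring": 65.0}
d = percent_discrepancy(sum(inflows.values()), sum(outflows.values()))
print(round(d, 2))  # a discrepancy within roughly +/-1% is usually acceptable
```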
Graphs based on experiments at different scales, conducted in the laboratory and in the
field and including various model calibrations, show values of longitudinal dispersivity
ranging from as small as 0.01 m to as large as 5500 m or more. One such often
cited graph is shown in Figure 5.15. The widely used rule of thumb, suggested by the U.S.
EPA, is that the longitudinal dispersivity in most cases could be initially estimated from
the plume length as being 10 times smaller (Wiedemeier et al. 1998; Aziz et al. 2000). Based
on this guidance, for example, if the plume length is 300 ft, the initial estimate of the
longitudinal dispersivity would be about 30 ft. There are also other suggested empirical
relationships relating the plume length and the longitudinal dispersivity. Recognizing the
limitations of the available and reliable field-scale data on dispersivity, the U.S. EPA also
suggests that the final values of dispersivities used in fate-and-transport models should
be based on calibration to the site-specific (field) concentration data. The main reason why
very few (if any) projects for practical groundwater remediation purposes consider field
[Log-log plot: longitudinal dispersivity (m), 10^-3 to 10^4, versus scale (m), 10^-1 to 10^6; the "Longitudinal
Dispersivity = 10% of scale" line (Pickens and Grisak, 1981) is shown, and the circle size codes the reliability of
each estimate (high, intermediate, low).]
FIGURE 5.15
Longitudinal dispersivity versus scale data reported by Gelhar et al. (1992). Data include Gelhar’s reanalysis of
several dispersivity studies. Size of the circle represents general reliability of dispersivity estimates. Location
of 10% of the scale linear relation plotted as a dashed line (Pickens and Grisak 10% rule of thumb). Xu and
Eckstein’s regression shown as a solid line. (From Aziz, C. E. et al., BIOCHLOR: Natural Attenuation Decision
Support System v. 1.0: User’s Manual. EPA/600/R-00/008. U.S. Environmental Protection Agency, Cincinnati,
OH, 2000.)
porous media over long distances (i.e., tracer tests may need to last several years to provide
any meaningful information).
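The two empirical estimates mentioned above can be sketched in a few lines. The Xu and Eckstein regression coefficients shown are the commonly quoted form (lengths in meters); verify them against the cited source before relying on them:

```python
import math

def alpha_L_tenth_rule(plume_length: float) -> float:
    """U.S. EPA rule of thumb: longitudinal dispersivity ~ plume length / 10
    (same length unit in and out)."""
    return plume_length / 10.0

def alpha_L_xu_eckstein(plume_length_m: float) -> float:
    """Commonly quoted Xu and Eckstein regression (meters):
    alpha_L = 0.83 * (log10 Lp)**2.414."""
    return 0.83 * math.log10(plume_length_m) ** 2.414

print(alpha_L_tenth_rule(300.0))  # 30.0 ft, the example given in the text
```

As the text stresses, either estimate is only a starting point; final dispersivities should come from calibration to site-specific concentration data.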
Unfortunately, although the concept of dispersion has a logical physical explanation
(mixing resulting from tortuous flow), the related quantitative parameter of dispersivity is
just a surrogate without any real scientific justification. As explained in detail by Franke
et al. (1990), if it were possible to generate a model that could account for the actual per-
meability and effective porosity distributions of an aquifer, dispersive transport would
not have to be considered (except for molecular diffusion). Dispersivity cannot be directly
measured or calculated using proven deterministic laws of groundwater flow. For this
simple reason, although routinely used by hydrogeologists, it is the least understood quan-
titative parameter required by numerical fate-and-transport models. It is also the most
subjective parameter because its values are routinely selected from literature without any
additional quantitative analysis.
The graph in Figure 5.15 shows how problematic it is to simply (and blindly) use litera-
ture or rule-of-thumb dispersivity values without considering site-specific conditions or
logic. Unfortunately, this is sometimes done deliberately because certain stakeholders
desire a particular outcome of the calculations. The following narrative describes a
situation from an actual lawsuit. An expert for the plaintiff produced the final, definitive,
and calibrated fate-
and-transport model, which predicted certain future impact on a water-supply well from
a chemical added to gasoline spilled at a gas station. The distance between the gas station
and the well is thousands of feet, and the extent of the current contaminant plume is lim-
ited to the immediate area of the gas station. With the model, the expert predicted that the
contaminant would start arriving at the water-supply well, with a concentration of 0.2 ppb,
17 years from the present date. At two depositions, the expert stated that the accuracy of
his model was very high and he had a very high level of confidence that the concentration
at the supply well would be detected in 17 years. In addition, because the detection limit
of the chemical at the time of the depositions was exactly 0.2 ppb, the expert was very
confident in his model. Because of all this, the plaintiff was asking millions of dollars
for future damages to the water-supply well. However, because of some legal and other
issues, the judge in the case ruled, in the meantime (not knowing what any expert in the
lawsuit would say in his courtroom in front of the jury), that the future damages could
be considered only if they were certain to occur by a certain date, which was set by the
judge to be several months into the ongoing lawsuit. Having learned this, the plaintiff’s
expert changed the single most important parameter in his already calibrated, definitive,
and very accurate model. He increased the longitudinal dispersivity by 1100%, which then
enabled the contaminant to reach the water-supply well at very low but still detectable
concentrations (0.2 ppb) in just enough time to satisfy the new requirement for future
damages of millions of dollars. When he was asked, at his last-minute deposition one day
before the continuation of the trial and his appearance in front of the jury, why he did it,
the expert witness referred to the graph in Figure 5.15 and said that he made the decision
based on his very considerable professional experience of more than 25 years.
There are at least several points to be made from this case, but the authors will discuss
only those related to the technical rules of thumb and the use of graphs such as the one
in Figure 5.15. Most, if not all, fate-and-transport parameters commonly used in hydro-
geologic calculations act to decrease, or attenuate, a contaminant concentration as it trav-
els dissolved in groundwater. Dispersion is especially important if the contaminant does
not degrade and/or adsorb to aquifer solids. Scientists and agencies that set standards
and rules of thumb almost always do so with caveats as is the case with dispersivity.
Explanations that come with these rules of thumb should be read carefully by practicing
hydrogeologists. For example, in our dispersivity case, the 10% rule of thumb refers to the
existing plumes and their lengths, not some future plumes that are yet to be developed.
Consider this scenario, which defies logic but may be handy for winning millions of dol-
lars, while referring to the graph published in quite a few books (including this one):
1. There is a public supply well 40,000 m away (that is correct, 40 km away) from this
particular gas station. Note that the graph in Figure 5.15 has a horizontal scale of
1000 km.
2. There was a spill of gasoline at this station that happened approximately one
month ago.
3. From the graph in Figure 5.15, it may be concluded that longitudinal dispersiv-
ity for a plume that is 40 km long could be as much as 1 km. (There are two data
points on the graph suggesting so, and it does not matter why illogical data like
these are even included in a scientific graph.)
4. Although there is currently not a 40-km-long plume, I am expecting it to develop
for sure because my water-supply well is that far away.
5. I will create a fate-and-transport numerical model for the saturated zone (because
I am not familiar with the unsaturated zone processes and models, I will assume
that all of the spilled gasoline will immediately reach the water table and be dis-
solved at very, very high concentrations).
6. I will run the model and show that there will be a measurable impact 40 km away
in, for example, six months. I will not show the associated (predicted by the model)
plume map; rather, I will show a graph of concentration versus time at the 40-km
distant well because that is my focus in this legal case.
7. If someone (say, the technical expert for the other side) advises the defendant’s
attorney to ask me if the model was calibrated to some field data between the gaso-
line station and the 40-km distant water-supply well, I will answer that the model
accurately represents fate and transport of the contaminant as expected, based on
my experience.
8. I will keep to myself the somewhat unnerving fact that the model-generated
plume (which I am not going to share with anyone) shows contamination in
some water-supply and other wells, which are several hundred feet away from
the gas station and are currently nondetect for the contaminant, in just a few
days after the spill (after all, the dispersivity in my model is 1 km, so I am not
that surprised). Incidentally, the concealed plume map also shows that the con-
taminant traveled almost 900 ft upgradient of the gas station (not surprisingly).
If the attorney for the other side starts asking me all these questions, I may say
something about numerical dispersivity, the salmon effect, or something simi-
larly incomprehensible. I will, however, stick to the graph shown in Figure 5.15
and keep mentioning my professional judgment. (Authors’ note to the reader:
Salmon is an anadromous fish species that swims upstream in rivers in order to
spawn and can jump over high waterfalls as illustrated in Figure 5.16; in other
words, unlike free-flowing surface water or groundwater, salmon can, in fact,
move up gradient or uphill. The salmon effect therefore describes the greatly
exaggerated uphill migration of solutes caused by the inclusion of dispersivity in
numerical models.)
FIGURE 5.16
Artwork by Robert Hines showing Atlantic salmon swimming upstream. (Courtesy of U.S. Fish and Wildlife
Service, available at http://digitalmedia.fws.gov/.)
Unfortunately, the salmon effect is, to varying degrees, present in all numerical fate-and-
transport models that incorporate the dispersivity concept, including MT3D and RT3D.
This is because the governing finite difference equation is symmetrical, allowing for the
numerical upgradient migration of dissolved contaminants. The effect can be minimized
by fine model discretization (use of small cell size) but cannot be eliminated, especially if
the selected dispersivity value is unreasonable as suggested by the model results. This is
illustrated in Figure 5.17 where a model with a very small cell size (2.5 × 2.5 ft) was used
to test two dispersivity values, one of which satisfies common rules of thumb, including
the Péclet number (Pe). One form of this dimensionless number relates the cell size in the
direction of groundwater flow (Δx) and the longitudinal dispersivity (αx) as follows: Pe =
Δx/αx. It is usually recommended that Pe be less than about 2 to 10 to minimize numerical
dispersion in the model. However, as seen in Figure 5.17, the quite low longitudinal
dispersivity of 5 ft produces impossible results and an excessive salmon effect despite a
Pe of 0.5. Note that a dispersivity of 5 ft would be considered by many as reasonable, or
even too low at face value, for the scale of this model. In addition, a cell size
of 2.5 × 2.5 ft would be considered too fine by many practicing hydrogeologists
concerned about file size and model run-time. The use of the very low dispersivity value
FIGURE 5.17
Comparison of one-layer numeric fate-and-transport model results using two different values of longitudinal
dispersivity: 5 ft on the left and 0.1 ft on the right. Transverse dispersivity is 1/10 of the longitudinal dispersivity
in either model. The cell size is 2.5 × 2.5 ft. Groundwater flow is from northwest to southeast. The contaminant
is not retarded, and it does not degrade (a conservative tracer).
(0.1 ft) reduces this uphill migration despite the seemingly unreasonable Péclet number
of 25; however, some degree of the salmon effect is still visible. Despite the unavoidable
salmon effect and inconsistency regarding the Péclet number, there is no official literature
or practical technical guidance by the USGS, U.S. EPA, or other federal and state regula-
tory agencies on proper use of the highly questionable dispersivity concept in fate-and-
transport numerical modeling.
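The grid Péclet check described above is easy to automate before committing to a transport run. A minimal sketch in plain Python, using the cell size and the two longitudinal dispersivity values tested in Figure 5.17:

```python
def peclet(dx, alpha_x):
    """Grid Peclet number Pe = dx / alpha_x (cell size over longitudinal dispersivity)."""
    return dx / alpha_x

dx = 2.5  # cell size in the flow direction, ft
for alpha_x in (5.0, 0.1):  # the two dispersivities tested in Figure 5.17, ft
    pe = peclet(dx, alpha_x)
    print(f"alpha_x = {alpha_x:4.1f} ft -> Pe = {pe:4.1f} (Pe < 2 criterion met: {pe < 2.0})")
```

As the discussion above shows, satisfying the Pe criterion alone does not guarantee a physically plausible dispersivity; both checks are needed.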
For the benefit of the reader, input and output files of the numerical groundwater flow
and fate-and-transport model presented above are provided on the companion DVD.
Note that this same model was used to illustrate contouring concepts in Chapter 4 (see
Figure 4.37). The model was created with the Processing MODFLOW GUI and can be run
using the freeware version 5.3.2 available for download at http://www.pmwin.net/software.htm.
Molecular diffusion. Diffusion does not play a significant role in the advective–disper-
sive transport of contaminants dissolved in groundwater as they move freely through
the effective porosity of the aquifer. It is when the groundwater velocity becomes very
low as a result of small pore sizes and very convoluted pore-scale pathways that dif-
fusion may become an important fate-and-transport process. Porosity that does not
readily allow advective groundwater flow (flow under the influence of gravity) but does
allow movement of the contaminant resulting from diffusion is sometimes called diffusive
porosity. A dual-porosity medium has one type of porosity that allows preferential
advective transport through it and another type of porosity that does not allow free
gravity flow, or that supports a flow significantly smaller than the flow taking
place through the higher effective (advective) porosity. Examples of dual-porosity media
include fractured rock, where advective flow takes place preferentially through fractures,
while the advective flow rate through the rest of the rock mass, the rock matrix, is
comparably lower, much lower, or nonexistent for all practical purposes. This gradation
depends on the nature of matrix porosity; in some rocks, such as sandstones and young
limestones, matrix porosity may be fairly high, and it may allow a very significant rate of
advective flow, often as high or higher than through the fractures. In most consolidated,
hard rocks, matrix porosity is usually low, less than 5% to 10%, and it does not pro-
vide for significant advective flow. Other examples of dual-porosity media, of particular
interest for the migration of DNAPLs, include fractured clay and residuum sediments. In
some cases, various discontinuities and fractures in such media may serve as pathways
for some advective contaminant transport while the bulk of the sediments may have a
very low effective porosity, which does not allow advective transport. Flow of solutes
with high concentration through the fractures may result in the solute diffusion into the
surrounding matrix.
The preceding discussion describes the two factors that must both be present for any
significant diffusion to take place: (1) diffusive or matrix porosity in which the contami-
nant can move and (2) a high-enough concentration gradient, sustained for enough time,
to cause the contaminant to start diffusing into the rock matrix (diffusive porosity).
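The effect of a sustained concentration gradient can be sketched with the standard one-dimensional analytical solution for diffusion into a semi-infinite, non-sorbing matrix, C(x,t) = C0·erfc(x / (2·√(De·t))). The effective diffusion coefficient and source concentration below are hypothetical values chosen only for illustration:

```python
import math

def matrix_concentration(x, t, c0, de):
    """C(x,t) = c0 * erfc(x / (2*sqrt(de*t))): diffusion from a fracture held at
    constant concentration c0 into a semi-infinite, non-sorbing rock matrix."""
    return c0 * math.erfc(x / (2.0 * math.sqrt(de * t)))

de = 1.0e-10                         # effective diffusion coefficient, m^2/s (assumed)
c0 = 100.0                           # constant concentration in the fracture, mg/L (assumed)
ten_years = 10.0 * 365.25 * 86400.0  # seconds

for x_cm in (0.0, 1.0, 5.0, 10.0):
    c = matrix_concentration(x_cm / 100.0, ten_years, c0, de)
    print(f"{x_cm:5.1f} cm into the matrix after 10 years: {c:6.2f} mg/L")
```

Reversing the gradient in this picture (dropping the fracture concentration below the stored matrix concentration) is what drives the back-diffusion discussed below.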
A good example is a persistent body of NAPL resting on or suspended in a low-permeability
clay sediment, creating high concentration gradients for long periods of
time. Another example would be a DNAPL body sitting in fractures in limestone for a
long time, thus driving high contaminant concentrations into the rock matrix. If, for any
reason, the concentration gradient reverses itself, such as because of the final dissolu-
tion of the free-phase (residual) DNAPL in fractures and flushing of the fractures with
the incoming uncontaminated groundwater, the contaminant that was diffused into the
matrix will start diffusing back into the fractures. In some cases, this back-diffusion may
act as a secondary source of groundwater contamination, and it may be important to
predict its effects using numerical models (Kresic 2007). Figure 5.18 shows an example
of model-predicted contaminant concentrations with and without taking diffusion into
account. Additional examples, including computer animation of contaminant migration,
are provided on the companion DVD.
FIGURE 5.18
Comparison of modeling results when accounting for diffusion into a low-permeability clay lens (top) and when
diffusion is not modeled (bottom). These two cross-sectional views of a migrating plume are created with the
variably saturated numeric model VS2DTI.
• Steady-state calibration does not involve aquifer storage properties, which are crit-
ical for a viable (transient) prediction.
A limited field data set often predetermines a steady-state calibration. In such a case, an
appropriate approach is to define boundary conditions and stresses representative of the
period in which the field data were collected.
When a transient field data set of considerable length is available, some meaningful aver-
age measure should be derived from it for a steady-state calibration. For example, this can
be the mean annual water-table elevation or the mean water table for the dry season, the
average annual groundwater withdrawal, the mean annual precipitation (recharge), the
average baseflow in a surface stream, and so on. Transient calibration typically involves
water levels recorded in wells during pumping tests or long-term aquifer exploitation.
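The averaging described above can be sketched in a few lines; the monthly water-table record and the choice of dry-season months are hypothetical:

```python
# Monthly water-table elevations (ft amsl) over one year (hypothetical record)
monthly_wt = [105.2, 105.6, 106.1, 106.4, 106.0, 105.3,
              104.7, 104.2, 104.0, 104.3, 104.8, 105.1]

mean_annual_wt = sum(monthly_wt) / len(monthly_wt)

# Dry season assumed here to be July-October (indices 6-9)
dry_season_wt = sum(monthly_wt[6:10]) / 4.0

print(f"mean annual water table:     {mean_annual_wt:.2f} ft amsl")
print(f"mean dry-season water table: {dry_season_wt:.2f} ft amsl")
```

The same averaging applies to withdrawal rates, precipitation (recharge), and baseflow when deriving steady-state calibration targets.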
There are two methods of calibration: (1) trial-and-error (manual) and (2) automated
calibration. Trial-and-error calibration, or brute force, was the first technique applied in
groundwater modeling and is still preferred by most users. Although it is heavily influ-
enced by the user’s experience, it is always recommended to perform this type of cali-
bration, at least in part. By changing parameter values and analyzing the corresponding
effects, the modeler develops a better feeling for the model and the assumptions on which
its design is based. During manual calibration, boundary conditions, parameter values, and
stresses are adjusted for each consecutive model run until calculated heads match the
preset calibration targets. The first phase of calibration typically ends when there is a good
visual match between calculated and measured hydraulic heads at observation wells.
The next step involves quantification of the model error with various statistical param-
eters, such as standard deviation and distribution of model residuals, that is, differences
between calculated and measured values. Once this error is minimized (through a lengthy
process of calibration) and satisfies a preset criterion, the model is ready for predictive use.
It will sometimes be necessary to change input values and run the model tens of times
before reaching the target. The worst case scenario involves a complete change of the CSM
and redesign of the model with new geometry, boundaries, and boundary conditions.
During calibration, the user should focus on parameters that are determined with less
accuracy or assumed and change only slightly those parameters that are more certain. For
example, hydraulic conductivity determined by several pumping tests should be the last
parameter to change freely because it is usually the most sensitive. Most other parameters
are less sensitive and can be changed only within a certain realistic range; it is obviously
not physically plausible to increase the precipitation infiltration rate tenfold, from 10% to
100% of precipitation. In general, hydraulic conductivity and recharge act as equivalent
(correlated) parameters: an increase in hydraulic conductivity creates the same effect as a decrease in recharge.
Because different combinations of parameters can yield similar, or even the same, results,
trial-and-error calibration is not unique. During calibration, it is recommended to plot
residuals (calculated values minus measured values) on the model map using different
symbols (or colors) for negative and positive values. Together with mandatory graphs of
model predictions and residuals (see Figures 5.19a, 5.19b, 5.20a, and 5.20b), this allows for a
more accurate determination of the parameter value that produces the best overall visual
fit throughout the model domain.
Quantitative techniques for determining model error compare model results (simu-
lations) to site-specific information and include calculations of residuals, assessing cor-
relation among the residuals, and plotting residuals on maps and graphs (ASTM 1999).
Individual residuals are calculated by subtracting from the model-calculated values the
[Figure 5.19a and b: simulated versus observed values (ft amsl, 100–116) for Layers 1–3 with 1:1 and linear-fit lines (a); residuals (ft) versus observed values with a no-bias line (b).]
FIGURE 5.19
(a) Results of an initial model calibration when the influence of residential bedrock wells was not fully accounted
for. All points should be as close to the 1:1 line as possible, which means that simulated values exactly match
observed values. (b) Plot of model residuals for the calibration run presented in Figure 5.19a, showing that the
residuals are not random; this correlation of residuals is particularly evident for Layer 3, indicating possible
errors in the conceptual model. All points should be equally distributed above and below the no-bias line across
the range of observed heads.
targets (values recorded in the field, not extrapolated or otherwise assumed). They are
calculated in the same way for hydraulic heads, drawdowns, contaminant concentrations,
or flows; for example, the hydraulic head residuals are differences between the computed,
or simulated, heads and the heads actually measured in the field:
ri = hi – Hi
[Figure 5.20a and b: simulated versus observed values (ft amsl, 100–116) for Layers 1–3 with 1:1 and linear-fit lines (a); residuals (ft, −1 to 0.8) versus observed values with a no-bias line (b).]
FIGURE 5.20
(a) Improved model calibration after obtaining site-specific data on residential well locations and pumping
rates. (b) Plot of model residuals for the improved calibration presented in Figure 5.20a. Although there still may
be a slight trend (correlation) visible, the residual values are all less than 1 ft and more evenly scattered along
the no-bias (zero) line.
where ri is the residual, Hi is the measured hydraulic head at point i, and hi is the computed
hydraulic head at the approximate location where Hi was measured. If the residual is posi-
tive, the computed value was too high; if it is negative, the computed value was too low
(ASTM 1999).
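The residual calculation above, together with common summary statistics (mean residual as a measure of bias, the absolute residual mean used in Figure 5.21, and the root-mean-square error), can be sketched as follows; the observed and simulated heads are hypothetical:

```python
# Hydraulic-head residuals r_i = h_i - H_i (simulated minus observed)
observed  = [103.1, 105.4, 108.0, 110.2, 112.7]   # H_i, ft amsl (hypothetical targets)
simulated = [103.6, 105.1, 108.9, 111.0, 113.4]   # h_i, ft amsl (model output)

residuals = [h - H for h, H in zip(simulated, observed)]

mean_residual     = sum(residuals) / len(residuals)               # bias (sign matters)
abs_residual_mean = sum(abs(r) for r in residuals) / len(residuals)
rms_error         = (sum(r * r for r in residuals) / len(residuals)) ** 0.5

print("residuals (ft):", [round(r, 2) for r in residuals])
print(f"mean residual (bias):   {mean_residual:+.2f} ft")
print(f"absolute residual mean: {abs_residual_mean:.2f} ft")
print(f"RMS error:              {rms_error:.2f} ft")
```

A positive residual flags a computed head that is too high, a negative residual one that is too low, consistent with the sign convention above.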
Spatial or temporal correlation among residuals can indicate systematic trends or bias
in the model (see Figure 5.19). Of two simulations, the one with less correlation among
residuals has a better degree of correspondence (ASTM 1999). Apparent trends or spatial
correlations in the residuals may indicate a need to refine aquifer parameters or boundary
conditions or even to reevaluate the CSM. For example, if all of the residuals in the vicin-
ity of a no-flow boundary are positive, then the recharge may need to be reduced or the
hydraulic conductivity increased (ASTM 1999). For transient simulations, a plot of residu-
als at a single point versus time may identify temporal trends. Temporal correlations in
residuals can indicate the need to refine input aquifer storage properties or initial condi-
tions (ASTM 1999).
As noted earlier, graphs of model predictions and residuals across the range of observed
values are mandatory deliverables in model reports. They are also highly useful in iden-
tifying areas of the model that need improvement. For example, Figure 5.19 provides
calibration graphs for a preliminary groundwater flow model that was constructed and
calibrated based on available data for a site. Both graphs show significant overprediction
of model heads (positive residuals up to approximately 4 ft) in the bottom layer (layer 3),
which represents a fractured bedrock aquifer. One potential reason for this discrepancy
identified by the hydrogeologist was the lack of site-specific information regarding the
number, locations, and pumping rates of residential bedrock wells in the vicinity of the
site. After conducting a field study to address this data gap, the hydrogeologist modified
the number and locations of the wells and increased the flow rates based on real data from
the residences. The changes greatly improved the model calibration in layer 3 by lowering
hydraulic heads as shown by the revised graphs presented in Figure 5.20a and b.
Another recommended procedure during model calibration and sensitivity analysis is
to plot a graph of model error change versus parameter change as illustrated in Figure 5.21.
This describes parameter sensitivity—more sensitive parameters have steeper slopes than
less sensitive parameters.
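The slope of the error-versus-multiplier curve is a direct measure of parameter sensitivity. A minimal sketch using a least-squares slope; the multiplier and error values are hypothetical, loosely patterned on the river-stage example in Figure 5.21:

```python
def sensitivity_slope(multipliers, errors):
    """Least-squares slope of model error vs. parameter multiplier;
    a steeper (larger-magnitude) slope means a more sensitive parameter."""
    n = len(multipliers)
    mx = sum(multipliers) / n
    my = sum(errors) / n
    num = sum((x - mx) * (y - my) for x, y in zip(multipliers, errors))
    den = sum((x - mx) ** 2 for x in multipliers)
    return num / den

mult            = [0.99, 1.00, 1.01]
err_river_stage = [0.268, 0.232, 0.240]   # absolute residual mean, ft (hypothetical)
err_minor_param = [0.233, 0.232, 0.231]   # a much less sensitive parameter (hypothetical)

s_river = sensitivity_slope(mult, err_river_stage)
s_minor = sensitivity_slope(mult, err_minor_param)
print(f"river-stage slope: {s_river:.2f} ft per unit multiplier")
print(f"minor-parameter slope: {s_minor:.2f} ft per unit multiplier")
print("river stage is the more sensitive parameter:", abs(s_river) > abs(s_minor))
```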
[Figure 5.21: absolute residual mean (ft, 0.210–0.270) versus river-stage multiplier (0.990–1.010) for eight river reaches.]
FIGURE 5.21
Graph of the absolute residual mean model error for hydraulic head based on changes in river stage at various
river reaches in a groundwater model. This is typically a very sensitive parameter because of the importance of
surface water–groundwater interactions on the potentiometric surface. In this case, a 1% decrease in river stage
can lead to an increase in model error of more than 15%, highlighting the importance of obtaining accurate river
stage data in the field.
PEST is a parameter estimation tool that is an example of an inverse model, which means
that parameters are estimated from model input and observations.
in which the user specifies parameter values and produces model outputs that are then
compared to observations. Example parameters that can be estimated by PEST include
hydraulic conductivity and recharge.
PEST works by minimizing an objective function based on the observed values used
as calibration targets. Many modelers or model reviewers still believe that the main ben-
efit of automated calibration is that it takes less time than the brute-force, trial-and-error
process. However, a major benefit of PEST in its current state is that the modeler is no
longer limited to the zone-based approach in which polygons represent areas of uni-
form parameter values (e.g., the hydraulic conductivity is the same everywhere within a
hydraulic conductivity zone polygon). Instead, the modeler can use point values, termed
pilot points, to estimate parameters at individual X, Y, Z locations and then generate a
continuous parameter surface with kriging. This enables assessment and quantification
of heterogeneity within the model (or within zones of a model) and the creation of a vari-
able conductivity field that is more representative of reality. Furthermore, the use of PEST
with pilot points greatly facilitates uncertainty analysis through the creation of sensitiv-
ity parameters. PEST is also easily linked to advanced uncertainty analysis tools such as
Monte Carlo simulation.
Monte Carlo analysis with PEST is a statistical tool that involves the running of tens,
hundreds, or even thousands of simulations. As a first step, a random field of the param-
eter of interest, such as hydraulic conductivity, is generated based on a statistical distri-
bution. This field is then calibrated using PEST. This process is completed over and over
again using different realizations of the initial field. The outcome of Monte Carlo analysis
is a distribution of values for the parameter of interest that all meet calibration criteria
specified by the user. For example, one can obtain innumerable different hydraulic con-
ductivity fields that all represent viable calibrations based on statistical criteria. The mini-
mum, average, and maximum conductivities for each pilot point can then be calculated
and compared to assess the uniqueness of the model.
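The mechanics of generating and comparing realizations can be illustrated with unconditioned random fields; in a real application each realization would be recalibrated with PEST and screened against the calibration criteria before being accepted. The log-conductivity statistics below are hypothetical:

```python
import random

random.seed(42)  # reproducible realizations

def random_logk_field(n_points, mean_log10k=-4.0, sigma_log10k=1.0):
    """One unconditioned realization of log10(K) at each pilot point
    (normally distributed log-conductivity; statistics are hypothetical)."""
    return [random.gauss(mean_log10k, sigma_log10k) for _ in range(n_points)]

n_pilot_points, n_realizations = 5, 10
realizations = [random_logk_field(n_pilot_points) for _ in range(n_realizations)]

# Range of log10(K) at each pilot point across the realizations, as in Figure 5.23
for p in range(n_pilot_points):
    logks = [r[p] for r in realizations]
    span = max(logks) - min(logks)
    print(f"pilot point {p + 1}: log10(K) spans {span:.1f} orders of magnitude")
```

Even this toy exercise shows the point made in Figure 5.23: conductivity at an individual pilot point can span orders of magnitude across equally plausible realizations.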
An example figure of two potential hydraulic conductivity fields generated using PEST
with Monte Carlo analysis in Groundwater Vistas is presented in Figure 5.22. Both of these
fields lead to comparable calibration statistics, and some modelers may therefore accept
either outcome as a potential solution. These fields are remarkably different, which is also
demonstrated in Figure 5.23 by the range of conductivities calculated for five different
pilot points. The conductivity at individual points can span several orders of magnitude.
It is further noted that this analysis was conducted with only 10 realizations, whereas a
FIGURE 5.22
Two Monte Carlo realizations of the hydraulic conductivity field generated by the automated calibration pro-
gram PEST.
FIGURE 5.23
Variation in hydraulic conductivity at five pilot points from 10 Monte Carlo realizations.
thorough analysis would include many more realizations. Figure 5.24 presents an example
use of Monte Carlo and PEST simulation in Groundwater Vistas to
quantify the capture probability for a well field in an alluvial aquifer system. This
visualization can be used to refute the argument that, because the model is nonunique, it
cannot be used to accurately assess the well field's capture zone.
Because the conceptual objective of Monte Carlo analysis is quantifying uncertainty
arising from field-scale heterogeneity, the process may also eventually replace the use
of dispersivity in fate-and-transport modeling. If heterogeneity can be captured by the
calibration process, a mathematical construct to represent field-scale heterogeneity within
a geologic zone is no longer necessary. This remains an area where further research is
desirable.
FIGURE 5.24
Capture probability for a well field in an alluvial aquifer system. Colors are blue = 1% and red = 100% capture
probability. (Courtesy of Jim Rumbaugh.)
The following industry standards, created by leading industry experts for the groundwater
modeling community under the auspices of ASTM International, cover all major
aspects of groundwater modeling and should be followed when attempting to create a
defensible groundwater model that can be used for predictive purposes:
The following language accompanies the U.S. EPA OSWER Directive #9029.00 entitled
“Assessment Framework for Ground-Water Model Applications” (U.S. EPA 1994): The pur-
pose of this guidance is to promote the appropriate use of groundwater models in EPA’s
waste management programs. More specifically, the objectives of the framework are to
• Support the use of groundwater models as tools for aiding decision making under
conditions of uncertainty
• Guide current or future modeling
• Assess modeling activities and thought processes
• Identify model application documentation needs
The authors recommend that interested readers obtain copies of some or all of the above
standards for further reading and professional reference.
5.7 MODFLOW-USG
The description of this new groundwater modeling program is provided courtesy of Dr.
Sorab Panday, the groundwater modeling practice leader at AMEC.
As described earlier, the USGS’s finite-difference model MODFLOW (Harbaugh 2005) is
the most popular groundwater code worldwide. MODFLOW solves the groundwater flow
equation on a rectangular finite-difference grid. Although there are options available for
blocking out portions of a rectangular grid that are outside the simulation domain and for
local grid refinement, a rectangular grid structure is generally less effective and efficient
at fitting irregular domain geometries or at refining the grid within areas of
high activity or interest. The MODFLOW-USG code extends the MODFLOW simulation
capabilities to irregular domains using unstructured grids.
Unstructured grids provide a high level of flexibility of discretization by implementing dif-
ferently shaped grid-block geometries and by use of nested grid structures. Figure 5.25 shows
some unstructured grid geometries that may be used by MODFLOW-USG. The grid may be
FIGURE 5.25
Examples of unstructured grid geometries supported by MODFLOW-USG. (Courtesy of Dr. Sorab Panday,
AMEC.)
The integrated finite difference formulation discussed by Pruess et al. (1999) is applied
to the groundwater flow equation. The discretized formulation is identical to the finite-
difference approximation but with generalized computations for volumes, flow areas,
flow lengths, and connections. The cell-by-cell conductance term may be computed using
any of the averaging options provided by the block-centered flow and layer-property flow
packages of MODFLOW-2005, including harmonic, arithmetic, and logarithmic averaging.
The upstream weighting formulation of Niswonger et al. (2011) is also included along with
the Newton–Raphson linearization to provide added robustness for difficult nonlinear
problems. The conductance term may further be modified for presence of a horizontal flow
barrier conceptualized using the horizontal-flow barrier package of MODFLOW-2005.
Use of an unstructured formulation with arbitrary flow connections between nodes
allows for a natural extension of the basic groundwater flow solution to include various
other connected flow processes in a fully implicit manner. Therefore, there exists a frame-
work for incorporating additional flow domains and connections to the subsurface porous
medium system. An important flow process in several situations is flow
through long-screened wells, conduits, and fracture networks. A conduit domain flow
(CDF) package has been included with the current version of MODFLOW-USG to simulate
flow through conduit networks and between the porous medium and conduits. Conduits
may be horizontal, vertical, or angled in any direction. Figure 5.26 shows typical conduit
geometries and connections that may be used by the model. Exchange of water between
the conduit and the porous medium is expressed via a linear leakance term or via the
Thiem Equation to simulate the net head loss between the conduit node and the porous
medium grid block. Therefore, the CDF package provides most of the combined function-
alities of the conduit flow process (Shoemaker et al. 2007) and the multinode well (Halford
and Hanson 2002; Konikow et al. 2009) packages of MODFLOW-2005 in a fully implicit
manner. Figure 5.27 shows a comparison of the groundwater flow solution for a karst conduit
embedded in an aquifer matrix using MODFLOW-USG and FEFLOW.
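The Thiem-based exchange term mentioned above can be sketched as steady radial flow between a porous-medium grid block and a conduit node. All parameter values here are hypothetical, and the effective cell radius would in practice be derived from the grid-block geometry:

```python
import math

def thiem_exchange_q(k, b, h_cell, h_conduit, r_cell_eff, r_conduit):
    """Steady radial (Thiem) flow from a grid block into a conduit node:
    Q = 2*pi*K*b*(h_cell - h_conduit) / ln(r_cell_eff / r_conduit)."""
    return 2.0 * math.pi * k * b * (h_cell - h_conduit) / math.log(r_cell_eff / r_conduit)

# Hypothetical values: K = 1e-4 m/s, saturated thickness 10 m,
# effective cell radius 5 m, conduit radius 0.5 m, 0.2 m head difference
q = thiem_exchange_q(1.0e-4, 10.0, 20.2, 20.0, 5.0, 0.5)
print(f"exchange flow into conduit: {q:.6f} m^3/s")
```

A positive Q here means water moves from the porous medium into the conduit; reversing the head difference reverses the exchange.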
FIGURE 5.26
Examples of conduit domain geometries supported by MODFLOW-USG. (Courtesy of Dr. Sorab Panday, AMEC.)
FIGURE 5.27
Solution of groundwater flow toward a single conduit embedded in the aquifer matrix. Left: MODFLOW-USG
solution. Right: FEFLOW solution. (Courtesy of Dr. Sorab Panday, AMEC.)
For finite-volume grids, the formulation is a second-order approximation only when the
perpendicular from the cell centroid to a face between two nodes coincides with the mid-
point of the face (Dehotin et al. 2011)—as is the case for isosceles triangles, rectangles,
and regular higher-order polygons. There is an error in the flux computation, however,
for irregular polygon shapes or nested grids in which the cell center does not coincide
with the perpendicular bisector of the face. A ghost node correction (GNC) module based
on the work of Edwards (1996) and Dickinson et al. (2007) has also been introduced with
MODFLOW-USG to maintain higher-order accuracy for irregular grids. The concept of
the GNC is that the nodal value of the head at an irregular grid block is not representative
of the flux across the interface between grid blocks. Instead, a more representative value is
obtained by interpolating the head value to a ghost-node location that does lie along the
perpendicular bisector of the face. As illustrated in Figure 5.28, the ghost-node location
interpolation can be performed in one or multiple dimensions. The ghost-node correction
is also applicable to conduits that are located off-center from the porous matrix grid block
to provide subgrid scale location adjustments without grid refinement.
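The interpolation idea behind the ghost node can be sketched in a few lines. The weighting scheme below is a simplified, hypothetical stand-in for the geometric interpolation MODFLOW-USG performs; the weights would in practice follow from the offset of the ghost-node location relative to the contributing nodes:

```python
def ghost_node_head(h_cell, h_neighbors, weights):
    """Head interpolated to the ghost-node location: the cell's nodal head plus
    weighted contributions from neighboring nodal heads (one or more dimensions)."""
    correction = sum(w * (hn - h_cell) for hn, w in zip(h_neighbors, weights))
    return h_cell + correction

# 1-D example: nested grid where the ghost-node location sits one quarter of the
# way from the coarse node toward its neighbor (weight 0.25 is hypothetical)
h_ghost = ghost_node_head(10.0, [12.0], [0.25])
print(f"ghost-node head: {h_ghost:.2f}")  # lies between the two nodal values
```

The interpolated head, rather than the cell's nodal head, is then used in the flux computation across the shared face.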
MODFLOW-2005 boundary packages that have been currently converted for use with
unstructured grids include the recharge package (RCH), the evapotranspiration pack-
age (EVT), the transient flow and head boundary package (FHB), the well package (WEL),
the drain package (DRN), the river package (RIV), the head-dependent flux (general head
boundary) package (GHB), the stream package (STR7), the streamflow routing package
with unsaturated flow beneath streams (SFR2), the stream-gage monitoring package
(GAGE), and the lake package (LAK3). Boundary nodes are identified within these pack-
ages by indexing the global node number instead of the structured layer, row, and column
classification used by MODFLOW-2005.
Unstructured grids cannot be accommodated by the structured, symmetric solvers pres-
ent in MODFLOW-2005. Therefore, the MODFLOW-USG code incorporates its own suite of
FIGURE 5.28
Ghost-node conceptualization in MODFLOW-USG. (Courtesy of Dr. Sorab Panday, AMEC.)
unstructured solvers for symmetric and nonsymmetric matrices. The χMD solver (Ibaraki
2011) is an unstructured solver that has options for symmetric and nonsymmetric accelera-
tion methods. It uses a level-based ILU decomposition scheme with drop tolerance followed
by conjugate gradient (for symmetric systems), Orthomin, and Bi-CGSTAB (for nonsym-
metric systems) acceleration schemes. Nonsymmetric matrices may be generated by the
GNC package or by use of Newton–Raphson linearization using the upstream weighted
formulation of Niswonger et al. (2011). The generalized minimal residual (GMRES) solver
of Kipp et al. (2008) and the ORTHOFEM solver of Mendoza et al. (1994) are alternative
solvers for nonsymmetric systems. The PCGP solver of Hughes (2011) provides an alterna-
tive solver for symmetric systems.
The input/output (I/O) formats for MODFLOW-USG follow the MODFLOW-2005 conven-
tions. A name file is read by MODFLOW during problem initialization, which provides
all other file names that are opened by MODFLOW-USG. Input for a structured or an
unstructured grid is accommodated. If the structured grid I/O option is selected, the code
uses the finite-difference input data files of MODFLOW-2005 with minimal changes, such
as switching to one of the unstructured solver routines. Output from a structured grid is
consistent with MODFLOW-2005 ASCII and binary formats for easy postprocessing by
tools available for MODFLOW.
For unstructured grids, the code provides various levels of generalization to ease the
I/O burden. Furthermore, I/O is based on a global node number convention rather than
using the layer-row-column format. Nodal outputs are provided in layer lists to accommo-
date easier processing of results from each model layer. Flow outputs are provided for all
connections to a node. Only the upper triangular portion of the cell-by-cell flow matrix is
saved to avoid redundant output (because Qnm = –Qmn).
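Because each connection's flow appears twice with opposite sign, storing one orientation per connection suffices. The bookkeeping this implies can be sketched as follows; this is an illustration of the idea, not the actual MODFLOW-USG implementation.

```python
# Store one flow value per cell connection; the reverse flow is implied
# by antisymmetry (Qnm = -Qmn), halving the cell-by-cell output.
flows = {}  # keyed by (n, m) with n < m

def save_flow(n, m, q):
    """Record the flow from node n to node m, keeping only the n < m direction."""
    if n < m:
        flows[(n, m)] = q
    else:
        flows[(m, n)] = -q

def get_flow(n, m):
    """Recover the flow from n to m from the upper-triangular store."""
    return flows[(n, m)] if n < m else -flows[(m, n)]

save_flow(7, 3, 2.5)     # a flow of 2.5 from node 7 to node 3
print(get_flow(3, 7))    # -2.5 (equal and opposite)
```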
FIGURE 5.29
Biscayne Aquifer model example showing unstructured grids used to accommodate geometry of canals and
groundwater pumping centers. (Courtesy of Dr. Christian D. Langevin, USGS.)
FIGURE 5.30
Example of simulated contaminant migration through the vadose zone using VS2DTI.
soil versus those at the water table and within the underlying aquifer. VS2DTI has a user-
friendly graphical interface but does not readily link to external programs. The program is
quite unstable when attempting to model more complex scenarios and often terminates with-
out providing any explanation or hints as to possible reasons. The user guide is very brief,
and there is also no detailed explanation of quite a few numerical parameters appearing in
several input windows. Because of the general lack of related support literature, updates, and
well-documented modeling examples, this USGS program is not as widely used as would
be expected based on its capabilities. Unfortunately, it appears that VS2DTI is being quietly
abandoned by the USGS in favor of the new versions of MODFLOW that now include variably
saturated flow. Additional examples of VS2DTI model results and animation of results are
provided on the companion DVD.
References
ASTM International, 1999. ASTM Standards on Determining Subsurface Hydraulic Properties and
Ground Water Modeling, Second Edition, 320 pp.
Aziz, C. E., Newell, C. J., Gonzales, J. R., Haas, P., Clement, T. P., and Sun, Y., 2000. BIOCHLOR:
Natural Attenuation Decision Support System v. 1.0: User’s Manual. EPA/600/R-00/008. US
Environmental Protection Agency, Cincinnati, OH.
Aziz, C. E., Newell, C. J., and Gonzales, J. R., 2002. BIOCHLOR Natural Attenuation Decision Support
System, Version 2.2, March 2002, User’s Manual Addendum.
Buschek, T. E., and Alcantar, C. M., 1995. Regression Techniques and Analytical Solutions to
Demonstrate Intrinsic Bioremediation. In Intrinsic Bioremediation, edited by R. E. Hinchee, J. T.
Wilson, and D. C. Downey. Battelle Press, Columbus, OH.
Chiang, W.-H., and Kinzelbach, W., 2001. 3D-Groundwater Modeling with PMWIN: A Simulation
System for Modeling Groundwater Flow and Pollution. Springer, Berlin, 346 pp.
Dehotin, J., Vazquez, R. F., Braud, I., Debionne, S., and Viallet, P. 2011. Modeling of hydrological
processes using unstructured and irregular grids: 2D groundwater application. J. Hydrol. Eng.
16(2), 108–125.
Dickinson, J. E., James, S. C., Mehl, S., Hill, M. C., Leake, S. A., Zyvoloski, G. A., Faunt, C. C., and
Eddebbarh, A., 2007. A new ghost-node method for linking different models and initial investi-
gations of heterogeneity and nonmatching grids. Adv. Water Res. 30, 1722–1736.
Doherty, J., 2005. PEST: Model-Independent Parameter Estimation, User Manual: 5th Edition.
Watermark Numerical Computing.
Domenico, P. A., 1987. An analytical model for multidimensional transport of a decaying contami-
nant species. J. Hydrol. 91(1–2), 49–58.
Edwards, M. G., 1996. Elimination of adaptive grid interface errors in the discrete cell centered pres-
sure equation. J. Comput. Phys. 126, 356–372.
Franke, O. L., Reilly, T. E., Haefner, R. J., and Simmons, D. L., 1990. Study Guide for a Beginning
Course in Ground-Water Hydrology: Part 1-Course Participants. U.S. Geological Survey Open
File Report 90-183, Reston, VA, 184 pp.
Gelhar, L. W., Welty, C., and Rehfeldt, K. R., 1992. A critical review of data on field-scale dispersion
in aquifers. Water Resour. Res. 28(7), 1955–1974.
Halford, K. J., and Hanson, R. T., 2002. User Guide for the Drawdown-Limited, Multi-Node Well
(MNW) Package for the U.S. Geological Survey’s Modular Three-Dimensional Finite-Difference
Ground-Water Flow Model, Versions MODFLOW-96 and MODFLOW-2000. U.S. Geological
Survey Open-File Report 02-293.
Harbaugh, A. W., 2005. MODFLOW-2005, the U.S. Geological Survey Modular Ground-Water Model:
The Ground-Water Flow Process. U.S. Geological Survey Techniques and Methods 6-A16.
Harbaugh, A. W., and McDonald, M. G., 1996. User’s Documentation for MODFLOW-96, an Update
to the U.S. Geological Survey Modular Finite-Difference Ground-Water Flow Model. U.S.
Geological Survey Open-File Report 96-485, Reston, VA, 56 pp.
Harbaugh, A. W., Banta, E. R., Hill, M. C., and McDonald, M. G., 2000. MODFLOW-2000, the U.S.
Geological Survey Modular Ground-Water Model—User Guide to Modularization Concepts
and the Ground-Water Flow Process. U.S. Geological Survey Open-File Report 00-92, Reston,
VA, 121 pp.
Hemond, F. H., and Fechner-Levy, E. J., 2000. Chemical Fate and Transport in the Environment, 2nd
Edition. Academic Press, San Diego, CA, 433 pp.
Hughes, J. D., 2011. PCGP, An Unstructured, Symmetric Solver for MODFLOW. U.S. Geological
Survey Techniques and Methods, in press.
Ibaraki, M., 2011. χMD User’s Guide—An Efficient Sparse Matrix Solver Library, Version 1.30. School
of Earth Sciences, Ohio State University, Columbus, OH.
Janis, I. L., 1972. Victims of Groupthink. Houghton Mifflin, Boston, 277 pp.
Karanović, M., Neville, C. J., and Andrews, C. B., 2007. BIOSCREEN-AT: BIOSCREEN with an exact
analytical solution. Ground Water 45(2), 242–245.
Kipp, K. L., Jr., Hsieh, P. A., and Charlton, S. R., 2008. Guide to the Revised Groundwater Flow and
Heat Transport Simulator: HYDROTHERM—Version 3. U.S. Geological Survey Techniques
and Methods 6-A25.
Konikow, L. F., Hornberger, G. Z., Halford, K. J., and Hanson, R. T., 2009. Revised Multi-Node
Well (MNW2) Package for MODFLOW Ground-Water Flow Model. U.S. Geological Survey
Techniques and Methods 6-A30, 67 pp.
Kresic, N., 2007. Hydrogeology and Groundwater Modeling, Second Edition. CRC/Taylor & Francis
Group, Boca Raton, FL, 807 pp.
Kresic, N., 2009. Groundwater Resources: Sustainability, Management, and Restoration. McGraw
Hill, New York, 852 pp.
Langevin, C. D., Panday, S., Niswonger, R. G., and Hughes, J. D., 2011. Evaluation of Mesh
Alternatives for an Unstructured Grid Version of MODFLOW. MODFLOW and More.
Lindgren, R. J., Dutton, A. R., Hovorka, S. D., Worthington, S. R. H., and Painter, S., 2004.
Conceptualization and Simulation of the Edwards Aquifer, San Antonio Region, Texas. U.S.
Geological Survey Scientific Investigations Report 2004-5277, 143 pp.
McDonald, M. G., and Harbaugh, A. W., 1988. A Modular Three-Dimensional Finite-Difference
Ground-Water Flow Model. U.S. Geological Survey Techniques of Water-Resources
Investigations, Book 6, Chap. A1, 586 pp.
Mendoza, C., Sudicky, E. A., and Therrien, R., 1994. ORTHOFEM Users Guide, Version 1.04.
Moridis, G., and Pruess, K., 1992. TOUGH Simulations of Updegraff’s Set of Fluid and Heat Flow
Problems. Lawrence Berkeley Laboratory Report LBL-32611, Berkeley, CA.
Newell, C. J., McLeod, R. K., and Gonzales, J. R., 1996. BIOSCREEN: Natural Attenuation Decision
Support System User’s Manual. EPA/600/R-96/087, Robert S. Kerr Environmental Research
Center, Ada, OK.
Niswonger, R. G., Panday, S., and Ibaraki, M., 2011. MODFLOW-NWT, A Newton Formulation for
MODFLOW-2005. US Geological Survey Techniques and Methods 6-A37, 44 pp.
Pankow, J. F., and Cherry, J. A., 1996. Dense Chlorinated Solvents and Other DNAPLs in
Groundwater. Waterloo Press, Guelph, Ontario, 522 pp.
Peaceman, D. W., 1977. Fundamentals of Numerical Reservoir Simulation. Elsevier, Amsterdam.
Pruess, K., Oldenburg, C., and Moridis, G., 1999. TOUGH2 user’s guide, Version 2.0, Lawrence
Berkeley Laboratory Report LBL-43134, Berkeley, California, various paging.
Reilly, T. E., and Harbaugh, A. W., 2004. Guidelines for Evaluating Ground-Water Flow Models. U.S.
Geological Survey Scientific Investigations Report 2004-5038, 30 pp.
Rumbaugh, J. O., and Rumbaugh, D. B., 2007. Guide to Using Groundwater Vistas, Version 5.
Environmental Simulations, Inc., Reinholds, PA, 372 pp.
Shoemaker, W. B., Kuniansky, E. L., Birk, S., Bauer, S., and Swain, E. D., 2007. Documentation of
a Conduit Flow Process (CFP) for MODFLOW-2005. U.S. Geological Survey Techniques and
Methods, Book 6, Chapter A24.
U.S. EPA, 1994. Assessment Framework for Ground-Water Model Applications. OSWER Directive
9029.00, US Environmental Protection Agency, Office of Solid Waste and Emergency Response,
Washington, DC.
U.S. EPA, 1996. Soil Screening Guidance: User’s Guide; Equation 10: Soil Screening Level Partitioning
Equation for Migration to Groundwater. U.S. Environmental Protection Agency, Office of Solid
Waste and Emergency Response, EPA/9355.4-23.
U.S. EPA, 1999. EPA Superfund Record of Decision: Montrose Chemical Corp. and Del Amo. EPA
ID: CAD008242711 and CAD029544731 OU(s) 03 & 03, Los Angeles, CA, 03/30/1999. Dual Site
Groundwater Operable Unit. II: Decision Summary. EPA/ROD/R09-99/035.
U.S. EPA, 2005. Decision Support Tools—Development of a Screening Matrix for 20 Specific Software
Tools. U.S. Environmental Protection Agency, Office of Superfund Remediation and Technology
Innovation, Brownfields and Land Revitalization Technology Support Center, Washington, DC,
18 pp.
Groundwater Modeling 345
6.1 Introduction
The ability to demonstrate comprehension of complex data is a fundamental element
of successful hydrogeological consulting and engineering. Seeing multiple data sets in
their true geospatial context is a powerful way to understand and communicate the
intricacies of a site. Indeed, if a conceptual site model (CSM) is the synthesis of assimi-
lated data into a graphical representation, a 3D or 4D visualization can itself become
the conceptual model. A 3D visualization is a representation of site data in a realistic or
relative context to its actual geospatial location. A 4D visualization includes the fourth
dimension of time and is therefore a transient depiction of a 3D model or, more simply,
a 3D animation.
3D visualization is not new. Geologists have illustrated their data with 3D drawings
and models for centuries. What has changed are the tools. What was once pen and ink is
now a suite of geospatial visualization technologies that allow us to rapidly create hundreds
of multidimensional views of a site, animated over multiple parameters and digitally
distributable in an interactive format. Hydrogeologists now work with innovative
technologies specifically designed to capture and assimilate vast amounts of geospatial
site data and render those data into interactive 3D or 4D visualizations. An expanding
assortment of visualization software is available to address the needs of the working
hydrogeologist. CTech’s MVS/EVS Pro, Esri ArcGIS 3D Analyst, RockWare RockWorks,
and EarthVision by Dynamic Graphics, Inc., are examples of some commonly used com-
puter programs.
Desktop workstations have the ability to process large data sets, and with the devel-
opment of publicly accessible supercomputers and distributed systems, data density
limitations are being overcome. Open-source software packages, such as VisIt and
ParaView, are gaining popularity as scientists recognize how their high-resolution data
can be customized and visualized in new ways using supercomputers. High-end 3D
visualization labs are emerging at universities across the world. Visualization software
originally designed for other industries, such as Autodesk 3ds Max, Maya, and NewTek
LightWave, is finding hydrogeologic applications. Geologists are finding
themselves learning about software renderers and video editors as they develop their
CSMs.
Because all geospatial data have the same basic structure (X, Y, Z, n1, n2, n3,…), visualiza-
tion techniques are highly adaptable from one application to another. For example, output
from multiple geostatistical modeling packages can be exported into a single common
visualization engine for rendering, further analysis, or delivery. The same technologies
used by doctors to visualize brains can be used by geologists to visualize aquifers. The
ability to move between packages taps the open-source nature of geospatial data, expand-
ing the toolbox and simplifying collaboration.
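The common (X, Y, Z, n1, n2, n3,…) record structure can be illustrated with a plain table export that virtually any visualization engine ingests. The coordinate values, analyte name, and field names below are hypothetical, chosen only for illustration.

```python
import csv, io

# Each record: a location plus any number of attributes (X, Y, Z, n1, n2, ...)
records = [
    {"x": 354210.0, "y": 4701880.0, "z": 12.5, "benzene_ug_l": 48.0, "lithology": "sand"},
    {"x": 354260.0, "y": 4701835.0, "z": 9.8,  "benzene_ug_l": 5.2,  "lithology": "clay"},
]

# Export to a flat CSV; this lowest-common-denominator form is what lets
# output from one geostatistical package move into another engine
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=records[0].keys())
writer.writeheader()
writer.writerows(records)
print(buf.getvalue())
```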
3D data availability has exploded. With online government geospatial databases, Web-
map services, and geospatial depots, more data are readily available to be visualized than
ever before. The Web has brought much of the world’s hydrogeologic data into the digi-
tal age. Groundwater-well and oil-well data sets are ever-expanding. Remote sensing and
geophysical data collection techniques are increasing in both coverage and resolution.
Land use, boundary and infrastructure vectors, and flood maps are readily available in
useable geospatial formats. Governments have invested significant resources in mapping,
and we are the beneficiaries.
Visualization displays are also rapidly increasing in both resolution and features.
Desktop computer monitors are available at resolutions approaching 6000 × 6000 pixels. Monitors
are being tiled into visualization walls, and large-scale visualization rooms and caves are
in use. Innovative delivery systems are being tested, including virtual reality, augmented
reality, 3D printing, holograms, and active and passive 3D. Tablets and handhelds are
beginning to bring rapid and real-time visualization to the field, and common file formats
such as .pdf are being expanded to support interactive 3D.
Conceptual site model visualizations (CSMVs) serve a range of project functions, including the following:
• Communication
• 2D and 3D paper graphics production
• Training and educating
• Project management
• Performance tracking and quality control/quality assurance (QA/QC)
• Data assimilation
• Visualization of project geospatial databases
CSMVs are built with flexible interfaces and adaptable workflows designed to allow
for rapid and effective deployment of 3D or 4D to meet hydrogeologic programmatic needs.
Every hydrogeologic CSM is unique. No two sites demand exactly the same sequence of
visualizations to be built, interpolations to be performed, or animations to be rendered.
However, because of the common structure of geospatial data, many commonalities exist
between all forms of 3D visualization. What follows is an approach to the development of
3D CSMVs.
Three-Dimensional Visualizations 349
A CSMV can be thought to exist within a digital 3D geospatial framework. The frame-
work establishes the space within which assimilated data can be visualized in a uni-
form format. Optimally, the framework consists of a single, site-wide, verified data set
visualized using software that relates the spatial data. For this reason, site-wide digital
ground elevation data sets make good geospatial frameworks. Usually the framework
data set is the first data set to be visualized. The concept of a framework data set is
analogous to a basemap in 2D mapping applications. In general, the framework data set
is designed to
FIGURE 6.1
Explanation in text.
Figure 6.1 demonstrates the use of USGS National Elevation Dataset (NED10) topography data as the basis for a regional visualization framework.
Figure 6.1 also shows four NED10 tiles stitched together and visualized in 3D at full reso-
lution. For each tile, a consistent color scheme has been applied that accounts for the mini-
mum and maximum extents of elevation for the combined visualized tiles. Larger areas
can be shown, but at lower resolutions on most workstations. Higher-resolution Light
Detection and Ranging (LiDAR) topographic data, visualized in Figure 6.2, may also be
incorporated into a visualization framework. Large seamless data sets covering regional
areas processed on supercomputers can eliminate the need for the kind of subdividing
shown in Figure 6.1. This is the future of visualization.
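The consistent color scheme across stitched tiles amounts to normalizing every tile against the combined elevation range rather than each tile's own range. A minimal sketch, using randomly generated stand-in tiles rather than real NED10 rasters:

```python
import numpy as np

# Four hypothetical elevation tiles (stand-ins for NED10 rasters);
# a fixed seed keeps the example reproducible
rng = np.random.default_rng(7)
tiles = [rng.uniform(lo, hi, size=(100, 100))
         for lo, hi in [(10, 250), (30, 310), (5, 180), (60, 400)]]

# One color scale for all tiles: take the combined minimum and maximum,
# so a given color means the same elevation in every tile
vmin = min(float(t.min()) for t in tiles)
vmax = max(float(t.max()) for t in tiles)

# With a plotting library, the shared limits are passed to each tile's call,
# e.g., matplotlib's  ax.imshow(tile, cmap="terrain", vmin=vmin, vmax=vmax)
print(f"shared elevation scale: {vmin:.1f} to {vmax:.1f} m")
```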
Once a 3D visualization framework is designed, it can be populated with project data.
A relational visualization database (RVD) is designed for CSMV development. Data incor-
porated into the RVD include all pertinent geospatial site data, both current and legacy,
within the coordinate extents of the visualization framework. The RVD may include dig-
ital and tabular data as well as scanned paper maps and images. Common data types
include well logs, geologic descriptions, geophysical data, soil, 4D surface and groundwater
chemistry data, 4D potentiometric surface data, forward modeling output, preexisting maps,
preexisting GIS and geospatial databases, cultural data, surface and infrastructure data, and
conceptual data. Except in cases where on-the-fly coordinate projections will be used, all
data in the RVD must be defined within a consistent coordinate system. As discussed in
Chapter 3, it is advisable to always use one uniform coordinate system as many programs
for spatial data analysis cannot project on the fly.
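A simple safeguard is to verify the declared coordinate system of every source before it is loaded into the RVD. The sketch below uses illustrative file names and EPSG codes; the reprojection itself, when needed, would be done with a library such as pyproj.

```python
# Minimal guard: refuse to load data into the RVD unless every source
# declares the same coordinate reference system (EPSG codes illustrative)
def check_uniform_crs(sources):
    codes = {s["epsg"] for s in sources}
    if len(codes) != 1:
        raise ValueError(f"mixed coordinate systems in RVD inputs: {sorted(codes)}")
    return codes.pop()

sources = [
    {"name": "well_logs.csv", "epsg": 26918},
    {"name": "legacy_map.tif", "epsg": 26918},
]
print(check_uniform_crs(sources))  # 26918
```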
Legacy paper and .pdf maps can also be rasterized and georeferenced to be draped
onto any related 3D surface and included in the database, a concept discussed further in
Chapter 7 with example applications. Figure 6.3 shows how georeferenced paper maps
can be draped from surface to surface. Figure 6.4 shows an example of five paper maps, an
aerial photograph, and four tiles of NED10 data in 3D geospatial synergy. Frame-by-frame
FIGURE 6.2
Explanation in text.
FIGURE 6.3
Explanation in text.
FIGURE 6.4
Explanation in text.
FIGURE 6.5
Explanation in text.
FIGURE 6.6
Explanation in text.
ready, the visualizations are optimized and rendered for delivery. The interactive ani-
mations (IAs) can be organized in a file structure or interface, linking to source data if
desired, along with a visualization viewer (e.g., ArcScene by Esri), for end use.
Figure 6.6 shows a sample CSMV interface for a remediation site. Individual IAs are
launched into a distributable player that allows end-users to rotate, pan, zoom, capture,
manipulate, and analyze frames of the animations. The following discussions and figures
of fictionalized sites (built using CTech MVS and edited with Adobe Photoshop) demon-
strate some of the ways CSM data can be visualized in 3D.
Traditional geologic mapping techniques, such as cross sections, fence diagrams, and isopleth maps,
translate naturally into 3D. Geostatistical algorithm output from software geology build-
ers can be visualized across platforms. 2D and 3D grids used by modeling packages can
be imported and seen. Stratigraphic layers can be stretched apart or turned on and off like
lights. Faults can be visualized as complex 3D surfaces, and fault blocks can be displaced,
uplifted, and eroded through 3D animations. Slice planes can be run through models in
any direction. Multiple slice planes can be positioned anywhere, and 3D well-to-well fence
diagrams can be cut from a model. One can apply custom color scales, display contours,
or make geologic layers pinch out and disappear below specified thicknesses. Vertical
exaggeration can be applied to discern subtle topographic features or for projects that
have a large horizontal span relative to depth. Transparency values, lighting effects, and
visual effects, such as volume rendering, can be customized. Any parameter can be ani-
mated, from benzene concentrations over time to stratigraphic thickness isovolume levels.
Figures 6.7 and 6.8 show several views of two geologic models. Figure 6.7 is a kriged geo-
logic model based on correlated boring data, and Figure 6.8 is the result of indicator krig-
ing on cone penetrometer test (CPT) data.
Software packages such as MVS contain several tools to conceptually manipulate geo-
logic models within the context of the 3D visualization workspace. For example, Figure
6.9 depicts a soil plume to be remediated by excavation. Using the MVS overburden
module, the excavation surface is automatically calculated and visualized based on user
inputs. Excavation volumetrics are also calculated. The third frame of Figure 6.9 shows
the excavation turned inside-out, revealing the lithologic units to be excavated. If desired,
volumetrics of each lithologic unit within the excavation can be quickly calculated from
the model, informing how much clay, sand, gravel, or debris fill material will be removed.
Such volumetrics can be very useful when evaluating disposal options for contaminated
soils.
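The volumetric calculation behind such an overburden tool reduces to integrating the depth of cut over the grid. A simplified sketch with hypothetical grid dimensions and elevations (not the MVS algorithm itself):

```python
import numpy as np

# Hypothetical 2 m x 2 m grid of ground surface and designed excavation base
cell_area = 2.0 * 2.0                  # m2 per grid cell
ground = np.full((50, 50), 20.0)       # ground surface elevation, m
base = np.full((50, 50), 17.5)         # excavation base elevation, m
base[10:30, 10:30] = 15.0              # deeper cut beneath the plume core

# Volume = depth of cut integrated over the cells (negative depths clipped)
depth = np.clip(ground - base, 0.0, None)
volume_m3 = float(depth.sum() * cell_area)
print(f"excavation volume: {volume_m3:.0f} m3")
```

Per-lithology volumes follow the same pattern: mask the depth grid with each unit's extent and sum separately.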
Virtually any element of a CSMV can be animated in 4D; multiple elements can be
animated simultaneously, and spatial analyses can be performed as needed. Figure 6.10
shows an example of a virtual core extracted from a geologic model at a proposed well
location. A copy of the virtual core accompanies the logging geologist to the field to act as
a guide for identifying marker beds during drilling. Figure 6.11 displays two frames from
a stratigraphic animation in which only the borings defining the active stratigraphic units
are displayed. Figure 6.12 shows surface volumetrics of a containment dike calculated by
simulating flooding. Figure 6.13 shows the effects of vertical exaggeration on a geologic
model.
FIGURE 6.7
Explanation in text.
FIGURE 6.8
Explanation in text.
FIGURE 6.9
Explanation in text.
FIGURE 6.10
Explanation in text.
FIGURE 6.11
Explanation in text.
FIGURE 6.12
Explanation in text.
FIGURE 6.13
Explanation in text.
FIGURE 6.14
Explanation in text.
FIGURE 6.15
Explanation in text.
FIGURE 6.16
Explanation in text.
FIGURE 6.17
Explanation in text.
FIGURE 6.18
Explanation in text.
FIGURE 6.19
Explanation in text.
whereby each frame represents a different chemical of concern (COC) at the appropriate
regulatory levels. Higher-resolution geophysical log data, CPT, or rapid optical screen tool
data can also be visualized (Figure 6.20). Because remote sensing and geophysical data are
often collected in very fine vertical intervals (centimeter scale or less), vertical exaggera-
tion must often be used to visualize subtle vertical changes, and grids must be of sufficient
resolution to capture these variations, which are often critical to the CSM.
FIGURE 6.20
Explanation in text.
FIGURE 6.21
Explanation in text.
FIGURE 6.22
Explanation in text.
created 2 ft down, shown in black, is the compaction zone. The spatial relationship between the
compaction zone and the regional potentiometric surface, in blue, is shown. The vertical exag-
geration is 200 times, which allows us to visually perform this 2-ft vertical analysis beneath a
1.3-mi2 region in a single view. While this represents an appropriate use of vertical exaggera-
tion, for many applications, this extent of exaggeration can be misleading and result in errone-
ous data interpretations. In some cases, excessive exaggeration may be used for manipulative
purposes, similar to the intentional modification of the axes of graphs to mislead the reader.
In general, the hydrogeologist must be mindful of vertical scale when reviewing 3D visualiza-
tions and maintain ethical standards in his or her own practice.
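Vertical exaggeration is purely a display-time scale factor on Z, which is why recording the factor alongside every delivered scene matters. The coordinates below are illustrative approximations of a 2-ft water-level change across a roughly 1.3-mi² area, expressed in meters.

```python
# Vertical exaggeration: stretch Z by a factor for display; x and y unchanged
def exaggerate(points, factor):
    """Return display coordinates with Z scaled by factor."""
    return [(x, y, z * factor) for x, y, z in points]

pts = [(0.0, 0.0, 10.0), (2092.0, 0.0, 10.61)]   # ~0.61 m rise over ~2092 m
display = exaggerate(pts, 200)

true_slope = (pts[1][2] - pts[0][2]) / pts[1][0]
seen_slope = (display[1][2] - display[0][2]) / display[1][0]
print(f"true gradient {true_slope:.6f}; displayed gradient {seen_slope:.4f}")
```

The displayed gradient is 200 times steeper than reality, which is exactly the distortion a reviewer must be told about.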
Many additional applications of 3D and 4D visualization are possible. Custom conceptu-
alizations of land development projects can be visualized (Figure 6.23) as can 3D buildings
and infrastructure built interactively or imported as 3D CAD models (Figure 6.24). Sediment
transport modeling, flood modeling (Figure 6.25), air modeling, and meteorologic data can
all be incorporated. In fact, any type of digital geospatial data can be visualized,
including high-resolution X-ray computed tomography data, as shown in Figure 6.26. With a
FIGURE 6.23
Explanation in text.
FIGURE 6.24
Explanation in text.
FIGURE 6.25
Explanation in text.
FIGURE 6.26
Explanation in text.
good grasp of the structure of geospatial data and an understanding of modern 3D and 4D
visualization platforms, hydrogeologic CSMs can be developed that display site data and
modeling projections more comprehensively and intuitively than ever before.
Citations
Figure 6.3 contains draped maps obtained from D. A. Wierman, A. A. S. Broun, and B. B.
Hunt, “Hydrogeologic Atlas of the Hill Country Trinity Aquifer, Blanco, Hays, and Travis
Counties, Central Texas,” July 2010; and from R. J. Lindgren, A. R. Dutton, S. D. Hovorka,
S. R. H. Worthington, and S. Painter, “Conceptualization and Simulation of the Edwards
Aquifer, San Antonio Region, Texas,” 2004.
Site investigation is generally the first phase of any project in professional hydrogeol-
ogy. It is the stage at which available information is compiled and new data collected
with the primary goal being the development of a scientific, defensible conceptual site
model (CSM). The term site investigation is used in different technical and regulatory
contexts, and the exact scope depends on the application in question. The term is most
widely applied in the context of hazardous-waste site investigation. For example, under
the Comprehensive Environmental Response, Compensation, and Liability Act, com-
monly known as Superfund, site-investigation activities are conducted during both the
preliminary assessment/site inspection and remedial investigation/feasibility study
phases. Under the Massachusetts Contingency Plan, site investigation activities occur dur-
ing phase 1 (site investigation) and phase 2 (comprehensive site assessment) of the cleanup
process. For water-resources applications, site investigation is a more general term that
encompasses initial exploration and data analysis activities.
Regardless of the exact context, site investigation is the process of collecting and ana-
lyzing hydrogeologic data to generate a CSM that can be used to make informed techni-
cal decisions. At hazardous-waste sites, a well-conceived site investigation is critical in
order to
For water-resource applications, the site investigation phase can be used to answer the fol-
lowing questions:
Data analysis and visualization are compulsory steps in the site investigation process.
The aim of this chapter is to outline data types and sources typically used in conducting
hydrogeological investigations and to provide examples of how these data can be analyzed
and visualized.
processes that shape the site, such as regional geologic structure, watershed hydrology,
regional groundwater flow, regional groundwater supply development, geomorphology,
and regional depositional patterns. Just as a large-scale trend should be removed
from data prior to evaluating local variation with kriging, this regional conceptual
model is necessary to develop and justify a local conceptual model. For example, for sites
with karst geology, it will often be impossible to understand local groundwater flow direc-
tions without the broader regional understanding of recharge and discharge locations (i.e.,
springs).
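The trend-removal step mentioned above can be sketched with synthetic data: fit a first-order (planar) trend by least squares, krige the residuals, and add the trend back. The gradient values and noise level below are invented for illustration.

```python
import numpy as np

# Synthetic water levels: a regional planar gradient plus local variation
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1000.0, 200)
y = rng.uniform(0.0, 1000.0, 200)
head = 50.0 - 0.01 * x + 0.004 * y + rng.normal(0.0, 0.3, 200)

# Fit the planar trend head ~ a + b*x + c*y by least squares and subtract it;
# kriging is then performed on the residuals and the trend added back
A = np.column_stack([np.ones_like(x), x, y])
coef, *_ = np.linalg.lstsq(A, head, rcond=None)
residuals = head - A @ coef

print("trend coefficients (intercept, x-slope, y-slope):", np.round(coef, 4))
print("residual standard deviation:", round(float(residuals.std()), 3))
```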
For the reader’s benefit, a list of useful public-domain data sources is provided as Table
7.1. These data are often in a file format readily compatible with ArcGIS, such as shapefile,
feature class, or raster. This, again, highlights the importance of using GIS at each stage of
project implementation. After downloading or digitizing spatial data, they can be added to
a project geodatabase. Two sources of particular interest to hydrogeological investigations
TABLE 7.1
List of Useful Web Pages for Downloading Data Available in the Public Domain

EDR—Environmental Data Resources, Inc. — http://www.edrnet.com/index.php?option=com_content&view=article&id=112&Itemid=213
ESRI Online Resources — http://resources.esri.com/
MapMart—Aerial Imagery — http://www.mapmart.com/Products/AerialPhotography.aspx#HistoricalImagery
FEMA National Flood Hazard Layer — https://hazards.fema.gov/femaportal/wps/portal/NFHLWMS
GeoData.gov — http://gos2.geodata.gov/wps/portal/gos
National Geospatial Program — http://nationalmap.gov/viewers.html
U.S. National Atlas — http://www.nationalatlas.gov/
USGS Coastal and Marine Geology Program — http://coastalmap.marine.usgs.gov/
USGS Emergency Operations Portal — http://eoportal.cr.usgs.gov/EO/gis.php
USGS Topo Quad Download — http://store.usgs.gov/b2c_usgs/usgs/maplocator/(xcm=r3standardpitrex_prd&layout=6_1_61_75&uiarea=2&ctype=areaDetails&carea=%24ROOT)/.do

Source: Courtesy of Woodard & Curran, Inc.
are the United States Geological Survey (USGS) and the GIS office for the state in which the
site resides. These two data sources are explained further in the following sections using
Massachusetts as an example state.
In keeping with this mission statement, the USGS publishes hundreds of excellent technical
reports, maps, and data sets each year to assist the professional, academic, and regulatory
communities. Unfortunately, many hydrogeologists fail to appreciate the wealth of infor-
mation produced by the USGS. This is especially concerning as USGS data are intended to
help professional hydrogeologists in their work, and USGS data are easily accessed online.
A standard step in every hydrogeological investigation should be a review of available
USGS information for the region containing the site of interest. In some instances, stream
flow, water level, precipitation, or aquifer testing data may be available at or immediately
adjacent to the hydrogeologist’s site. At a minimum, it is likely that data from the
hydrologic atlas or water resources assessment reports will contain information relevant to the
site investigation.
Figure 7.1 presents a screen shot of the main data access page at the USGS website.
Clicking the “Find a USGS Publication” link will lead to the USGS Publications Warehouse
(http://pubs.er.usgs.gov/), which is an excellent search utility for finding current reports
and historic reports that have been scanned to electronic format. An advanced search may
be performed of the publications warehouse by keyword, author, title, or USGS publica-
tion number. Figure 7.2 presents an example results screen when searching for Edwards
Aquifer via keyword. Note that 115 publications were found. USGS publications often have
links to data in electronic format, so that the hydrogeologist may incorporate the data in
figures, geodatabases, groundwater models, and contour maps. An example exercise using
georeferencing to incorporate groundwater contours from a USGS report into a site map is
presented in Section 7.3.
FIGURE 7.1
Data available for download or order at the USGS Web page, http://www.usgs.gov/pubprod/. (From United
States Geological Survey (USGS), USGS Publications Warehouse, 2011. Available at http://pubs.er.usgs.gov/,
accessed June 5, 2011.)
FIGURE 7.2
Search results screen for “Edwards Aquifer.” Note that the online version of the highlighted report can be
downloaded by clicking the View Index Page link.
FIGURE 7.3
Data available for download from MassGIS at http://www.mass.gov/mgis/laylist.htm. (From MassGIS, Office of
Geographic Information, 2011. Available at http://www.mass.gov/mgis/, accessed May 20, 2011.)
and mapping will be much more inefficient as two separate databases need to be queried
before obtaining a consolidated data set.
While it may seem as if incorporating legacy data into an existing database is straightfor-
ward, in reality, the logistics of making this happen can be quite complex. Databases from
prior consultants may have completely different relationships and field names. Additionally,
prior consultants may have used a proprietary (enterprise) database that the hydrogeologist
is unable to access. Legacy data may be in a different coordinate system that is not labeled
clearly in the database. Spatial data may be stored in AutoCAD drawings only without real-
world coordinates. The list of potential issues can be daunting. While some degree of manual
manipulation of spreadsheets and/or spatial data will invariably be required, there are several
steps the hydrogeologist can take to incorporate historic data accurately and efficiently:
• Maintain a constant database structure for all projects for both tabular and spatial data
• Use modules/macros to modify historic tables and make them compatible with
his or her database
• Perform quality control/quality assurance (QA/QC) checks on both the data in the
historical database (against raw laboratory reports, for example) and on the upload
of historical data into the current database (to make sure all data were transferred,
for example)
The investment in consolidation of historical data will be greatly beneficial to future spa-
tial data analysis exercises for the project in question.
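The third bullet, verifying that every historical record survived the upload, lends itself to a simple automated check. The sketch below assumes each record carries a unique composite key; the field names (location_id, sample_date, analyte) are hypothetical examples rather than a prescribed schema:

```python
def find_missing_records(legacy_rows, uploaded_rows):
    """Return keys present in the legacy database but absent after upload.

    Each row is a dict; the key fields below are hypothetical examples
    of a composite key that uniquely identifies a sample result.
    """
    def key(row):
        return (row["location_id"], row["sample_date"], row["analyte"])

    uploaded_keys = {key(r) for r in uploaded_rows}
    return [key(r) for r in legacy_rows if key(r) not in uploaded_keys]

legacy = [
    {"location_id": "MW-3", "sample_date": "2001-05-01", "analyte": "PCB"},
    {"location_id": "MW-10", "sample_date": "2001-05-01", "analyte": "PCB"},
]
uploaded = legacy[:1]  # simulate one record lost during transfer
missing = find_missing_records(legacy, uploaded)
```

A check of this kind, run after every bulk upload, catches silent record loss far earlier than a manual spot review.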
7.3 Georeferencing
When inheriting a project, the hydrogeologist may obtain AutoCAD data that are in relative
rather than real-world coordinates. In other words, coordinates of the AutoCAD drawing are
based solely on the relative position of the items in the drawings. In this manner, the drawing
is to scale but does not have coordinates pertaining to any geographic or projected coordinate
system. The hydrogeologist may also receive very useful figures in hard-copy format or may
download geologic reports from the USGS as Portable Document Format (.pdf) files. USGS
reports may contain critical data, such as surficial geology or groundwater contours, that are
directly applicable to the site in question. The question thus arises: “How does the hydrogeolo-
gist link these AutoCAD and hard-copy/.pdf files to spatially referenced data in the real world
so that comprehensive data querying, analysis, and mapping can be performed?”
The process of defining the real-world location of unreferenced AutoCAD or raster data
is termed georeferencing. Georeferencing is easily performed in ArcGIS as described in
the following two sections.
transformation in ArcGIS. An example application illustrating use of the world file in the
georeferencing of AutoCAD data is presented below.
A hydrogeologist has inherited a legacy site from a previous consultant. All the spatial
data from the prior consultant are located in an AutoCAD drawing with relative coordi-
nates. Figure 7.4 presents the AutoCAD data, which consist of site buildings, roads, and 15
monitoring wells. The relative coordinates of two wells (MW-3 and MW-10) are also shown
in Figure 7.4. To create a spatial reference for the entire CAD data set, all that is needed is a pair of surveyed, real-world coordinates for these two wells. After obtaining these
real-world coordinates, a world file can be created as depicted in the following:
4.037,3.674 491413.8,177708.8
7.750,7.681 491737.0,178058.5
FIGURE 7.4
AutoCAD data in relative coordinates as viewed in ArcGIS. Esri® ArcGIS ArcMap graphical user interface.
Copyright © Esri. All rights reserved.
The top row contains the relative and real-world coordinates (X, Y) for MW-3, on the left
and right, respectively, separated by a space. The bottom row contains the relative and
real-world coordinates for MW-10 in the same format.
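The two-point transformation encoded by the world file can be reproduced directly: two control-point pairs determine a similarity transform (uniform scale, rotation, and translation), which is conveniently expressed with complex arithmetic. The sketch below uses the MW-3 and MW-10 coordinates from the example; it illustrates the mathematics rather than Esri's internal implementation:

```python
def similarity_from_two_points(src1, dst1, src2, dst2):
    """Derive the mapping w = a*z + b (uniform scale, rotation, and
    translation) from two control-point pairs, treating each (x, y)
    coordinate as a complex number z = x + yi."""
    z1, z2 = complex(*src1), complex(*src2)
    w1, w2 = complex(*dst1), complex(*dst2)
    a = (w2 - w1) / (z2 - z1)   # combined scale and rotation
    b = w1 - a * z1             # translation
    def transform(xy):
        w = a * complex(*xy) + b
        return (w.real, w.imag)
    return transform

# Control points taken from the world file above (MW-3 and MW-10)
to_world = similarity_from_two_points(
    (4.037, 3.674), (491413.8, 177708.8),
    (7.750, 7.681), (491737.0, 178058.5),
)
```

Applying `to_world` to any relative coordinate in the drawing (a building corner, another well) yields its real-world position, which is exactly what ArcGIS does when the world file is loaded.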
The first step of the georeferencing process in ArcMap is to import the AutoCAD data set,
named “Basemap_Drawing Scale.dxf” for this example. All relevant data in this drawing are
stored in the polyline layer. After opening the georeferencing toolbar, the user can select this
polyline layer as the georeferencing target. The relative and real-world coordinate transfor-
mation information is specified in the data-link table, which is where the existing world file
is loaded. Figure 7.5 is a screen shot depicting the georeferencing toolbar, the data-link table,
and the load option where the world file for this example is selected (“Basemap_worldfile
.wld”). Figure 7.6 presents the updated link table with the AutoCAD drawing in the back-
ground. MW-3 and MW-10 are the control points for this example, which are symbolized in
the background in Figure 7.6 with a green cross symbol. The blue lines in Figure 7.6 connect
the control points to their respective real-world location. Note also that the control point
relative coordinates and real-world X, Y coordinates can be entered manually or by select-
ing locations on the map (which will be demonstrated in Section 7.5.2). After accepting the
updated link table, the CAD data layer is automatically adjusted to the real-world coordinate
system based on the survey data of the two points. The georeferenced data set is presented
in Figure 7.7 with labeled coordinates demonstrating a successful transformation.
FIGURE 7.5
After selecting the layer to be georeferenced (“Basemap_Drawing Scale.dxf”), the world file can be imported to
the link table. Esri® ArcGIS ArcMap graphical user interface. Copyright © Esri. All rights reserved.
Site Investigation 375
FIGURE 7.6
Screen shot of loaded world file coordinates and the resulting link points (shown in the table). Note that the
blue lines connecting the initial map locations with the real-world locations cannot be seen in their entirety
because of the excessive distance in between. Esri® ArcGIS ArcMap graphical user interface. Copyright © Esri.
All rights reserved.
FIGURE 7.7
Screen shot of georeferenced data in real-world coordinates. Note that the coordinates are in unknown units
because the data have not yet been assigned the correct projection. This is a simple step in ArcGIS using the
Define Projection tool. Esri® ArcGIS ArcMap graphical user interface. Copyright © Esri. All rights reserved.
tiometric surface in and around the fault zone of interest. One map, presented as Figure 7.8,
is extracted from the .pdf report, saved as raster format (.jpg), and imported into ArcMap.
There are two requirements in order to georeference this figure, shown in the following list:
FIGURE 7.8
Figure of potentiometric contours and fault lines from Woolfenden and Koczot (2001). This example was selected
because of its classic demonstration of faults as groundwater barriers. The tectonic valley groundwater system
in this area has been highly studied, and an example case study is also presented in the work of Fetter (2001).
• The ArcMap document must contain spatially referenced data in the study loca-
tion in real-world coordinates.
• Both the spatially referenced data and the scanned raster data (.jpg) must have
common features that can serve as control points in the georeferencing.
In order to satisfy the first requirement, the World Street Map layer from ArcGIS Online
is added to the ArcMap document in the study area using the Add Data command as
shown in Figure 7.9. The next step is identifying viable features to serve as control points.
Common features used in raster georeferencing are road or stream intersections, building
or street corners, the mouth of a stream, and physical land features such as rock outcrops
or land jetties (Esri 2010b). To reiterate a common theme of this book, the appropriateness
of features for georeferencing depends on the scale of the investigation in question. The
mouth of a stream may be well suited for a basin-scale data analysis, but for finer resolu-
tion applications, features such as building corners or road intersections will prove more
accurate. Figure 7.8 clearly shows the intersection points of major roads, which can be
linked to the real world through the street map image. Therefore, roadway intersections
are selected as control points for this application.
For most applications, a minimum of three control points is typically required for the
data transformation. In general, it is recommended to distribute these control points across
the raster to achieve the most accurate shift (Esri 2010b). For this example application, four
control points are specified by first selecting a road intersection point on the raster and
then selecting the corresponding road intersection on the street map. The control points
are shown in Figure 7.10 (top and bottom) for the raster and street map, respectively. Note
that the resolution of the raster is degraded when displayed in ArcMap. The image quality
is further impaired by the georeferencing process if significant warping occurs. The link
table for these manually specified points is presented in Figure 7.11, which also shows the
residual transformation error caused by each point. This error can conceptually be interpreted as the difference between where the point was specified and where it ends up based on the overall spatial transformation (Esri 2010b). Ideally, the raster and the real
world line up perfectly, and this error is zero. Conversely, the raster data may have an inac-
curate scale or misrepresent spatial features, leading to a highly warped transformation.
In this case, the first three control points line up perfectly, while the fourth (at the top of the
map where the road intersection is less clearly defined on the raster) is slightly distorted.
Once satisfied with the temporary transformation, the georeferencing of the raster can be made permanent using the Update Georeferencing command.
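With more than the minimum number of control points, the transformation is overdetermined, and the link-table residuals conceptually come from a least-squares fit. The sketch below fits an affine transform with NumPy and reports each point's residual distance and the root-mean-square (RMS) error; it illustrates the idea rather than Esri's actual solver, and the coordinates are invented:

```python
import numpy as np

def affine_fit_residuals(src, dst):
    """Fit dst ~ [x, y, 1] @ C by least squares and report each
    control point's residual distance plus the RMS error."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    design = np.hstack([src, np.ones((len(src), 1))])
    coeffs, *_ = np.linalg.lstsq(design, dst, rcond=None)
    residuals = np.linalg.norm(dst - design @ coeffs, axis=1)
    rms = float(np.sqrt(np.mean(residuals ** 2)))
    return residuals, rms

# Three consistent control points plus one deliberately shifted point
src = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
dst = [(500.0, 200.0), (520.0, 200.0), (500.0, 220.0), (520.3, 220.4)]
residuals, rms = affine_fit_residuals(src, dst)
```

Note that a least-squares fit distributes a single point's misfit across all control points, which is why a distorted point in the link table can raise the reported residuals of its neighbors as well.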
FIGURE 7.9
A World Street map from ArcGIS Online is added as a basemap to the ArcMap document for use in raster geo-
referencing. Esri® ArcGIS Online graphical user interface. Copyright © Esri. All rights reserved.
FIGURE 7.10
Georeferencing control points are symbolized by the four red crosshair symbols that link the USGS map (top)
to the street map (bottom) in real-world coordinates. World Street Map sources: Esri, DeLorme, NAVTEQ,
TomTom, USGS, Intermap, iPC, NRCAN, Esri Japan, METI, Esri China (Hong Kong), Esri (Thailand).
FIGURE 7.11
Link table for the four control points depicted in Figure 7.10. Note that these points can be exported as a world
file, which can be used to automatically georeference rasters with the same extents as the USGS map. Esri®
ArcGIS ArcMap graphical user interface. Copyright © Esri. All rights reserved.
FIGURE 7.12
Digitized potentiometric surface contours (red lines) and faults (white lines) from the USGS map. Note that
the potentiometric surface can be more than 150 ft higher on the right side of the dividing fault than on the left
side. World Imagery Source: Esri®, i-cubed, USDA, USGS, AEX, GeoEye, Getmapping, Aerogrid, IGN, IGP, and
the GIS User Community.
FIGURE 7.13
Colored, filled contours clearly show the influence of the fault as an aquifer boundary. The higher-elevation
basin to the right of the fault receives recharge from the mountain wash that cannot be transmitted across
the fault. This has important implications with respect to water-supply wells in the isolated basin. ArcGlobe
Imagery Source: Esri® and i-cubed.
FIGURE 7.14
When interpolating across the fault, the potentiometric surface indicates that water flows uphill and has non-
sensical irregularities. In this case, the conceptual model for fault influence is critical in developing accurate
elevation contours. ArcGlobe Imagery Source: Esri® and i-cubed.
The above exercise demonstrates the power and efficiency of georeferencing tools in
ArcGIS. Hard-copy maps can be easily digitized and reformulated for quantitative spatial
analysis, greatly enhancing CSM development where limited site-specific data may exist.
“A basemap is used for locational reference and provides a framework on which users
overlay or mash up their operational layers, perform tasks, and visualize geographic
information. The basemap serves as a foundation for all subsequent operations and
mapping. Basemaps provide the context and a framework for working with information
geographically.”
More simply, the basemap is the underlying set of data layers that appears on most map-
ping figures produced by the hydrogeologist. Layers often included in a basemap are
imagery, topographic contours, street networks, property parcels, buildings, and hydro-
logic features such as rivers, lakes, and streams. As described in Chapter 3, preconstructed
basemaps are immediately available for display within ArcGIS from ArcGIS Online. A well-
conceived basemap will clearly establish site-specific and regional features of importance
and enhance the overall visual appeal of the map. It is important to remain flexible when
creating a basemap as different clients and project stakeholders have different preferences
in terms of how maps are presented. For example, some clients may think that aerial pho-
tographs enhance the utility of maps, and others prefer simple CAD-style diagrams with
lines and polygons representing basic site features. Example basemap layers are presented
in the figures included throughout the subsequent sections of this chapter.
It is often quite difficult to balance the above decision factors and devise a sampling plan that fully satisfies the hydrogeologist (consultant), the client (project financier), and the regulator; compromises are therefore necessary. Almost every project in
hydrogeology will involve a trade-off between cost and certainty (i.e., risk) with the client
pushing for lower costs and the regulator pushing for greater certainty. It is the hydroge-
ologist’s job to balance these demands and use technical expertise (e.g., quantitative data
analysis) to minimize both cost and risk to the extent practicable.
Faced with the above competing objectives, the question of how many samples are
needed becomes very challenging to answer. The most widely used method of developing
appropriate, defensible sampling plans that balance both cost and risk is the seven-step
U.S. EPA Data Quality Objective (DQO) process. The DQO process is an example of sys-
tematic planning, which is defined by the U.S. EPA (2006) as follows:
Execution of the DQO process involves completion of the following seven steps, the first
six of which are entirely conceptual in nature. A summary of the DQO steps, adapted from
the U.S. EPA (2006) follows:
on the expected variability of contaminants at the site. Data variability is typically estimated
from historical site-specific data, data from comparable sites, or simply professional judgment
(Matzke et al. 2010). The most common hypothesis test used in VSP for a contaminant of
interest is comparing the mean concentration of site data to the fixed, regulatory action level,
or threshold. It is recommended to set the null hypothesis to the mean concentration being
greater than the threshold value (i.e., the site is contaminated). In this manner, the burden of
proof is placed on rejecting the null hypothesis and concluding that the site is not contaminated.
The three primary inputs to VSP for a contaminant of interest are the expected standard
deviation of the data, the regulatory action level, and decision error thresholds. Decision
error metrics that require specification in VSP are described below, assuming the null
hypothesis of the site being dirty:
• Alpha (α): Alpha is the type I error for the hypothesis tests and is the probability
of falsely rejecting the null hypothesis (concluding the mean to be lower than the
threshold value when it is, in fact, greater). In the authors’ experience, the range
of acceptable type I error is typically 1%–10% (equivalent to a confidence level of
90%–99%). However, with substantive technical justification, it is possible to dem-
onstrate that data quality objectives are met at higher error levels within reason.
For example, everyone would agree that a false rejection error of 50% is unaccept-
able because type I error represents a risk to public health.
• Beta (β): Beta is the type II error for the hypothesis tests and is the probability of
falsely accepting the null hypothesis (concluding the mean to be greater than the
threshold value when it is, in fact, lower). In general, Beta is higher than Alpha
as there are no public health implications for a false positive. Costs for unneces-
sary assessment/cleanup are the primary consequences of a high type II error.
Potential consequences of decision error are summarized in Table 7.2.
• Width of the gray region: The width of the gray region is the range of true mean
concentrations below the threshold value within which it is considered acceptable
to falsely conclude the site to be dirty and conduct unnecessary cleanup. In other
words, it is generally accepted that a high probability of type II error exists when
the true mean concentration is 9.9 and the threshold value is 10.0. The prescribed
type II error is achieved exactly at the lower boundary of the gray region. Type II
error decreases for true mean concentrations below the gray region and increases
significantly for true mean concentrations within the gray region, where error
probabilities are typically 20%–95% (Matzke et al. 2010). The sample size is very sensitive to the width of the gray region, indicating that a very large number of samples will be required to determine that a true mean of 9.9 is less than 10.0 with meaningful statistical significance.
TABLE 7.2
Potential Consequences of Type I and Type II Decision Error
False negative (type I error, α): Mistakenly reject the null hypothesis (i.e., erroneously conclude that site contamination does not require remedial action).
Impact: The site would not be remediated when it should be remediated.
Potential consequences: Contamination continues to present a risk to human health and the environment.
Relative severity: High.
False positive (type II error, β): Mistakenly fail to reject the null hypothesis (i.e., erroneously conclude that site contamination requires remedial action).
Impact: The site would be remediated unnecessarily.
Potential consequences: Unnecessary costs for further characterization and remediation are incurred.
Relative severity: Low.
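The interplay of α, β, the expected standard deviation σ, and the gray-region width Δ can be made concrete with the commonly cited normal-theory sample-size approximation for a one-sample mean test, n = (z₁₋α + z₁₋β)²σ²/Δ² + 0.5·z₁₋α². The sketch below is this textbook approximation, not VSP's exact calculation; as noted later, nonparametric tests require more samples than this normal-theory minimum:

```python
from math import ceil
from statistics import NormalDist

def one_sample_size(sigma, alpha, beta, delta):
    """Approximate n for testing a mean against a fixed threshold,
    assuming normally distributed data:
        n = (z_{1-a} + z_{1-b})^2 * sigma^2 / delta^2 + 0.5 * z_{1-a}^2
    """
    z = NormalDist().inv_cdf
    za, zb = z(1 - alpha), z(1 - beta)
    return ceil((za + zb) ** 2 * sigma ** 2 / delta ** 2 + 0.5 * za ** 2)

# Hypothetical inputs: sigma = 4.84 mg/kg, 2.5 mg/kg gray region
n_base = one_sample_size(4.84, alpha=0.10, beta=0.20, delta=2.5)
# Tightened design: 95% confidence and a 1 mg/kg gray region
n_tight = one_sample_size(4.84, alpha=0.05, beta=0.20, delta=1.0)
```

Because Δ appears squared in the denominator, narrowing the gray region inflates n far faster than tightening α or β, and the site area never enters the formula at all.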
VSP offers both parametric (i.e., distributional) and nonparametric hypothesis testing
options. Together with the need to assess data variability, this indicates that, as with krig-
ing, data exploration should be a mandatory precursor to running VSP. An example appli-
cation of VSP to determine the required number of soil samples at a hazardous-waste site
is presented in the following section. This represents the most common use of VSP in
professional practice.
This example application involves a hypothetical legacy site with polychlorinated biphenyl
(PCB) contamination in surface soils (0–2 ft below ground surface). Based on available his-
torical data, a comprehensive RI must be submitted to the U.S. EPA with a statistically based
sampling plan to fully delineate the nature and extent of contamination. To determine the
number of samples needed to complete the investigation, the site boundary, encompassing
40 acres, and the 16 historical sampling locations are imported into VSP. The site boundary
(imported as a shapefile) and historical sampling locations (imported as a text file with the
analytical results) are presented in Figure 7.15 as seen in VSP’s visual processor.
A regulatory standard of 10 mg/kg applies to the site, above which remediation is
required. Therefore, the appropriate statistical hypothesis test in VSP is to compare the
average PCB concentration to the fixed threshold value of 10 mg/kg. However, before
executing the hypothesis test, historical data exploration must be conducted to select
the appropriate form of the hypothesis test. Figure 7.16 presents a screen shot of the
FIGURE 7.15
Yellow polygon representing the site boundary with blue crosshair symbols representing the historic sampling
locations.
FIGURE 7.16
Summary statistics for the historic sampling data. Note that because of very low detection limits, a zero value
is substituted for nondetect results.
data analysis tab in VSP, which summarizes the statistics of the historical data. The
minimum result is zero (which was substituted for nondetect results because of a low
reporting limit), and the maximum result is 17 mg/kg, which exceeds the regulatory
criteria. The standard deviation, which is the critical parameter for the hypothesis test,
is 4.84 mg/kg.
Figure 7.17 presents a screen shot of statistical tests performed by VSP to evaluate
whether or not the historical data follow a normal distribution and to calculate the 95%
upper confidence limit (UCL) of the mean. As shown by Figure 7.17, the data do not follow
a normal distribution; therefore, a nonparametric approach should be employed to deter-
mine the sample size. It is very common for data at hazardous-waste sites to have skewed
distributions that are better represented by a lognormal distribution, rather than a pure
normal distribution. VSP does not have a means of transforming and back-transforming
the data to perform hypothesis tests for lognormal distributions. The use of data trans-
formations in hypothesis testing is an area in need of further exploration by the regula-
tory and research communities. Fundamentally, VSP treats skewed data as being from
two separate populations—one is representative of background conditions, and the other
is representative of contaminated soil. This is a different concept than treating the data
as belonging to one comprehensive lognormal distribution, which is an assumption that
greatly improves the accuracy of kriging and other analytical applications. Because of this
inability to transform the data, nonparametric methods will be used in most cases, and the
FIGURE 7.17
VSP uses the Shapiro–Wilk test to evaluate if the historic sampling data follow a normal distribution. Note that
the nonparametric 95% UCL is significantly higher than that of a normally distributed data set, indicating that
there is greater uncertainty in the data characterization.
sample size estimate may be overly conservative. In VSP, nonparametric hypothesis tests
will always require more samples than those that assume a normal distribution.
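The effect of a log transformation on skewed concentration data is easy to demonstrate. The sketch below uses a small hypothetical data set and the conventional adjusted sample-skewness statistic (it is a simple skew check, not the Shapiro–Wilk test that VSP applies):

```python
from math import log, sqrt

def skewness(values):
    """Adjusted Fisher-Pearson sample skewness coefficient."""
    n = len(values)
    mean = sum(values) / n
    m2 = sum((v - mean) ** 2 for v in values) / n
    m3 = sum((v - mean) ** 3 for v in values) / n
    g1 = m3 / m2 ** 1.5
    return g1 * sqrt(n * (n - 1)) / (n - 2)

# Hypothetical detected concentrations (mg/kg); note the single high value
data = [0.5, 0.8, 1.2, 1.5, 2.0, 2.2, 3.1, 4.0, 6.5, 17.0]
raw_skew = skewness(data)
# Log-transforming pulls in the right tail; nondetect zeros would need
# separate handling because log(0) is undefined
log_skew = skewness([log(v) for v in data])
```

The strongly positive raw skewness collapses toward zero after the transformation, which is the behavior that motivates treating such data as lognormal in kriging and other analyses.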
As stated above, a nonparametric test is required for the PCB data. Therefore, the one-sample, nonparametric Multi-Agency Radiation Survey and Site Investigation Manual (MARSSIM) Sign test was selected as the appropriate statistical method. The MARSSIM
Sign test will develop conservative estimates of the number of samples required as the
assumption is made that analytical data are asymmetric and not normally distributed.
This is an accurate assumption for the PCB data, which have high variability, a skewed dis-
tribution, and a mean considerably different from the median (which indicates asymme-
try). Note that the MARSSIM Sign test is a true test for the median and an approximate test
for the mean when the asymmetric assumption is used (Matzke et al. 2010). After selection
of the appropriate statistical test, decision error thresholds must be specified. Preliminary
discussion with the government regulator of the site indicates that type I errors of up to
10% are acceptable (a 90% confidence level). The regulator was not concerned about type II
error and did not specify criteria.
Figure 7.18 presents a screen shot of the VSP sample size determination for the above
example, using a type I error of 10%, a type II error of 20%, and a width of the gray region
equal to 2.5 mg/kg. In summary, VSP recommends the collection of 35 samples to charac-
terize the site with the desired levels of certainty. Note that the total number of samples
presented at the bottom of the figure includes a 20% overage that is recommended by
MARSSIM to account for additional uncertainty—this applies another level of conserva-
tism to the design beyond the nonparametric, nonsymmetric assumption. Analytical vari-
ability can also be added to the hypothesis test, which would increase the sample size
FIGURE 7.18
Sample size recommendation when comparing the true average concentration to a fixed threshold using the
nonparametric, nonsymmetric MARSSIM hypothesis test with an expected standard deviation calculated from
historic data.
further. In this case, analytical variability is not yet quantified; however, consideration
should be given to the collection of duplicate samples for QA/QC purposes.
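The Sign test sample size in Figure 7.18 can be reproduced from the published MARSSIM formulation, in which the expected "sign probability" is Φ(Δ/σ) and the recommended 20% overage is added at the end. The sketch below assumes this is the same formulation VSP applies; with the example inputs (σ = 4.84 mg/kg, α = 0.10, β = 0.20, Δ = 2.5 mg/kg), it returns the 35 samples discussed above:

```python
from math import ceil
from statistics import NormalDist

def marssim_sign_n(sigma, alpha, beta, delta, overage=0.20):
    """MARSSIM Sign test sample size with the recommended overage.

    sign_p is the probability that a measurement falls on the clean
    side of the action level when the true mean sits at the lower
    bound of the gray region, delta below the action level.
    """
    nd = NormalDist()
    sign_p = nd.cdf(delta / sigma)
    z_sum = nd.inv_cdf(1 - alpha) + nd.inv_cdf(1 - beta)
    n = z_sum ** 2 / (4 * (sign_p - 0.5) ** 2)
    return ceil(n * (1 + overage))

n_total = marssim_sign_n(4.84, alpha=0.10, beta=0.20, delta=2.5)
```

Crediting the 16 historical samples against this total leaves 19 new locations, consistent with the sample placement discussed next.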
When factoring in the 16 historical samples collected to date, this means that 19 addi-
tional (new) samples are required. VSP can automatically locate these additional samples
based on random placement, adaptive fill placement (filling in the largest unsampled areas
sequentially), or grid placement (Matzke et al. 2010). For this example, random sample
placement was used, resulting in the sample distribution presented in Figure 7.19. One
concept that is somewhat counterintuitive is that this sample size recommendation is com-
pletely independent of the size of the site. While this example site is 40 acres, the same set
of historical data and the same hypothesis test parameters will yield a recommendation of
35 samples even if the site were 1 acre or 400 acres in size. In theory, the overall variabil-
ity of site data should be a function of the homogeneity of the site. For example, using an
estimated standard deviation, VSP calculates that 40 samples are required to characterize
a 100-acre homogeneous site. While it may seem counterintuitive, VSP will determine that
the same number of samples would be required for a small subset of the same site if ana-
lyzed independently. This is because the exact same assumed standard deviation is used
for both analyses. However, one would expect that a larger site would inherently be less
FIGURE 7.19
The 19 new sample locations, represented by red diamonds, are randomly placed across the site by VSP.
homogeneous than a small site, therefore having a larger standard deviation and requir-
ing more samples for characterization. As illustrated in an example problem later in this
section, the user must rely on professional judgment to ensure that the assumed standard
deviation is representative of the entire area of interest.
According to statistical theory, all samples should be randomly placed. However, this is
often fundamentally at odds with the study objectives, which may require focused (biased)
sampling to delineate contamination in soil and groundwater. One hybrid approach used
frequently in professional practice is stratified random sampling, which locates samples
randomly within grid cells that have been deterministically placed according to the objec-
tives of the sampling.
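A minimal sketch of stratified random sampling over a rectangular area follows; a real implementation would also clip cells to the irregular site boundary polygon, and the grid dimensions here are arbitrary:

```python
import random

def stratified_random_samples(xmin, ymin, xmax, ymax, nx, ny, seed=0):
    """Place one random sample location inside each cell of an
    nx-by-ny grid laid over the bounding rectangle."""
    rng = random.Random(seed)  # fixed seed for a reproducible design
    dx, dy = (xmax - xmin) / nx, (ymax - ymin) / ny
    points = []
    for i in range(nx):
        for j in range(ny):
            # A uniform random offset within cell (i, j)
            points.append((xmin + (i + rng.random()) * dx,
                           ymin + (j + rng.random()) * dy))
    return points

# A 4 x 5 grid over a hypothetical 400 m x 500 m site
locations = stratified_random_samples(0.0, 0.0, 400.0, 500.0, 4, 5)
```

Relative to pure random placement, this guarantees spatial coverage (no large unsampled gaps) while preserving randomness within each cell.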
A recommended component of a statistical sampling design in VSP is a sensitivity anal-
ysis that determines how changes in error thresholds affect the sample size recommenda-
tion. Figure 7.20 presents the VSP hypothesis test calculation for the PCB example when
the confidence is increased to 95% and the width of the gray region is decreased to 1 mg/
kg. As shown at the bottom of the figure, these relatively small changes in error param-
eters lead to an increase in sample size by more than one order of magnitude. The result-
ing sample placement map presented in Figure 7.21 demonstrates the absurdity of this
design relative to the scope of the historical sampling and that of Figure 7.19. To reiterate,
the width of the gray region is a very sensitive parameter, and the user is cautioned against
setting expectations with regulators of achieving a narrow gray region. In general, VSP
users must document sample-size sensitivity and establish the correct balance between
investigation cost and data collection in order to efficiently meet project objectives.
In addition to the use of overly stringent error thresholds, one other situation that often
results in exorbitant sampling recommendations is the presence of hot spots in the his-
torical data set. Including hot spot samples with background samples in a VSP analysis
can result in standard deviations that significantly exceed the regulatory threshold limit,
FIGURE 7.20
VSP calculates that 384 total samples are required to meet the more stringent error thresholds.
leading to an absurdly high sampling requirement (e.g., >10,000 samples). This problem
occurs so frequently in professional practice that many consultants completely abandon
VSP and go to great lengths to avoid its use. For example, two samples for the above PCB
problem are converted to hot spot samples by increasing the measured concentrations
to 900 mg/kg as shown in Figure 7.22. The summary statistics evaluation, depicted in
Figure 7.23, shows that the resulting standard deviation is 244 mg/kg, which means that
it will be almost impossible to state with confidence that the data are below 10 mg/kg. As
a result, the MARSSIM analysis presented in Figure 7.24 recommends collection of more
than 10,000 samples to characterize the site—a nonsensical approach.
To circumvent this problem, the hydrogeologist must be aware of hot spots and segre-
gate data populations that have significantly different statistical properties. For the above
example, the hot spot should be treated independently from the rest of the site, which
exhibits low levels of PCB impacts. Removing the two hot spot data points and rerunning
the VSP analysis will yield a much more reasonable number of samples that can be used
to evaluate conditions outside of the hot spot, where the extent of contamination is uncer-
tain. Within the hot spot, there is known contamination, and the question becomes how to
deterministically bound the hot spot, for example, with a grid system. VSP also supports a
hot spot identification tool, and the reader is referred to Matzke et al. (2010) for additional
information. To summarize, highly contaminated samples from a relatively small area
FIGURE 7.21
Additional samples recommended by VSP are presented as magenta squares. This chicken-pox extent of sam-
pling is unreasonable and generally cost-prohibitive. Each sample has requisite work-plan costs, data-collection
costs in the field, laboratory-analytical costs, data-validation costs, data-presentation costs, and regulatory-
review and discussion costs. Spatial data interpolation is a much better alternative as described in Chapter 4.
FIGURE 7.22
Hot spot at 900 mg/kg is represented by the two large, red circles.
FIGURE 7.23
Summary statistics for the hot spot scenario, showing a significant increase in the variability of the data.
FIGURE 7.24
Because of a standard deviation that is significantly higher than the action level, VSP recommends that more
than 10,000 samples are needed at the site.
of the site should be removed from the data set used to determine how many samples
are needed to evaluate the broader area of the site where the extent of contamination is
unknown and remediation may not be necessary.
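The statistics behind this behavior are easy to reproduce. The sketch below implements a sample-size calculation for a one-sample mean test that is similar in form to what VSP performs; all numbers are illustrative, not taken from the actual PCB data set:

```python
import math
from statistics import NormalDist

def required_samples(stdev, delta, alpha=0.05, beta=0.10):
    """Approximate sample count for a one-sample mean test, similar in
    form to the calculation VSP performs.

    stdev -- estimated standard deviation of the data population
    delta -- width of the "gray region" (how far below the action level
             the true mean must be detected)
    alpha, beta -- false-rejection and false-acceptance error rates
    """
    z_a = NormalDist().inv_cdf(1 - alpha)
    z_b = NormalDist().inv_cdf(1 - beta)
    n = (z_a + z_b) ** 2 * stdev ** 2 / delta ** 2 + 0.5 * z_a ** 2
    return math.ceil(n)

# Illustrative values: a hot spot that inflates the standard deviation
# to ~244 mg/kg drives n into the tens of thousands, while the
# segregated low-level population needs only a handful of samples.
print(required_samples(stdev=244, delta=5))  # tens of thousands
print(required_samples(stdev=2, delta=5))    # a handful
```

Because the standard deviation enters the formula squared, leaving even two hot spot samples in the data set can raise the required sample count by several orders of magnitude.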
There are many drilling methods available to the modern-day hydrogeologist. As with
most applications, the selection of the appropriate drilling method depends on the data
quality objectives of the site. The questions that need to be answered when evaluating
site-specific drilling requirements include the following:
In general, the least expensive drilling method that is also widely used in professional
practice is the Geoprobe direct push system. A Geoprobe machine uses hydraulic power,
static force, and percussion to directly push sample cores and casing into the subsurface
(Geoprobe Systems 2011a). The primary advantages of this system are that decent-quality
soil cores can be rapidly obtained in overburden formations, and shallow, small-diameter
monitoring wells can be rapidly installed without generating IDW. A Geoprobe machine
is also small and track-mounted and, therefore, can access remote locations with uneven
topography and limited clearance (see photo in Figure 7.25). Example soil cores of
overburden materials obtained from a Geoprobe system are presented in Figure 7.26. The
quality of the sample depends on the formation in question; in some cases, the Geoprobe
sampler can collect intact cores that allow continuous classification and sampling. Figure 7.27
shows a close-up photo of a Geoprobe soil core where dense nonaqueous phase
liquids (DNAPLs) were observed and sampled. A new trend in Geoprobe technology is
the use of in situ tools for down-hole chemical and physical parameter analysis, such as
soil conductivity and membrane interface probes, which can help evaluate soil type and
delineate groundwater contaminant locations in real time.
The obvious limitations of a direct-push system are that most sites have a practical
depth limitation of 50 ft or less, based on soil density, and bedrock drilling is not feasible
FIGURE 7.25
Photo of the National Oceanic and Atmospheric Administration’s Geoprobe unit driving sample cores into
beach sediments. (From National Oceanic and Atmospheric Administration (NOAA), Wyckoff Co./Eagle
Harbor Beach Investigation for Creosote, 2006. Available at http://response.restoration.noaa.gov/, accessed
June 15, 2011. With permission.)
FIGURE 7.26
Photo of continuous Geoprobe sample cores of overburden material. (Courtesy of Woodard & Curran, Inc.)
FIGURE 7.27
Close-up photo of Geoprobe sample core showing accumulation of DNAPLs in a silty-sand layer immediately
above a dense, continuous silt. Sample was collected approximately 20 ft below the water table. (Courtesy of
Woodard & Curran, Inc.)
(Geoprobe Systems 2011b). Furthermore, only small-diameter wells can be installed, gen-
erally of 2 in. or less. Also, penetration tests are not readily conducted with a Geoprobe,
and sample collection can be time consuming and of poor quality where soil density is
high.
Where large-diameter, deep drilling is required in both unconsolidated and consoli-
dated formations, full-scale drilling rigs are required. Drilling technologies typically
defined as conventional include cable-tool, rock-coring, hollow-stem auger, and drive-and-
wash methods. The latter two examples can be fitted with a down-hole air hammer to drill
in bedrock formations. Examples of more modern, advanced (and generally more expen-
sive) technologies are air-rotary (or dual-rotary) and sonic drilling. The major decision
parameters regarding the use of drilling rigs are the quality of samples needed and IDW
and site access limitations. For example, rotary technologies pulverize soil and bedrock
and circulate and discharge cuttings to an above-ground cyclone. A dual-rotary rig, showing
the cyclone and a hay-bale containment area, is presented in Figure 7.28. Sonic and rock
FIGURE 7.28
Photo showing operation of a dual-rotary rig. (Courtesy of Woodard & Curran, Inc.)
FIGURE 7.29
Photo of a track-mounted, mini sonic drill rig. (Courtesy of Woodard & Curran, Inc.)
FIGURE 7.30
Rotary (pulverized) cuttings from a granite–gneiss formation. In some cases, these cuttings may be mistakenly
classified as silty, gravelly sand. (Courtesy of Woodard & Curran, Inc.)
FIGURE 7.31
Rock cores from a granite–gneiss formation collected with a mini sonic rig, enabling correct rock classification
and identification of fractures. (Courtesy of Woodard & Curran, Inc.)
FIGURE 7.32
Photo of dirt and gravel access road constructed for a dual-rotary rig. (Courtesy of Woodard & Curran, Inc.)
tracking progress during site remediation. Field screening tools such as X-ray fluorescence
(XRF) instruments and photoionization detectors (PIDs) can be used in lieu of laboratory
analysis to further expedite data collection and visualization. For example, when exca-
vating soils contaminated with metals at a hazardous-waste site, an XRF can be used to
determine in real time if the excavation needs to extend further in the horizontal or verti-
cal directions. At the end of each day, field personnel can upload XRF data into the site
database with a thumb drive through a Web–database interface, shown in Figure 7.33. The
uploaded XRF data can be displayed on the Web utility along with laboratory analytical
data to confirm XRF results. This data display, shown in Figure 7.34, is essentially a win-
dow into the database, which is being updated behind the scenes.
To reiterate, the goal of the real-time data entry illustrated through Figures 7.33 and 7.34
is to inform the soil excavation process. Specifically, the data will determine whether addi-
tional excavation is needed to remove metal contamination. To track the remedial progress,
a Web map can be created through ArcGIS that is automatically updated with the database.
The associated Web map for the above example is presented in Figure 7.35. The excavation
area, abutting the coast line, is divided into grid cells that correspond to sample location
areas. Color coding is used to easily display which cells have been sampled and excavated or
proven to be clean by the field sampling. After a cell is excavated, samples that were collected
from that cell can be flagged as being excavated in the Web utility as shown in Figure 7.36.
This process automatically updates a risk assessment table in the database by removing the
soil samples flagged as excavated. Therefore, the risk assessment table is only representative
of samples that still remain in the ground and are indicative of concentrations that potential
receptors may encounter. In this manner, risk assessment can also be conducted in real time
to ensure that the excavation is protective of human health.
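The flag-and-recompute step behind Figures 7.33 through 7.36 can be sketched in a few lines. The sample records and field names below are hypothetical, and a production system would perform this update in the site database rather than in memory:

```python
# Hypothetical sketch: when a grid cell is excavated, its samples are
# flagged, and the exposure point concentration (here a simple mean)
# is rebuilt from the samples still remaining in the ground.
samples = [
    {"id": "SB-01", "cell": "A1", "lead_mg_kg": 5200, "excavated": False},
    {"id": "SB-02", "cell": "A1", "lead_mg_kg": 4800, "excavated": False},
    {"id": "SB-03", "cell": "B2", "lead_mg_kg": 310,  "excavated": False},
    {"id": "SB-04", "cell": "B2", "lead_mg_kg": 290,  "excavated": False},
]

def flag_cell_excavated(samples, cell):
    """Flag every sample in an excavated grid cell."""
    for s in samples:
        if s["cell"] == cell:
            s["excavated"] = True

def exposure_point_concentration(samples):
    """Mean of samples still in the ground; excavated ones are excluded
    so the risk table represents only what receptors can encounter."""
    remaining = [s["lead_mg_kg"] for s in samples if not s["excavated"]]
    return sum(remaining) / len(remaining)

print(exposure_point_concentration(samples))  # 2650.0 before excavation
flag_cell_excavated(samples, "A1")
print(exposure_point_concentration(samples))  # 300.0 after cell A1 is dug out
```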
FIGURE 7.33
Screen shot of Web utility to upload XRF data to a geodatabase. (Courtesy of Ted Chapin, Woodard & Curran,
Inc.)
Exposure Point | Average Lead | Count | XRF Average | XRF Count | Lab Average | Lab Count
EPC-237  | 5,628 | 86  |      | 0  | 5628 | 86
EPC-2/3  | 1,976 | 240 | 330  | 44 | 2346 | 196
EPC-234  | 561   | 8   | 1011 | 3  | 291  | 5
None     | 378   | 168 | 319  | 72 | 423  | 96
EPC-8    | 370   | 30  | 283  | 10 | 413  | 20
EPC-WW10 | 283   | 22  | 309  | 13 | 245  | 9
EPC-9A   | 239   | 9   | 124  | 3  | 297  | 6
EPC-9    | 216   | 19  | 373  | 9  | 75   | 10
FIGURE 7.34
Screen shot of summary data at relevant exposure points, including field-based (XRF) and laboratory-based
measurements. This data screen is viewed in real time on the Web page. (Courtesy of Ted Chapin, Woodard &
Curran, Inc.)
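The combined averages in the Figure 7.34 summary appear to be count-weighted means of the XRF and laboratory subsets. Assuming that is the case, the relationship can be checked directly:

```python
def combined_average(xrf_avg, xrf_n, lab_avg, lab_n):
    """Count-weighted mean of the XRF and laboratory subsets -- the
    assumed construction of the combined column in Figure 7.34."""
    return (xrf_avg * xrf_n + lab_avg * lab_n) / (xrf_n + lab_n)

# EPC-2/3 row: 44 XRF results averaging 330 and 196 lab results
# averaging 2346 reproduce the combined average of 1,976.
print(round(combined_average(330, 44, 2346, 196)))  # 1976
```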
FIGURE 7.35
Map showing grid cells that is viewed on the Web and updated in real time based on field and lab-entered
data. (Courtesy of Ted Chapin, Woodard & Curran, Inc. World Imagery courtesy of Esri®, i-cubed, USDA, AEX,
GeoEye, Getmapping, Aerogrid, IGN, IGP, and the GIS User Community.)
FIGURE 7.36
Screen shot of utility to classify samples as being excavated in real time to update risk calculations. (Courtesy
of Ted Chapin, Woodard & Curran, Inc.)
All the above operations, demonstrated through Figures 7.33 through 7.36, are executed
on a Web site accessible to anyone with the requisite log-in and password. Handheld com-
puters with GPS devices, such as a Trimble, can also be used to facilitate real-time data col-
lection. These handheld instruments can also access Web utilities and update databases as
shown in Figure 7.37. The mobile ArcGIS utility, ArcPad, can be loaded onto these machines,
so that data can be updated and visualized in a GIS environment as shown by Figure 7.38.
This enables usage of complex ArcGIS tools, such as ModelBuilder, to perform mapping
and spatial data analysis. Sampling points or other relevant locations can be located with
GPS technology, uploaded to the database, and displayed on the ArcPad map—all on the
handheld device. An example operation created in ModelBuilder to immediately translate
X, Y points in a table to features on a map is displayed in Figure 7.39.
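Outside the ArcGIS environment, the XY-table-to-features operation of Figure 7.39 can be sketched as a plain conversion of table rows into GeoJSON points. The field names and coordinates below are hypothetical:

```python
import json

def xy_table_to_features(rows, x_field="X", y_field="Y"):
    """Convert table rows with coordinate columns into GeoJSON point
    features -- the same basic operation the ModelBuilder tool performs."""
    features = []
    for row in rows:
        props = {k: v for k, v in row.items() if k not in (x_field, y_field)}
        features.append({
            "type": "Feature",
            "geometry": {"type": "Point",
                         "coordinates": [row[x_field], row[y_field]]},
            "properties": props,
        })
    return {"type": "FeatureCollection", "features": features}

# Hypothetical sampling locations (longitude, latitude)
table = [{"X": -71.05, "Y": 42.36, "id": "SB-1"},
         {"X": -71.06, "Y": 42.37, "id": "SB-2"}]
print(json.dumps(xy_table_to_features(table), indent=2))
```

The resulting FeatureCollection can be loaded directly by most Web-mapping libraries, which is why this pattern underlies many real-time map updates.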
In summary, the rapid expansion of Web and handheld computing tools has greatly
facilitated real-time data collection, analysis, and mapping. These tools can increase proj-
ect efficiency, reduce risk of error, and help clearly demonstrate important data and con-
cepts to clients, regulators, and other stakeholders. Once again, these operations primarily
reside in the ArcGIS environment, illustrating how important it is for hydrogeologists to
become competent in GIS technology. Alternative software tools and examples of real-
time data visualization are presented in Chapters 8 and 9 for groundwater remediation
and groundwater resource applications, respectively.
FIGURE 7.37
Example of data entry utility in ArcPad as seen on a handheld Trimble device. In this case, excavation quanti-
ties are being tracked by the truckload. (Courtesy of Ted Chapin and Aaron Townsley, Woodard & Curran, Inc.)
FIGURE 7.38
Example of an ArcPad map with site features and sample locations as seen on a handheld Trimble device.
(Courtesy of Ted Chapin and Aaron Townsley, Woodard & Curran, Inc.)
FIGURE 7.39
Design of a model to automatically convert X, Y locations into a feature class for display on ArcGIS maps. In
this manner, X, Y points can be displayed on a map in real time to inform site decision making. (Courtesy of Ted
Chapin, Woodard & Curran, Inc.)
Systematic planning and real-time data measurement are two of the three key elements
of the U.S. EPA’s Triad approach, which was created to help modernize and streamline site
investigation and remediation processes (U.S. EPA 2004). The third element is a dynamic
work strategy, which requires experienced field staff to make real-time decisions in the
field based on the objectives determined through systematic planning. A conceptual dia-
gram of the Triad approach is presented in Figure 7.40. The intent of the Triad approach is
to help regulators and consultants alike develop “an accurate CSM that delineates distinct
contaminant populations for which risk estimation and cost-effective remedial decisions
will differ” (U.S. EPA 2004).
Additionally, use of the Triad elements will help the hydrogeologist identify and man-
age uncertainties related to the CSM. The Triad approach is really the genesis of many of
the concepts discussed throughout this book and represents a major step by the U.S. EPA
FIGURE 7.40
Conceptual diagram showing the three elements of U.S. EPA’s Triad approach. (From United States
Environmental Protection Agency (U.S. EPA), Improving Sampling, Analysis, and Data Management for Site
Investigation and Cleanup, Office of Solid Waste and Emergency Response, 4 pp., 2004.)
to encourage the use of CSMs, real-time data collection, and computer analytical tools to
expedite site investigations and make them more accurate and less wasteful.
Site investigation results must ultimately be communicated to clients,
regulators, and other stakeholders. While written project summary reports are invariably
required, the best way to support technical decisions is through graphics such as maps,
geologic diagrams, and data charts and plots. Example graphics from real-life site investiga-
tion projects are presented and discussed in the following sections. The goal is to inform
the reader about the types of graphics that are generally used in professional practice and
to illustrate how well-conceived figures can clearly illustrate key hydrogeological concepts.
• Depicting the regional physiographic setting of the site in question (e.g., using a
regional-scale USGS topographic map)
• Displaying relevant site features such as roads, buildings, wells, and topography
(generally the elements that comprise a basemap)
• Presenting the locations and results of samples collected from groundwater, soil,
or surface water for field or laboratory analysis
• Presenting contour maps of potentiometric surface, contaminant concentration, or
geologic layering (i.e., the bedrock surface)
• Displaying the results of analytical or numerical models
For investigations related to hazardous-waste site cleanup, plan-view maps with sampling
results are often the key figures associated with investigation reports. These figures are
often subject to rigorous review by regulators and stakeholders, each of whom may have
competing ideas about what these figures should contain. For example, some regulators
may want all data to be presented on the figures in tabular form so that they contain
exact numeric results for all relevant analytes and can be used for one-stop referencing
purposes. This is commonly accomplished through the use of chemical (chem) boxes, or
callout boxes, with data tables that point to the associated sampling locations. While there
is some value to having all the tabular data in a figure, this practice often leads to figures
that are practically illegible, making it very hard to discern the overall concept presented
in the figure. For example, the intent of the figure may be to show the overall nature and
extent of contamination relative to key hydrogeologic features or water-supply wells.
Excessive labeling can obscure such concepts as shown in Figure 7.41.
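One way to drive such symbology is to bin each result against its regulatory criterion before mapping, so that symbols, not labels, carry the message. The bin edges below are illustrative only, not from any guidance:

```python
def symbol_class(conc, criterion):
    """Assign a map symbol class from a result and its regulatory
    criterion (bin edges are illustrative, not from any guidance)."""
    if conc is None:
        return "not sampled"
    if conc < criterion:
        return "below criterion"
    if conc < 10 * criterion:
        return "exceeds criterion"
    return "exceeds 10x criterion"

# Hypothetical PCB results in mg/kg against a 1 mg/kg criterion
for value in (0.2, 4.0, 55.0):
    print(value, "->", symbol_class(value, criterion=1.0))
```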
FIGURE 7.41
Figure with excessive labeling of sampling results at soil borings. It is very difficult to discern which sampling loca-
tions correspond to which chem boxes, and it is not immediately clear which results exceeded regulatory criteria.
FIGURE 7.42
Symbology can clearly depict sampling results as was performed in this example of PCB concentrations in
sediment. (From ENSR/AECOM, Sediment Sampling Summary Report 2004–2005: Marsh Island. New Bedford
Harbor Superfund Site, New Bedford Massachusetts, US Army Corps of Engineers. United States Environmental
Protection Agency Contract DACW33-00-D-0003-Task Order 12 Document No. 09000-350-720, 54 pp., 2006.
Available at http://www.epa.gov/nbh/data.html#1998RODESDs, accessed May 30, 2011.)
objectives while also providing for easy reference to companion tables that contain all
requisite data.
FIGURE 7.43
Figure that clearly displays which locations exceeded TCLP criteria. No concentration labeling is necessary.
(From New York State Department of Environmental Conservation (NYSDEC), RI/FS Scope of Work, Old Upper
Mountain Road Site, Site Number 932112, Routes 31 & 93, Lockport, NY, 39 pp., 2009. Available at http://www
.dec.ny.gov/docs/regions_pdf/oldupperscope.pdf, accessed June 20, 2011.)
FIGURE 7.44
Potentiometric surface of a groundwater basin in Arizona prior to water-supply development at the two
wells represented by the orange crosshair symbols. The small blue circles represent water-level gaug-
ing locations used to generate the contours, which are presented in feet above mean sea level. World
Imagery Source: Esri®, i-cubed, USDA, USGS, AEX, GeoEye, Getmapping, Aerogrid, IGN, IGP, and the GIS User
Community.
north with flow moving toward the Santa Cruz River in the northwest corner of the map.
Figure 7.45 is a contour map of the same basin after the two wells have been pumping for
several years and shows a pronounced cone of depression. It is likely that discharge to
the Santa Cruz River is significantly reduced, and the effects of this pumping on surface-
water hydrology should be assessed. Figures 7.44 and 7.45 are drawn at basin scale,
and the contours should reflect the primary boundary conditions acting on the
regional aquifer. As described in Chapter 4, honoring each of the potentiometric surface
measurements at the monitoring wells (represented by the blue circles) makes no practical
sense at this scale, and a nugget effect should be used. These contours can then form the
basis of a numerical groundwater model to assess the long-term impacts of this pumping.
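As a reminder of how the nugget enters the interpolation, an exponential variogram model with a nugget term can be sketched as follows; the parameters are illustrative:

```python
import math

def exponential_variogram(h, nugget, sill, rng):
    """Exponential variogram gamma(h) = nugget + (sill - nugget) *
    (1 - exp(-3h / range)). The nugget is the variance retained at
    vanishingly small separation, which is why contours need not honor
    every individual water-level measurement exactly."""
    if h == 0:
        return 0.0
    return nugget + (sill - nugget) * (1.0 - math.exp(-3.0 * h / rng))

# Illustrative parameters: 0.5 ft^2 nugget, 4 ft^2 sill, 5000 ft range
print(exponential_variogram(1.0, 0.5, 4.0, 5000.0))      # just above the nugget
print(exponential_variogram(20000.0, 0.5, 4.0, 5000.0))  # near the sill
```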
FIGURE 7.45
Potentiometric surface of the groundwater basin shown in Figure 7.44 after several years of water-supply pump-
ing. World Imagery Source: Esri®, i-cubed, USDA, USGS, AEX, GeoEye, Getmapping, Aerogrid, IGN, IGP, and
the GIS User Community.
Figure 7.47 presents outlines of the Ashumet Valley plume at MMR as mapped in 1996
and in 2003 (AFCEE 2005). There have been significant changes in the overall extent of
the plume, most likely because of the operation of a pump-and-treat system. Over the
seven-year period, a portion of the plume became detached as highlighted in Figure
7.47. However, the plume also spread further to the south. A limitation of this map is the
absence of topographic or groundwater-elevation contours, which would better inform the
reader about why the plume has changed and answer key questions regarding plume sta-
bility. Labeling the concentration of the contours and potentially adding one more labeled
contour line would also significantly help demonstrate whether the plume is stable, con-
tracting, or expanding.
FIGURE 7.46
PCE and TCE concentration contours in a shallow bedrock aquifer. (Courtesy of Woodard & Curran, Inc.)
FIGURE 7.47
Outlines of the Ashumet Valley plume at MMR in 1996 and 2003. While it is clear that changes in the plume
extent have occurred, the absence of groundwater and/or topographic contours and pumping-well locations
makes it very difficult to discern the conceptual explanation for the changes. (From Air Force Center for
Environmental Excellence (AFCEE), Ashumet Valley Plume Outline, 2005. Available at http://www.mmr.org/
Cleanup/plumes/images/Ashumet.pdf, accessed May 13, 2011.)
Figure 7.48 is a typical boring log used in a hydrogeological site investigation. The boring
log presents a classification of the soil or rock at each depth interval along with a cor-
responding graphic. Well-construction information, in this case an open-hole bedrock
FIGURE 7.48
Example boring log for a bedrock well drilled using air rotary technology.
well, is also provided. Note that different soil classification methodologies are used in
professional practice, with common examples including the Unified Soil Classification
System (USCS), the United States Department of Agriculture classification system, and
the Burmister classification system. In general, one should use a consistent classification
system for each boring log at a site and include more information than is provided
simply by the USCS nomenclature. For example, solely classifying a soil as SM:
Silty Sand under the USCS system does not convey any information about the following
characteristics:
Therefore, it is advisable to use a more robust classification system that incorporates the
above parameters. With knowledge of the above parameters, it is very easy to determine
the corresponding USCS classification, and commonly, the USCS term can be added to
the comprehensive soil description at the end in parentheses. For the reader’s benefit, a
soil classification guide prepared by the Alaska Department of Transportation and Public
Facilities is included in the companion DVD.
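To illustrate how coarse the two-letter symbol is, the sketch below implements a greatly simplified fragment of the USCS decision logic for coarse-grained soils; real classification also uses gradation coefficients, Atterberg limits, and dual and borderline symbols:

```python
def uscs_symbol(pct_gravel, pct_sand, pct_fines, fines_are_plastic):
    """Greatly simplified fragment of USCS logic for coarse-grained
    soils. Real classification also uses gradation (Cu, Cc), Atterberg
    limits, and dual/borderline symbols."""
    if pct_fines > 50:
        raise ValueError("fine-grained soil: classify by plasticity chart")
    prefix = "G" if pct_gravel > pct_sand else "S"      # gravel vs. sand
    if pct_fines < 5:
        return prefix + "W/" + prefix + "P (gradation needed)"
    if pct_fines > 12:
        return prefix + ("C" if fines_are_plastic else "M")
    return prefix + "M/" + prefix + "C (borderline)"

# 65% sand with 30% non-plastic fines -> SM (silty sand)
print(uscs_symbol(pct_gravel=5, pct_sand=65, pct_fines=30,
                  fines_are_plastic=False))  # SM
```

Note how many field observations (density, moisture, color, structure) never enter this logic at all, which is exactly why the symbol alone is an inadequate soil description.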
Similar to the classification itself, soil boring logs should contain as much information as
possible regarding the drilling method, the well construction, field screening results, and
pertinent notes and observations, such as the presence of heaving sands and/or weathered
(rotten) rock.
After consolidating soil boring logs, the hydrogeologist can interpret the data to create
informative cross sections that become key elements of the CSM. Cross-section develop-
ment is an art form that requires a detailed understanding of both site-specific data and
the regional depositional environment. Knowledge and experience in geology, most nota-
bly surficial processes, are paramount. Because professional judgment and interpretation
are used to create cross sections, two different hydrogeologists may create significantly
different cross sections using the same data. This phenomenon is demonstrated in Figure
7.49, which is a cross section created in AutoCAD, a program commonly used to create dig-
ital cross sections. The connected, colored polygons represent the original interpretation of
stratigraphy; however, red-line markups indicate alternative, equally plausible interpreta-
tions. The question regarding which interpretation is correct depends upon which version
is more representative of the geological processes applicable to the site.
While AutoCAD sections can be made smoother than that of Figure 7.49, graphical pro-
duction programs such as Canvas can be used to make higher quality cross sections that
better reinforce interpretative concepts. For example, Figure 7.50 was created in Canvas
and clearly demonstrates how the two aquifers are separated by a confining unit that helps
maintain a higher hydraulic head in the deeper aquifer. Cross sections can also be used
to show contaminant concentration contours, which is very useful in demonstrating how
contaminant fate and transport depends on geologic structure. For example, Figure 7.51
shows TCE concentration contours in groundwater relative to soil layers and the bedrock
surface. The area of the highest TCE concentration (> 10 mg/L) sits at the interface of the
very dense sand and the bedrock, indicating the potential presence of DNAPLs. It is appar-
ent that a TCE plume exists in both the overburden and the bedrock. The likelihood of
FIGURE 7.49
Geologic cross section created in AutoCAD with small letters in large “tablecloth” format with alternative inter-
pretations marked in red. (Courtesy of Pall Corporation.)
[Figure 7.50 artwork: cross section A–A′ from west to east, elevations 440 to 720 ft amsl, showing a clay layer over the S Aquifer, the D Confining Unit, the D Aquifer, and bedrock. The legend includes groundwater flow direction, the potentiometric level of 23 September 2009, and monitoring wells with screen intervals; a scale bar spans 0 to 2000 ft.]
FIGURE 7.50
Example cross section created in a graphical production program that is more visually appealing. Note that
question marks indicate uncertainty in the interpretation because borings were not advanced to the bedrock
surface at those locations. Another advantage of this format, as opposed to typical AutoCAD cross sections, is
that it will look the same printed in hard copy as it does on the computer screen (i.e., very large-size printing
paper is not required). “S” signifies shallow, and “D” signifies deep.
FIGURE 7.51
Geologic cross section showing TCE concentrations in groundwater. These types of figures are critical in pre-
senting the CSM for contaminant fate and transport. (Courtesy of Woodard & Curran, Inc.)
DNAPL in the bedrock and the documented existence of a fractured bedrock TCE plume
have important implications with respect to the feasibility of groundwater remediation to
drinking-water standards. This concept is discussed further in Chapter 8.
Regardless of the exact content and scale of a cross section, it should contain the follow-
ing common elements:
Cross sections can also be used to present geophysical data that can enhance the site
investigation process. In general, subsurface geophysical methods can be used to collect
subsurface data along a transect at a density that would be impractical for a drilling rig.
The broad categories of geophysical methods are surface, borehole, and waterborne (USGS
2009). Examples of geophysical methods include
FIGURE 7.52
Setup for determining presence and 3D orientation of fractures using electrical resistivity tomography between
boreholes. (Courtesy of Peter Thompson, AMEC.)
FIGURE 7.53
Electrical resistivity tomography results shown in panels between boreholes. Concentrations of COCs at discrete sampling points are shown as spheres. (Courtesy of Peter Thompson, Scott Calkin, and Rod Rustad, AMEC.)
data, and contaminant concentrations at discrete sampling points (i.e., soil borings) are
presented as spheres. As shown in Figure 7.53, areas of high contaminant concentration
generally correlate to areas of low resistivity (i.e., conductivity increases because of the
contaminants). In this manner, the tomography can be used to efficiently map contamina-
tion in the subsurface.
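That inverse relationship can be checked numerically where resistivity panels and borings are colocated. The paired values below are made up for illustration:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed directly."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Made-up colocated pairs: resistivity (ohm-m) vs. COC concentration.
resistivity = [120, 95, 60, 40, 25, 15]
concentration = [2, 5, 40, 180, 600, 1500]
print(pearson_r(resistivity, concentration))  # clearly negative
```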
In addition to mapping contamination, resistivity geophysics can be used to delineate
bedrock surfaces and identify high-yielding fractures. For example, Figure 7.54 is a resis-
tivity cross section that shows the top of the bedrock surface to range from 15 to 45 ft below
ground surface (gold area). The resistivity low in the profile correlated well with a mapped
photolineament and purported high-yield residential wells in the vicinity. A water-supply
well was drilled at the 410-ft mark along the profile line.
Electrical conductivity logging (e-logging) is similar to resistivity surveying but can
be accomplished with direct-push drilling technology. The primary use of e-logging is
FIGURE 7.54
Example geophysical 2D resistivity cross section. (Courtesy of Peter Thompson and Scott Calkin, AMEC; data
courtesy of Northeast Geophysical Services, Inc.)
FIGURE 7.55
E-log for a soil boring advanced with direct push technology. (From Kansas Geological Survey (KGS), Hydrostratigraphic Characterization of Unconsolidated Alluvial Deposits with Direct-Push Sensor Technology,
Open File Report 99-40, 1999. Available at http://www.kgs.ku.edu/Hydro/Publications/OFR99_40/index.html,
accessed July 5, 2011.)
FIGURE 7.56
Electrical conductivity cross section created through interpolation of individual E-logs. Note that fine-
grained layers at the Hcynd3 location are significantly thinner. (From Kansas Geological Survey (KGS),
Hydrostratigraphic Characterization of Unconsolidated Alluvial Deposits with Direct-Push Sensor Technology,
Open File Report 99-40, 1999. Available at http://www.kgs.ku.edu/Hydro/Publications/OFR99_40/index.html,
accessed July 5, 2011.)
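Picking fine-grained layers off an e-log such as Figure 7.55 amounts to threshold segmentation of the conductivity trace. The readings and threshold below are hypothetical; threshold choice is site-specific:

```python
def fine_grained_intervals(depths_ft, ec_ms_per_m, threshold):
    """Group contiguous readings at or above an EC threshold (higher
    conductivity generally indicating finer-grained material) into
    depth intervals."""
    intervals, start = [], None
    for depth, ec in zip(depths_ft, ec_ms_per_m):
        if ec >= threshold and start is None:
            start = depth
        elif ec < threshold and start is not None:
            intervals.append((start, depth))
            start = None
    if start is not None:
        intervals.append((start, depths_ft[-1]))
    return intervals

# Made-up log: 1-ft readings with a clay-like spike from 3 to 6 ft
depths = [0, 1, 2, 3, 4, 5, 6, 7]
ec = [12, 14, 15, 80, 95, 85, 18, 13]
print(fine_grained_intervals(depths, ec, threshold=50))  # [(3, 6)]
```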
FIGURE 7.57
Creation of a three-dimensional model in Google Earth using USGS soil maps and cross sections. Arrow A
shows the location of the model in the places window, arrow B points to a corner handle used to resize the
model, and arrow C points to the center handle used to move the model. (From Walsh, G. J., A Method for
Creating a Three Dimensional Model from Published Geologic Maps and Cross Sections, U.S. Geological
Survey Open-File Report 2009-1229, 16 pp., 2009.)
Graphs can often be combined with plan-view figures to create effective visualizations. For
example, Figure 7.58 includes plots of contaminant concentrations as a function of depth
in the vadose zone for the boring locations shown in plan-view.
To reiterate, graphs are most useful in demonstrating key elements of the CSM. Figure
7.59 is a hydrograph of water levels in monitoring wells adjacent to a stream. Beaver
FIGURE 7.58
Figure showing vertical profiles of vadose zone contamination in milligrams per kilogram. The magnitude of
contamination is greater at locations SB-3 and SB-7; however, contamination is deepest at location SB-4. These
differences may be attributable to recharge variation at the site.
[Figure 7.59 plot: water-level elevation (ft AMSL) from about 104 to 108 ft versus date, 02/05/96 through 06/21/10, for monitoring wells MW-1 through MW-4, with the annotation “Beaver Dam Removal, Winter 2008–2009.”]
FIGURE 7.59
Hydrograph of water levels at monitoring wells in an unconfined aquifer adjacent to a stream before and after
the removal of large beaver dams.
dams are located within the stream and create losing reservoirs that buffer against
water-table recession and effectively raise the potentiometric surface across the site.
However, over the winter of 2008–2009, the beaver dams were removed because of local
flooding concerns. The dam removal had a significant and immediate effect on the aqui-
fer as water levels dropped to record lows at several locations in 2009 despite normal
precipitation.
Figure 7.60 is a typical plot of contaminant concentrations along a plume centerline at
a hazardous-waste site. The primary contaminants of concern (COCs) are chlorinated
VOCs (CVOCs). To demonstrate the effects of natural attenuation at the site, the graph
plots concentrations of parent VOCs and daughter VOCs. Parent VOCs represent chlori-
nated solvents in their original form, including TCE and PCE. Daughter compounds are
the degradation products of these solvents with common examples, such as cis-1,2-dichlo-
roethene and vinyl chloride. As shown in Figure 7.60, the percent composition of VOCs is
shifting toward daughter compounds moving downgradient from the source. This plot
clearly shows that chlorinated solvents are degrading in the source area and that natural
attenuation processes are being effective. Additional discussion regarding the demonstra-
tion of natural attenuation processes is presented in Chapter 8.
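The parent-to-daughter shift is most defensibly demonstrated on a molar basis, because each reductive dechlorination step conserves moles rather than mass. A minimal sketch of that calculation, using standard molecular weights and hypothetical concentrations (the values are illustrative, not from a real site):

```python
# Molar (not mass) composition shows the parent-to-daughter shift cleanly,
# since each dechlorination step conserves moles. Concentrations (ug/L) are
# hypothetical; molecular weights (g/mol) are standard values.
MW = {"PCE": 165.8, "TCE": 131.4, "cis-1,2-DCE": 96.9, "VC": 62.5}
DAUGHTERS = {"cis-1,2-DCE", "VC"}

def percent_daughter(conc_ug_l):
    """Daughter-compound share (%) of total CVOC moles (ug/L / g/mol = umol/L)."""
    molar = {c: v / MW[c] for c, v in conc_ug_l.items()}
    return 100.0 * sum(v for c, v in molar.items() if c in DAUGHTERS) / sum(molar.values())

source = {"PCE": 1200.0, "TCE": 800.0, "cis-1,2-DCE": 150.0, "VC": 10.0}
downgradient = {"PCE": 40.0, "TCE": 120.0, "cis-1,2-DCE": 600.0, "VC": 90.0}
print(f"source area: {percent_daughter(source):.0f}% daughter on a molar basis")
print(f"downgradient: {percent_daughter(downgradient):.0f}% daughter on a molar basis")
```

An increasing daughter fraction moving downgradient, as in Figure 7.60, is the signature of sequential dechlorination.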
One key attribute of Figure 7.60 is the use of different line types for each constituent.
In this manner, despite being in black and white, it is still easy to distinguish the differ-
ent lines. The inclusion of many data sets in a graph is only useful in demonstrating an
overall trend that applies universally to all the data (e.g., concentrations are decreasing,
pumping rates in the basin are decreasing, and water levels are increasing; see Figure 7.61).
Otherwise, it is advisable to group data sets or to break up the graph into several different
plots, so that the data are legible and the original intention of the graphs is preserved.
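The line-type advice above is simple to put into practice; a short matplotlib sketch (the data and output file name are illustrative only):

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

# One distinct line style per constituent keeps a black-and-white plot legible.
series = {"PCE": [10, 8, 5, 2], "TCE": [4, 6, 7, 5],
          "cis-1,2-DCE": [1, 2, 4, 6], "VC": [0.1, 0.5, 1, 3]}
styles = ["-", "--", "-.", ":"]

fig, ax = plt.subplots()
for (name, y), ls in zip(series.items(), styles):
    ax.plot(range(len(y)), y, linestyle=ls, color="black", label=name)
ax.set_xlabel("Distance downgradient")
ax.set_ylabel("Concentration")
ax.legend()
fig.savefig("cvoc_centerline.png", dpi=150)
```

With four or fewer series, line style alone distinguishes the curves; beyond that, the advice above about grouping data sets or splitting the graph applies.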
[Graph: concentration or mass (left axis) and ORP in mV (right axis) versus distance along the direction of groundwater flow, from background through the source to downgradient, with curves for PCE, TCE, cis-1,2-DCE, VC, ethene, and ORP.]
FIGURE 7.60
Theoretical concentrations of parent and daughter CVOCs along a plume centerline at a hazardous-waste site.
(Modified from The Interstate Technology & Regulatory Cooperation Work Group (ITRC), Natural Attenuation
of Chlorinated Solvents in Groundwater: Principles and Practices, Technical/Regulatory Guidelines, 1999.)
Site Investigation 421
[Graph: groundwater elevations (865–875 ft AMSL) at 11 monitoring wells (MW-1 through MW-13) from 1/1/2007 to 1/1/2009.]
FIGURE 7.61
Groundwater elevations at a series of monitoring wells over time.
FIGURE 7.62
1,2-DCA emission rate from a single gingerbread man similar to the one shown was measured at 0.16 µg/min.
(Emissions data from Doucette, W. J. et al., Ground Water Monitor. Remediat., 30, 1, 65–71, 2009.)
References
Air Force Center for Environmental Excellence (AFCEE), 2005. Ashumet Valley Plume Outline. Available
at http://www.mmr.org/Cleanup/plumes/images/Ashumet.pdf, accessed May 13, 2011.
Doucette, W. J., Hall, A. J., and Gorder, K. A., 2009. Emissions of 1,2-dichloroethane from holiday decorations as a source of indoor air contamination. Ground Water Monitor. Remediat. 30(1), 65–71.
ENSR/AECOM, 2006. Sediment Sampling Summary Report 2004–2005: Marsh Island. New Bedford Harbor Superfund Site, New Bedford, Massachusetts. US Army Corps of Engineers. United States Environmental Protection Agency Contract DACW33-00-D-0003, Task Order 12, Document No. 09000-350-720. Available at http://www.epa.gov/nbh/data.html#1998RODESDs, accessed May 30, 2011.
Esri, Inc., 2010a. Georeferencing a CAD Dataset. ArcGIS 10 Help Library.
Esri, Inc., 2010b. Fundamentals for Georeferencing a Raster Dataset. ArcGIS 10 Help Library.
Esri, Inc., 2010c. Working with Basemap Layers. ArcGIS 10 Help Library.
Fetter, C. W., 2001. Applied Hydrogeology, Fourth Edition. Prentice-Hall, Inc., Upper Saddle River, NJ.
Foremost Industries, 2003. Benefits of Dual Rotary Drilling in Unstable Overburden Formations.
Available at http://pierregagnecontracting.com/images/dr_benefits.pdf, accessed May 12, 2011.
Geoprobe Systems, 2011a. What's Behind Our Name. Available at http://geoprobe.com/whats-behind-our-name, accessed June 15, 2011.
Geoprobe Systems, 2011b. Geoprobe® FAQs. Available at http://geoprobe.com/geoprobe-faqs,
accessed June 15, 2011.
The Interstate Technology & Regulatory Cooperation Work Group (ITRC), 1999. Natural Attenuation
of Chlorinated Solvents in Groundwater: Principles and Practices. Technical/Regulatory
Guidelines.
8
Groundwater Remediation
8.1 Introduction
The cleanup of hazardous-waste sites with groundwater contamination is a major element
of professional hydrogeology in the United States. It is likely that every professional hydro-
geologist will work on a groundwater remediation project at some point in his or her career.
The long-term remediation of these sites is largely governed by the 1980 Comprehensive
Environmental Response, Compensation, and Liability Act (CERCLA), commonly known
as the Superfund program. States often have similar regulatory programs to address sites
not listed on the National Priorities List (NPL), such as the Massachusetts Contingency
Plan. To the general public, invoking the term Superfund conjures up images of burning
rivers, rusted 55-gallon drums leaking fluorescent fluids on the ground, contaminated
wells, and mutated aquatic life. While in many instances these stereotyped attributes
(mutations aside) are, in fact, accurate, often the perception of contamination is just as
important as the actual data defining the nature and extent of the contamination, and the
risks associated with any existing or potential exposures to the contamination [i.e., the
Conceptual Site Model (CSM)]. Similarly, the perception of environmental cleanup is often
more important than rigorous quantification of the costs and true environmental, social,
and economic benefits of complicated remediation projects. This is discussed further in
Section 8.5 in the context of sustainable remediation, a very important emerging concept.
Because of the Superfund program and similar state-led initiatives, groundwater reme-
diation has become a lucrative field with innumerable technology vendors offering solu-
tions for a wide variety of contaminants. In general, remediation technologies fall into one
of the following categories (U.S. EPA 2010):
For a technical overview of each technology, the reader is referred to Appendix B of U.S.
EPA (2010). The following technologies are discussed in greater detail later in this chapter
with example data visualizations:
As we learn more over time about the behavior of contaminants in the subsurface and the efficacy of various technologies, the preferred technology from regulatory and commercial perspectives changes frequently.
can be measured through public record of decision (ROD) documents produced by the U.S.
EPA for each Superfund site. As described in U.S. EPA (2011a):
“A ROD contains site history, site description, site characteristics, community participation,
enforcement activities, past and present activities, contaminated media, the contaminants
present, scope and role of response action and the remedy selected for cleanup.”
As a result, RODs and other CERCLA decision documents [ROD amendments and expla-
nations of significant differences (ESDs)] provide informative snapshots in time of the
remediation marketplace, illustrating changes in the preferred remedial approaches and
individual technologies. Figure 8.1 presents the number of Superfund technology selec-
tions over time for the overall remedy categories most applicable to hydrogeologists (and
bulleted above). Note that in situ source control includes both in situ treatment and in situ
containment remedies. Ex situ source technologies are not depicted in Figure 8.1 because
they are more often used for soil excavations for contaminants that are not impacting
groundwater quality, such as polychlorinated biphenyls (PCBs). Summarized here are sev-
eral interesting concepts highlighted in Figure 8.1.
• The use of pump and treat increased dramatically in the late 1980s and was the
dominant remedial approach throughout the 1990s. Note that the early 1990s also
Groundwater Remediation 427
[Graph: number of technology selections (0–80) by fiscal year for in situ groundwater treatment, pump and treat, in situ source control, and monitored natural attenuation.]
FIGURE 8.1
Original graph illustrating total number of technology selections by fiscal year (FY) for the depicted remedy
categories at Superfund sites. Fiscal year 1985 includes all selections between 1982 and 1985. Raw data obtained
from Appendix A of U.S. EPA (2010) for all categories except MNA, for which raw data were obtained from
Figure 7 of U.S. EPA (2010) (2005–2008) and Appendix E of U.S. EPA (2007a) (1985–2004). Data for FY 1982–2004
are project-level data except for MNA, from which data are derived from ROD documents exclusively; data from
FY 2005–2008 are decision document-level data.
represented the time frame with the greatest number of RODs published, peaking
at 197 in 1991 (U.S. EPA 2010). The number of annual pump and treat selections
decreased dramatically between 1997 and 2001, but the frequency of selection
remained relatively constant throughout the 2000s. The late 1990s’ decline in
pump and treat selections reflects an increased understanding of the technology’s
limited efficacy in terms of mass removal and cleanup time reduction, while the
stabilization at approximately 20 selections per year in the 2000s reflects the ongo-
ing need for the technology as a containment measure to prevent further migra-
tion of contaminant plumes.
• After peaking in 1991 and immediately declining, the annual number of in situ
source control selections has remained relatively constant since 1993. The annual
number of in situ source control remedies has closely paralleled that of pump and
treat since 2001, and these two approaches are often used in tandem to reduce
cleanup time frames.
• The use of in situ groundwater treatment and MNA steadily increased throughout
the 1990s, and by the mid-2000s, these two technologies were selected more often
than in situ source control and pump and treat. Note that the reduced number
of MNA selections between 2001 and 2003 was at least partially triggered by the
1999 publication of U.S. EPA guidance regarding the proper use of MNA (U.S. EPA
2007a). The rebound in 2004 was achieved after consultants and regulators alike
adjusted to the new MNA framework and figured out how to best incorporate the
technology.
the case as MNA requires extensive monitoring over long periods of time.
4. PRP groups with experience operating pump and treat systems for years and
years with no end in sight became more receptive to innovative technologies with
higher capital costs to avoid perpetual operation of pump and treat systems.
[Graph: number of technology selections (0–40) by fiscal year; only part of the legend is recoverable, including multi-phase extraction.]
FIGURE 8.2
Original graph illustrating total number of technology selections by FY at Superfund sites for the five in situ source
control technologies with the most selections between 2005 and 2008. Fiscal year 1985 includes all selections
between 1982 and 1985. Note that these are also the five technologies with the most overall selections between
1982 and 2008 with the exception of thermal treatment, which has one less selection than chemical treatment
during that time span. (Raw data obtained from the United States Environmental Protection Agency (U.S. EPA),
Appendix A, Superfund Remedy Report, Thirteenth Edition. Office of Solid Waste and Emergency Response,
EPA-542-R-10-004, 2010. Available at http://www.clu-in.org/asr/, accessed July 10, 2011.)
[Graph: number of selections (0–25) by fiscal year for air sparging, bioremediation, chemical treatment, permeable reactive barrier, and multi-phase extraction.]
FIGURE 8.3
Original graph illustrating total number of technology selections by FY at Superfund sites for the five in situ
groundwater remediation technologies with the most selections between 1982 and 2008. Fiscal year 1985 includes
all selections between 1982 and 1985. (Raw data obtained from the United States Environmental Protection
Agency (U.S. EPA), Appendix A, Superfund Remedy Report, Thirteenth Edition. Office of Solid Waste and
Emergency Response, EPA-542-R-10-004, 2010. Available at http://www.clu-in.org/asr/, accessed July 10, 2011.)
continues into the new decade. As with air sparging and SVE, there may be a backlash
against in situ chemical oxidation and bioremediation based on the increasing availability
of performance data, which may not be as encouraging as originally anticipated.
Despite the continually evolving technological and political landscape, one thing that has remained constant in Superfund is the objective of restoring contaminated groundwater to its beneficial use. This beneficial use is often considered to be drinking water as
measured by maximum contaminant levels (MCLs) to ensure that a pristine groundwater
resource is available to future generations.
The rest of this chapter outlines important concepts and provides example data visu-
alizations for established and emerging remedial technologies important to the current
commercial marketplace. Concepts and visualizations are also provided for technical
impracticability and alternative remedial metrics and endpoints. Lastly, the overall suc-
cess of groundwater remediation in the United States is discussed in light of recent cost
Tailing is the progressively slower rate of contaminant concentration decline resulting from pump and treat operation, and rebound
is the relatively rapid increase in concentration after pumping cessation (U.S. EPA 1996a). A
conceptual diagram of tailing and rebound effects is depicted in Figure 8.4 for reference.
Some of the physical and chemical processes that result in the impracticability of using pump
and treat as a pure remediation measure are listed below (Cohen et al. 1994):
Matrix diffusion involves contaminant diffusion into and back-diffusion out of low-
permeability media, such as clay and bedrock, and is a subject that has received consider-
able attention in academia. Additional discussion regarding the implications of matrix
diffusion with respect to the remediation of sites with DNAPL contamination is provided
in Section 8.4, and modeling examples are provided in Chapter 5.
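A crude way to see why matrix diffusion produces tailing is a two-compartment first-order model: a mobile zone that flushes quickly plus a low-permeability zone that releases mass slowly by back-diffusion. This is a conceptual sketch only; the concentrations and rate constants below are arbitrary, not fitted values:

```python
import math

# Conceptual two-compartment tailing model: a mobile zone that flushes quickly
# plus a low-permeability zone that back-diffuses slowly. Concentrations (ug/L)
# and first-order rates (1/yr) are arbitrary illustrations.
def concentration(t_years, c_mobile=1000.0, k_mobile=2.0,
                  c_matrix=200.0, k_matrix=0.05):
    return (c_mobile * math.exp(-k_mobile * t_years)
            + c_matrix * math.exp(-k_matrix * t_years))

for t in (0, 1, 5, 20):
    print(f"t = {t:>2} yr: {concentration(t):7.1f} ug/L")
```

The fast term dominates early operation; after a few years the slow matrix term controls, producing the long tail of Figure 8.4, and when pumping stops the matrix compartment drives rebound.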
One illustrative example regarding the inability of pump and treat systems to restore
groundwater to MCLs in reasonable time frames involves one of the longest and deepest
horizontal wells in the world. Installation and design details of the horizontal well, which
is approximately 4500 ft long, are presented in Figure 8.5. The well was installed as part of
a pump and treat design to control the migration of 1,4-dioxane and restore groundwater
to the applicable drinking water standard of 85 µg/L. Historic groundwater samples col-
lected at the site contained 1,4-dioxane concentrations above 200,000 µg/L. Concentration
contours and the overall extent of 1,4-dioxane in groundwater are presented in Figure 8.6
FIGURE 8.4
Schematic of tailing and rebound effects. (Modified from Cohen, R. M. et al., Methods for Monitoring Pump and
Treat Performance, EPA Contract No. 68-C8-0058, Office of Research and Development, EPA/600/R-94/123, Ada,
OK, 114 pp., 1994; Keely, J.F., Performance Evaluation of Pump and Treat Remediations, USEPA/540/4-89-005, Robert
S. Kerr Environmental Research Laboratory, Ada, OK, 19 pp., 1989.)
FIGURE 8.5
Installation and design details of one of the longest horizontal wells in the world installed to extract contaminated water from approximately 100 ft below ground surface. Bottom left: daylighting of the drill bit after passing
through more than 2000 ft of subsurface sediments. Middle: custom-designed well screen. (Courtesy of Farsad
Fotouhi, Pall Corporation.)
at the time of horizontal well installation (top) and after three years of operation at flow
rates above 1000 gallons per minute (bottom). As shown in Figure 8.6, significant mass
removal was achieved with the elimination of plume concentrations above 10,000 µg/L.
However, concentrations above 1000 µg/L, which is over one order of magnitude above the
cleanup goal, persist across the plume extent after three years of pumping.
FIGURE 8.6
Plan-view concentration contour maps of the 1,4-dioxane plume in 2001 when the horizontal well was installed
and in 2004, three years after operation. (Courtesy of Farsad Fotouhi, Pall Corporation.)
A time series plot of 1,4-dioxane concentrations at one of the vertical extraction wells
associated with the pump and treat design is presented in Figure 8.7 and demonstrates
significant concentration tailing after approximately one year of operation. The concen-
tration in this well continues to decline at a very slow rate and remains above 1000 µg/L
as of early 2011. The site geology consists of heterogeneous glacial sediments, and it
is likely that high concentrations of 1,4-dioxane diffused into less permeable sediments,
creating a secondary contaminant source capable of feeding highly permeable zones for
FIGURE 8.7
Time-series plot of 1,4-dioxane concentrations at a vertical extraction well in the groundwater plume. (Courtesy of Farsad Fotouhi, Pall Corporation.)
decades. Even with the installation of an advanced horizontal well costing more than
$1,000,000, attainment of the cleanup goal is not a realistic objective for this site unless
the applicable time frame is on the order of decades. It is also important to note that
1,4-dioxane is well suited to pump and treat remedies because it has very high solubility
and does not adsorb to soils. If this case study involved a more hydrophobic contaminant,
such as a chlorinated solvent, the initial pump and treat performance would have been
significantly worse.
Pump and treat’s optimal usage is as a plume containment, or management of migra-
tion, remedy. Two important terms with respect to plume containment are the pump and
treat system’s capture zone and radius of influence. These two terms are not synonymous,
and often there is significant confusion on this issue. The capture zone for a pumping well
is the three-dimensional volume (i.e., X, Y, Z) of the porous media within which ground-
water will flow to the pumping well for extraction and subsequent treatment. The radius
of influence of a pumping well is the farthest horizontal distance away from the well
where pumping causes a discernible aquifer response (e.g., a measurable drawdown). The
concept of a capture zone is illustrated in Figure 8.8 in plan view (top) and cross section
(bottom). The concept of a capture zone versus the radius of well influence is illustrated in
Figure 8.9. While MW-3 is within the radius of influence of pumping well PW (i.e., there
is measurable lowering of the hydraulic head), it is not within PW’s capture zone, and
it would be erroneous to conclude that contaminated groundwater at MW-3 would be
extracted by the pump and treat system.
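For a single fully penetrating well in a uniform regional flow field, the capture zone geometry has a classical analytical solution (Javandel and Tsang, 1986). A sketch with assumed aquifer parameters:

```python
import math

# Analytical capture zone for a single fully penetrating extraction well in
# uniform regional flow (Javandel and Tsang, 1986). All values are assumed.
Q = 545.0    # pumping rate, m3/day (~100 gpm)
b = 15.0     # saturated thickness, m
K = 10.0     # hydraulic conductivity, m/day
i = 0.002    # regional hydraulic gradient

unit_flux = b * K * i                      # regional flow per unit width, m2/day
x_stag = Q / (2 * math.pi * unit_flux)     # downgradient stagnation point, m
width_at_well = Q / (2 * unit_flux)        # capture width at the well, m
width_max = Q / unit_flux                  # asymptotic upgradient width, m
print(f"stagnation point {x_stag:.0f} m downgradient; capture width "
      f"{width_at_well:.0f} m at the well, {width_max:.0f} m far upgradient")
```

A monitoring well can show measurable drawdown well beyond these distances, which is exactly the capture zone versus radius of influence distinction drawn above.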
Another important concept in the analysis of the hydraulic head distribution created
by a pump and treat system is the difference between drawdown as measured by the
water level in the pumping well and the theoretical drawdown, or formation loss, result-
ing from groundwater flow through the porous media. The difference between the actual
drawdown in the pumping well and the formation loss is termed the well loss, which is an
efficiency loss caused by factors such as poor well design, insufficient well development,
turbulent flow through the filter pack and screen, or unavoidable disturbance of the near-
well porous media during drilling. The water level based on the actual drawdown in the
pumping well is not representative of the true aquifer potentiometric surface and, there-
fore, should not be used to create contour maps to delineate capture zones. Consequently,
it is important to install monitoring wells or piezometers very close to extraction wells
[Diagram: plan-view flowlines and head contours (970–988) around a partially penetrating extraction well (top) and the corresponding cross section showing the ground surface and the vertical extent of the capture zone (bottom).]
FIGURE 8.8
Illustration of horizontal capture zone in plan view (top) and vertical capture zone in cross section (bottom),
highlighting the importance of a three-dimensional approach to pump and treat system design and data
analysis. (Modified from United States Environmental Protection Agency (U.S. EPA), A Systematic Approach
for Evaluation of Capture Zones at Pump and Treat Systems: Final Project Report, Office of Research and
Development, EPA 600/R-08/003, Washington, DC, 2008.)
FIGURE 8.9
Conceptual cross section illustrating the difference between the capture zone and the radius of influence of the
pumping well. Note also the effects of well loss on the water level of the pumping well.
in order to measure the true aquifer response to the pumping. The concept of well loss is
also presented in Figure 8.9. The design of pump and treat systems for optimal hydraulic
containment involves specification of numerous parameters, the most important of which
are defined and described next (U.S. EPA 2005):
• Well layout and design: The number, locations, and design specifications for all
extraction wells. Design specifications include well-construction materials, screen
size and interval, filter-pack parameters, and drilling method.
• Design flow rate: The individual extraction well flow rates and the total cumula-
tive flow rate for the pump and treat system calculated from estimated extraction
rates necessary to achieve remedy goals (e.g., hydraulic containment). This value
should be used to calculate the design mass removal rate and evaluate discharge
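The split between formation loss and well loss discussed above in the context of Figure 8.9 is commonly quantified with Jacob's step-drawdown relation, s_w = BQ + CQ², where BQ is the laminar formation loss and CQ² the turbulent well loss. The coefficients below are hypothetical, as would be fitted from a step-drawdown test:

```python
# Jacob step-drawdown relation: s_w = B*Q + C*Q^2, where B*Q is the laminar
# formation loss and C*Q^2 the turbulent well loss. B and C are hypothetical
# values of the kind fitted from a step-drawdown test.
B = 0.02     # formation-loss coefficient, ft per gpm
C = 2.0e-4   # well-loss coefficient, ft per gpm^2

def drawdown(q_gpm):
    """Total drawdown (ft) in the pumping well at rate q_gpm."""
    return B * q_gpm + C * q_gpm ** 2

def efficiency(q_gpm):
    """Fraction of total drawdown that is formation loss."""
    return B * q_gpm / drawdown(q_gpm)

for q in (50, 100, 200):
    print(f"Q = {q:>3} gpm: s_w = {drawdown(q):5.2f} ft, "
          f"efficiency = {efficiency(q):.0%}")
```

Because efficiency drops as the pumping rate rises, the water level in the extraction well increasingly overstates aquifer drawdown, which is why nearby piezometers, not the extraction well itself, should be used for contouring.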
Aquifer testing and groundwater modeling are the two primary methods of determin-
ing the above design parameters for a new pump and treat system and are typically used
in tandem. It is strongly recommended that a multiday (e.g., 72-hour) aquifer pumping
test be conducted to evaluate well yields and potential capture zones for the pump and
treat system. This is particularly important for unconfined aquifers with a delayed gravity
FIGURE 8.10
Photograph of a pumping test in support of a pump and treat design with a gray frac tank in the background for
water storage. The site is paved, and all wells are flush mounted, which is necessary for pump and treat systems
at operational industrial facilities.
FIGURE 8.11
Photograph of a long-term groundwater discharge test at a pilot-scale rapid infiltration basin (RIB). The pilot RIB
was excavated to a depth of 3 ft below ground surface and is approximately 100 ft long by 20 ft wide. The pilot test
flow rate of 75 gallons per minute only flooded a very small portion of the RIB, indicating that significant capacity
remains. The native coarse sand soils at the site are well suited for surface infiltration. Rip-rap was placed around
the RIB sidewalls and at the hose discharge point to prevent erosion. White PVC piezometers were installed within
and surrounding the RIB to monitor groundwater mounding during the test. (Courtesy of Woodard & Curran, Inc.)
FIGURE 8.12
Photograph of a 6-in. groundwater injection well fitted with a ball valve, flow meter, and upstream bag filters for a
long-term pilot test. Surface soils are fine sands that are not ideally suited for infiltration; however, the underlying
aquifer intercepted by the well’s screened interval contains highly conductive gravelly sand. During the pilot test,
sustained flow rates of up to 45 gallons per minute were injected into the well. (Courtesy of Woodard & Curran, Inc.)
FIGURE 8.13
Grain size distribution chart created to select a commercial filter pack and help size the well-screen slot size for a
pump and treat extraction well. A commercial #2 sand reasonably matches the shape of the site sample and has a D70
(70% retained or 30% finer) multiplier of approximately 4. A rule of thumb is that the D70 multiplier or ratio of the
filter pack D70 to the site sample D70 should be between 4 and 6. A #1 sand pack would be overly conservative (too
fine) and potentially reduce well efficiency. The D70 line is highlighted in green. Another rule of thumb is to select a
well-screen size that retains between 85% and 100% of the filter pack. In this case, a 40-slot (0.040-in. slot opening, line
highlighted in red) screen is selected, which will retain 95% of the filter pack. Note that while custom filter pack and
well-screen sizes can be ordered, it is often advantageous for the project schedule and budget to use commercially
available materials (i.e., #00, #0, #1, and #2 sands and slot sizes divisible by 10).
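The rules of thumb in the Figure 8.13 caption are easy to encode as a quick design check; the grain sizes below are hypothetical, not read from the figure:

```python
# Design check encoding the filter-pack rules of thumb from the text. Grain
# sizes are hypothetical (mm), not those of Figure 8.13.
site_d70 = 0.18            # formation sample D70 (30% finer), mm
pack_d70 = 0.75            # candidate commercial #2 pack D70, mm
slot_mm = 0.040 * 25.4     # 40-slot screen opening (0.040 in.) in mm
pack_retained_pct = 95.0   # % of the pack retained on the selected slot

multiplier = pack_d70 / site_d70
ok_pack = 4.0 <= multiplier <= 6.0
ok_slot = 85.0 <= pack_retained_pct <= 100.0
print(f"D70 multiplier: {multiplier:.1f} ({'OK' if ok_pack else 'outside 4-6'})")
print(f"slot opening: {slot_mm:.3f} mm; retains {pack_retained_pct:.0f}% "
      f"of pack ({'OK' if ok_slot else 'resize'})")
```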
FIGURE 8.14
Photograph illustrating extensive use of personal protective equipment (PPE) in performing a pumping test at
a contaminated site.
Numerical groundwater modeling is extensively used in the design of pump and treat
systems as computer programs with graphical user interfaces (GUIs), such as Groundwater
Vistas and Processing MODFLOW, are readily available. Modeling can be used to deter-
mine the optimal locations and number of extraction wells and to determine necessary
flow rates to achieve hydraulic containment of the groundwater plume under different
seasonal conditions. Most importantly, modeling helps evaluate failure mechanisms that
cannot be properly assessed when limited field data exist. For example, Figure 8.15 depicts
a pump and treat design failure that a numerical groundwater model would easily identify.
The groundwater discharge design can also be incorporated into the groundwater
model and is necessary when groundwater discharge will be used to enhance contami-
nant flushing (through discharge upgradient of the plume) or to augment containment
(through discharge downgradient of the plume aimed at establishing a hydraulic barrier).
The most common means of assessing the viability of a pump and treat design with mod-
eling is forward and reverse particle tracking, which helps delineate horizontal and verti-
cal capture zones. Examples of particle tracking used to support a pump and treat design
are provided in Figures 8.16 and 8.17.
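In practice, particle tracking is performed with MODPATH or the GUI programs named above; conceptually, however, it is just numerical integration of the seepage-velocity field. A deliberately simplified forward-tracking sketch for a single well in uniform flow (all parameter values assumed):

```python
import math

# Simplified forward particle tracking: Euler integration of a seepage-velocity
# field composed of uniform regional flow (in the -x direction) plus a single
# extraction well treated as a 2-D sink. All parameter values are assumed.
K, i, n_e, b = 10.0, 0.002, 0.25, 15.0   # m/day, gradient, porosity, m
Q = 545.0                                 # extraction rate, m3/day (~100 gpm)
well = (0.0, 0.0)                         # well location, m

def velocity(x, y):
    """Seepage velocity (m/day) at (x, y)."""
    vx, vy = -K * i / n_e, 0.0            # uniform regional component
    dx, dy = x - well[0], y - well[1]
    r2 = dx * dx + dy * dy
    sink = Q / (2 * math.pi * b * n_e)    # well sink strength, m2/day
    return vx - sink * dx / r2, vy - sink * dy / r2

def track(x, y, dt=0.5, steps=20000, capture_radius=5.0):
    """Advect one particle; return (captured?, final position)."""
    for _ in range(steps):
        vx, vy = velocity(x, y)
        x, y = x + vx * dt, y + vy * dt
        if math.hypot(x - well[0], y - well[1]) < capture_radius:
            return True, (x, y)
    return False, (x, y)

captured, end = track(300.0, 50.0)        # particle released upgradient
print("particle captured:", captured)
```

Reverse tracking, as in Figure 8.17, simply integrates the same field with the sign of the velocity reversed, releasing particles around the well.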
Numerical fate-and-transport models are often used to help estimate cleanup time
frames for pump and treat systems. However, this is subject to considerable uncertainty
as the contaminant source term is very difficult to represent accurately, particularly
where NAPL or bedrock contamination exists. Therefore, the best application of fate-and-
transport modeling in pump and treat design is creating comparative scenarios evaluating
FIGURE 8.15
Example graphic in which flow moves between and beyond the extraction wells (symbolized in red) even though the limited aquifer head measurements in the pumping area are all higher than the potentiometric surface at each pumping well, which is 360 ft. A numerical groundwater model can be used to evaluate the potential
for this occurrence. (Adapted from United States Environmental Protection Agency (U.S. EPA), Pump and Treat
Ground-Water Remediation: A Guide for Decision Makers and Practitioners. Office of Research and Development,
EPA/625/R-95/005, Washington, DC, 90 pp., 1996a; Cohen, R.M. et al., Methods for Monitoring Pump and Treat
Performance, EPA Contract No. 68-C8-0058, Office of Research and Development, EPA/600/R-94/123, Ada, OK,
114 pp., 1994.)
[Map: particle flow paths passing wells T5-1 through T5-4, T4-1 through T4-3, T1-7, T2-3, T2-5, TCS-1, TOW-26, TOW 32, and TDW, with the source area outlined; scale bar 300 feet.]
FIGURE 8.16
Particle tracking simulation under ambient (nonpumping) conditions. Particles, which run through the contaminant source area, ultimately discharge to the surface water body. Modeling for this figure and Figure
8.17 was conducted in Groundwater Vistas.
FIGURE 8.17
Reverse particle tracking (i.e., running the model backwards) for a circle of particles released around a single
extraction well pumping at 20 gallons per minute. The source area is located entirely within the capture zone as
delineated by the reverse particle flow paths, indicating that this flow rate and well location should be sufficient
for the pump and treat design.
the efficacy of different design configurations. In this manner, the relative impact of different
pump and treat designs (with or without source term remediation) on cleanup time frames
can be assessed. An example of this process is provided in Section 8.4 in the context of tech-
nical impracticability.
The treatment process design of a pump and treat system is just as important as the extraction and discharge design, which has been the subject of this section because it directly involves the hydrogeologist. Treatment plant design is typically the realm of environmental engineers with expertise in wastewater treatment.
Pump and treat optimization seeks to reduce the time frame of system operation and to increase system efficiency, lowering annual operating costs and the cumulative life-cycle cost. Strategies for pump and treat optimization are listed and briefly described below:
Simulation optimization: Most commercially available numerical groundwater model-
ing programs have optimization algorithms that can be used to design the most efficient
pump and treat system with the lowest total flow rate, least number of wells, and the best
overall performance. Simulation optimization involves automated running of hundreds
of scenarios to determine a pumping configuration that results in the optimal capture of
targeted groundwater or that minimizes the time required for groundwater to reach a
threshold concentration. While simulation optimization sounds like the perfect mathe-
matical solution, in practice, it can be very difficult to implement and can fail to adequately
represent site-specific limitations or secondary objectives of the pump and treat system.
For example, pumping restrictions may be necessary to prevent dewatering an adjacent
stream during low-flow conditions. While it is possible to include such constraints in an
optimization algorithm, this further increases complexity beyond the standard trial-and-
error approach. As with most computer modeling exercises, a combination of mathemati-
cal optimization and trial and error with professional judgment will generally yield the
best solution.
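At its core, simulation optimization is a search over pumping configurations against a constraint. A toy version using exhaustive search, with the single-well analytical capture width standing in for a numerical model (all values assumed):

```python
import itertools

# Toy simulation optimization: exhaustive search for the smallest total pumping
# rate whose analytical capture width spans an assumed plume width. Real
# applications couple an optimizer to a numerical flow model; the capture-width
# formula here is a simplified stand-in, and all values are assumed.
b, K, i = 15.0, 10.0, 0.002               # thickness (m), K (m/day), gradient
PLUME_WIDTH = 400.0                        # m, assumed containment target
CANDIDATE_RATES = [50, 100, 150, 200, 250, 300]   # m3/day per well

def capture_width(total_q):
    """Capture width at the well line for a combined withdrawal total_q."""
    return total_q / (2 * b * K * i)

best = None
for n_wells, q in itertools.product([1, 2, 3], CANDIDATE_RATES):
    total = n_wells * q
    if capture_width(total) >= PLUME_WIDTH and (best is None or total < best[2]):
        best = (n_wells, q, total)

print(f"{best[0]} well(s) at {best[1]} m3/day each (total {best[2]} m3/day)")
```

Adding constraints (for example, a cap on drawdown near a stream) is a matter of extra conditions in the feasibility test, which is where real problems grow beyond simple enumeration.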
Optimization of equipment sizing: One common problem in pump and treat design is the
overdesign of equipment, which consists of sizing equipment to be excessively large to con-
servatively ensure that peak flows and concentrations can be accommodated. Overdesign
can lead to higher than necessary capital costs, gross inefficiency in energy usage, and
even poor system performance for process elements when operating concentrations and/
or flow rates are significantly lower than design values. For example, using 5-HP pumps
in extraction wells when 1-HP pumps will suffice can result in excess energy costs of more
than $10,000 a year (U.S. EPA 2005). Optimization of treatment-system sizing should be
conducted during both the initial design and periodic reviews of system operation. One
key step that can prevent overdesign of treatment equipment is the use of concentration
data obtained from monitoring wells during pumping conditions at design flow rates (i.e.,
through preliminary aquifer tests), rather than under ambient conditions (U.S. EPA 2005).
Example concentration differences at individual monitoring wells measured under pump-
ing and ambient conditions and the effect of these differences on the design influent con-
centration are presented in Figure 8.18.
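The energy arithmetic behind the U.S. EPA (2005) example is straightforward; the electricity rate and wire-to-water efficiency below are assumptions, and a motor's actual draw varies with load:

```python
# Order-of-magnitude pump energy cost. The electricity rate and wire-to-water
# efficiency are assumptions; a motor's actual draw varies with load.
HP_TO_KW = 0.7457
HOURS_PER_YEAR = 24 * 365
RATE_USD_PER_KWH = 0.12          # assumed electricity rate

def annual_cost(horsepower, wire_to_water_eff=0.5):
    """Annual electricity cost for continuous operation at rated power."""
    input_kw = horsepower * HP_TO_KW / wire_to_water_eff
    return input_kw * HOURS_PER_YEAR * RATE_USD_PER_KWH

excess = annual_cost(5) - annual_cost(1)
print(f"5-HP vs. 1-HP continuous operation: ~${excess:,.0f}/yr extra")
```

Under these assumptions the excess is roughly $6,000/yr; with higher rates or lower efficiencies it readily exceeds the $10,000 figure cited by U.S. EPA (2005).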
Phased extraction well construction: Installing and initializing operation of extraction wells
in phases can significantly increase design efficiency, particularly where limited predesign
data are present. With a phased approach, the aquifer response data from the first phase of
[Figure 8.18 bar chart: TCE concentrations under ambient versus pumping conditions. MW-1 (20 gpm): 57,000 vs. 4,200; MW-2 (20 gpm): 16,300 vs. 1,980; MW-3 (30 gpm): 7,700 vs. 1,190; MW-4 (30 gpm): 4,200 vs. 1,130; Design Influent: 18,230 vs. 1,932.]
FIGURE 8.18
Graph illustrating how TCE concentrations at monitoring wells within a TCE plume adjacent to proposed
extraction wells are significantly lower when sampled under sustained, 100 gallons per minute pumping condi-
tions versus ambient conditions (for example, during a remedial investigation sampling event). If the ambient
monitoring results are used for the pump and treat design, the actual influent concentration at system startup
(and equivalent TCE mass loading rate) will be nearly an order of magnitude lower than the design value. This
will result in a significantly overdesigned system with unnecessarily high operational costs. Such concentration
reductions are commonly seen during sustained pumping because of changes in groundwater flow patterns,
plume dilution with clean groundwater, and potential changes in redox conditions. (Adapted from United
States Environmental Protection Agency (U.S. EPA), Cost-Effective Design of Pump and Treat Systems, Office of
Solid Waste and Emergency Response, EPA 542-R-05-008, 2005. Available at http://www.clu-in.org/download/
remed/hyopt/factsheets/cost-effective_design.pdf, accessed July 15, 2011.)
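The design influent value in Figure 8.18 is the flow-weighted average of the individual monitoring well concentrations. The sketch below reproduces that calculation using the flow rates and concentration labels from the figure (units assumed to be micrograms per liter):

```python
# Flow-weighted design influent concentration check for Figure 8.18.
# Concentrations are read from the figure's bar labels (ug/L assumed);
# flows are the per-well rates given on the x-axis.
wells = {
    "MW-1": {"gpm": 20, "ambient": 57000, "pumping": 4200},
    "MW-2": {"gpm": 20, "ambient": 16300, "pumping": 1980},
    "MW-3": {"gpm": 30, "ambient": 7700,  "pumping": 1190},
    "MW-4": {"gpm": 30, "ambient": 4200,  "pumping": 1130},
}

def design_influent(wells, condition):
    """Flow-weighted average concentration across all extraction wells."""
    total_flow = sum(w["gpm"] for w in wells.values())
    return sum(w["gpm"] * w[condition] for w in wells.values()) / total_flow

ambient = design_influent(wells, "ambient")   # 18,230, matching the figure
pumping = design_influent(wells, "pumping")   # 1,932, matching the figure
print(f"Ambient basis: {ambient:,.0f}; pumping basis: {pumping:,.0f}")
print(f"Overdesign factor if ambient data are used: {ambient / pumping:.1f}x")
```

The ratio of the ambient-based to pumping-based design influent is about 9.4, the "nearly an order of magnitude" difference noted in the caption.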
well installation and operation can be used to better design well locations and target flow
rates of subsequent phases (U.S. EPA 1996a).
Adaptive and pulsed pumping: Adaptive pumping involves the design of the well field
such that operating conditions can be varied to minimize the buildup of stagnation zones.
Under adaptive pumping, wells are periodically shut down and operated at varying flow
rates. Pulsed pumping is similar, but it involves temporarily shutting down wells with
the specific objective of allowing contaminant concentrations to increase through dissolu-
tion, diffusion, and desorption before resuming pumping (U.S. EPA 1996a). Use of pulsed
pumping improves the ratio of contaminant mass removed to the volume of groundwater
pumped and treated, thereby improving system efficiency and lowering operating costs
(but not decreasing cleanup time frames).
Long-term operational changes: As illustrated in Figure 8.7, significant dissolved-phase con-
centration reductions are generally realized within a few years of pump and treat opera-
tion. This is especially true when a pump and treat system is combined with a short-term
in situ source area remediation. After this period of significant decline, tailing effects lead
to a prolonged pump and treat with concentrations generally remaining above cleanup
goals and changing very slowly over a long duration. However, the postsource remedia-
tion/postsignificant decline period has a substantially lower influent concentration than
the design condition, and it is likely that the spatial distribution of contaminants is con-
siderably different as well. As a result, after several years of operation, it may be possible
to eliminate certain process treatment elements for contaminants that no longer exceed
regulatory standards, or it may be possible to shut down peripheral extraction wells and
focus more on remaining areas of relatively high concentration.
Alternatively, it may be more beneficial to replace original extraction wells with new
ones in locations better suited to the current nature and extent of contamination. If the
flow rate can be lowered significantly by removing wells, it may also become possible to
switch the mechanism of treated effluent disposal. For example, groundwater discharge
may become feasible at a lower flow rate, which would enable the elimination of certain
treatment elements as regulatory concentration standards for groundwater discharge are
generally significantly higher than those for surface water discharge, primarily as a result
of the absence of ecological receptors.
Optimization may also involve making the pump and treat monitoring program more
efficient by removing redundant wells and eliminating unnecessary analyses. As previ-
ously mentioned, the public-domain programs Monitoring and Remediation Optimization
System (MAROS) and Visual Sample Plan can be used to evaluate well redundancy. The
formalized framework for conducting a postconstruction pump and treat optimization
analysis is outlined in U.S. EPA (2007b), which also lists other strategies for consideration.
For Superfund sites, technology assessment and selection generally occur during the
Feasibility Study (FS) and/or predesign phases when treatability studies can be conducted
and site-specific cleanup levels developed, considering the above factors. The treatment
technology, cleanup criteria, and performance standards are finalized in the ROD. While
exact cleanup standards can vary significantly based on a site-specific risk characterization
and the beneficial use of groundwater, a general expectation of the Superfund program is
In addition, the thermal conductivity of soil is generally much less variable than conventional
remediation parameters such as soil permeability. Therefore, in situ thermal treatment can
remediate low-permeability materials, such as silt and clay, in which advective flow cannot
be established. Another important advantage of thermal heating is that the combined boiling
point of a volatile organic compound (VOC) immersed in water is lower than that of the VOC
in air as governed by Dalton’s law of partial pressures. For example, the boiling point of tetra-
chloroethene (PCE) is 121°C in air but only 87°C in water. This means that PCE will boil and
volatilize at a temperature lower than that of water (Beyke and Fleming 2005).
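The co-boiling temperature follows from Dalton's law: a water/NAPL interface boils when the sum of the two vapor pressures reaches ambient pressure. The sketch below finds that temperature by bisection using Antoine equation coefficients of the form log10 P[mmHg] = A - B/(T[degC] + C); the coefficients are typical literature values assumed here for illustration and should be checked against a handbook for design work.

```python
# Dalton's-law co-boiling sketch: a water/PCE interface boils when
# P_water(T) + P_PCE(T) reaches ambient pressure (760 mmHg here).
# Antoine coefficients (log10 P[mmHg] = A - B/(T[degC] + C)) are typical
# literature values assumed for illustration.
ANTOINE = {
    "water": (8.07131, 1730.63, 233.426),
    "pce":   (6.97683, 1386.92, 217.53),
}

def vapor_pressure_mmhg(species, t_c):
    """Antoine vapor pressure in mmHg at temperature t_c (degC)."""
    a, b, c = ANTOINE[species]
    return 10.0 ** (a - b / (t_c + c))

def co_boiling_point_c(p_ambient_mmhg=760.0):
    """Bisect for the temperature where the summed vapor pressures reach ambient."""
    lo, hi = 50.0, 100.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        total = vapor_pressure_mmhg("water", mid) + vapor_pressure_mmhg("pce", mid)
        if total < p_ambient_mmhg:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(f"PCE/water co-boiling point: {co_boiling_point_c():.1f} degC")
```

With these coefficients the search converges near 88°C, in line with the 87°C value cited above.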
The three primary commercially available forms of in situ thermal treatment are electrical resistance heating (ERH), steam-enhanced extraction (SEE), and conductive heating.
water flows (Kingston et al. 2009). To avoid excessive soil drying and the resultant loss of
electrical conduction, ERH systems typically incorporate wetting systems around elec-
trodes. High groundwater seepage velocities in excess of several feet per day can cause
significant heat losses as warm water is continuously flushed out of the treatment area and
replaced with cool water. This limitation is typical of karst aquifers with preferential flow-
paths for example. A management system consisting of upgradient pumping wells and/or
downgradient injection wells can help reduce excessive groundwater flux and associated
heat losses (Kingston et al. 2009).
Steam-Enhanced Extraction
SEE for hazardous-waste remediation involves the use of steam injection wells to create
a pressure gradient for recovery of NAPLs and to heat the subsurface to volatilize and
extract contaminants in the vapor phase (UFC 2006). Similar to ERH, SEE is capable of
heating the subsurface to a maximum temperature equal to the boiling point of water or
the steam distillation temperature at approximately 100°C. After NAPL has been displaced
and recovered in the liquid phase, pressure cycling may be used to enhance mass removal
in the vapor phase. Pressure cycling promotes contaminant volatilization by creating ther-
modynamically unstable conditions within soil pores (UFC 2006).
The primary design parameter for SEE is the permeability of the material to be treated as
this dictates injection flow rates and pressures. Unlike ERH, SEE can have difficulty treating
fine-grained soils because the permeability may be insufficient to conduct steam. However,
it may be possible to remediate a low-permeability zone if it is of limited thickness such that
steam can be injected above and below the zone, and heat (not steam) will conduct through it
(USACE 2009). It can also be difficult to treat shallow soils with SEE as steam can escape to the
surface. One rule of thumb is that the injection pressure should not exceed 0.5 psig/ft of over-
burden located above the injection screen. Injection at higher pressures may lead to formation
fracturing and surface escape of steam. In general, sites with thick clay zones or competent,
minimally fractured bedrock are not amenable to SEE treatment (Kingston et al. 2009).
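The 0.5-psig/ft rule of thumb converts directly into a screening check on proposed injection pressures; the depths and pressures below are illustrative only.

```python
# Screening check of steam injection pressure against the 0.5-psig/ft
# rule of thumb cited above (depths and pressures here are illustrative).
MAX_PSIG_PER_FT = 0.5

def max_injection_psig(overburden_ft):
    """Upper bound on injection pressure (psig) for a given overburden thickness."""
    return MAX_PSIG_PER_FT * overburden_ft

def pressure_ok(injection_psig, overburden_ft):
    """True if a proposed injection pressure respects the rule of thumb."""
    return injection_psig <= max_injection_psig(overburden_ft)

print(max_injection_psig(30))   # maximum pressure for 30 ft of overburden
print(pressure_ok(20, 30))      # a 20-psig proposal fails the check
```

A proposed 20-psig injection beneath only 30 ft of overburden would fail the check, flagging a risk of formation fracturing and surface steam escape.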
Conductive Heating
As the name implies, this technology heats the subsurface primarily through conductive
heat transfer. Heat is provided at point-source vertical or horizontal heater units installed
in the subsurface, termed wells and blankets, respectively. Heaters are typically operated
at temperatures above 500°C, and heat spreads from these units through the subsurface by
thermal conduction and fluid convection (i.e., hot water flow; Kingston et al. 2009).
Unlike both ERH and SEE, thermal conductive heating is capable of heating the subsurface
to temperatures significantly greater than the boiling point of water. Therefore, in addition to
chlorinated solvents and light petroleum hydrocarbons, conductive heating can be used to
remediate high boiling–point contaminants with low volatility such as coal-tar products
and PCBs (USACE 2009). Similar to ERH, sites with high groundwater velocities are chal-
lenging to remediate with conductive heating as heat losses may be excessive and prevent
achievement of the target temperature. Installation of a groundwater management system
for flux control can help mitigate this limitation (Kingston et al. 2009).
Performance Expectations
The performance of thermal remediation projects completed to date is comprehensively
reviewed in a state-of-the-practice overview presented in the work of Kingston et al. (2009
[overview report], 2010 [detailed final report]). With respect to remedial performance,
“Despite the relatively large number of applications to date, there are limited data
on post-treatment monitoring. Of the 182 sites, there was sufficient documentation to
assess post-treatment groundwater quality improvements and source zone mass dis-
charge reductions for only 14 applications.”
This lack of available performance data makes it very difficult to define accurate perfor-
mance expectations. This problem is not necessarily limited to thermal remediation technolo-
gies as a majority of site-remediation projects are either confidential or have involved parties
that are very sensitive about the dissemination of site data and treatment results. Therefore, it
becomes very difficult to practice evidence-based remediation as the successes and failures of
previous remediation projects are not well documented, and too much reliance is placed on
the judgment of regulators and technology vendors. There is somewhat of a conflict of interest
here as both regulators and technology vendors have incentives to portray every remedia-
tion as a success regardless of its ability to efficiently meet the original objectives. While it is
acknowledged that entities responsible for remediating sites do not want it widely known that
they are involved (most likely because of fear of additional litigation), the practice of minimiz-
ing data distribution to the general public is very damaging to their own interests, and addi-
tional efforts should be expended to publish and consolidate data from remediation projects.
Based on the 14 applications with sufficient documentation, Kingston et al. (2009) con-
clude that in situ thermal treatment can achieve one to two orders of magnitude reduc-
tions in dissolved groundwater concentration and mass discharge when the source zone is
appropriately defined. Better performance can be achieved by overdesigning the system to
extend beyond the delineated source zone and by optimizing systems and allowing them
to run for longer durations (Kingston et al. 2009). However, extending the treatment zone
size and heating duration result in significant cost increases for a technology that already
typically costs several or more million dollars to implement. For example, SEE heating at the
Visalia Pole Yard site in Visalia, CA, lasted for approximately three years and resulted in
a thermal remediation cost of $21.5 million (USACE 2009). Additional discussion regarding
this site is provided in Section 8.5. In summary, while thermal remediation is able to effec-
tively treat DNAPLs and overcome many of the difficulties presented by soil heterogeneity
and contaminant partitioning, it is not a perfect solution capable of uniformly achieving
MCLs and rapid site closure, and when attempting to do so, costs can escalate significantly.
Given the lack of reliable performance data in the literature, one approach to estimat-
ing remedial performance and cost is thermal bench testing, which is a relatively simple
FIGURE 8.19
Typical energy pie illustrating consumptive uses of energy during ERH remediation. (Courtesy of TRS Group, Inc.)
and inexpensive process. Figure 8.19 is a typical energy pie, illustrating the percentage
contribution of different components to the total energy consumed during an ERH
application. While heat losses and the energy required to heat the subsurface typically
fall within consistent ranges, the energy required to boil VOCs is a function of site-specific
geology and contaminants and the site-specific remedial goals. A thermal bench test can
be used to quantify the approximate boiling energy required to reach remedial goals,
thereby enabling estimation of remedial costs and performance expectations. One bench-
testing approach employed by TRS Group, Inc., the leading ERH technology vendor, is to
measure concentrations of target VOCs in site soil samples after boiling off incremental
quantities of water from the samples. The boil-off quantities can easily be converted to
boiling-energy densities (in kilowatt hours per cubic yard), using the latent heat of evapo-
ration of water, and the result is a plot of target VOC concentrations versus boiling-energy
density as shown in Figure 8.20. This graph can then be used to estimate boiling energy
[Figure 8.20 plot: soil VOC concentration (mg/kg, 0 to 40) versus boiling energy density (kWh/yd3, 0 to 140).]
FIGURE 8.20
Boiling energy plot for a VOC determined from thermal bench test results. The remedial goal is a soil concentra-
tion of 10 mg/kg, which would require a boiling energy density of approximately 100 kWh/yd3 based on linear
interpolation.
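The bench-test workflow described above can be sketched numerically: each boil-off increment is converted to a boiling energy density using the latent heat of vaporization of water (about 2257 kJ/kg, roughly 0.63 kWh per kilogram boiled), and the resulting concentration-versus-energy curve is interpolated to the remedial goal. The bench results below are hypothetical, not the data behind Figure 8.20.

```python
# Post-processing sketch for a thermal bench test (hypothetical data, not
# the results behind Figure 8.20). Boil-off water mass per cubic yard is
# converted to boiling energy density via the latent heat of vaporization,
# then the concentration-vs-energy curve is interpolated to the goal.
LATENT_HEAT_KWH_PER_KG = 2257.0 / 3600.0   # ~0.63 kWh per kg of water boiled

def energy_density_kwh_yd3(kg_water_per_yd3):
    """Boiling energy density (kWh/yd3) for a given water boil-off."""
    return kg_water_per_yd3 * LATENT_HEAT_KWH_PER_KG

# Hypothetical bench results: (kg of water boiled per yd3, residual VOC mg/kg)
bench = [(0, 38.0), (40, 22.0), (80, 12.5), (120, 8.0), (200, 3.5)]
curve = [(energy_density_kwh_yd3(kg), conc) for kg, conc in bench]

def energy_to_reach(goal_mg_kg):
    """Linearly interpolate the energy density at which the goal is met."""
    for (e1, c1), (e2, c2) in zip(curve, curve[1:]):
        if c2 <= goal_mg_kg <= c1:
            return e1 + (c1 - goal_mg_kg) * (e2 - e1) / (c1 - c2)
    raise ValueError("goal outside the tested range")

print(f"Estimated requirement for 10 mg/kg: {energy_to_reach(10.0):.0f} kWh/yd3")
```

With real bench data, the interpolated value takes the place of the approximately 100 kWh/yd3 read from Figure 8.20.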
Site geology generally consists of heterogeneous glacial outwash, till, and alluvial deposits with
groundwater typically encountered between 9 and 12 ft below the ground surface. NAPL
was observed at depths ranging from near the ground surface to as much as 52 ft below
ground surface. Hydraulic conductivity varies with soil type and ranges from 10 to 1000
ft per day. Gravel zones with high conductivity and seepage velocities presented a signifi-
cant challenge for the remediation because of heat losses, described further later in this
section (USACE 2007).
The performance standards for the Fort Lewis remediation were as follows (Beyke and
Fleming 2005):
• Minimize the time to implement the remedy while maximizing mass removal.
• Establish and verify that the subsurface reaches target temperatures of 90°C in the
vadose zone and 100°C in the saturated zone.
• Maintain these target subsurface temperatures for a minimum of 60 days.
• Establish, maintain, and verify control of contaminant migration in groundwater,
soil vapors, and air emissions.
• Provide a system for near real-time data delivery, performance and compliance
monitoring, and project communications.
In total, 304 electrodes with colocated multiphase extraction wells, 62 monitoring wells,
58 temperature monitoring points, and 16 hydraulic control wells were installed for the
remediation. The maximum electrode depths below ground surface were 39, 52, and 37 ft
in areas 1, 2, and 3, respectively (USACE 2007). Extracted vapors were treated with a ther-
mal oxidizer unit. The ERH layout for area 2 is depicted in Figure 8.21. A conceptual depic-
tion of the ERH process is provided in Figure 8.22, and a photograph of the installed ERH
system at Fort Lewis is provided in Figure 8.23.
The heating durations at areas 1, 2, and 3 were 231, 172, and 107 days, respectively
(USACE 2007). The total energy consumption for the remediation was approximately
23,000 MW-hours, which is equivalent to the annual energy requirements of approximately
1800 single-family homes (Bussey 2007; U.S. EPA 2011b). In terms of performance, the ERH
remediation removed approximately 4500 kg of TCE from the subsurface before reaching
a point of diminishing returns (Bussey 2007). A chart of TCE mass removal from each of
the remedial areas is presented in Figure 8.24. In area 1, the maximum post-treatment con-
centration was less than 10 µg/L while the average post-treatment TCE concentrations in
areas 2 and 3 were less than 150 µg/L (Bussey 2007). As shown by the TCE concentration
FIGURE 8.21
ERH layout for Fort Lewis area 2, depicting locations of electrodes, hydraulic control wells, temperature moni-
toring points, and the treatment system. (From United States Army Corps of Engineers (USACE), Cost and
Performance Report: In Situ Thermal Remediation (Electrical Resistance Heating) East Gate Disposal Yard, Ft.
Lewis, WA, 2007.)
plot in Figure 8.25, minor rebound occurred at area 3 following the treatment; however,
this rebound was only temporary as concentrations continued to decline significantly in
the subsequent months, as shown by Figure 8.26.
As indicated by the average treatment zone temperature plot in Figure 8.25, the boil-
ing point of water was not achieved throughout the treatment zone for area 3, nor was
it in the other two areas. This was because of unexpectedly high seepage velocities (up
to 10 ft per day) through gravel seams in the treatment zone (Beyke and Fleming 2005).
After it became clear that high groundwater flux made attainment of the temperature
performance standard infeasible, a modified temperature goal was prescribed based on
the boiling temperature of TCE. The following excerpt is from the August 2007 Cost and
Performance Report for the Fort Lewis site (USACE 2007):
“The initial contract expectation for consistent heat-up throughout the defined treat-
ment area proved unrealistic. In all three areas hydrological and thermal equilibrium
were reached before the contract temperature specifications were achieved for all
depths. The intervals that proved the most difficult to heat-up were locations with the
highest groundwater flow velocities and lowest potential residual contamination.”
The modified temperature specification ensured that the most important remedial objec-
tive was achieved (removal of TCE DNAPL) while also allowing the remediation to proceed
FIGURE 8.22
Conceptual cross section of an ERH application showing colocated multiphase extraction wells and electrodes,
and above-ground infrastructure. (Courtesy of TRS Group, Inc.)
FIGURE 8.23
Photograph of ERH system at Fort Lewis. Note that the surface was paved to minimize cooling effects of aerial
recharge. (Courtesy of TRS Group, Inc.)
[Figure 8.24 plot: cumulative TCE mass removal (kg) versus days of treatment. Area 1 total: 2,981 kg; Area 2 total: 1,334 kg; Area 3 total: 1,132 kg.]
FIGURE 8.24
Plot of cumulative TCE mass removal for each of the three ERH remedial areas at Fort Lewis. Note that for area
3 in particular, but also area 2 to a lesser extent, thermal treatment continued for many days despite diminished
TCE mass removal. (From United States Army Corps of Engineers (USACE), Cost and Performance Report: In
Situ Thermal Remediation (Electrical Resistance Heating) East Gate Disposal Yard, Ft. Lewis, WA, 2007.)
[Figure 8.25 plot: TCE concentrations at wells C07 and E03 (logarithmic scale) and average treatment zone temperature versus date.]
FIGURE 8.25
Plot of groundwater TCE concentrations at two source-area wells before, during, and after the ERH remedia-
tion at area 3, using data from Kingston et al. (2010). The interval of thermal treatment is shaded in gray, and
the dashed black line represents the average temperature of the treatment zone, which peaked in the high 80s.
FIGURE 8.26
Bar graph of average TCE groundwater concentrations in the three treatment areas before, immediately after,
and 20 months after completion of the remediation. The continued decline at area 3 resulted in a nearly four-log
removal of TCE. (Courtesy of TRS Group, Inc.)
in a financially responsible manner that prevented waste of money and energy resources
(USACE 2007). Contours of the remedial temperatures achieved for a target depth at area
2 are presented in Figure 8.27, and a plot of treatment-zone temperature during a typical
ERH application is provided in Figure 8.28.
Note that if the target contaminant were a more recalcitrant compound with a higher
boiling point, such as naphthalene, the failure to reach the boiling point of water might have
resulted in poor remedial performance as steam-stripping, rather than pure compound
boiling, would be the primary contaminant-removal mechanism.
While the Fort Lewis remediation project was successful in removing significant
DNAPL mass, the total remedial cost was approximately $15 million, and TCE concen-
trations remain above the MCL, requiring continued operation of the site’s preexisting
pump and treat system. However, modeling indicates that the ERH application will likely
reduce the operating duration of the pump and treat system from centuries to decades
(Bussey 2007). While this is an important accomplishment, a 2009 (post-thermal remedia-
tion) cost and performance report based on actual site data, prepared by the Environmental
Security Technology Certification Program (ESTCP) indicates that over this time frame
(i.e., decades), in situ bioremediation may have achieved similar performance results at a
lower life-cycle cost (ESTCP 2009). It is, therefore, always important to evaluate technolo-
gies other than in situ thermal remediation, taking into consideration the likelihood that
MCLs will not be met in a few years even with aggressive thermal treatment.
FIGURE 8.27
Contours of treatment zone temperature for a target depth in area 2 created using data measurements from TRS
(2009). World Imagery Source: Esri®, i-cubed, USDA, USGS, AEX, GeoEye, Getmapping, Aerogrid, IGN, IGP, and
the GIS User Community.
[Figure 8.28 plot: coolest point, average temperature, hottest point, and temperature goal (°C) versus elevation (ft AMSL).]
FIGURE 8.28
Example graph of temperatures achieved during an ERH remediation. (Courtesy of TRS Group, Inc.)
Evaluation of the above design factors is largely dependent on the selected oxidant.
The most common oxidants currently used to remediate groundwater are permanga-
nate (potassium or sodium), catalyzed hydrogen peroxide (CHP; e.g., modified Fenton’s
Reagent), and activated sodium persulfate. Each of these technologies is briefly described
below, highlighting important benefits and limitations.
Permanganate
Chemical oxidation of pollutants in drinking water with the permanganate ion (MnO4–)
has been used by engineers for decades. Permanganate, primarily potassium permanga-
nate (KMnO4) but also sodium permanganate (NaMnO4), is now the most widely used
ISCO reagent for groundwater remediation projects (Tsitonaki and Bjerg 2008). This is
most likely because of its ease of handling and use and its relatively low cost. Additional
benefits of permanganate are as follows:
• Permanganate is a highly stable oxidant that can persist in the subsurface for
months after an application. This facilitates permanganate distribution in ground-
water and provides time for diffusion into low permeability layers.
• Permanganate is effective over a wide range of pH.
• No activation method is required as oxidation occurs via direct electron transfer.
• The ability of permanganate to degrade chlorinated ethenes (PCE, TCE, and asso-
ciated degradation products) and other unsaturated organic compounds has been
extensively documented at the field scale (ITRC 2005).
In addition, a potential problem with permanganate and all other ISCO reagents is that
treatment may mobilize metals because of redox reactions and/or pH changes. However,
this metal mobilization period is often short-lived and, therefore, rarely results in signifi-
cant migration beyond the treatment zone (ITRC 2005).
• The range of oxygen species produced by the CHP process includes both oxidants
and reductants, increasing the likelihood of complete contaminant mineralization.
• CHP is the least stable of the conventional ISCO reagents and rapidly decomposes
in the subsurface. Half-lives for CHP during ISCO applications can vary signifi-
cantly but rarely exceed 48 hours (Watts 2011). This significantly limits the achiev-
able injection well radius of influence.
• Without proper stabilization, CHP ISCO can result in rapid and dangerous heat
and gas evolution, which can also cause unwanted contaminant migration.
• Iron mineral catalysts require pH adjustment (lowering) with acids.
The best current CHP stabilization practice to avoid unwanted gas and heat evolution is to
combine H2O2 with iron chelates to allow reactions to occur at a neutral pH. Proper stabi-
lization enables application at high concentrations (>1%), facilitating generation of super-
oxide, which has been proven critical in expanding the capability of CHP to remediate
compounds traditionally recalcitrant to oxidation and in enhancing destruction of NAPLs
and sorbed contaminant mass (Watts et al. 2006).
Activated sodium persulfate treats the same range of contaminants as CHP but is much
more stable with typical subsurface half-lives between 10 and 20 days. Activated sodium
persulfate is also less reactive with natural organic matter than permanganate, and its
chemical reactions have been shown to increase subsurface permeability in some instances.
On its own, the persulfate anion is a more powerful oxidant than both hydrogen peroxide
and permanganate, although reaction kinetics can be slow for contaminants of interest
such as TCE and PCE (Watts 2011). To further increase the oxidative strength of the reagent
and enhance treatment kinetics, persulfate can be activated to produce the free sulfate
radical (SO4•–), which is second only to the hydroxyl free radical in oxidative strength (ITRC
2005).
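These stability differences translate directly into achievable reagent travel distances. Under first-order decay, the time for an oxidant to fall to a fraction f of its injected concentration is t = t_half * log2(1/f), and the advective travel distance is roughly the seepage velocity times t. The 1-ft/day velocity and 90% depletion cutoff below are illustrative assumptions; the half-lives reflect the ranges cited above (under 48 hours for CHP, 10 to 20 days for persulfate).

```python
import math

# How oxidant persistence bounds travel distance from an injection point.
# First-order decay: time to fall to fraction f is t = t_half * log2(1/f);
# advective travel distance is seepage velocity * t. The 1-ft/day velocity
# and 90% depletion cutoff are illustrative assumptions; half-lives reflect
# the ranges cited in the text.
def travel_distance_ft(half_life_days, velocity_ft_day=1.0, fraction_left=0.1):
    """Distance traveled before the oxidant decays to `fraction_left`."""
    decay_time_days = half_life_days * math.log2(1.0 / fraction_left)
    return velocity_ft_day * decay_time_days

print(f"CHP (2-day half-life):         {travel_distance_ft(2):.0f} ft")
print(f"Persulfate (15-day half-life): {travel_distance_ft(15):.0f} ft")
```

Under these assumptions CHP decays to 10% of its injected concentration within about 7 ft of travel while persulfate persists for roughly 50 ft, which is one reason CHP injection points must be much more closely spaced.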
There are different available methods of persulfate activation, including iron activa-
tion, hydrogen peroxide activation, and alkaline (i.e., base) activation. The most common
method in current use is alkaline activation, the success of which is dependent on pH and
is typically most effective when the pH is maintained above 10. Similar to CHP, alkaline-
activated persulfate has the advantage of producing multiple reactive species, including
reductants (Watts 2011).
Although a less sensitive parameter for persulfate than permanganate, soil oxidant
demand is still a design consideration for persulfate and must be evaluated to ensure that
non-target demand does not result in application failure.
• Acid and base buffering capacity: most relevant for CHP and alkaline-activated persulfate
ISCO pilot tests are best thought of as full-scale applications performed over a small por-
tion of the site, and they should be as representative of the full-scale approach as possible
(ITRC 2005). While a specific reagent may be conclusively selected based on the bench test
results, it is not uncommon for pilot tests to evaluate two different technologies over dif-
ferent portions of the site. For example, CHP may be pilot tested in a NAPL area, while per-
manganate may be pilot-tested in plume areas of high concentration. Typical parameters
evaluated during ISCO pilot testing are listed below, adapted from ITRC (2005):
• Oxidant concentrations: verifying the results of the bench test and determining field-
scale modifications
• Injection rates, pressures, and volumes: determines the duration of the full-scale
implementation in conjunction with the radius of influence and the oxidant
concentration(s)
• Water-quality changes: including temperature, pH, specific conductance, and oxida-
tion–reduction potential
• Injection well radius of influence: note that this is not equivalent to the hydraulic
radius of influence but is better thought of as the radius of ISCO reagent travel
around a well or, alternatively, the radius of successful treatment around a well
• Reagent stability: analysis for the ISCO reagents themselves or indicator parameters
can be used to evaluate achieved half-lives in the subsurface
• Gas and heat evolution and metal dissolution: evaluating unwanted effects of the ISCO
application
• Treatment efficiency and rebound effects: evaluating the suitability of the design in
achieving short- and long-term performance objectives, used to determine the
total amount of oxidant required
Unfortunately, in many instances, bench and pilot testing are not conducted, and a
reagent is selected solely based on contaminant compatibility charts published in litera-
ture. More alarming is when consultants and/or contractors ignore factors such as reaction
kinetics, activation or stabilization requirements, total contaminant mass, and non-target
demand. Proceeding to a full-scale ISCO remedy without bench and pilot testing and blindly
using default quantities of CHP, for example, has a very low likelihood of success. One com-
mon justification for this practice is the lack of financial resources; however, default ISCO
remediation will ultimately increase costs relative to a properly conceptualized approach when regu-
lators demand additional rounds of reagent application and associated monitoring.
Figure 8.29 presents a three-dimensional visualization created in ArcScene of an unsuc-
cessful ISCO concept that resulted in considerable financial loss to the client. This example
illustrates the negative consequences of neglecting to perform a bench or pilot test to inform
the remedial design. Groundwater contamination was discovered when an old underground
storage tank was removed from the site, and significant historic leakage was observed. The
green polygon in Figure 8.29 represents the area excavated around the former underground
storage tank in an attempt to remove all LNAPL contamination. Unfortunately, monitoring
wells installed downgradient of the tank found that groundwater was impacted by petro-
leum hydrocarbons, and ISCO was selected as the optimal remedy to eliminate the resid-
ual contamination and restore site groundwater to drinking water-quality standards. The
brown layer represents the bedrock surface, which is above the water table (blue layer) for
most of the year. The bedrock surface nearly touches the former tank graveyard, indicating
the potential for leaked oil to have entered the fractured bedrock aquifer directly.
The client hired an ISCO contractor to remediate the site. The contractor did not develop a
thorough CSM, assumed that no petroleum mass remained in the tank excavation area, and
installed an injection trench, represented by the black polygon, in the overburden downgradi-
ent of the former tank area. CHP was selected as the ISCO reagent, yet no attempt was made
to quantify contaminant mass or to understand and manage CHP stability in the presence
of site soils. As previously stated, no bench testing or pilot testing was performed to better
understand treatment mechanisms in the complex fractured bedrock environment. Instead,
more than 1000 gallons of CHP were injected into the subsurface over three separate injec-
tion periods spanning multiple years. Despite these repeated efforts, hydrocarbon concen-
trations remained above cleanup standards in the groundwater, and the site remained in the
remedial action stage with significant annual monitoring and reporting costs.
FIGURE 8.29
Three-dimensional visualization of an ISCO design created in ArcScene that did not meet remedial goals
despite multiple rounds of injections over the course of several years. Blue vertical lines are monitoring wells,
and brown vertical lines are soil boring locations.
It is highly likely that the majority of unstabilized CHP was rapidly consumed in the
vadose zone before even reaching the bedrock surface, resulting in minimal contact with
the residual petroleum source and minimal contaminant destruction. Furthermore, even if
the injection strategy were better conceived and resulted in direct contact with the plume,
residual LNAPL in the bedrock beneath the former tank would have recontaminated the
treated groundwater and resulted in significant rebound. This example represents a com-
mon occurrence in industry when contractors oversimplify complex CSMs and take a
default approach to remediation. An example of a conceptual approach to ISCO design
and implementation, with much better results, is presented in the following section.
The following case study describes remediation of a contaminated site in southern New
England, completed by Woodard & Curran, Inc. The contaminant source at the site consists
of a former drum disposal area located on top of a hill. A chlorinated solvent plume, com-
posed primarily of PCE, extends more than 2000 ft downgradient of the drum disposal
area and discharges into a large glacial pond. The plume is located in both glacial
till and fractured bedrock materials. The CSM is presented in Figure 8.30, and a geologic
cross section through the source area is presented in Figure 8.31. The source area lies in a
significant groundwater recharge area with water table fluctuations of up to 10 ft, moving
above and below the bedrock surface into and out of the glacial till. The average pretreat-
ment source-area groundwater concentration of PCE was approximately 5000 µg/L in the
glacial till, and PCE was historically detected above 1000 µg/L in the fractured bedrock.
The Remedial Investigation/Feasibility Study (RI/FS) completed for the site developed
a defensible, holistic approach to remediation that did not require a long-term pump and
FIGURE 8.30
Visual representation of the CSM illustrating contaminant release mechanisms, regional hydrogeology, and
locations of potential human and ecological receptors. (Courtesy of Woodard & Curran, Inc.)
FIGURE 8.31
Geologic cross section through the source area. The presence of residual DNAPL was suspected in the glacial
till and fractured gneiss unit. (Courtesy of Woodard & Curran, Inc.)
treat component. The conceptual design for the site is presented in Figure 8.32 and illus-
trates how the following elements are integrated into the cleanup:
FIGURE 8.32
Graphical depiction of the conceptual design of the soil and groundwater remedy illustrating fate-and-transport
mechanisms and key remedial components. A conceptual design should be prepared prior to completing the
detailed design for all remedial applications. (Courtesy of Woodard & Curran, Inc.)
The Record of Decision (ROD) selected ISCO with permanganate for source-area remediation, prescribing the
following remedial methods and objectives:
• Source-area soil: ISCO applied through soil mixing to reduce PCE and TCE concen-
trations to below prescribed standards and to facilitate seepage of reagents into
the fractured bedrock
• Source-area groundwater: ISCO applied through injection wells to remove 90% of
the residual dissolved contaminant mass and to reduce concentrations of PCE and
TCE to below their MCLs
The remedial design process was augmented by completion of ISCO bench and pilot test-
ing to evaluate parameters such as soil oxidant demand, injection well radius of influence,
and achievable injection flow rates. The first phase of the source area remediation was
mechanical mixing of potassium permanganate into the impacted vadose zone and inter-
mittently saturated glacial till materials. Mixing occurred over two application periods, as
additional vadose zone impacts were discovered during the first mixing period. A photo-
graph of the permanganate mixing process is provided in Figure 8.33. In total, more than
30,000 lb of potassium permanganate was mixed with impacted soils, and cleanup criteria
were satisfied after completion of the second mixing round.
The second phase of source-area remediation was installation of horizontal and vertical
injection wells in the source area and subsequent injection of permanganate solution into groundwater.
FIGURE 8.33
Photograph of direct permanganate mixing with PCE-contaminated glacial till. Note that surficial soils were
excavated and stockpiled to expose the glacial till. (Courtesy of Woodard & Curran, Inc.)
FIGURE 8.34
Photograph of horizontal injection-well installation in a gravel-filled trench in the glacial till. (Courtesy of
Woodard & Curran, Inc.)
FIGURE 8.35
Photograph of ISCO treatment area. The trailer contains a pump to inject the permanganate under pressure.
(Courtesy of Woodard & Curran, Inc.)
FIGURE 8.36
Photograph of injection well with associated piping and valve assembly. (Courtesy of Woodard & Curran, Inc.)
[Plot: PCE concentration in µg/L on a log scale (0.1 to 10,000) versus date, 5/7/08 through 8/20/11, for an overburden and a bedrock well, with the MCL marked.]
FIGURE 8.37
Graph of PCE concentrations in micrograms per liter before, during, and after remediation at an overburden
and bedrock well in the middle of the source area. The two vertical green bars represent the two soil-mixing
applications, and the two vertical yellow bars represent the two groundwater-injection applications.
The total cost of the remedy, including operations, maintenance, and monitoring, was approximately $2.4 million, which is very
reasonable considering the size of the site and the extent of the contamination. Remedial
goals for source-area soils were definitely achieved; however, PCE concentrations remain
above MCLs, particularly in the fractured bedrock. It is likely that significant PCE mass
remains in the bedrock, which is very difficult to contact with ISCO injections, and matrix
[Plot: PCE concentration in µg/L on a log scale (1 to 1000) versus date, 9/20/04 through 7/26/11, for an overburden and a bedrock well, with the MCL marked.]
FIGURE 8.38
Graph of PCE concentrations in micrograms per liter before, during, and after remediation at an overburden
and bedrock well in the downgradient plume area. The two vertical green bars represent the two soil-mixing
applications, and the two vertical yellow bars represent the two groundwater-injection applications. Evidence
of residual permanganate was found several hundred feet downgradient of the source area. Combined with the
relatively stable nature of concentrations prior to the ISCO, the rapid drop in overburden PCE concentration is
likely attributable to the ISCO application.
diffusion will likely make the attainment of MCLs impracticable in the former DNAPL
source area (see additional discussion in Section 8.4). MNA is a component of the overall
remedy, and the ISCO work completed to date has successfully set the stage for full tran-
sition to MNA. Completion of additional rounds of injections may achieve slightly more
mass removal, but the value of these applications would be questionable as the plume has
already been stabilized, exposure to human and ecological receptors has been mitigated,
and MCLs will not be reached in the bedrock in the short term.
In situ bioremediation has been practiced for more than 40 years. However, within the past 20 years (see Figure 8.3), use of the tech-
nology has rapidly increased as a result of research showing the technology's potential
effectiveness in degrading a wide range of contaminants, including chlorinated solvents,
PCBs, dioxin, and metals (Hazen 2010). A groundbreaking study was published in Science
magazine by Maymó-Gatell et al. (1997) describing the isolation of a novel bacterium,
Dehalococcoides ethenogenes 195, capable of completely dechlorinating PCE to the innocu-
ous end product ethene under anaerobic conditions. The process of bacterial growth using
PCE as the terminal electron acceptor (TEA) is termed dehalorespiration (Magnuson et al.
1998). The identification of Dehalococcoides (Dhc) was a watershed moment in the advance-
ment of in situ bioremediation technologies and led to the proliferation of bioremediation
technologies in the 2000s.
In situ bioremediation strategies are classified as either natural attenuation or engineered
strategies. MNA is described further in Section 8.3.4.2. Engineered bioremediation
involves biostimulation, bioaugmentation, or remedial designs that use both.
Biostimulation is the process of adding organic or inorganic compounds to the subsurface to
stimulate indigenous organisms (i.e., organisms that are already present) capable of degrad-
ing the target contaminant(s) (Hazen 2010). Biostimulation is useful when the required
organisms are present in insufficient numbers to degrade contaminants at the scale and rate
required for successful remediation. The primary biostimulation additives are substrates
(electron donors), nutrients, and buffering agents. Substrates provide an electron donor and
carbon source for cell growth. Hydrogen is the preferred electron donor for dehalorespira-
tion. Nutrients, such as nitrogen and phosphorus, and buffering agents are important in
establishing prime conditions for bacterial growth and metabolism (ITRC 2008).
Biostimulation additives can be gases, such as ambient air, oxygen, and methane; liq-
uids, such as lactic acid, molasses, vegetable oil, and hydrogen-release compound (HRC);
or solids, such as bulking agents like sawdust. Selection of the optimal additives depends
on the biogeochemistry of the hydrogeologic system, the most important parameters being
the redox state of the current environment and that of the target degradation path-
way. TEAs are utilized in order of the amount of energy that can be derived from
their utilization (most to least). Oxygen is the preferred TEA, followed by nitrate, iron (III),
sulfate, and carbon dioxide—the latter being the necessary TEA for methanogenesis. The
implications of microbial energetics are that in order to stimulate dehalorespiration pro-
cesses in an aerobic environment, sufficient electron donors would be required to deplete
all TEAs preceding carbon dioxide such that methanogenic conditions favorable to Dhc
may develop (Hazen 2009). Figure 8.39 presents a summary of common geochemical pat-
terns found during anaerobic biodegradation of chlorinated solvents.
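The energetics argument above can be illustrated with a rough electron-donor demand estimate: the donor must be sufficient to deplete all TEAs preceding carbon dioxide. The hydrogen-demand factors below are approximations derived from the relevant half reactions and should be treated as assumptions, not design values; real designs use bench-derived demands and account for donor fermentation yields.

```python
# Illustrative hydrogen-demand factors (g H2 per g electron acceptor),
# approximated from the half reactions; assumptions, not design values.
H2_DEMAND = {"O2": 0.125, "NO3": 0.040, "Fe3": 0.018, "SO4": 0.083}

def hydrogen_demand_g(volume_L, conc_mg_L):
    """Total g of H2 needed to deplete the competing electron acceptors
    dissolved in a treatment volume (utilized in the order O2, NO3, Fe3, SO4)."""
    return sum(
        H2_DEMAND[tea] * conc_mg_L.get(tea, 0.0) * volume_L / 1000.0
        for tea in H2_DEMAND
    )

# Hypothetical pore volume of 100,000 L of initially aerobic groundwater:
demand = hydrogen_demand_g(100_000, {"O2": 6.0, "NO3": 10.0, "SO4": 25.0})
```

Even in this toy example, sulfate accounts for most of the demand, which is why sulfate-rich aquifers require far more substrate before methanogenic, Dhc-favorable conditions can develop.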
[Plot: concentration or mass and ORP (mV) versus location (background, source, downgradient), showing characteristic trends for DO, NO3, SO4, dissolved iron, CH4, chloride, acetate, and electron donor (e.g., BOD, methanol).]
FIGURE 8.39
Common geochemical patterns relative to a chlorinated solvents source area that is degrading under anaer-
obic conditions. (Modified from Interstate Technology and Regulatory Council (ITRC), Natural Attenuation
of Chlorinated Solvents in Groundwater: Principles and Practices, Interstate Technology and Regulatory
Cooperation Work Group, In Situ Bioremediation Work Team, and Industrial Members of the Remediation
Technologies Development Forum (RTDF). Available at http://www.itrcweb.org/Documents/ISB-3.pdf, accessed
June 2, 2011.)
For biostimulation to be a viable strategy, several conditions generally must be met:
• The correct microorganisms must be naturally present (i.e., Dhc in the case of PCE).
• Additives must be able to stimulate the target microorganisms.
• Additive delivery to the target remediation area must be feasible.
• A carbon-to-nitrogen-to-phosphorus ratio of 100:10:2 should be achievable (Hazen
2010).
Bioaugmentation involves the injection of microbial cultures into the subsurface to degrade
target contaminants in situ and is required when the correct organisms are either not pres-
ent naturally or are present in such low numbers that biostimulation alone is infeasible
(ITRC 2008). The most commonly used organisms are pseudomonads for oil spills and Dhc
for chlorinated solvent remediation, both of which have several commercially produced
cultures (Hazen 2010). Bioaugmentation is particularly useful in controlled engineering
applications, such as recirculation systems, in which a highly selected culture can rapidly
increase degradation kinetics. A conceptual diagram of a recirculation design for in situ
bioremediation is presented in Figure 8.40. Bioaugmentation with Dhc cultures can often
take advantage of an existing anaerobic community to enhance dechlorination of PCE as
illustrated in Figure 8.41.
One common problem in evaluating the success of bioaugmentation applications is
that biostimulation additives are typically applied in addition to the microbial cultures.
Therefore, it is often difficult to prove that degradation was caused by the added culture
alone, rather than by stimulated native populations. As a result, only one bacterium,
D. ethenogenes, has been conclusively demonstrated to perform better than biostimulation
alone. The success of Dhc as a bioaugmentation additive is likely a result of its classification
FIGURE 8.40
Conceptual diagram of a groundwater recirculation design for in situ bioremediation. (Courtesy of Sam Fogel,
PhD, of Bioremediation Consulting, Inc.)
[Diagram labels: fermentors + organics → organic acids, H2.]
FIGURE 8.41
Conceptual representation of how the existing anaerobic microbial community can provide electron donors
and nutrients for amended Dhc cultures to degrade dichloroethene (DCE) to ethene. (Courtesy of Sam Fogel,
PhD, of Bioremediation Consulting, Inc.)
[Plot: CVOC concentration (0 to 140) versus date, 3/3 through 3/28 of the following year, with the BCI culture amendment marked.]
FIGURE 8.42
Concentrations of CVOCs in a monitoring well in contaminated fractured rock before and after bioaugmenta-
tion of a Dhc culture provided by Bioremediation Consulting, Inc. (BCI). This is a full-scale recirculation system
installed for a Superfund site in Pennsylvania. BCI provided 80 L of Dhc culture that was also resistant to tri-
chloroethane (TCA) and was grown in site groundwater to ensure acclimation. Cell density was about 1 × 1011
cells/L. (Courtesy of Sam Fogel, PhD, of Bioremediation Consulting, Inc.)
The most critical limitation of in situ bioremediation is its infeasibility for sites with low
hydraulic conductivity. Similar to ISCO, if low-permeability deposits will not accept injected
amendments or cultures at required flow rates, appropriate conditions for biodegradation
will not develop, and the application will fail. A general rule of thumb is that for in situ
bioremediation to be a viable option, the hydraulic conductivity of the target formation
should be at least 10⁻⁴ cm/s (0.28 ft/day). This means that in situ bioremediation is generally
not an option for sites with contamination in clay, silts, or glacial till. For bioaugmentation
applications, conductivities may need to be an order of magnitude higher (i.e., 10⁻³ cm/s)
depending on the size and adherence properties of the amended organism (Hazen 2010).
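The rule of thumb above can be restated as a simple screening check. The function and conversion factor below simply encode the thresholds quoted in the text; they are a screening illustration, not a substitute for site-specific injectability testing.

```python
CM_S_TO_FT_DAY = 2834.6  # 1 cm/s is approximately 2834.6 ft/day

def bio_feasible(k_cm_s, bioaugmentation=False):
    """Screen hydraulic conductivity against the rule-of-thumb thresholds:
    1e-4 cm/s for biostimulation, roughly 1e-3 cm/s for bioaugmentation."""
    threshold = 1e-3 if bioaugmentation else 1e-4
    return k_cm_s >= threshold

# A silty sand at K = 5e-4 cm/s passes the biostimulation screen but
# fails the more restrictive bioaugmentation screen.
ok_stim = bio_feasible(5e-4)
ok_aug = bio_feasible(5e-4, bioaugmentation=True)
```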
Because of the number of variables that can affect selection of the appropriate enhance-
ment approach, bench testing and field pilot testing should be conducted prior to full-scale
design of an in situ bioremediation remedy. Diagnostic indicators commonly evaluated during bio-
remediation bench and pilot testing include dissolved oxygen concentration, oxidation–
reduction potential (ORP), parent/daughter compound ratios, electron-donor concentrations,
concentrations of competing electron acceptors, and pH. Classifying the redox state of the
groundwater is an essential step in determining the feasibility of bioremediation as it dic-
tates what biodegradation processes are currently occurring and informs the design team
as to the level of effort required to manipulate redox potential to create conditions favorable
for degradation of the target contaminant(s). For example, if groundwater with chlorinated
FIGURE 8.43
Criteria and threshold concentrations used to classify redox processes in ground water. Redox process: O2, oxy-
gen reduction; NO3, nitrate reduction; Mn(IV), manganese reduction; Fe(III), iron reduction; SO4, sulfate reduc-
tion; CH4gen, methanogenesis. Notes: mg/L, milligram per liter; —, criteria do not apply because the species
concentration is not affected by the redox process. (From Jurgens, B. C. et al., An Excel Workbook for Identifying
Redox Processes in Ground Water. US Geological Survey Open-File Report 2009–1004. 2009. Available at http://
pubs.usgs.gov/of/2009/1004/, accessed November 12, 2010.)
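A simplified version of this classification logic can be sketched as a decision rule. The thresholds below (in mg/L) are illustrative approximations of the Jurgens et al. (2009) criteria, not a substitute for the published workbook, which handles mixed and indeterminate cases far more carefully.

```python
def classify_redox(do, no3_n, mn, fe, so4):
    """Simplified redox classification from mg/L inputs. Thresholds are
    illustrative approximations of the Jurgens et al. (2009) criteria."""
    if do >= 0.5:
        return "oxic (O2 reduction)"
    if no3_n >= 0.5:
        return "anoxic (NO3 reduction)"
    if mn >= 0.05 and fe < 0.1:
        return "anoxic (Mn(IV) reduction)"
    if fe >= 0.1 and so4 >= 0.5:
        return "anoxic (Fe(III)/SO4 reduction)"
    if fe >= 0.1:
        return "anoxic (methanogenesis likely)"
    return "indeterminate; collect additional data"

# Iron-rich, oxygen-depleted groundwater with abundant sulfate:
state = classify_redox(do=0.2, no3_n=0.1, mn=0.02, fe=1.5, so4=20.0)
```

A classification like this tells the design team how much of the TEA sequence must still be depleted before conditions favorable to the target degradation pathway can develop.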
Plume Stability
Evaluation of plume stability involves quantification of both concentration-based and mass-
based metrics. Concentration-based methods are more traditional and typically include plots
of contaminant concentrations over time at key monitoring wells, contour maps of contaminant
concentrations at different times, and statistical trend analyses at individual wells. Mass-based
metrics are more computationally intensive and include changes in total plume mass over time
(zeroth spatial moment), changes in the location of the center of contaminant mass over time
(first spatial moment), and changes in the spread of the plume about the center of mass (second
spatial moment; Parsons 2009). Because of the labor-intensive nature of these calculations, com-
puter programs, such as MAROS, can be used to perform statistical trend analyses and spatial
FIGURE 8.44
Cartoon from TNO, NICOLE, and Loet van Moll (www.loetvanmoll.nl) explaining the MNA process and the
field, laboratory, and quantitative analysis required to document its efficacy. (With kind permission of Loet van
Moll.) For full cartoon booklet see Sinke and van Moll (2010).
moment calculations. See Chapter 3 for a description of MAROS and example tabular and
graphical output. While MAROS is an excellent computational tool and can be used to make
defensible arguments regarding plume stability, it has two significant limitations:
MAROS does not have a user-friendly, high-quality GUI capable of producing visualiza-
tions that clearly demonstrate the results of concentration- and mass-based analyses. One way
around this limitation is to import MAROS results into a program such as ArcGIS to manually
create visualizations. An alternative to MAROS that overcomes this limitation entirely is the
Ricker MethodSM performed by Earth Consulting Group, Inc., and documented in the work of
Ricker (2008). This method integrates plume stability analysis with a robust visual processor
by performing calculations using grid math tools in Surfer by Golden Software (see discussion
in Chapters 3 and 4). As with MAROS, the Ricker MethodSM includes calculation of the total
plume mass and the center of mass at different time increments and trend analyses on these
parameters; however, equally important parameters, such as the overall plume area and aver-
age concentration, are also calculated for constituents of interest (Ricker 2008). More impor-
tantly, all these parameters are easily visualized in Surfer and other graphical processors such
that the results are immediately meaningful to the nonstatistician. Statistical results and data
visualizations created using the Ricker MethodSM are presented in Figures 8.45 through 8.48
for a site with TCE-contaminated groundwater.
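The zeroth and first spatial moments underlying both MAROS and the Ricker MethodSM can be approximated from a gridded concentration field such as a Surfer grid. The following is a minimal sketch assuming a uniform grid and purely dissolved-phase mass; the grid values, cell dimensions, and porosity are hypothetical.

```python
def plume_metrics(conc, dx, dy, thickness, porosity):
    """Zeroth and first spatial moments from a uniform concentration grid.

    conc[i][j]: concentration in ug/L at cell centers
    dx, dy, thickness: cell dimensions and saturated thickness in m
    Returns (dissolved mass in ug, (x, y) center of mass in m).
    """
    cell_pore_vol_L = dx * dy * thickness * porosity * 1000.0  # m3 -> L
    total_mass = 0.0  # zeroth moment
    mx = my = 0.0
    for i, row in enumerate(conc):
        for j, c in enumerate(row):
            m = c * cell_pore_vol_L            # dissolved mass in this cell
            total_mass += m
            mx += (j + 0.5) * dx * m           # first moment, x
            my += (i + 0.5) * dy * m           # first moment, y
    return total_mass, (mx / total_mass, my / total_mass)

# Tiny 2x2 hypothetical grid, 10 m cells, 5 m thickness, 25% porosity:
conc = [[0.0, 10.0], [0.0, 30.0]]
mass, center = plume_metrics(conc, dx=10.0, dy=10.0, thickness=5.0, porosity=0.25)
```

Tracking these two quantities through time is exactly the trend input used to argue that a plume is shrinking, stable, or expanding.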
[Maps annotated with plume area, plume mass, a concentration scale from 5 to 455 µg/L, and monitoring well locations with concentrations in µg/L.]
FIGURE 8.45
Comparison of TCE concentration isopleth maps between December 2001 and March 2011 for a site where
MNA is part of the remedial design. Figures 8.45 through 8.47 apply to the same site and represent a successful
application of the Ricker MethodSM in demonstrating the efficacy of MNA. (Courtesy of Joe Ricker, PE of Earth
Consulting Group, Inc.)
FIGURE 8.46
TCE plume stability analysis completed as part of the Ricker MethodSM showing plots and associated Mann-
Kendall trend and linear regression statistics for the plume area, average concentration, and mass. (Courtesy of
Joe Ricker, PE of Earth Consulting Group, Inc.)
FIGURE 8.47
TCE plume center of mass analysis showing a visualization of the center of mass location over time and a
time-series plot and associated trend analysis of the distance from the contaminant source area (represented
by MW-4) to the center of mass. The uniformly decreasing trends shown in Figures 8.46 and 8.47 clearly dem-
onstrate that the plume can be classified as shrinking, or decreasing. (Courtesy of Joe Ricker, PE of Earth
Consulting Group, Inc.)
MNA processes are likely working effectively at the TCE site in question, and the plume
stability analysis leads to the following observations:
The absence of an appropriate method to account for nondetect data in trend analyses is a
limitation shared by many statistical packages available in the public domain, including
the robust ProUCL software. The significance of nondetect data is often entirely lost in
professional practice, and many experienced professionals often resort to substitution or
fabrication methods using one-half the laboratory reporting limit (Helsel 2005). When per-
forming plume stability analysis, substituting a nondetect result with one-half the report-
ing limit can produce completely erroneous results, leading to rejection of MNA as a viable
alternative or vice versa.
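The substitution problem can be demonstrated with the Mann-Kendall S statistic itself. In the hypothetical series below, a reporting limit that rises late in the record injects spurious upward sign into the half-RL substituted series relative to treating all nondetects as tied minima; the data are invented for illustration only.

```python
def mann_kendall_s(values):
    """Mann-Kendall S statistic: sum of sign(x_j - x_i) over all pairs j > i."""
    n = len(values)
    return sum(
        (values[j] > values[i]) - (values[j] < values[i])
        for i in range(n) for j in range(i + 1, n)
    )

# Hypothetical record: three detects, then four nondetects with a
# reporting limit that rose from 1 to 5 ug/L late in the record.
detects = [12.0, 9.0, 6.0]
half_rl = detects + [0.5, 0.5, 2.5, 2.5]   # substitution at one-half the RL
censored = detects + [0.0, 0.0, 0.0, 0.0]  # nondetects treated as tied minima

s_sub = mann_kendall_s(half_rl)
s_cen = mann_kendall_s(censored)
```

Here the substituted series yields a weaker downward S than the censored treatment, purely because the reporting limit changed; with more nondetects the artifact can flip the apparent trend direction entirely.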
For illustrative purposes, a Mann-Kendall trend analysis was conducted in ProUCL,
using one-half the reporting limit for nondetect results, for a real-life data set of PCE concentrations.
EXPLANATION: Dec-01 plume extent; Mar-11 plume extent. Red shading indicates areas where concentrations increased from Dec-01 to Mar-11; blue shading indicates areas where concentrations decreased from Dec-01 to Mar-11.
FIGURE 8.48
Visualization of the spatial difference in TCE plume concentration between the first and last sampling dates
completed as part of the Ricker MethodSM. This is generated by subtracting the recent grid file from the older
grid file and allows the user to evaluate portions of the plume that expanded versus portions that contracted.
For example, in the graphic shown, concentrations increased near the upgradient portion of the plume (not
unexpected because TCE is a daughter product of PCE, which is also present at the site). However, the vast
majority of the plume decreased in concentration and area, which is clearly seen in the graphic. This type of
analysis is very useful in demonstrating that the overall plume is clearly decreasing, even though one or some
wells are stable or increasing. This has always been one of the biggest drawbacks of well-by-well trend analysis
without visualization. (Courtesy of Joe Ricker, PE of Earth Consulting Group, Inc.)
FIGURE 8.49
ProUCL Mann-Kendall graphical output indicating that concentrations are increasing with visual and statisti-
cal significance.
The ProUCL output is shown in Figure 8.49 for reference. The significance of this error and potential mitigation strategies are also
discussed in the work of Parsons (2009).
When attenuation rates are estimated from concentration trends at an individual monitoring well, the associated rate constant is more representative of an overall attenuation
constant rather than a biodegradation constant. The biodegradation constant should be
estimated using site-specific data and can be accomplished through laboratory micro-
cosm studies, field-based tracer studies, or during fate-and-transport model calibration
(Kresic 2009).
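A bulk attenuation rate constant of the kind described above is commonly estimated by regressing the natural log of concentration against time at a single well. A minimal sketch follows, using synthetic data rather than site results; note the returned constant is a bulk attenuation rate, not a biodegradation rate.

```python
import math

def attenuation_rate(times_days, concs):
    """First-order rate constant k (1/day) via a least-squares fit of
    ln(C) versus t. This is a bulk attenuation rate, not a
    biodegradation rate (see discussion in the text)."""
    n = len(times_days)
    lnc = [math.log(c) for c in concs]
    t_bar = sum(times_days) / n
    l_bar = sum(lnc) / n
    slope = sum((t - t_bar) * (l - l_bar) for t, l in zip(times_days, lnc)) \
        / sum((t - t_bar) ** 2 for t in times_days)
    return -slope  # k > 0 for a decreasing concentration trend

# Synthetic record decaying with k = 0.01/day from 1000 ug/L:
t = [0, 100, 200, 300]
c = [1000.0 * math.exp(-0.01 * ti) for ti in t]
k = attenuation_rate(t, c)
```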
Groundwater modeling is discussed at length in Chapter 5. The most important model
parameter when assessing MNA remedial time frames is the contaminant source term,
which is notoriously difficult to estimate. Source migration processes that merit consid-
eration in MNA modeling are identified in the work of Parsons (2009) and include verti-
cal leaching, dissolution, diffusion, volatilization and gas-phase diffusion, and gas-water
partitioning.
MNA Sustainability
Evaluation of the short- and long-term sustainability of MNA processes has only recently
been identified as an important step in conducting MNA feasibility and effectiveness
studies. Chapelle et al. (2007) identify quantification of mass and energy balances as the
primary means of assessing MNA sustainability. The mass balance refers to the mass of
contaminants in the system over time and is a measure of the short-term sustainability of
MNA that determines whether the rate of contaminant transformation exceeds the rate
of contaminant loading. The energy balance is a novel concept that measures the long-
term sustainability of MNA and is related to the amount of metabolizable organic carbon
in the groundwater system. Natural attenuation is sustainable in the long-term when the
pool of bioavailable organic carbon is large relative to the carbon flux required of electron
acceptors to drive biodegradation to completion (Chapelle et al. 2007). This speaks to the
importance of collecting electron-acceptor data and classifying redox conditions when
evaluating the viability of MNA at a site. A conceptual diagram of MNA sustainability is
provided in Figure 8.50.
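For screening purposes, the two balances can be reduced to a pair of inequalities. The sketch below simply restates the Chapelle et al. (2007) logic; the parameter names and units are hypothetical placeholders for site-specific estimates.

```python
def mna_sustainable(transform_rate, loading_rate, bioavailable_oc, oc_demand):
    """Screen MNA sustainability per the two-balance concept.

    Short-term (mass balance): contaminant transformation rate must
    exceed the contaminant loading rate.
    Long-term (energy balance): the pool of bioavailable organic carbon
    must exceed the carbon demand of the electron acceptors required to
    drive biodegradation to completion.
    """
    short_term = transform_rate > loading_rate
    long_term = bioavailable_oc > oc_demand
    return short_term and long_term

# Hypothetical site estimates (consistent units assumed for each pair):
ok = mna_sustainable(transform_rate=5.0, loading_rate=2.0,
                     bioavailable_oc=1200.0, oc_demand=400.0)
```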
One model available in the public domain that can help determine MNA cleanup time
frames and assess MNA sustainability is Natural Attenuation Software (NAS), avail-
able for download at http://www.nas.cee.vt.edu/index.php and originally documented
in the work of Chapelle et al. (2003). A key parameter calculated by NAS is the natural
attenuation capacity, which is defined as the capacity of a system to absorb or trans-
form contaminants through dispersion, advection, biodegradation, sorption, volatil-
ization, and/or plant uptake. NAS utilizes code from SEAM3D, a numerical
model for three-dimensional solute transport and sequential electron acceptor-based
bioremediation.
[Diagram labels: human activity; chemical/biochemical transformation to innocuous byproducts; unsustainable natural attenuation versus sustainable natural attenuation.]
FIGURE 8.50
Visual representation of the MNA sustainability evaluation. (Modified from Chapelle, F. H. et al., A Framework
for Assessing the Sustainability of Monitored Natural Attenuation, US Geological Survey Circular 1303, 2007.)
“Existing procedures for setting ground water cleanup goals do not adequately account
for the diversity of contaminated sites and the technical complexity of ground water
cleanup. . . Although the committee recognizes that different agencies must operate
under different authorities, all regulatory agencies should recognize that ground water . . .”
The NRC findings also indicate that there is considerable uncertainty about whether goals
established under existing procedures are overprotective or underprotective of public
health and the environment and about the overall costs to society when these goals can or
cannot be achieved.
As mentioned above, DNAPLs present unique challenges to site remediation because
of their physical and chemical properties. As of 2003, there were no documented, peer-
reviewed case studies of DNAPL remediation below the water table where concentrations
were permanently reduced below MCLs (U.S. EPA 2003). The following excerpt describing
the general recalcitrance of DNAPLs to remediation is taken from U.S. EPA (2009a):
“Due to their specific gravity, DNAPLs tend to sink in the subsurface. Their migration
pathways tend to be complex and hard to predict due to the heterogeneous nature of the
underlying soil and fractured bedrock. As a result, a complicated DNAPL architecture
(shape and size) can develop that is made up of pools, ganglia, and globules in multiple
soil layers and bedrock fracture zones. And because of their low solubility, tendency to
displace water from larger soil pores, and tendency to diffuse into silt and clay [and the
bedrock matrix (added)], DNAPLs can release dissolved constituents for long periods of time
forming large groundwater plumes. Constituents in the migrating plume can diffuse into
aquifer materials under certain conditions only to back diffuse out at a later time.”
Matrix diffusion is the term given to the phenomenon whereby DNAPL constituents diffuse into low-
permeability layers only to back-diffuse into higher-flux zones once the concentration
gradient has reversed. Matrix diffusion and matrix storage of DNAPL constituents
are most problematic when transmissive zones comprise a small fraction of the aquifer’s
total volume, which is most often the case in fractured bedrock settings. Where significant
contaminant mass has diffused into the bedrock matrix, attainment of MCLs in bedrock
fractures may take hundreds of years or more. Case studies regarding matrix diffusion
and DNAPL contamination of bedrock are presented in Section 8.4.2, related to technical
impracticability.
Taking these challenges into consideration, the Executive Summary of a 2003 national
panel report on DNAPL remediation provides the following conclusions regarding
“Appropriate Metrics for Performance Assessment” (U.S. EPA 2003, p. xi):
“The Panel assessed the technical basis for using drinking water standards, such as
Maximum Contaminant Levels (MCLs), as the single performance goal for successful
DNAPL source-zone remediation and the use of chemical analyses in ground water
samples from monitoring wells as the primary metric by which to judge performance
of ground water remediation systems. Although an MCL goal may be consistent with
prevailing state and federal laws for all ground water considered a potential source of
drinking water and is a goal that is easily comprehended by the public, this goal is not
likely to be achieved within a reasonable time frame in source zones at the vast major-
ity of DNAPL sites. Thus, the exclusive reliance on this goal inhibits the application of
source depletion technologies because achieving MCLs in the source zone is beyond the
capabilities of currently available in situ technologies in most geologic settings.”
However, despite the recommendations of the panel, since 2003 there has been renewed
interest in DNAPL source-zone remediation because of recent experiences with aggressive
Groundwater Remediation 481
technologies such as in situ thermal remediation and in situ chemical oxidation (both of
which are profiled earlier in this chapter). In a 2009 status update report, the U.S. EPA
declares that there are now five DNAPL sites where MCLs have been achieved through
source-zone remediation (U.S. EPA 2009a). However, it is important to note that none of
these five sites involved contamination of bedrock groundwater, which remains the most
complex remedial scenario. In addition, it is questionable whether two of these sites should
even be classified as complex because of limited contaminant mass (Dry Clean USA No.
11502, Orlando, FL, and Pasley Solvents and Chemicals, Inc., Hempstead, NY). In general,
the most complex sites will have large, deep pools of DNAPL and/or discharges of DNAPL
to bedrock (U.S. EPA 2009a). The most complex project highlighted in the U.S. EPA report
is the Visalia Pole Yard remediation, the true benefits of which are examined in Section 8.5.
Regardless of these few successes profiled by the U.S. EPA, the recommendations of the
2003 national panel report remain valid and should be given further consideration. The
U.S. EPA (2000) estimates that site owners will spend billions of dollars over the next sev-
eral decades remediating chlorinated-solvent contamination. Similarly, the Department of
Defense has an estimated 3545 sites requiring further investigation and remediation, a num-
ber of which have high complexity (Deeb et al. 2011). Therefore, given the paucity of com-
plex DNAPL sites that have been remediated to drinking-water standards, there remains
great demand for alternative remedial endpoints and metrics that can be used to set real-
istic performance expectations while also protecting human health and the environment.
Deeb et al. (2011) present an overview of alternative remedial endpoints (i.e., other than
MCLs) discussed in published literature, listing the following available options:
The rest of this section highlights important concepts and visualizations related to TI
waivers, risk-based alternative endpoints, and mass flux as an emerging remedial metric.
FIGURE 8.51
Paraphrased e-mail from Jane Doe, PG, PhD (the independent technical reviewer, fictitious name), to John
Smith (the environmental consultant for the site, fictitious name) regarding potential future injections at the
karst site.
FIGURE 8.52
Graph of VOC concentrations at well MW-YZ referenced in the fourth bullet of the e-mail presented in Figure
8.51.
determining what the next remedial steps should be, specifically if another round of ISCO
injections is merited.
Following is an excerpt from a five-year review report for a Superfund site that illus-
trates a similar point:
which is unlikely. If the removal rate could be increased to 5,000 pounds/year it would
take 200 years to remove one million pounds, which is likely only a fraction of the con-
taminants present. Considering the contaminant mass, complexity and heterogeneity
of the aquifer, and the limited success of previous remediation technologies, it appears
that the SIA aquifer system is going to remain grossly impacted for hundreds of years.
It would seem that a more effective course of action, rather than treatment actions that
have a low expectation of real, positive results, would be to focus on the protection of
off-site receptors by increased off-site monitoring” (USACE 2010).
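The arithmetic behind the excerpt's 200-year figure is a simple mass balance, and it is worth making explicit because it drives the conclusion. A minimal sketch, using the mass cited in the excerpt and a few illustrative removal rates:

```python
# Time to remove a fixed contaminant mass at a constant annual removal rate,
# as in the five-year review excerpt above (1 million pounds at 5,000 lb/yr).

def years_to_remove(mass_lb: float, rate_lb_per_year: float) -> float:
    """Removal time (years) = mass / rate, assuming a constant rate."""
    return mass_lb / rate_lb_per_year

mass = 1_000_000  # pounds, the figure cited in the excerpt
for rate in (1_000, 5_000, 20_000):  # lb/yr; 5,000 is the excerpt's optimistic rate
    print(f"{rate:>6,} lb/yr -> {years_to_remove(mass, rate):,.0f} years")
```

At the excerpt's optimistic 5,000 lb/yr, the result is the 200 years quoted. In practice, removal rates also decline as concentrations fall, so a constant-rate estimate is a lower bound on the cleanup time.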
What the above five-year review failed to emphasize is that this is a typical karst site
with a major impacted karst spring (flow rate > 30 million gallons per day or 1.3
m³/s), hydraulic conductivity of deep preferential flow paths (>300 ft below ground sur-
face) greater than 1000 and even 10,000 ft/day, and contaminated groundwater detected
deep in the karst aquifer with COC concentrations several orders of magnitude higher
than MCLs. In spite of the overwhelming evidence of karst, the U.S. EPA is still characterizing
the impacted groundwater system as residuum, weathered bedrock, and competent
bedrock (Wischkaemper 2007). The failure to classify the system as karst is a fundamental
error in the CSM that obscures the remedial decision-making processes.
This site has been investigated for more than two decades now, including multiple itera-
tions and versions of the RI report, FS, groundwater-remediation pilot tests, and interim
correction measures including pump and treat, with the estimated cost to this point
upward of $100 million. Despite all this, the U.S. EPA has not yet issued a ROD for the
site. In other words, the agency has not yet decided what the cleanup of the site should be.
Additional investigations and pilot tests are being planned and are likely being performed
as this is being written, partially because previous pilot tests were qualified as “hastily
conceived and implemented” (Wischkaemper 2007).
The above two karst examples are classic cases in which remediation to MCLs is technically
impracticable, meaning that remedial actions are either infeasible from an engineering
perspective or unreliable in terms of meeting ARARs. One of the authors participated
on a panel on technical impracticability held at a national groundwater conference co-organized
by the U.S. EPA. Incidentally, the same $100 million site was a subject of discussion.
After a remark by the author that more than $100 million spent on the site so far
could have been used to better the lives of the people living in the impacted community,
including building libraries, schools, and possibly even a regional opera house (the com-
munity is drinking safe water provided by the PRP), one of the U.S. EPA regulators replied
that people in Europe cannot drink water from their faucets because it is contaminated.
After this perplexing statement by the U.S. EPA regulator, the panel discussion changed
the subject.
The previous two examples are not exceptions. There are numerous other sites in the
United States contaminated with DNAPLs in difficult hydrogeologic settings such as fractured
rock and karst aquifers with a very similar history of failed attempts to clean groundwater
to MCLs (drinking-water standards). The U.S. EPA recognized this early on and, in
1993, issued a directive titled “Guidance for Evaluating the Technical Impracticability of
Ground-Water Restoration” (U.S. EPA 1993).
The following citation is from a memorandum by Elliott P. Laws, assistant administra-
tor of the U.S. EPA dated July 31, 1995, and addressed to the regional administrators of all
10 U.S. EPA regions at the time and the directors of various programs within the regions:
“During our meeting, we discussed the fundamental changes that have occurred in
annual number of TI waivers issued is approximately equal before and after the TI direc-
tive. As the number of ROD documents has decreased over time, it is also important to
look at the TI statistics on a percent-selected basis. Figure 8.53 shows that the percent of
CERCLA decision documents granting TI waivers increased only by approximately one
percentage point in the late 1990s, which may or may not be attributable to the 1993 guid-
ance document. More alarmingly, in 2008 and 2009, the number and percent of TI waivers
granted dropped significantly. This recent trend proves that the U.S. EPA has so far com-
pletely ignored the recommendations of its own Ground Water Task Force (2007):
“…technical impracticability (TI) should be updated. Also, there was support for identifying mechanisms
for acknowledging complex site conditions that would be useful in the decision-making
process for cleanup programs other than Superfund.
Recommendation. Develop guidance on how to acknowledge technical limitations
posed by DNAPLs in EPA cleanup decisions, including updated guidance on the use
of technical impracticability (TI) decisions in the Superfund program. The guidance
should also discuss mechanisms for acknowledging technical limitations posed by site
complexities other than DNAPLs.
Long-term cleanup goals for Superfund sites and RCRA Corrective Action facilities
do not always include attaining MCLs throughout the plume. For ground waters that
are not designated by states as current or future sources of drinking water, drinking
water standards are generally not used as cleanup levels and alternative cleanup goals
are typically established, such as control of sources and containment of the plume. Also,
where the remedy calls for on-site management of waste materials (such as a landfill),
cleanup levels generally do not need to be attained in ground water beneath the waste
management area. In such cases, attaining MCLs throughout the plume applies only to
that portion of the plume outside the waste management area.
FIGURE 8.53
TI waivers for groundwater contamination granted between 1988 and November 2010, using data published in
the work of Deeb et al. (2011).
Furthermore, both the Superfund and RCRA Corrective Action programs generally
allow alternative cleanup goals to be established at sites where attaining MCLs through-
out the plume is determined to be technically impracticable (TI). Both of these EPA
cleanup programs also establish alternate cleanup limits (ACLs) in lieu of MCLs, under
appropriate circumstances. However, ACLs defined under CERCLA are somewhat differ-
ent from those in RCRA Corrective Action. Some state cleanup programs have provisions
for establishing contaminated ground water containment or management zones. Within
such a zone, active cleanup of contaminated ground water may be deferred or may not be
required. The specifics of how containment or management zones are defined, and what
alternative cleanup goals are applied, differ from state to state.
For the reasons discussed above, sites where DNAPLs are present in the subsurface
are very difficult to clean up to drinking water standards. Cleanup technologies appli-
cable to these sites often include individual approaches or various combinations of
approaches intended to control migration of contaminants (containment), remove con-
taminants from the subsurface (extraction), or treat contaminants in place (in situ treat-
ment). Each of these technology types have been used (with varying degrees of success)
on DNAPLs in the source zone or on dissolved contaminants in the plume.”
Unfortunately, not only did the agency fail to update its own TI waiver policy and thus
withhold related technical assistance to working professionals, but it appears that any
references to TI, including related case studies, are completely absent from various U.S.
EPA Web sites. One cannot find any list of RODs where TI is a part of the remedy and can-
not use any search engine to learn more about the TI process or somehow stumble upon a
Superfund site with an issued TI waiver. The agency has mostly ignored TI in its annual
reviews of applied technologies or any other technical publication (U.S. EPA 2010). It is also
indicative that some U.S. EPA regions have issued only a handful of TI waivers in almost
20 years, and region 4 has yet to issue a single standing TI waiver. Interestingly, region 4
covers several main karst states, including Tennessee, Kentucky, Alabama, and Florida. It
therefore appears that U.S. EPA region 4 is afraid of setting a precedent by acknowledging
groundwater remediation at a karst site may be technically impracticable. This policy is,
for the most part, followed by the states in region 4, including at their own sites. The only
known document to provide a consolidated listing and description of sites with TI waivers
is that of Deeb et al. (2011), which is a study funded by the ESTCP through the Department
of Defense, which, as previously mentioned, has a great number of complex sites for which
cleanup would be funded by federal tax dollars.
The decreasing trend in annual TI waivers granted and the absence of substantive tech-
nical guidance regarding TI indicate that the U.S. EPA adopted an informal policy against
the use of TI waivers many years ago. This policy was formalized through two U.S. EPA
documents published in summer 2011. The first document, published in July 2011, is a
Groundwater Road Map that summarizes the groundwater evaluation and remediation
process at Superfund sites through a conceptual flowchart and related narrative (U.S. EPA
2011f). This road map document is included in the companion DVD for the reader’s ben-
efit. Surprisingly, TI documentation is an element of the road map. However, the road only
leads to TI after the following steps have been completed:
• Remedial design
• Remedial action
• Remedy operation complete with performance monitoring, five-year reviews, and
potential system optimization
• A feasibility evaluation for other technologies in the event that remedial goals are
not being achieved
In other words, the road map indicates that TI is a last resort option only to be considered
after remedial actions have been conducted. Even if the best available technology has a
very low probability of success in meeting MCLs (e.g., karst sites with DNAPL), some form
of a costly remedial effort is recommended.
The second document, published in September 2011, is a memorandum from the direc-
tor of the U.S. EPA Office of Superfund Remediation and Technology Innovation clarifying
the previously referenced July 31, 1995 memorandum from Elliott P. Laws, the former assis-
tant administrator of U.S. EPA. The following text is taken directly from the September
2011 memorandum (Woolford 2011):
“This (memorandum) is to clarify that: 1) the 1995 memorandum was intended to apply
only to remedy decisions made in Fiscal Year 1995 and 2) DNAPL contamination in and
of itself should not be the sole basis for considering the use of a TI waiver at any given
site. Improvements in science and technology have shown much progress in characteriz-
ing and successfully treating/extracting DNAPLs from subsurface areas. Recent remedy
decisions have selected a variety of in-situ technologies to address DNAPL contamination
such as in-situ chemical oxidation, in-situ bioremediation and in-situ thermal remedia-
tion. . .For the reasons stated above, the 1995 memorandum entitled, Superfund Groundwater
RODs: Implementing Change This Fiscal Year, July 31, 1995 (OSWER Directive 9335.3-03P)
should no longer be considered when making current site decisions (emphasis added).”
This memorandum rescinds the 1995 mandate that U.S. EPA regions utilize TI waivers at
DNAPL sites or provide written justification otherwise. While it seems as though the 1995
directive was never truly followed, U.S. EPA rejection of its policy prescription is now in
writing. In addition, the memorandum formally endorses the use of aggressive in situ tech-
nologies such as thermal remediation and ISCO despite limited evidence that these tech-
nologies lead to cost-effective, better outcomes in the long term. While this memorandum
is decidedly anti-TI, there is one important statement, quoted below, that conflicts with the
road map publication:
In other words, a TI waiver can theoretically be obtained at the remedy selection phase
in lieu of aggressive remedial action. While DNAPL alone does not imply TI, “sufficient,
science-based justification” can be used to invoke a TI waiver where appropriate (Woolford
2011). The U.S. EPA should therefore revise its road map to reflect that TI is a viable option
at the initial remedy selection phase. Regardless of this caveat, it is likely that the 2011
memorandum will further diminish the usage of TI at complex Superfund sites.
As a consequence of inadequate technical and regulatory guidance from U.S. EPA,
including inconsistencies between various regions regarding TI waivers, working profes-
sionals developing CSMs for groundwater remediation at difficult sites are left between a
rock and a hard place. It is therefore not surprising that many of them are referring to TI as
“PI” (political impracticability) and are producing for-internal-use-only flowcharts similar
to the one shown in Figure 8.54. Note that a colleague not working for either of the authors’
consulting companies volunteered this figure; this colleague wishes to remain anonymous
as he or she is afraid of alienating some of the regulators she or he must interact with on
a regular basis.

FIGURE 8.54
Political impracticability. Because of the lack of substantive guidance, regulations, or general policy, decisions
cannot be made without wasteful spending on excessive field investigation and data analysis.
One of two criteria needs to be met in order to apply for a TI waiver: (1) engineering
infeasibility or (2) unreliability (U.S. EPA 1993). A remedial action can be considered infea-
sible from an engineering perspective if current engineering methods designed to meet
the ARARs cannot be reasonably implemented. An action can be considered unreliable if
it is shown that existing remedial alternatives are not likely to be protective in the future.
Together, these two criteria define technical impracticability from an engineering perspec-
tive. Furthermore, a TI waiver would only be granted based on demonstration that cleanup
cannot be achieved within a reasonable time frame using the best available technology.
As discussed by the U.S. EPA, a reasonable time frame for restoring groundwater to ben-
eficial uses depends on the particular circumstances of the site and the restoration method
employed. A comparison of restoration alternatives from the most aggressive to passive
will provide information concerning the approximate range of time periods needed to
attain groundwater cleanup levels. An excessively long restoration time frame, even with
the most aggressive restoration methods, may indicate that groundwater restoration is
technically impracticable from an engineering perspective (U.S. EPA 1996b). Notably, how-
ever, the various U.S. EPA regions have differing views on the use of TI waivers, including
the definition of reasonable.
It should be noted that a reasonable time frame is sometimes generically applied to
be 100 years, based on the order of magnitude number provided by the U.S. EPA (1993).
However, there is no accepted definition of reasonable as applied to groundwater restora-
tion because it is dependent on the applicable technologies and site-specific conditions,
such as hydrogeology. This leads to confusing and contradictory interpretations of what
reasonable really means. In some instances, stakeholders have interpreted time scales on
the order of 600 years to be reasonable, whereas others have viewed time scales greater
than 30 years to be technically impracticable (Deeb et al. 2011).
While there is little that can be done if a cleanup time frame of several hundred years
FIGURE 8.55
Left: Rock chip samples for methanol extraction (MERC) and determination of matrix diffusion. Right: Rock
cores for determination of matrix porosity by helium and mercury intrusion methods. (Courtesy of Peter
Thompson, AMEC.)
Matrix porosity analysis was conducted for 43 rock samples initially selected from 11
cored boreholes within OU 12. The resulting porosity measurements ranged from 0.2%
to 2.4%. Eighty-eight percent of these values were less than 0.9%. A weak correlation of
decreasing matrix porosity with increasing depth was evident (sample depths ranged
between 20 and 300 ft below ground surface). Two methods of determining porosity were
used: low-pressure helium injection and high-pressure mercury injection. The mercury
method confirmed the results of the helium method.
Additional rock matrix samples were collected at a later date from a different part of
OU 12, termed the Quarry Site, where PCE DNAPL was inferred in the fractured bedrock.
Weathered rock matrix surrounding specific fractures exceeded 3% porosity and, when
extracted and analyzed, showed the presence of substantial contaminant mass in some
samples.
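To see why even sub-1% matrix porosities matter, consider a screening-level estimate of the dissolved mass stored in the matrix. All values in the sketch below are hypothetical illustrations, not data from the OU 12 study, and sorption to the rock is neglected:

```python
# Screening estimate of contaminant mass stored in rock matrix porosity:
# stored mass ~ porosity * pore-water concentration * rock volume.
# All parameter values are hypothetical; sorption to the rock is neglected.

def matrix_stored_mass_kg(porosity: float, conc_mg_l: float, volume_m3: float) -> float:
    """Dissolved mass (kg) held in the matrix pore water of a rock volume."""
    conc_g_m3 = conc_mg_l  # 1 mg/L == 1 g/m^3
    return porosity * conc_g_m3 * volume_m3 / 1000.0  # grams -> kilograms

# Hypothetical bedrock block 100 m x 100 m x 30 m with 0.9% matrix porosity
# and pore water at 100 mg/L of a dissolved solvent:
mass_kg = matrix_stored_mass_kg(0.009, 100.0, 100 * 100 * 30)
print(f"stored mass ~ {mass_kg:,.0f} kg")
```

Roughly 270 kg of dissolved mass resides in this hypothetical block, enough to sustain back diffusion at concentrations above MCLs long after the fractures themselves are flushed clean.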
FIGURE 8.56
Concentration of COCs diffused into rock matrix; composite of six borings. (Courtesy of Peter Thompson,
AMEC.)
In 1999, the OU 12 ROD was finalized with remedial alternatives selected for 18 ground-
water plumes. This included two TI waivers based on the presence of DNAPL in fractured
bedrock at one site and matrix diffusion in weathered bedrock at another. The field inves-
tigation described above successfully demonstrated that the presence of significant VOC
mass in the rock matrix made remediation to MCLs infeasible. Without this conclusive,
field-based proof, it is likely that the TI waivers would not have been obtained, and some
form of aggressive in situ source remediation would have been erroneously applied.
to remove 90%–99% of the source area contaminant mass in one year of treatment. The
resulting time it takes for the groundwater plume to reach MCLs may then be compared
to that of the baseline alternative. In this manner, it may be determined if the remedial
intervention results in a reasonable cleanup time.
The following text describes a real-life modeling exercise conducted to evaluate how
cleanup times are affected by an aggressive in situ remediation in a karst aquifer system.
At the site in question, a significant DNAPL mass resides in the residuum in multiple
dispersed source areas, and dissolved-phase contaminant concentrations in the bedrock
aquifer are several orders of magnitude above MCLs in these areas. A dissolved-phase plume
more than 1 mi in length has developed in the bedrock aquifer. No DNAPL was observed
in the bedrock aquifer during site investigation work; however, based on dissolved-phase
concentrations significantly above 10% of the contaminant solubility limit, it is highly
likely that DNAPL is present in the bedrock. To be consistent with the field investigation
findings and avoid potential controversy, all DNAPL was assigned to the residuum layer
in the model. Even with this liberal assumption, baseline modeling scenarios indicate that
it will take more than 1000 years for natural attenuation processes to reduce concentrations
to below MCLs across the site.
An alternative modeling scenario was conducted to simulate the effects of a highly suc-
cessful and aggressive in situ remediation. In this scenario, 90% of DNAPL mass and 100%
of the dissolved-phase mass was removed from all residuum source areas in year 1 (the
first year of the simulation). Once again, this is a very liberal assumption as these levels
of contaminant destruction in a karst residuum represent a highly optimistic remedial
outcome for any technology. Following the year-long remediation, the model predicts it
will take 28.5 additional years for all remaining DNAPL in the residuum to be depleted
by natural dissolution into infiltrating water from areal recharge. While this may seem
like a major success, because of the extent and high concentration of the bedrock plume,
dissolved-phase concentrations in the bedrock aquifer are predicted to remain signifi-
cantly above the MCL well after 130 years. In some places, concentrations are still more
than two orders of magnitude above the MCL. The results of the source-remediation mod-
eling scenario are presented in Figure 8.57.
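The qualitative result above (a plume starting orders of magnitude above the MCL remains above it for centuries when attenuation is slow) can be reproduced with a simple screening relation. In the sketch below, first-order attenuation C(t) = C₀e⁻ᵏᵗ is an assumed simplification, not the site's numerical model, and all parameter values are illustrative:

```python
import math

# Screening-level time for a dissolved plume to attenuate from C0 to the MCL
# under assumed first-order decay: C(t) = C0 * exp(-k t)  =>  t = ln(C0/MCL) / k.
# Parameter values are illustrative, not taken from the site model.

def years_to_mcl(c0_ug_l: float, mcl_ug_l: float, k_per_year: float) -> float:
    """Time (years) to decline from C0 to the MCL at first-order rate k."""
    return math.log(c0_ug_l / mcl_ug_l) / k_per_year

# Start 4 orders of magnitude above a 5 ug/L MCL with a slow attenuation
# rate of 0.01/yr (half-life of roughly 69 years):
print(f"{years_to_mcl(50_000, 5.0, 0.01):,.0f} years")
```

Because the time scales with the logarithm of C₀/MCL, even a 90% mass removal (one order of magnitude) shortens the cleanup time only modestly when the starting concentration is four orders of magnitude above the MCL.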
For the above site, the model was successful in demonstrating that cleanup could not be
achieved within a reasonable time frame using the best available technology. Therefore, the
aggressive source-zone remediation has not yet been selected, and emphasis was placed
on risk reduction and creating conditions favorable to long-term MNA. Note that this out-
come is by no means guaranteed for a different site with the exact same modeling results.
For example, a remedial measure estimated to reduce the cleanup time to 250 years may
FIGURE 8.57
Fate-and-transport simulation results for the remedial action alternative. All slides show dissolved concentra-
tions of the COC in the bedrock aquifer underlying the residuum. The width of the shown portion of the model
is approximately 5000 ft.
be considered reasonable and practicable versus a 500-year cleanup time estimate under
baseline conditions. It is not clear how a cleanup time of 250 years is reasonable. Consider
for example that 250 years ago the year was 1761, and the United States of America did not
even exist.
There are also definitive technical reasons why choosing between remedies with cleanup
time frames in the hundreds of years is fundamentally wrong. Economically, the majority
of the net present value of any remedial alternative will be captured in the first 30 years of
costs, indicating that long-term remedial cost differences 100 or more years in the future
are irrelevant. The accuracy of numerical modeling at these time scales is also question-
able, as quoted below from U.S. EPA (1999b):
“The longer the time frame simulated, the greater the uncertainty associated with the
modeling result. While the time to reach remedial objectives at all points in the Joint Site
groundwater will likely be on the order of 100 years, simulations greater than the order
of 50 years into the future are generally not reliable or useful. EPA has used simulations
of 10–25 years for comparing remedial alternatives, even though the remedial action is
not complete in that time frame under any of the alternatives. This provides a measure
of each alternative’s relative performance and progress at 25 years toward meeting the
remedial objectives.”
This U.S. EPA statement directly contradicts other ROD documents in which modeling is
used to justify selection of one remedial approach over another when both alternatives
have estimated cleanup times greater than 100 years. Inconsistent regulatory guidance
regarding technical standards of practice and, more fundamentally, inconsistent interpre-
tation of what constitutes a reasonable timeframe will continue to hinder the advancement
of technical impracticability as a viable remedial alternative until social and economic fac-
tors are considered in the cleanup process. This concept is discussed further in Section 8.5.
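The net-present-value argument made above can be checked with a short discounting calculation. The sketch below assumes a level annual remediation cost and illustrative discount rates; it is not an economic analysis of any particular site:

```python
# Fraction of a 100-year stream of level annual costs that is captured, in
# present-value terms, by the first 30 years. Discount rates are illustrative.

def npv(annual_cost: float, years: int, rate: float) -> float:
    """Present value of a level annual cost paid at the end of each year."""
    return sum(annual_cost / (1.0 + rate) ** t for t in range(1, years + 1))

for rate in (0.03, 0.05, 0.07):
    share = npv(1.0, 30, rate) / npv(1.0, 100, rate)
    print(f"discount rate {rate:.0%}: first 30 years = {share:.0%} of 100-year NPV")
```

At discount rates of 3%–7%, the first 30 years account for roughly 62%–87% of the 100-year present value, supporting the statement that cost differences a century in the future are economically irrelevant.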
The end result of a human health–risk assessment is a hazard index (HI) for noncarcino-
gens and an excess lifetime cancer risk (ELCR) for carcinogens. Typical risk limits for lifetime
exposure are a cumulative HI of 1 and an ELCR between 10⁻⁴ and 10⁻⁶ (U.S. EPA
1991). An ELCR of 10⁻⁶ equates to one excess cancer per 1 million exposed individuals.
These ELCR thresholds are extremely conservative considering that in the United States
today, men have slightly less than a one in two lifetime risk of developing cancer, while
women have slightly more than a one in three lifetime risk of developing cancer (ACS 2011).
Additional discussion on the public health benefits of remediation is presented in Section
8.5. Ecological-risk characterization is generally more difficult to quantify at a site-specific
level as it often requires fish tissue analyses and other methods. Therefore, ecological-risk
characterization is often based on the comparison of contaminant concentrations in sur-
face water and sediment to established regulatory ecological benchmark values.
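The conservatism of these thresholds is easy to quantify. The short sketch below simply scales the ELCR thresholds to a population of one million and compares them with the approximate background lifetime incidence cited above:

```python
# Excess cancer cases implied by ELCR thresholds, per million exposed people,
# compared with the approximate background lifetime incidence cited in the
# text (ACS 2011: roughly 1 in 2 for men, 1 in 3 for women).

def excess_cases(elcr: float, population: int) -> float:
    """Expected number of excess cancer cases for a given ELCR and population."""
    return elcr * population

POP = 1_000_000
for elcr in (1e-4, 1e-6):
    print(f"ELCR {elcr:.0e}: {excess_cases(elcr, POP):,.0f} excess cases per million")
print(f"background, men:   ~{POP // 2:,} cases per million")
print(f"background, women: ~{POP // 3:,} cases per million")
```

An ELCR of 10⁻⁴ thus corresponds to 100 excess cases per million exposed individuals, against a background on the order of 500,000 per million for men.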
A fundamental element of the CSM and the human health– and ecological-risk assess-
ment is the conceptual exposure model, composed of the following elements:
Risk-based decision making has been successful in expediting the cleanup of Underground
Storage Tank (UST) sites that pose a low risk to human health and the environment (U.S.
EPA 1995). It is hoped that, in the future, a similar risk-based approach may be used
for more complex CERCLA sites where contamination presents very low risk to human
health and the environment. Unfortunately, for the time being, it is unlikely that low risk
will be accepted by the regulators as justification for an alternative remedial approach or
end point for a site with groundwater that can be classified as a potential future drinking-
water source.
Regardless, there is increasing interest in using risk-based metrics where MCLs may
traditionally apply. One such mechanism is a greater risk ARAR waiver, which may be
obtained when remedial actions completed to meet an ARAR would result in greater risk
to human health and the environment than an ARAR waiver and selection of another
alternative. A groundwater remediation site may be a candidate for a greater risk ARAR
waiver if one or more of the following conditions can be demonstrated (Deeb et al. 2011):

FIGURE 8.58
Schematic/flowchart of a conceptual exposure model for human health–risk assessment, linking site sources
(via infiltration/percolation and runoff to soil and sediment) to receptors (site workers, construction workers,
and site visitors) through ingestion, dermal contact, and inhalation (fugitive dust, soil vapor) exposure
pathways; pathways are deemed incomplete where no volatile COPCs are present. (Courtesy of Laura
Smith, AMEC.)

FIGURE 8.59
Cartoon diagram of a conceptual exposure model for ecological-risk assessment. (Courtesy of Lisa Campe,
Woodard & Curran, Inc.)
FIGURE 8.60
Remediation site workers must thoroughly protect themselves against exposure to contaminants at high con-
centrations. The risks they face may outweigh those of less intense, nonoccupational on- or off-site exposures.
Another critical and often overlooked example of a situation where greater risk may
apply is related to the occupational exposures of on- and off-site workers who perform
remediation activities. Remedial technologies involving the extraction of concentrated
contaminant streams can be very dangerous to implement and require significant health-
and-safety planning and expenditure (Figure 8.60). In some cases, the occupational risk
faced by these workers may be greater than the theoretical lifetime cancer risk of the tradi-
tional receptors (e.g., potential future consumption of contaminated groundwater) that is
driving remediation in the first place (Holland et al. 2011). Developing risk thresholds for
remediation workers and other stakeholders who may be directly or indirectly at risk of
injury or fatality because of the remediation project is an essential element of sustainable
human health–risk assessment, a term introduced in the work of Holland et al. (2011).
In sustainable human health–risk assessment, the remedial strategy is considered in
light of both the risks posed by the contamination and those presented by the remedial
actions themselves. Another key tenet of sustainable human health remediation is quan-
tification of the life-cycle human health risks associated with a remediation (for example,
risks caused by the manufacture and transportation of remedial materials and waste;
Holland et al. 2011). It is likely that interest in sustainable human health assessment will
greatly increase over the coming years. Additional discussion regarding sustainable reme-
diation concepts is provided in Section 8.5.
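The risk comparison described above can be sketched numerically. The snippet below uses the standard U.S. EPA chronic-daily-intake formulation for drinking-water ingestion (intake times cancer slope factor); every input value is a hypothetical illustration rather than site data, and the occupational figure is simply an assumed annual rate multiplied by project duration.

```python
# Hypothetical comparison of receptor cancer risk vs. remediation-worker risk.

def ingestion_cancer_risk(c_mg_L, ir_L_day, ef_d_yr, ed_yr, bw_kg, at_days, sf):
    """Lifetime excess cancer risk from groundwater ingestion:
    chronic daily intake (mg/kg-day) times the cancer slope factor."""
    cdi = (c_mg_L * ir_L_day * ef_d_yr * ed_yr) / (bw_kg * at_days)
    return cdi * sf

# Hypothetical inputs: 0.05 mg/L in groundwater, 2 L/day ingestion,
# 350 days/yr for 30 yr, 70 kg body weight, 70-yr averaging time,
# slope factor of 0.05 (mg/kg-day)^-1.
receptor_risk = ingestion_cancer_risk(0.05, 2.0, 350, 30, 70, 70 * 365, 0.05)

# Hypothetical occupational risk: assumed 2e-4 serious-injury/fatality
# probability per worker-year over a 3-year intensive remediation.
worker_risk = 2.0e-4 * 3

print(f"receptor: {receptor_risk:.1e}, worker: {worker_risk:.1e}")
```

In this contrived example, the cumulative occupational risk (6 × 10⁻⁴) exceeds the hypothetical receptor cancer risk (about 3 × 10⁻⁵), illustrating how an aggressive remedy can shift, rather than reduce, total human health risk.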
calculated by integrating these mass flux measurements across the defined plane or cross
section. Mass discharge, therefore, represents the total mass of the solute conveyed by
groundwater through a defined plane and is expressed in the useful units of mass per time
(e.g., g/day; ITRC 2010).
There are three primary methods of mass flux determination as adapted from ITRC
(2010):
Transect methods are the most commonly used because they are easier to implement in
the field, and data are less subject to significant temporal fluctuations. A conceptual dia-
gram of the transect method for mass flux and mass discharge measurement is provided
in Figure 8.61.
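The transect calculation itself is straightforward: the plane is divided into cells, each cell's mass flux is the Darcy flux (K × I) times concentration, and mass discharge is the area-weighted sum of the cell fluxes. A minimal sketch, with hypothetical cell values:

```python
# Transect method sketch: mass flux per cell and total mass discharge.
# All K, C, and area values below are hypothetical.

def mass_discharge(cells, gradient):
    """cells: list of (K [m/day], C [g/m^3], area [m^2]); gradient I [m/m].
    Returns per-cell mass fluxes J = K*I*C [g/day/m^2] and the
    area-weighted total mass discharge Md [g/day]."""
    fluxes = [K * gradient * C for K, C, _ in cells]
    total = sum(j * area for j, (_, _, area) in zip(fluxes, cells))
    return fluxes, total

cells = [(1.0, 10.0, 20.0),   # low-K cell
         (5.0, 8.0, 20.0),    # moderate-K cell
         (33.3, 12.0, 20.0)]  # high-K cell dominates the discharge
J, Md = mass_discharge(cells, gradient=0.003)  # Md ~ 27 g/day
```

Note that the high-conductivity cell controls the total even though its concentration is similar to the others, which is why flow-field characterization matters as much as concentration data.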
Potential uses of mass flux and mass discharge estimates include
FIGURE 8.61
Conceptual diagram of the transect method of calculating mass flux and mass discharge. (Adapted from
Interstate Technology and Regulatory Council (ITRC), Use and Measurement of Mass Flux and Mass Discharge,
Technology overview document, 2010. Available at http://www.itrcweb.org/Documents/MASSFLUX1.pdf,
accessed July 20, 2011.)
Groundwater Remediation 499
[Figure 8.62 depicts a source zone discharging through three layers: fine sand (K = 1.0 m/day, J = 0.03 g/d/m2), sand (K = 5 m/day, J = 0.15 g/d/m2), and gravelly sand (K = 33.3 m/day, J = 1 g/d/m2).]
FIGURE 8.62
Conceptual cross section illustrating the use of mass flux in prioritizing treatment zones. The contaminant con-
centration (C = 10,000 µg/L) and hydraulic gradient (I = 0.003 m/m) are identical in all three layers. However,
variation in hydraulic conductivity (K) leads to a range in mass flux estimates (J) that spans nearly two orders
of magnitude with the gravelly sand layer contributing the greatest contaminant flux to the downgradient
groundwater plume. This analysis can help justify remediating the gravelly sand layer first and/or to the great-
est extent. (Adapted from Interstate Technology and Regulatory Council (ITRC), Use and Measurement of
Mass Flux and Mass Discharge, Technology overview document, 2010. Available at http://www.itrcweb.org/
Documents/MASSFLUX1.pdf, accessed July 20, 2011.)
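The flux values in Figure 8.62 can be checked directly from J = K · I · C, converting C = 10,000 µg/L to 10 g/m³:

```python
# Reproducing the Figure 8.62 flux estimates with J = K * I * C.
I = 0.003          # hydraulic gradient [m/m]
C = 10.0           # 10,000 ug/L expressed as g/m^3
K = {"fine sand": 1.0, "sand": 5.0, "gravelly sand": 33.3}  # [m/day]
J = {layer: k * I * C for layer, k in K.items()}
# fine sand: 0.03, sand: 0.15, gravelly sand: ~1.0 g/day/m^2
```

The gravelly sand flux is roughly 33 times the fine sand flux, consistent with the nearly two-order-of-magnitude spread noted in the caption.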
Mass discharge is now an official alternative remedial metric as its use was specified in
the ROD for Superfund Site 12A located in Tacoma, WA, as quoted below from U.S. EPA
(2009b):
“The primary goals for the first tier of [Remedial Action Objective (RAO)] compliance
are to address residual sources, minimize the risk to receptors due to contaminated
surface soils and achieve a contaminant discharge reduction of at least 90% from the high
concentration source area [emphasis added] near the Time Oil building to the dissolved-
phase contaminant plume. Soil removal, ITR [in situ thermal remediation] and EAB [in
situ enhanced anaerobic bioremediation] will be considered complete and the Remedy
will be considered operational and functional when the tier 1 criteria have been met.
Once the tier 1 criteria have been met, the operations and maintenance of OU1 will be
turned over to the State of Washington.”
This specific ROD (U.S. EPA 2009b) is also more progressive in that it provides a frame-
work for discontinuing the existing pump and treat system and transitioning to MNA
following the mass discharge reduction and specifically mentions TI as an option if MNA
cannot be shown to comply with ARARs in a reasonable timeframe. However, consid-
eration of TI would only occur after implementation of the $16,210,000 net present value
source-remediation remedy and, therefore, is something of a moot point (U.S. EPA 2009b).
Regardless, this site strikes a positive chord by using mass discharge as an adaptive site-
management alternative end point in that reaching the 90% mass discharge reduction trig-
gers the transition from one remedial technology or approach to another (and, in this case,
from one regulatory approach to another).
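The trigger logic in such a ROD can be expressed in a few lines. The baseline and current discharge values below are hypothetical; the 90% target follows the Site 12A criterion quoted above.

```python
# Mass discharge as an adaptive-management trigger (hypothetical data).

def remedy_phase(baseline_gpd, current_gpd, target=0.90):
    """Return the recommended phase and the fractional reduction achieved."""
    reduction = 1.0 - current_gpd / baseline_gpd
    phase = "transition to MNA" if reduction >= target else "continue active remedy"
    return phase, reduction

phase, r = remedy_phase(baseline_gpd=120.0, current_gpd=10.0)
# ~92% reduction -> criterion met, transition triggered
```

Because the metric is a ratio of two measured discharges, systematic errors in the flow-field estimate partially cancel, which is one reason relative changes are more robust than absolute values.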
In summary, mass flux and discharge monitoring over time could lead to better compli-
ance strategies, optimized remediation approaches, and updated long-term monitoring
guidance (Kram et al. 2011). However, the fine spatial and temporal resolution of monitor-
ing along one or more transects can make mass flux estimation expensive. Furthermore,
the accuracy of the method is dependent not only on contaminant concentration data but
also on precise characterization of the groundwater flow field (i.e., hydraulic conductivity
and gradient are components of the calculation). For this reason, relative changes in mass
flux are commonly calculated as opposed to depending on absolute numbers.
One novel approach that can significantly improve the utility and accuracy of mass
flux estimation while also reducing the costs of data collection, analysis, and visualiza-
tion is presented in the work of Kram et al. (2011). The patented process, developed by
Groundswell Technologies, Inc., uses integrated environmental monitoring sensors,
telemetry, GIS, and geostatistical algorithms to calculate and visualize mass flux and dis-
charge in real time. This technology has great potential in evaluating remediation system
performance and contaminant discharges from aquifers to surface water receptors. The
software component, named Waiora, is a multimedia data acquisition, management, and
visualization tool with the ability to monitor sensors measuring hydraulic head and solute
concentrations through a Web interface (Kram et al. 2011).
The Waiora platform has a user-friendly GUI that can be used to produce contour maps,
time series plots, 2D and 3D visualizations and animations, transect slices, and statisti-
cal analyses. Waiora can also assist in groundwater model calibration and functions as a
document repository with sharing capabilities that can integrate historical data with real-
time measurements from sensors in the field.
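Waiora's interpolation and geostatistical routines are proprietary, so the sketch below uses simple inverse-distance weighting as a stand-in to show the general idea: scattered sensor readings (hypothetical well coordinates and heads) are gridded onto a regular mesh before contouring.

```python
# IDW gridding of scattered sensor data (stand-in for proprietary methods).
import numpy as np

def idw_grid(xy, values, grid_x, grid_y, power=2.0):
    """Interpolate scattered (x, y) sensor values onto a regular grid."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    pts = np.column_stack([gx.ravel(), gy.ravel()])
    d = np.linalg.norm(pts[:, None, :] - xy[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)            # avoid division by zero at a sensor
    w = 1.0 / d ** power
    z = (w * values).sum(axis=1) / w.sum(axis=1)
    return z.reshape(gy.shape)

# Four hypothetical monitoring wells: (x, y) in meters, head in meters.
xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
heads = np.array([10.2, 10.0, 10.1, 9.9])
grid = idw_grid(xy, heads, np.linspace(0, 10, 21), np.linspace(0, 10, 21))
# grid can now be passed to a contouring routine (e.g., matplotlib contour)
```

IDW honors the measured values at the sensor locations and stays bounded by the data range; kriging or another geostatistical method would additionally provide uncertainty estimates.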
Waiora calculates mass flux through the transect method described in ITRC (2010); how-
ever, the use of sensors and telemetry enables more accurate estimation than traditional
methods. This is because the high density of hydraulic head and solute measurements helps
capture the temporal variability inherent to the groundwater system. Traditional manual
sampling events may fail to capture seasonal effects that increase or reduce contaminant
concentrations or alter hydraulic head distributions. The continuous data processed by
Waiora enables integration of mass flux over time such that the average mass discharge
over durations relevant to the time scale of plume formation can be more accurately quantified. The use of sensors and telemetry also significantly reduces long-term data collection costs, for example, labor, laboratory analyses, and management of investigation-derived waste.
Examples of visualizations from a Waiora U.S. EPA Environmental Technology
Verification (ETV) project are presented in Figures 8.63 through 8.66. The pilot project
involved the use of water-level transducers and nitrate sensors to monitor nitrate treat-
ment within a bioreactor trench that receives wastewater discharge. Figure 8.63 depicts the
distribution of hydraulic head at the pilot site, represented by contours created by Waiora
FIGURE 8.63
Potentiometric surface contours and selected sensor time series charts of water level–elevation data created
using Waiora. Hardware included Instrumentation Northwest level transducers and WaveData telemetry plat-
form. (Courtesy of Mark Kram, PhD, CGWP of Groundswell Technologies, Inc.)
FIGURE 8.64
Contours of nitrate concentration (in parts per million) created using Waiora. Hardware included Instrumenta
tion Northwest nitrate ion selective electrodes and WaveData telemetry platform. (Courtesy of Mark Kram,
PhD, CGWP of Groundswell Technologies, Inc.)
FIGURE 8.65
Three-dimensional visualization of nitrate mass discharge in grams per second through a source control plane
created using Waiora. (Courtesy of Mark Kram, PhD, CGWP of Groundswell Technologies, Inc.)
using water-level data collected by sensors in the field. Similarly, Figure 8.64 presents con-
tours of nitrate concentration in groundwater interpolated from nitrate sensor measure-
ments collected in real time. Note that the contours presented in Figures 8.63 and 8.64
represent a snapshot in time; however, Waiora can automatically reproduce the contours
for any monitored time step and play back geospatial changes in these parameters. Figure
8.65 is a three-dimensional view of nitrate mass discharge through the control transect
of interest, located a few feet downgradient of the nitrate injection point oriented per-
pendicular to the primary direction of groundwater flow. Mass discharge in Figure 8.65
is calculated using the surfaces presented in Figures 8.63 and 8.64 in addition to a site-
specific estimate of hydraulic conductivity. Temporal changes in mass discharge through
the transect can be tracked and visualized in a time series graph as shown in Figure 8.66.
Integrating this graph over time yields the total contaminant mass conveyed through the plane during the period of interest.
A similar sensor-based approach can be applied to numerous other contaminants, includ-
ing chlorinated solvents, such as TCE.
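Integrating a discharge time series is a one-line trapezoidal-rule calculation; the times and discharge values below are invented for illustration.

```python
import numpy as np

# Hypothetical monitoring record: mass discharge through the control plane.
t_days = np.array([0.0, 10.0, 20.0, 30.0])     # observation times
md_gpd = np.array([8.0, 6.0, 5.0, 4.0])        # mass discharge [g/day]

# Trapezoidal rule: total mass conveyed through the plane over 30 days.
total_mass_g = float(np.sum((md_gpd[:-1] + md_gpd[1:]) / 2.0 * np.diff(t_days)))
# -> 170 g
```

With continuous sensor data, the spacing between samples shrinks from weeks to minutes, so the integral captures seasonal and event-driven fluctuations that quarterly manual sampling would miss.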
FIGURE 8.66
Time series chart of nitrate mass discharge (grams per second) through the control plane depicted in Figure
8.65, created using Waiora. (Courtesy of Mark Kram, PhD, CGWP of Groundswell Technologies, Inc.)
“The Visalia Pole Yard Superfund site attained all soil and groundwater remediation
goals, becoming one of the best examples to date of a site with massive quantities of
DNAPL in the saturated zone that has achieved and sustained drinking water stan-
dards following a source-mass depletion remedy.”
• Groundwater pump and treat with discharge to publicly owned treatment works
(1975–1985)
• Construction and operation of an on-site groundwater treatment system to con-
tinue operating the pump and treat system (1985–1997)
• SEE (1997–2000)
• Installation and operation of an enhanced biodegradation system with continued
pump and treat operation (2000–2004)
• Excavation of shallow soils from 0–10 ft below ground surface (2006; U.S. EPA 2009c)
The final close-out report for the site listed the total remedial cost between 1996 and 2006
as approximately $30 million (more than two-thirds of which was the thermal remedi-
ation). Costs for the first 20 years of pump and treat operation and treatment-building
construction were not provided nor was the total energy consumed by the thermal reme-
diation (U.S. EPA 2009c). Without digging deeper, it may appear that the end result of clean
groundwater justifies this significant expenditure, and that the remediation was necessary
to protect human health and the environment and to restore beneficial use of the property
and its underlying groundwater. Unfortunately, as detailed in the ROD (U.S. EPA 1994)
and the final site close-out report (U.S. EPA 2009c), this was not the case.
First of all, no private or public drinking water was contaminated by the Visalia Pole
Yard site. The following text is taken directly from the ROD (U.S. EPA 1994):
“The primary contribution to risk for each of these populations (on- and off-site occu-
pational workers and off-site residents) is the estimated hypothetical future ingestion
of groundwater from the intermediate aquifer. On-site wells are used for groundwater
monitoring and treatment extraction purposes only. Thus, groundwater exposures evalu-
ated in this risk assessment are hypothetical [emphasis added].”
More disturbing is that the five-year review specifically stated, “There are no specific
redevelopment plans currently planned for the site.” Site-redevelopment opportunities
were actually severely limited by a Covenant to Restrict Use of Property, Environmental
Restriction, which was required as part of the ROD. The following text related to this covenant is quoted from that document:
“As remedial action objectives are based on industrial cleanup standards, prohibited
Site Uses include: residences, human hospitals, schools, and day care centers for chil-
dren. Prohibited Activities include: soil disturbance greater than ten feet below grade,
and the installation of water wells for any purpose [emphasis added].”
When considering all of the above information, it is unclear who or what exactly benefited
from the remediation other than the site environmental consultant and thermal remediation
vendor. The community was unaffected by the site and did not participate actively in site-
redevelopment plans (of which there were none), and future use of the groundwater that was
remediated to site-specific standards (because of a hypothetical future exposure risk) is strictly
prohibited by law. Additionally, as far as the authors are aware, the contaminated groundwater did not discharge to any surface water feature or cause any adverse ecological impact.
It therefore seems that $30 million was spent simply to prove the point that remediation of
DNAPL sites to some standard is feasible. It is still puzzling, however, that the site closure
report strictly prohibits any future use of groundwater underlying the site, including installa-
tion of any wells.
The frequency with which sites, such as the Visalia Pole Yard, are aggressively remedi-
ated with unclear benefits leads to the question posed by many over the past 30 years: Are
Superfund cleanups really worth the cost? As of 2005, an estimated $35 billion in federal
monies and an unknown amount of private funding have been spent on Superfund cleanups
with remediation only complete at roughly half of the nearly 1600 sites. The average cost of
Superfund cleanups has been estimated at $43 million per site (Greenstone and Gallagher
2008). Two working papers produced by the Massachusetts Institute of Technology (MIT)
Department of Economics seek to answer the above question, evaluating the benefits of
Superfund from an economic (Greenstone and Gallagher 2008) and public health (Currie,
Greenstone, and Moretti 2011) perspective.
Greenstone and Gallagher (2008) assess the economic benefits of Superfund by com-
paring housing market outcomes in the areas surrounding the first 400 sites selected for
Superfund cleanup to the areas surrounding the 290 sites that narrowly missed quali-
fying for the program. The results of the study indicate that Superfund cleanups led to
changes in local residential property values that are statistically indistinguishable from
zero. This is also true for property rental rates, housing supply, total population, and the
types of individuals living near the sites. This conclusion indicates that economic benefits
of Superfund likely fall short of the high costs of the program. Potential explanations for
and implications of this phenomenon are provided in the paper:
“In our view, the most likely explanations are that the people that choose to live near
these sites do not value the clean-ups or that consumers have little reason to believe
that the clean-ups substantially reduce health risks. In either case, the results mean that
local residents’ gain in welfare from Superfund clean-ups falls well short of the costs.
Unless there are substantial benefits that are not captured in local housing markets,
less ambitious clean-ups like the erection of fences, posting of warning signs around
the sites, and simple containment of toxins might be a more efficient use of resources”
(Greenstone and Gallagher 2008).
It is important to note that the U.S. EPA partially funded a similar study (Gamper-Rabindran and Timmins 2011) as an apparent rebuttal to the findings of Greenstone and Gallagher (2008). The approach of Gamper-Rabindran and Timmins (2011) uses data from Greenstone and Gallagher (2008) but applies slightly different conceptual and economic methodologies. For example, there is greater focus on site deletion from the NPL as a critical element. Gamper-Rabindran and Timmins also state that their methods account for within-tract heterogeneity and can detect benefits understated or entirely missed by the median tract-level approach taken by Greenstone and Gallagher (2008). The findings of the 2011 paper are:
“Our results at these three levels of analysis reveal that deletion, which signals the end
of cleanup activities, significantly raises the value of nearby owner-occupied houses on
average at the national level, but that there is considerable heterogeneity in this effect
across metro-specific housing markets. Our tract analysis finds that deletion of a site
raises housing values significantly at the lower deciles of the within-tract housing value
distribution—by 18.2% at the 10th percentile, 15.4% at the median, and 11.4% at the 60th
percentile. . . Our analysis of repeat sales data uncovers evidence of significant hetero-
geneity in the effects of Superfund site remediation across metro areas. We find that
deletion (measured relative to proposal) causes a sizable appreciation in housing values
in northern New Jersey (11.3%), but we find no statistically significant effect of dele-
tion (measured relative to proposal) for LA metro, southwestern Connecticut or metro
Boston. While the appreciation in New Jersey indicates that some neighborhoods do
recover post-cleanup, the lower prices in Boston at deletion relative to pre-discovery
(–6.1%) suggest, conversely, that stigma against contaminated sites and neighborhoods
can also persist despite cleanup. . . This heterogeneity suggests that to perform a cost-
benefit analysis of a particular candidate site, metro-specific estimates, which assess
the remediation of multiple sites in the relevant regional housing market, would be
appropriate.”
The authors acknowledge that there are numerous non-economic benefits to Superfund
remediation, most notably related to public health and the reduction or reversal of damages
to important natural resources. For example, Currie et al. (2011) found that Superfund clean-
ups can reduce the incidence of congenital anomalies by approximately 20%–25% within the
affected community. This was the first study to examine the impact of cleanups of hazard-
ous-waste sites on infant health, which has the important benefit of not requiring detailed
knowledge of environmental factors that may affect adult health, including lifetime smoking
behavior, lifetime exposure to ambient air pollution, and lifetime exposure to multiple hazard-
ous-waste sites (Currie et al. 2011). The reader is referred to U.S. EPA (2011c) for a comprehen-
sive listing of additional benefits of the Superfund program from the U.S. EPA’s perspective.
The work of Currie et al. (2011) clearly demonstrates the need to eliminate actual (not
hypothetical) contaminant exposures related to hazardous-waste sites. However, how
to most efficiently eliminate existing and prevent future exposures across the country is
beyond the realm of Superfund as we know it. Sustainable remediation is an emerging field
that has been specifically developed to counteract the irrational mind-set of the Visalia
Pole Yard cleanup and other similar projects. The concept was originally introduced by
the U.S. Sustainable Remediation Forum (SURF) in a 2008 white paper (SURF 2009) and
has since rapidly spread throughout the United States and the European Union (EU). In
its Summer 2011 issue, the Remediation Journal published the first guidance documents on
the subject, which together present a framework for sustainable remediation, detail neces-
sary steps in completing environmental footprint analysis and life-cycle assessments, and
identify metrics for incorporating sustainable practices in remediation projects (see Simon
(2011) for an overview).
Sustainable remediation is defined as a remedy or combination of remedies whose net
benefit on human health and the environment is maximized through the judicious use of
limited resources (SURF 2009). This is achieved by balancing economic growth, protection
of the environment, and social responsibility to improve the quality of life for current and
future generations (U.S. EPA 2011d). Thus, the triple bottom line of sustainable remedia-
tion is measured by environmental, social, and economic factors (Butler et al. 2011). A dia-
gram illustrating interconnections within the triple bottom line is presented in Figure 8.67.
The Sustainable Remediation Framework, defined by Holland et al. (2011), is character-
ized by the following key elements:
FIGURE 8.67
Sustainable remediation’s triple bottom line. Sustainable remediation attempts to achieve the optimal balance
between social, environmental, and economic factors. (Original figure based on concept presented in United States
Environmental Protection Agency (U.S. EPA), US and EU Perspectives on Green and Sustainable Remediation Part
2, CLU-IN Internet Seminar, Delivered March 15, 2011, 2011. Available at http://www.cluin.org/live/archive/#US_
and_EU_Perspectives_on_Green_and_Sustainable_Remediation_Part_2, accessed August 15, 2011.)
FIGURE 8.68
Expansion of the traditional CSM to include sustainability factors as proposed by Holland et al. (2011).
questions regarding the beneficial use of groundwater at the site, the feasibility
of reaching site closure, and how achieving remedial goals will change site risk.
• Implementation of sustainable remedial measures, including, but not limited to:
• Using in situ technologies that mimic natural processes and/or result in con-
taminant mineralization rather than phase transfer. For example, bioremedia-
tion, which can result in complete contaminant destruction in situ, is preferable
to thermal remediation during which phase change is often required to extract
contaminants from the subsurface and to potentially dispose of remediation
waste off site.
• Minimizing or eliminating emissions and natural resource (e.g., energy, water)
consumption. This includes minimizing transportation requirements when-
ever possible.
• “Greener cleanups are not an alternative approach to setting cleanup levels and
selecting remedies;
• Cleaning up sites for reuse supports sustainable development; and
• Reducing the environmental footprint of a cleanup does not justify changing the
end point.”
Simply put, green remediation does not incorporate sustainability considerations in select-
ing the remedial approach or the preferred end use. Green remediation is strictly imple-
mentation related, and does not incorporate social and economic factors to address broader
land-management issues (U.S. EPA 2011d). This is unfortunate as the greatest opportunity
to realize sustainable outcomes is in the early stages of remedial implementation while
setting the remedial specification and strategy (NICOLE 2010). This is shown conceptually
in Figure 8.69. Site-specific data are needed to confirm this concept.
With these fundamental limitations, the potential exists for green remediation to become
a good example of what has been termed “LEED brain” in reference to misuse of the
Leadership in Energy and Environmental Design (LEED) building-certification program.
LEED brain is a term originally coined by Schendler and Udall (2005) to reflect the absur-
dity of LEED-certifying fundamentally non-sustainable projects because of the inclusion
of expensive green features, such as rooftop fuel cells and benches made of salvaged euca-
lyptus wood (Owen 2009). Owen (2009) presents an excellent example of LEED brain in his
analysis of the Philip Merrill Environmental Center of the Chesapeake Bay Foundation
(CBF), which was opened in 2001 and became the first building to be certified as LEED
FIGURE 8.69
Conceptual diagram illustrating that including sustainability considerations in the remedy selection phase,
during which performance/cleanup standards are also determined, is expected to result in the greatest overall
benefit to project sustainability. (Based on concept presented in Network for Industrially Contaminated Land in
Europe (NICOLE), NICOLE Road Map for Sustainable Remediation, 2010. Available at http://www.nicole.org/
documents/DocumentList.aspx?l=2&w=n, accessed July 28, 2011.)
platinum. The CBF facility has innumerable green technologies, such as geothermal wells,
composting toilets, rainwater-collection systems, and showers for bicyclists. However, the
facility was constructed at a remote location along the Chesapeake Bay and is inaccessible
to mass transit. The previous CBF headquarters was in downtown Annapolis, MD, and the
relocation “turned all of the foundation’s eighty employees into automobile commuters”
(Owen 2009). Furthermore, the CBF is an hour-long drive from Baltimore or Washington,
D.C., which is where the majority of visitors will be coming from by car. This pennywise,
pound-foolish approach to sustainability will not result in a net environmental benefit
and fosters the dangerous misconception that sustainability (as measured by the quantity
of green gadgets) is very expensive. An alternative (and much less expensive) building
in a downtown location accessible to mass transit would have been a much more sus-
tainable solution for the CBF, regardless of the number of composting toilets used in the
construction.
Despite being a potential example of LEED brain, green remediation accomplishes the
critical objective of making consultants, regulators, and stakeholders consider the over-
all environmental footprint of a cleanup. Additionally, green remediation encourages the
use of important technologies that undoubtedly play a role in sustainable development.
It is the opinion of the authors that green remediation is another evolutionary step in the
maturation of the remediation industry. Figure 8.70 is an expanded schematic of this evo-
lutionary process originally presented in SURF (2009). The authors hope that the remedial
community swiftly embraces sustainable remediation as it has great potential in applying
FIGURE 8.70
Conceptual diagram illustrating the evolution of societal thinking about waste and environmental cleanups.
This version has been expanded from the original to include the phase we are currently entering (Increased
Knowledge), in which green remediation practices are encouraged and alternative remedial end points and
metrics are at least being actively considered. The next logical evolutionary step is the full embrace of sus-
tainable remediation practices. (Based on concept presented in U.S. Sustainable Remediation Forum (SURF).
Remediat. J. 19, 5–114, 2009, doi: 10.1002/rem.20210.)
remedial outcomes while also freeing up financial resources for assessment and proper
management of as many contaminated sites as possible.
References
American Cancer Society (ACS), 2011. Cancer Facts and Figures 2011. American Cancer Society,
Atlanta, GA.
Beyke, G., and Fleming, D., 2005. In-situ thermal remediation of DNAPL and LNAPL using electrical
resistance heating. Remediat. J. 15, 5–22, doi: 10.1002/rem.20047.
Bussey, T., 2007. Final Five-Year Review Report. Third Five-Year Review Report for Fort Lewis
CERCLA Sites, Pierce County, Washington, Prepared for the US Army Fort Lewis Department
of Public Works, Fort Lewis, WA, 42 pp.
Butler, P. B., Larsen-Hallock, L., Lewis, R., Glenn, C., and Armstead, R., 2011. Metrics for integrat-
ing sustainability evaluations into remediation projects. Remediat. J. 21, 81–87, doi: 10.1002/
rem.20290.
Chapelle, F. H., Widdowson, M. A., Brauner, J. S., Mendez, E., and Casey, C. C., 2003. Methodology
for Estimating Times of Remediation Associated with Monitored Natural Attenuation. U.S.
Geological Survey Water-Resources Investigations Report 03-4057.
Chapelle, F. H., Novak, J., Parker, J., Campbell, B. G., and Widdowson, M. A., 2007. A Framework
for Assessing the Sustainability of Monitored Natural Attenuation. U.S. Geological Survey
Circular 1303, 35 pp.
Cohen, R. M., Vincent, A. H., Mercer, J. W., Faust, C. R., and Spalding, C. P., 1994. Methods for Monitoring Pump-and-Treat Performance. EPA Contract No. 68-C8-0058, Office of Research and Development, EPA/600/R-94/123, Ada, OK, 114 pp.
Currie, J., Greenstone, M., and Moretti, E., 2011. Superfund Cleanups and Infant Health. Massachusetts
Institute of Technology Department of Economics Working Paper Series, Working Paper 11-02.
Available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1768233, accessed July 22, 2011.
Deeb, R., Hawley, E., Kell, L., and O’Laskey, R., 2011. Assessing Alternative Endpoints for Ground
water Remediation at Contaminated Sites. ESTCP Project ER-200832. Available at http://
www.serdp.org/Program-Areas/Environmental-Restoration/Contaminated-Groundwater/
Persistent-Contamination/ER-200832, accessed August 12, 2011.
Environmental Security Technology Certification Program (ESTCP), 2009. In Situ Bioremediation of
Chlorinated Solvents Source Areas with Enhanced Mass Transfer. U.S. Department of Defense
Cost and Performance Report, Project: ER-0218, 73 pp.
Gamper-Rabindran, S., and Timmins, C., 2011. Valuing the Benefits of Superfund Site Remediation:
Three Approaches to Measuring Localized Externalities. NSF ITR-0427889, U.S. EPA
Purchase Order EP09W001911. Available at http://www.epa.gov/superfund/accomp/pdfs/
Benefits%20Study%202011.pdf, accessed August 10, 2011.
Greenstone, M., and Gallagher, J., 2008. Does Hazardous Waste Matter? Evidence from the Housing
Market and the Superfund Program. Massachusetts Institute of Technology, Department of
Economics Working Paper Series, Working Paper 05-27. Available at http://papers.ssrn.com/
sol3/papers.cfm?abstract_id=840207, accessed July 22, 2011.
Ground Water Task Force, 2007. Recommendations from the EPA Ground Water Task Force. Office of
Solid Waste and Emergency Response, EPA 500-R-07-001, 29 pp.
Hazen, T. C., 2009. Biostimulation. LBNL-1691E. Available at http://cluin.org/techfocus/default
.focus/sec/Bioremediation_of_Chlorinated_Solvents/cat/Overview/, accessed July 30, 2011.
Hazen, T. C., 2010. In Situ Groundwater Bioremediation. Chapter 13 in Part 24 of the Handbook of
Hydrocarbon and Lipid Microbiology. Springer-Verlag, Berlin, Heidelberg, ISBN: 978-3-540-77587-4.
Available at http://cluin.org/techfocus/default.focus/sec/Bioremediation_of_Chlorinated_
Solvents/cat/Overview/, accessed July 30, 2011.
Helsel, D. R., 2005. Nondetects and Data Analysis. John Wiley & Sons, Hoboken, NJ, 250 pp.
Holland, K. S., Lewis, R. E., Tipton, K., Karnis, S., Dona, C., Petrovskis, E., Bull, L. P., Taege, D., and
Hook, C., 2011. Framework for integrating sustainability into remediation projects. Remediat. J.,
21, 7–38, doi: 10.1002/rem.20288.
Interstate Technology and Regulatory Council (ITRC), 1999. Natural Attenuation of Chlorinated
Solvents in Groundwater: Principles and Practices. Interstate Technology and Regulatory
Cooperation Work Group, In Situ Bioremediation Work Team, and Industrial Members of the
Remediation Technologies Development Forum (RTDF). Available at http://www.itrcweb
.org/Documents/ISB-3.pdf, accessed June 2, 2011.
Interstate Technology and Regulatory Council (ITRC), 2005. Technical and Regulatory Guidance for
In Situ Chemical Oxidation of Contaminated Soil and Groundwater, Second Edition. ISCO-2,
Interstate Technology and Regulatory Council, In Situ Chemical Oxidation Team, Washington,
DC. Available at http://www.itrcweb.org, accessed June 10, 2011.
Interstate Technology and Regulatory Council (ITRC), 2008. In Situ Bioremediation of Chlorinated
Ethene: DNAPL Source Zones. Bioremediation of DNAPLs Team, BioDNAPL-3. Available at
http://cluin.org/techfocus/default.focus/sec/Bioremediation_of_Chlorinated_Solvents/
cat/Guidance/, accessed July 25, 2011.
Interstate Technology and Regulatory Council (ITRC), 2010. Use and Measurement of Mass Flux
and Mass Discharge. Technology overview document. Available at http://www.itrcweb.org/
Documents/MASSFLUX1.pdf, accessed July 20, 2011.
Jurgens, B. C., McMahon, P. B., Chapelle, F. H., and Eberts, S. M., 2009. An Excel® Workbook for
Identifying Redox Processes in Ground Water. U.S. Geological Survey Open-File Report 2009-
1004, 8 pp. Available at http://pubs.usgs.gov/of/2009/1004/, accessed November 12, 2010.
Keely, J. F., 1989. Performance Evaluation of Pump and Treat Remediations, USEPA/540/4-89-005,
Robert S. Kerr Environmental Research Laboratory, Ada, OK, 19 pp.
Kingston, J. T., Dahlen, P. R., Johnson, P. C., Foote, E., and Williams, S., 2010. Final Report: Critical
Evaluation of State-of-the-Art In Situ Thermal Treatment Technologies for DNAPL Source Zone
Treatment. ESTCP Project ER-0314, 1272 pp. Available at http://cluin.org/techfocus/default.
focus/sec/Thermal_Treatment%3A_In_Situ/cat/Guidance/, accessed January 7, 2011.
Kingston, J. T., Dahlen, P. R., Johnson, P. C., Foote, E., and Williams, S., 2009. State-of-the-Practice
Overview: Critical Evaluation of State-of-the-Art In Situ Thermal Treatment Technologies for
DNAPL Source Zone Treatment. ESTCP Project ER-0314. Available at http://cluin.org/techfocus/
default.focus/sec/Thermal_Treatment%3A_In_Situ/cat/Guidance/, accessed January 7, 2011.
Kram, M. L., Airhart, S., Tyler, D., Dindal, A., Barton, A., McKernan, J. L., and Gustafson, G., 2011.
Web-based automated remediation performance monitoring and visualization of contaminant
mass flux and discharge. Remediat. J., 21, 89–101, doi:10.1002/rem.20291.
Groundwater Remediation 513
Kresic, N., 2009. Groundwater Resources: Sustainability, Management, and Restoration. McGraw-Hill,
New York, 852 pp.
Laws, E. P., 1995. Memorandum. Subject: Superfund Groundwater RODs: Implementing Change This
Fiscal Year, July 31, 1995. EPA-540-F-99-005, OSWER-9335.5-03P, PB99-963220, Washington, DC.
Magnuson, J. K., Stern, R. V., Gossett, J. M., Zinder, S. H., and Burris, D. R., 1998. Reductive dechlo-
rination of tetrachloroethene to ethene by a two-component enzyme pathway. Appl. Environ.
Microbiol. 64, 1270–1275.
Maymó-Gatell, X., Chien, Y., Gossett, J. M., and Zinder, S. H., 1997. Isolation of a bacterium that
reductively dechlorinates tetrachloroethene to ethene. Science 276, 1568–1571.
National Research Council (NRC), 1994. Alternatives for Groundwater Cleanup. National Academy
Press, Washington, DC, 315 pp.
Network for Industrially Contaminated Land in Europe (NICOLE), 2010. NICOLE Road Map for
Sustainable Remediation. Available at http://www.nicole.org/documents/DocumentList
.aspx?l=2&w=n, accessed July 28, 2011.
Downloaded by [University of Auckland] at 23:45 09 April 2014
Newell, C. J., Rifai, H. S., Wilson, J. T., Connor, J. A., Aziz, J. A., and Suarez, M. P., 2002. Calculation and
Use of First-Order Rate Constants for Monitored Natural Attenuation Studies. Ground Water
Issue, EPA/540/S-02/500, US Environmental Protection Agency, National Risk Management
Research Laboratory, Cincinnati, OH, 27 pp.
Owen, D., 2009. Green Metropolis: Why Living Smaller, Living Closer, and Driving Less are the Keys to
Sustainability. Riverhead Books, New York.
Parsons Engineering Science, Inc. (Parsons), 2009. Field-Scale Evaluation of Monitored Natural
Attenuation for Dissolved Chlorinated Solvent Plumes. Air Force Center for Environmental
Excellence (AFCEE), Contract Number F41624-00-D-8024, Task Order 0024, Brooks City-Base,
TX, 455 pp.
Powell, T., Smith, G., Sturza, J., Lynch, K., and Truex, M., 2007. New advancements for in-situ treat-
ment using electrical resistance heating. Remediat. J. 17, 51–70, doi:10.1002/rem.20124.
Ricker, J. A., 2008. A practical method to evaluate ground water contaminant plume stability. Ground
Water Monitor. Remediat. 28(4), 85–94.
Ronen, D., Sorek, S., and Gilron, J., 2011. Rationales behind irrationality of decision making in
groundwater quality management. Ground Water, 2011, doi:10.1111/j.1745-6584.2011.00823.x.
Ryan, S., 2010. Dense Nonaqueous Phase Liquid Cleanup: Accomplishments at Twelve NPL Sites.
National Network of Environmental Management Studies Fellow, U.S. EPA, http://cluin.org,
84 pp.
Schendler, A., and Udall, R., 2005. LEED Is Broken: Let’s Fix It. Grist, October 26, 2005.
Simon, J. A., 2011. Editor’s perspective—US sustainable remediation forum pushes forward with
guidance on the state of the practice. Remediat. J. 21, 1–5, doi:10.1002/rem.20287.
Sinke, A., and van Moll, L., 2010. Natural Attenuation (Cartoon Booklet). Available at http://www
.nicole.org/documents/documentlist.aspx?w=n&l=2, accessed July 28, 2011.
Thompson, P., Baker, P., Calkin, S., and Forbes, P., 2000. The Regional Bedrock Structure at Loring Air
Force Base, Limestone, Maine: The Unifying Model for the Study of Basewide Groundwater
Contamination. White Paper, AMEC E&I, Inc.
TRS Group, Inc. (TRS), 2009. Featured Site: Site Characteristics and Design Parameters, Figure 4.
Available at http://www.thermalrs.com/performance/featuredSites/ftLewis/featured_ft_
lewis_2.php, accessed July 20, 2011.
Tsitonaki, A., and Bjerg, P. L., 2008. In-Situ Chemical Oxidation: State of the Art. Schæffergarden,
Gentofte, ATV Jord og Grundvand, Kings Lyngby. Available at http://cluin.org/techfocus/
default.focus/sec/In_Situ_Oxidation/cat/Overview/, accessed July 7, 2011.
United Facilities Criteria (UFC), 2006. Design: In Situ Thermal Remediation. UFC 3-280-05. Available
at http://costperformance.org/remediation/pdf/USACE-In_Situ_Thermal_Design.pdf, accessed
January 10, 2011.
United States Army Corps of Engineers (USACE), 2007. Cost and Performance Report: In Situ Thermal
Remediation (Electrical Resistance Heating) East Gate Disposal Yard, Ft. Lewis, WA, 36 pp.
514 Hydrogeological Conceptual Site Models
United States Army Corps of Engineers (USACE), 2009. Design: In-Situ Thermal Remediation.
Manual 1110-1-401536, 226 pp. Available at http://www.clu-in.org/techfocus/default.focus/
sec/Thermal_Treatment%3A_In_Situ/cat/Guidance, accessed January 7, 2011.
United States Army Corps of Engineers (USACE), 2010. Five-Year Review Report for OU 1 SIA
Groundwater Interim Remedial Action, OU 2 SIA Soils, OU 3 Ammunition Storage Area Soils
and Groundwater, Anniston Army Depot, Calhoun County, Alabama, EPA ID: 321002027,
Mobile, AL, Prepared for U.S. EPA Region 4, Atlanta, GA.
United States Environmental Protection Agency (U.S. EPA), 1989a. Treatability Studies under
CERCLA: An Overview. Publication No. 9380.3-02FS, Office of Solid Waste and Emergency
Response, 6 pp.
United States Environmental Protection Agency (U.S. EPA), 1989b. Risk Assessment Guidance for
Superfund, Vol. I, Human Health Evaluation Manual (Part A), Interim final, EPA/540/1-
89/002, Office of Emergency and Remedial Response, US Environmental Protection Agency,
Washington, DC.
Downloaded by [University of Auckland] at 23:45 09 April 2014
United States Environmental Protection Agency (U.S. EPA), 1991. Risk Assessment Guidance for
Superfund, Vol. I, Human Health Evaluation Manual (Part B, Development of Risk-Based
Preliminary Remediation Goals), Interim, EPA/540/R-92/003, Office of Emergency and
Remedial Response, U.S. Environmental Protection Agency, Washington, DC.
United States Environmental Protection Agency (U.S. EPA), 1993. Guidance for Evaluating
the Technical Impracticability of Ground-Water Restoration, OWSER Directive 9234.2-5,
EPA/540-R-93-080, September.
United States Environmental Protection Agency (U.S. EPA), 1994. EPA Superfund Record of Decision:
Southern California Edison, Visalia Pole Yard Superfund Site, Visalia, California. EPA ID:
CAD980816466.
United States Environmental Protection Agency (U.S. EPA), 1995. Use of Risk-Based Decision
Making in UST Corrective Action Programs, OWSER Directive 9610.17, Office of Solid Waste
and Emergency Response, 20 pp.
United States Environmental Protection Agency (U.S. EPA), 1996a. Pump and treat Ground-
Water Remediation: A Guide for Decision Makers and Practitioners. Office of Research and
Development, EPA/625/R-95/005, Washington, DC, 90 pp.
United States Environmental Protection Agency (U.S. EPA), 1996b. Presumptive Response Strategy
and Ex-Situ Treatment Technologies for Contaminated Ground Water at CERCLA Sites, Final
Guidance. OSWER Directive 9288.1-12, EPA 540/R-96/023.
United States Environmental Protection Agency (U.S. EPA), 1999a. Use of Monitored Natural
Attenuation at Superfund, RCRA Corrective Action, and Underground Storage Tank Sites.
Directive 9200.4-17P, Office of Solid Waste and Emergency Response, 41 pp.
United States Environmental Protection Agency (U.S. EPA), 1999b. EPA Superfund Record of
Decision: Montrose Chemical Corp. and Del Amo. EPA ID: CAD008242711 and CAD029544731
OU(s) 03 & 03, Los Angeles, CA, 03/30/1999, Dual Site Groundwater Operable Unit II: Decision
Summary, EPA/ROD/R09-99/035.
United States Environmental Protection Agency (U.S. EPA), 2000. Engineered Approaches to In Situ
Bioremediation of Chlorinated Solvents: Fundamentals and Field Applications. EPA 542-R-00-
008. Available at http://cluin.org/download/remed/engappinsitbio.pdf, accessed August 12,
2011.
United States Environmental Protection Agency (U.S. EPA), 2003. The DNAPL Remediation Challenge:
Is There A Case For Source Depletion? Report Prepared by an Expert Panel to the Environmental
Protection Agency, Office of Research and Development, Publication EPA/600/R-03/143.
Available at http://www.epa.gov/ada/download/reports/600R03143/600R03143.pdf.
United States Environmental Protection Agency (U.S. EPA), 2005. Cost-Effective Design of Pump
and Treat Systems. Office of Solid Waste and Emergency Response, EPA 542-R-05-008. Available at
http://www.clu-in.org/download/remed/hyopt/factsheets/cost-effective_design.pdf, accessed
July 15, 2011.
Groundwater Remediation 515
United States Environmental Protection Agency (U.S. EPA), 2007a. Treatment Technologies for
Site Cleanup: Annual Status Report (Twelfth Edition). Office of Solid Waste and Emergency
Response, EPA-542-R-07-012. Available at http://www.clu-in.org/asr/, accessed July 10, 2011.
United States Environmental Protection Agency (U.S. EPA), 2007b. Optimization Strategies for Long-
Term Ground Water Remedies (with Particular Emphasis on Pump and Treat Systems). Office
of Solid Waste and Emergency Response, EPA 542-R-07-007. Available at http://www.clu-in
.org/download/remed/hyopt/542r07007.pdf, accessed July 17, 2011.
United States Environmental Protection Agency (U.S. EPA), 2008. A Systematic Approach for
Evaluation of Capture Zones at Pump and Treat Systems: Final Project Report. Office of
Research and Development, EPA 600/R-08/003, Washington, DC, 38 pp.
United States Environmental Protection Agency (U.S. EPA), 2009a. DNAPL Remediation: Selected
Projects Where Regulatory Closure Goals Have Been Achieved. Office of Solid Waste and
Emergency Response, EPA 542/R-09/008, 52 pp. Available at http://www.clu-in.org/s
.focus/c/pub/i/1719/, accessed July 25, 2011.
Downloaded by [University of Auckland] at 23:45 09 April 2014
United States Environmental Protection Agency (U.S. EPA), 2009b. Amendment #2 to the Record of
Decision for the Commencement Bay–South Tacoma Channel Superfund Site, Operable Unit 1,
Well 12A, EPA Region 10.
United States Environmental Protection Agency (U.S. EPA), 2009c. Final Close Out Report: Southern
California Edison Visalia Pole Yard Superfund Site, Visalia, Tulare County, California. EPA Region 9.
United States Environmental Protection Agency (U.S. EPA), 2010. Superfund Remedy Report,
Thirteenth Edition. Office of Solid Waste and Emergency Response, EPA-542-R-10-004.
Available at http://www.clu-in.org/asr/, accessed July 10, 2011.
United States Environmental Protection Agency (U.S. EPA), 2011a. Record of Decision. Available at
http://www.epa.gov/superfund/cleanup/rod.htm, accessed July 24, 2011.
United States Environmental Protection Agency (U.S. EPA), 2011b. Green Power Equivalency
Calculator Methodologies. Available at http://www.epa.gov/greenpower/pubs/calcmeth
.htm, accessed January 5, 2011.
United States Environmental Protection Agency (U.S. EPA), 2011c. Beneficial Effects of the Superfund
Program. Office of Superfund Remediation and Technology Innovation, EPA Contract EP W-07-
037. Available at http://www.epa.gov/superfund/accomp/pdfs/SFBenefits-031011-Ver1.pdf,
accessed August 10, 2011.
United States Environmental Protection Agency (U.S. EPA), 2011d. US and EU Perspectives on Green
and Sustainable Remediation Part 2. CLU-IN Internet Seminar, Delivered March 15, 2011.
Available at http://www.cluin.org/live/archive/#US_and_EU_Perspectives_on_Green_and_
Sustainable_Remediation_Part_2, accessed August 15, 2011.
United States Environmental Protection Agency (U.S. EPA), 2011e. Introduction to Green Remediation.
Office of Superfund Remediation and Technology Innovation, Quick Reference Fact Sheet.
Available at http://cluin.org/greenremediation/, accessed August 15, 2011.
United States Environmental Protection Agency (U.S. EPA), 2011f. Groundwater Road Map: Recommended
Process for Restoring Contaminated Groundwater at Superfund Sites. OSWER 9283.1-34, 31 pp.
U.S. Sustainable Remediation Forum (SURF), 2009. Sustainable remediation white paper—integrat-
ing sustainable principles, practices, and metrics into remediation projects. Remediat. J. 19,
5–114, doi: 10.1002/rem.20210.
Watts, R. J., 2011. Enhanced Reactant-Contaminant Contact through the Use of Persulfate In Situ
Chemical Oxidation (ISCO). Strategic Environmental Research and Development Program
(SERDP) Project ER-1480. Available at http://cluin.org/techfocus/default.focus/sec/In_Situ_
Oxidation/cat/Guidance/, accessed July 10, 2011.
Watts, R. J., Loge, F. J., and Teel, A. L., 2006. Improved Understanding of Fenton-Like Reactions
for the In Situ Remediation of Contaminated Groundwater, Including Treatment of
Sorbed Contaminants and Destruction of DNAPLs. Strategic Environmental Research and
Development Program (SERDP). Available at http://cluin.org/techfocus/default.focus/sec/
In_Situ_Oxidation/cat/Guidance/, accessed July 10, 2011.
516 Hydrogeological Conceptual Site Models
Wischkaemper, K., 2007. Technical Impracticability, So Far: Anniston Army Ammunition Depot in
Anniston, AL. TSP Semiannual Meeting in Las Vegas, November 7, 2007. Available at www
.epa.gov/tio/tsp/download/2007_fall_meeting/wed-wischkaemper.pdf.
Woolford, J. E., 2011. Memorandum. Subject: Clarification of OSWER’s 1995 Technical Impracticability
Waiver Policy. OSWER- #9355.5-32, Washington, DC.
Downloaded by [University of Auckland] at 23:45 09 April 2014
9
Groundwater Supply
measurements is increasing, it is becoming more and more evident that 100 years or so is
still too short to calculate the statistics necessary for a more accurate probability analysis
of extreme climate events, such as floods and droughts. For example, it was during a wet
period in the measured hydrologic record that the 1922 Colorado River Compact established
the basic apportionment of the river between the Upper and Lower Colorado River
Basins in the United States.

FIGURE 9.2
Aerial view of the July 2004 drought conditions of Lake Powell, located in southern Utah on the Colorado River.
(Courtesy of U.S. Bureau of Reclamation; available at http://www.usbr.gov/lc/region/g5000/photolab.)

At the time of compact negotiations, it was thought that an
average annual flow volume of about 21 million acre-feet (MAF; 1 acre-foot equals 1233
m³ and, conceptually, is equal to the volume of water that would cover 1 acre of land to a
depth of 1 ft) was available for apportionment. Subsequently, a 1944 treaty with Mexico
provided a volume of water of 1.5 MAF annually for Mexico. From the measured hydro-
logic data now available, it became apparent that the river’s average annual natural flow
had been overestimated, resulting in overallocation of its water and many political and
societal problems in the region. The reservoirs on the Colorado River used to manage
water supply of the region have been under increasing stress for years now (Figure 9.2).
Major users taking this water for granted, such as the state of California and the city of
Las Vegas, are struggling to find alternative sources of water supply. The focus, in most
cases, is on groundwater because, as in many other countries, the surface waters of the
United States are largely developed with little opportunity available to increase storage
along main rivers as few suitable sites remain for dams, and there is increased concern
about the environmental effects of reservoirs. The surface waters of the nation also
receive and assimilate, to a large degree, significant quantities of point- and nonpoint-
source contaminants (Anderson and Woosley 2005). Table 9.1 includes some comparative
features of groundwater and surface water resources that should be considered when
planning for IWRM.
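The acre-foot figures quoted above convert to metric volumes with simple arithmetic. A minimal sketch follows; the conversion factor is taken from the text, while the function and variable names are mine, for illustration only:

```python
# Convert the Colorado River Compact figures quoted above from million
# acre-feet (MAF) to cubic meters. The conversion factor (1 acre-foot is
# about 1,233 m³) is from the text; the names here are illustrative.
ACRE_FOOT_M3 = 1233.0

def maf_to_m3(maf):
    """Convert a volume in million acre-feet to cubic meters."""
    return maf * 1_000_000 * ACRE_FOOT_M3

compact_estimate_m3 = maf_to_m3(21)   # volume assumed available in 1922
mexico_treaty_m3 = maf_to_m3(1.5)     # volume provided to Mexico in 1944
```

At this scale, the 21 MAF apportioned by the compact corresponds to roughly 2.6 × 10¹⁰ m³ per year, which makes clear why even a modest overestimate of average annual flow translates into a very large overallocation.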
TABLE 9.1
Comparative Features of Groundwater and Surface Water Resources

Feature                   Groundwater Resources (Aquifers)      Surface Water Resources (Reservoirs)

Hydrological Characteristics
Storage Volumes           Very large                            Small to moderate
Resource Areas            Relatively unrestricted               Restricted to water bodies
Flow Velocities           Very low                              Moderate to high
Residence Times           Generally decades/centuries           Mainly weeks/months
Drought Propensity        Generally low                         Generally high
Evaporation Losses        Low and localized                     High for reservoirs
Resource Evaluation       High cost and significant uncertainty Lower cost and often less uncertain
Abstraction Impacts       Delayed and dispersed                 Immediate
Natural Quality           Generally (but not always) high       Variable
Pollution Vulnerability   Variable natural protection           Largely unprotected
Pollution Persistence     Often extreme                         Mainly transitory

Socioeconomic Factors
Public Perception         Mythical, unpredictable               Aesthetic, predictable
Development Cost          Generally modest                      Often high
Development Risk          Less than often perceived             More than often assumed
Style of Development      Mixed public and private              Largely public

Source: Tuinhof, A. et al., Groundwater Resource Management: An Introduction to Its Scope and
Practice, Sustainable Groundwater Management: Concepts and Tools, Briefing Note Series,
Note 1, GW MATE (Groundwater Management Advisory Team), The World Bank,
Washington, DC, 6 pp., 2002–2005. With permission.
The biggest challenge for IWRM is, and will continue to be, coping with two seemingly
incompatible imperatives: the needs of ecosystems and the needs of a growing human
population. Because both depend on the same water, ecosystems must be given full
attention within IWRM. At the same time, the Millennium Declaration 2000, agreed
upon by world leaders at the United Nations, sets out human livelihood imperatives
that are all closely water related, the most important being to halve, by 2015, the
population suffering from poverty, hunger, ill health, and lack of safe drinking water
and sanitation. A particularly crucial question will be the water-mediated implications
for different ecosystems of meeting the food, biomass, employment, and shelter needs
of an increasing population (Falkenmark 2003).
The most fundamental task of IWRM is the realization, by all stakeholders, that balanc-
ing and compromise are necessary in order to sustain both humanity’s and the planet’s life
support systems. Therefore, a watershed-based approach should have a priority with the
following goals (Falkenmark 2003):
• To satisfy societal needs while minimizing the pollution load and understanding
the water consumption that is involved
• To meet ecological minimum criteria in terms of fundamental ecosystem needs,
such as secured (uncommitted) environmental flow in the rivers, secured flood-
flow episodes, and acceptable river water quality
• To secure hydro-solidarity between upstream and downstream societal and eco-
system needs
On a more technical level, one of the most important roles of hydrogeologists is to edu-
cate the public and water professionals alike about the importance of groundwater and its
invisible role in the watershed and the hydrologic cycle as a whole. Often, water-resource
managers and decision makers have little background in hydrogeology and thus a limited
understanding of the processes induced by pumping groundwater from an aquifer. Both
irrational underutilization of groundwater resources (compared to surface water) and
excessive complacency about the sustainability of intensive groundwater use are thus still
commonplace (Tuinhof et al. 2002–2005).
Groundwater (and water in general) management is commonly divided into supply-
side management and demand-side management. This division is more for technical
and administrative purposes, however, because the two aspects are interdependent.
Overall water management is not just about making sound engineering and economic
decisions. In many cases, it is disproportionately influenced by policies that favor
growth of one or more groups of water users—urban, industrial, or agricultural—
without much regard for the sustainability of water use or the environmental impacts.
The worst possible outcome of failed policies is an uncontrolled spiral of increas-
ing demand, causing increasing groundwater withdrawals, which, in turn, results
in unsustainable depletion of the groundwater resource and overall environmental
degradation. Where groundwater is viewed as both an economic and a public good
and its use is overseen by most, if not all, stakeholders, it is less likely that this spiral
would continue unchecked. In contrast, when selling water is viewed only as a source
of profit for the water purveyor, possibly shared by others through tax revenues, for
example, it is more likely that an unsustainable use of groundwater resources will
continue.
Figure 9.3 illustrates the application of an integrated numerical model used as a decision
support system (DSS) for water-resources management on a regional, watershed scale. Such a
model can be continuously updated as new field information becomes available and can
be used in real time to make short- and long-term predictions based on estimated climatic
input. It can also be used to evaluate various engineering projects for development, aug-
mentation, protection, and restoration of both surface water and groundwater. Finally, it
can be used in support of new regulations aimed at balancing demand and supply and
competing interests of various water users.
An integrated hydrologic model called GSFLOW (for groundwater and surface-water
flow) has recently been developed by the United States Geological Survey (USGS) to simu-
late coupled groundwater and surface water resources (Markstrom et al. 2008). The new
model is based on the integration of the Precipitation-Runoff Modeling System (PRMS)
and MODFLOW. Additional model components were developed and existing components
were modified to facilitate integration of the models. Methods were developed to
route flow among the PRMS Hydrologic Response Units (HRUs) and between the HRUs
and the MODFLOW finite-difference cells. PRMS and MODFLOW have similar modular
programming methods, which allows for their integration while retaining independence
that permits substitution of additional PRMS modules and MODFLOW packages. Both
models have a long history of support and development.

FIGURE 9.3
Application of integrated models used as a DSS for water-resources management on a regional, watershed scale.
Decisions regarding various water uses can be made based on any combination of modeling and field data
related to water supply and demand.
PRMS is a modular, deterministic, distributed-parameter, physical-process watershed
model used to simulate and evaluate the effects of various combinations of precipitation,
climate, and land use on watershed response. Response to normal and extreme rainfall
and snowmelt can be simulated to evaluate changes in water-balance relations, streamflow
regimes, soil–water relations, and groundwater recharge. PRMS simulates the hydrologic
processes of a watershed using a series of reservoirs that represent a volume of finite or
infinite capacity. Water is collected and stored in each reservoir for simulation of flow,
evapotranspiration, and sublimation. Flow to the drainage network, which consists of
FIGURE 9.4
Schematic diagram of a watershed and its climate inputs (precipitation, air temperature, and solar radiation)
simulated by the PRMS. (From Markstrom, S. L. et al., GSFLOW-Coupled Ground-Water and Surface-Water Flow
Model Based on the Integration of the Precipitation–Runoff Modeling System (PRMS) and the Modular Ground-Water
Flow Model (MODFLOW-2005), U.S. Geological Survey Techniques and Methods 6-D1, 240 pp., 2008. Modified
from Leavesley, G. H. et al., Precipitation–Runoff Modeling System—User’s Manual, U.S. Geological Survey Water-
Resources Investigations Report 83-4238, 207 pp., 1983.)
FIGURE 9.5
Hydrologic response units discretized for Sagehen Creek watershed near Truckee, CA, used in a surface
water model. (From Markstrom, S. L. et al., GSFLOW-Coupled Ground-Water and Surface-Water Flow
Model Based on the Integration of the Precipitation–Runoff Modeling System (PRMS) and the Modular
Ground-Water Flow Model (MODFLOW-2005), U.S. Geological Survey Techniques and Methods 6-D1, 240
pp., 2008.)
FIGURE 9.6
Hydraulic conductivity values used in a groundwater model of the Sagehen Creek watershed near Truckee,
CA. (From Markstrom, S. L. et al., GSFLOW-Coupled Ground-Water and Surface-Water Flow Model Based on
the Integration of the Precipitation–Runoff Modeling System (PRMS) and the Modular Ground-Water Flow
Model (MODFLOW-2005), U.S. Geological Survey Techniques and Methods 6-D1, 240 pp., 2008.)
precipitation, temperature, and solar radiation; soil morphology and geology; and flow
direction. Each HRU is assumed to be homogeneous with respect to these hydrologic and
physical characteristics and to its hydrologic response. A water balance and an energy
balance are computed daily for each HRU. GSFLOW allows simulations using only
PRMS or MODFLOW-2005 within the integrated model for the purpose of initial cali-
bration of model parameters prior to a comprehensive calibration using the integrated
model. The model boundaries are defined using standard specified-head, specified-flow,
and head-dependent boundary conditions to account for inflows to and outflows from
the modeled region.
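The reservoir bookkeeping that PRMS performs can be sketched as a toy daily water balance for a single HRU. This is not PRMS or GSFLOW source code; the routing scheme, parameter names, and default values are invented solely to illustrate the reservoir concept described above:

```python
# Toy daily water balance for one hypothetical HRU drained by a linear
# groundwater reservoir. NOT PRMS/GSFLOW code; the simple soil split and
# the parameter values are invented for illustration.

def simulate_hru(precip, et, recharge_frac=0.3, k_gw=0.05, s0=100.0):
    """Route daily precipitation (mm) through a soil split and a linear
    groundwater reservoir; return daily runoff, baseflow, and final storage.

    recharge_frac -- fraction of net precipitation recharging groundwater
    k_gw          -- linear-reservoir outflow coefficient (1/day)
    s0            -- initial groundwater storage (mm)
    """
    storage = s0
    runoff, baseflow = [], []
    for p, e in zip(precip, et):
        net = max(p - e, 0.0)            # net water after evapotranspiration
        recharge = recharge_frac * net   # portion reaching the groundwater reservoir
        runoff.append(net - recharge)    # remainder goes to the stream as runoff
        storage += recharge
        discharge = k_gw * storage       # groundwater discharge to the stream
        storage -= discharge
        baseflow.append(discharge)
    return runoff, baseflow, storage
```

A mass balance (initial storage plus recharge minus baseflow equals final storage) holds exactly at every step, mirroring the daily water balance PRMS computes for each HRU.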
FIGURE 9.7
Inflows to and outflows from a lake as represented in GSFLOW. Grid shows finite-difference cells for lakes and
groundwater. (From Markstrom, S. L. et al., GSFLOW-Coupled Ground-Water and Surface-Water Flow Model
Based on the Integration of the Precipitation–Runoff Modeling System (PRMS) and the Modular Ground-Water
Flow Model (MODFLOW-2005), U.S. Geological Survey Techniques and Methods 6-D1, 240 pp., 2008.)
FIGURE 9.8
Top: Cross section illustrating groundwater flow at an actual site where the flow regime is influenced by canals
and natural drainage features. Bottom: Change of groundwater flow directions after pumping is initiated at
a nearby water-supply well. Note that one of the canals is eventually completely dewatered by the pumping.
• The general quality of surface-water bodies and their sediments has been impacted
by point- and nonpoint sources of contamination to a much greater extent and
for longer periods of time, requiring more expensive drinking-water treatment
involving the use of a variety of chemicals.
• Surface-water supplies are more vulnerable to accidental or intentional
contamination.
• The ability of surface water systems to balance daily and seasonal periods of peak
demand and low demand is limited. In contrast, water wells can simply be turned
off and on, and their pumping rates can be adjusted as needed.
Figure 9.9 illustrates the common evolution of groundwater resources development and
the associated stages based on the impacts of the hydraulic stress (groundwater extraction)
on the system. The condition of excessive and unsustainable extraction (3A–Unstable
Development) is also included. For this case, the total abstraction rate (and usually the
number of production wells) will eventually fall markedly as a result of near irreversible
degradation of the aquifer system itself (Tuinhof et al. 2002–2005).
FIGURE 9.9
Stages of groundwater resource development in a major aquifer and their corresponding management needs. (From
Tuinhof, A. et al., Groundwater Resource Management: An Introduction to Its Scope and Practice, Sustainable
Groundwater Management: Concepts and Tools, Briefing Note Series, Note 1, GW MATE (Groundwater
Management Advisory Team), The World Bank, Washington, DC, 6 pp., 2002–2005. With permission.)
FIGURE 9.10
USGS monitoring well located at the headquarters of the National Ground Water Association in Columbus,
OH. Water levels are recorded in real time and transmitted via satellite to the USGS processing center. Data are
available online 15 minutes after recording.
To be effective, or even possible, a groundwater supply must rely on monitoring of
static (nonpumping) and pumping water levels in the aquifer, spring and surface-water
flows, water quality, and their spatial and temporal changes (see Figure 9.10). All
monitoring data and data generated during resource evaluation, development, and exploi-
tation (operations and maintenance) should be stored and organized within an interactive
geographic information system (GIS) database for quantitative analyses and easy visual-
izations (see Chapter 3).
feasible extraction in most cases (this, of course, should not be the goal of any groundwater-
based water supply to begin with). Therefore, the most common approach is to calculate
groundwater flow rates that can be obtained from a specific groundwater extraction design,
such as a well or a well field. This is accomplished by performing one or more aquifer
pumping tests and by analyzing the test results to obtain key hydrogeologic parameters of
the porous media: the hydraulic conductivity (transmissivity after multiplying by aquifer
thickness) and the storage properties (specific yield for unconfined aquifers and storage
coefficient for confined aquifers). These parameters are required for calculating optimum
pumping rates, radius of influence, and long-term impacts of groundwater withdrawal.
The design and analyses of aquifer tests are, with varying detail, described in most hydro-
geology textbooks (e.g., see Freeze and Cherry 1979; Kresic 2007) and quite a few specialty
texts (e.g., Kruseman et al. 1991). There are also various commercial computer programs
for interpretation of aquifer tests, such as AQTESOLV (HydroSOLVE 2002). Spreadsheets
for aquifer test analysis produced by the USGS that are available in the public domain are
included in the companion DVD for the benefit of the reader.
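As one concrete illustration, the drawdown predicted by the classical Theis (1935) solution for a fully penetrating well in a confined aquifer can be sketched in a few lines, using SciPy's exponential integral as the well function W(u). The parameter values below are illustrative assumptions only, not taken from any test in this book:

```python
# Sketch of the Theis (1935) solution for drawdown around a fully penetrating
# well in a confined aquifer; parameter values are illustrative assumptions.
import math
from scipy.special import exp1  # exponential integral E1(u) = Theis well function W(u)

def theis_drawdown(Q, T, S, r, t):
    """Drawdown s (m) at radial distance r (m) and time t (days).

    Q: pumping rate (m^3/d); T: transmissivity (m^2/d), i.e., hydraulic
    conductivity multiplied by aquifer thickness; S: storage coefficient.
    """
    u = (r ** 2) * S / (4.0 * T * t)
    return Q / (4.0 * math.pi * T) * exp1(u)

# Example: 1000 m^3/d well, T = 500 m^2/d, S = 2e-4, observation well at 50 m
s_1day = theis_drawdown(Q=1000.0, T=500.0, S=2e-4, r=50.0, t=1.0)
```

In practice the inverse problem is solved: T and S are adjusted, by curve matching or optimization, until computed drawdowns reproduce the measured time-drawdown record.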
Many different analytical solutions for aquifer pumping tests are available for a vari-
ety of hydrogeologic conditions, such as the presence of leaky aquitards below or above
the pumped aquifer (see Figure 9.11), delayed gravity drainage in unconfined aquifers,
partially penetrating wells, large-diameter wells with a skin (zone of altered permeability) on the borehole walls, aquifer anisotropy, and fractured aquifers, including dual-porosity approaches
and fractures with skin. It therefore cannot be overemphasized that a thorough under-
standing of the site-specific hydrogeologic characteristics is crucial for any meaningful
FIGURE 9.11
Aquifer pumping test drawdown and recovery data at an observation well used to determine aquifer transmis-
sivity (T) and storage coefficient (S) utilized in a numeric groundwater flow model. (Courtesy of AMEC E&I,
Inc.)
interpretation of any aquifer test results. Most importantly, different analytical methods
developed with very different assumptions regarding the underlying physical process
can produce very similar results, and it is up to the interpreting hydrogeologist to decide
which method is the most appropriate and makes the most hydrogeologic sense. The worst
option is to let a computer program perform automated curve matching and accept the
results without any critical analysis. Numeric groundwater models are being increasingly
utilized not only for quantification of groundwater flow in a system but also for the analy-
sis of aquifer pumping tests because they can simulate heterogeneity, anisotropy, and the
varying geometry of the system as well as the presence of any boundaries to groundwater
flow. Various hydrogeologic assumptions can be changed and tested in a numeric model
until the field data are matched and the final conceptual model of the underlying hydro-
geologic conditions is selected.
Aquifer testing should always include two parts for both the newly developed and the
existing wells as shown in Figure 9.12. The first part of the test, which has three steps,
is designed to determine well characteristics, such as well loss and the need for possi-
ble redevelopment. The duration of each step should be the same, usually not more than
six to eight hours. Data recorded during the first step are used to initially estimate the
transmissivity and the storage coefficient of the aquifer. The size of the pump and the
long-term pumping rate for the second part of the test are selected based on drawdown
development during the three-step test. The second part of the test should be performed
after a complete recovery of the hydraulic head in the well and with a maximum feasible
pumping rate. Duration of this part of the test, which is designed to determine the overall
aquifer transmissivity for an extensive radius of influence, depends on specific project
requirements and may vary from 24 hours to several weeks in case of aquifer develop-
ment for major water-supply projects. Long-term pumping with a maximum rate is nec-
essary for uncovering aquifer characteristics that may not be apparent from a short test.
This includes delayed gravity drainage, distant boundaries, leakage through (or from) the
adjacent aquitards, the presence of dual-porosity media, and changes in storage. Both the
drawdown and the recovery data should be used to find the aquifer parameters. At least
one monitoring well near the pumping well should be available to analyze the test results.
However, it is preferable to have several monitoring wells at increasing distances from the
FIGURE 9.12
Pumping-rate hydrographs and drawdown curves for a pumping test designed to determine well and aquifer
characteristics. Left: Three-step test for determining well efficiency (well loss) and optimum pumping rate for
the long-term test. Right: Long-term test for determining aquifer transmissivity (hydraulic conductivity) and
storage parameters.
532 Hydrogeological Conceptual Site Models
pumping wells and in different directions, thus enabling an evaluation of possible aquifer
anisotropy and heterogeneity, including the location of any hydraulic boundaries that may
be present (e.g., see Kresic 2007).
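The three-step part of the test described above is commonly interpreted with the Jacob step-drawdown equation, s_w = BQ + CQ², in which BQ is the laminar (aquifer) loss and CQ² is the turbulent well loss. A minimal sketch, with invented step data:

```python
# Hedged sketch of the Jacob step-drawdown interpretation of a three-step test:
# s = B*Q + C*Q**2, where B*Q is aquifer (laminar) loss and C*Q**2 is well
# (turbulent) loss. The step data below are invented for illustration.
import numpy as np

def fit_jacob(Q, s):
    """Least-squares fit of s = B*Q + C*Q**2 to step-test data."""
    Q, s = np.asarray(Q, float), np.asarray(s, float)
    A = np.column_stack([Q, Q ** 2])
    (B, C), *_ = np.linalg.lstsq(A, s, rcond=None)
    return B, C

def well_efficiency(B, C, Q):
    """Fraction of total drawdown attributable to the laminar (aquifer) loss."""
    return B * Q / (B * Q + C * Q ** 2)

Q_steps = [500.0, 1000.0, 1500.0]   # pumping rate in each step, m^3/d
s_steps = [1.1, 2.6, 4.5]           # stabilized drawdown at end of each step, m
B, C = fit_jacob(Q_steps, s_steps)
eff = well_efficiency(B, C, 1500.0)  # efficiency at the highest tested rate
```

A low efficiency at the design rate is one signal that well redevelopment may be warranted before the long-term test.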
Real-time management of groundwater resources is becoming increasingly important
because of greater stresses on existing systems and the apparent greater frequency of
extreme weather events such as floods and droughts. Water managers must be able to rap-
idly assimilate and visualize data obtained from the field in order to make critical decisions
regarding allocations under difficult circumstances. To assist in this process, Groundswell
Technologies, Inc., has created a Groundwater Basin Storage Tracking (GBST) module for
automated water resources management within its Waiora software platform (described in
Chapter 8). This Web-based application uses sensors and telemetry to continuously moni-
tor and visualize changes in aquifer volume (storage) between any two selected time steps.
Cumulative storage volumes are automatically calculated and displayed and become readily available for detailed analyses and export to reports. Applications of the GBST module
include groundwater-supply monitoring, water banking, aquifer storage and recovery,
and storm water infiltration accounting. Basin-scale tracking of groundwater resources
is critical to the overall management framework as it covers a wider area than conven-
tional aquifer tests used to site supply areas and, therefore, allows assessment of regional
impacts that have greater uncertainty. Figure 9.13 presents example visualizations of the
GBST module created in Waiora.
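The storage-change arithmetic behind such a tool can be approximated, under simple assumptions, by summing cell-by-cell head changes over an interpolated grid. The following is a minimal sketch (not Groundswell's implementation), with invented grid values:

```python
# Minimal sketch (not Groundswell's implementation) of a basin storage-change
# calculation between two time steps: for each grid cell,
# storage change = cell area * head change * specific yield.
# All values below are invented for illustration.

def storage_change(heads_t1, heads_t2, cell_area, specific_yield):
    """Total storage change (length units cubed) over a grid of equal cells."""
    total = 0.0
    for h1, h2 in zip(heads_t1, heads_t2):
        total += cell_area * (h2 - h1) * specific_yield
    return total

# Two interpolated water-level snapshots (ft) over four 10-acre cells
area_ft2 = 10 * 43560.0            # 10 acres in square feet
h_jan = [620.0, 618.5, 617.0, 615.0]
h_jun = [618.0, 617.0, 616.0, 614.5]
dV_ft3 = storage_change(h_jan, h_jun, area_ft2, specific_yield=0.15)
dV_acre_ft = dV_ft3 / 43560.0      # convert to acre-feet, as in Figure 9.13
```

A negative result indicates a net loss of stored groundwater between the two time steps.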
When evaluating the quantity of spring water available for water supply, it is important
to include a measure of spring discharge variability. For example, a spring may have a
very high average discharge, but it may be dry or just trickling most of the year. The pre-
vailing practice in most countries is to evaluate springs based on the minimum discharge
recorded over a long period, typically longer than several hydrologic years (a hydrologic
year is defined as spanning all wet and dry seasons within a full annual cycle). The sim-
plest measure of spring variability is the ratio of the maximum to minimum discharge:
Iv = Qmax / Qmin
Springs with the index of variability (Iv) greater than 10 are considered highly variable, and
those with Iv < 2 are sometimes called constant or steady springs.
Meinzer (1923) proposed the following measure of variability expressed as a percentage:
V = (Qmax − Qmin) / Qav × 100 (%)
where Qmax, Qmin, and Qav are maximum, minimum, and average discharge, respectively.
Based on this equation, a constant spring would have a variability of less than 25%, and a
variable spring would have a variability of greater than 100%.
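Both variability measures can be sketched in a few lines of code; the discharge values below are invented for illustration and do not come from any real spring record:

```python
# Sketch of the two spring-variability measures defined above.
# The discharge series is invented for illustration.

def index_of_variability(discharges):
    """Iv = Qmax / Qmin; Iv > 10 -> highly variable, Iv < 2 -> steady."""
    q_max, q_min = max(discharges), min(discharges)
    if q_min == 0:
        return float("inf")  # spring goes dry; the ratio is unbounded
    return q_max / q_min

def meinzer_variability(discharges):
    """Meinzer (1923): V = (Qmax - Qmin) / Qav * 100 (%)."""
    q_max, q_min = max(discharges), min(discharges)
    q_av = sum(discharges) / len(discharges)
    return (q_max - q_min) / q_av * 100.0

q = [120.0, 95.0, 60.0, 40.0]   # discharge series, e.g., in cfs
iv = index_of_variability(q)     # 120 / 40 = 3.0
v = meinzer_variability(q)       # (120 - 40) / 78.75 * 100, about 101.6%
```

By these criteria, the invented spring would be classed as variable under Meinzer's measure (V > 100%) but not highly variable by the Iv criterion.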
A more exact and preferable quantitative analysis is illustrated with Figures 9.14 through
9.16, provided there is a sufficiently long time series of spring discharge. Although in this particular case it would not be possible to accurately estimate the real natural discharge of the spring because the Edwards Aquifer is heavily pumped for water supply, the figures provide very useful related information. Hydrographs of the average monthly discharge of
Comal Springs in Texas for May and August for the 73-year-long period of record show the
FIGURE 9.13
(Top) Interpolated water-level distributions for a selected time step. Well symbols represent where water-level
data have been collected and contours represent interpolated distributions. Note that tick marks along the
bottom of the image represent additional time steps that can be viewed using playback and mouse controls.
(Courtesy of Mark Kram, PhD, CGWP of Groundswell Technologies, Inc.). (Bottom) Interpolated contours rep-
resenting the change in groundwater basin storage between two selected time steps in acre-feet. An estimate
of the volumetric storage change is displayed along the bottom of the frame. The contour map displays relative
change, so users can understand hydraulic elasticity and specific areas of intense and modest change. (Courtesy
of Mark Kram, PhD, CGWP of Groundswell Technologies, Inc.)
impact of several droughts, which is compounded by increased pumpage from the aquifer. Note that May typically has the highest recorded daily flows and August the lowest.
During the drought of the 1950s, the springs were dry from June to November of 1956. This
example illustrates how using some average values, even when having an unusually long
period of record, could lead to erroneous conclusions about available secure discharge
rates for any given time. For example, Figure 9.15 shows that the theoretical probability of the average spring discharge in August being less than 50 cfs is about 4%, and the probability that the spring would be dry (discharge equal to 0 cfs) is about 2%–3%. However,
FIGURE 9.14
Average monthly discharge in cubic feet per second at Comal Springs, TX, for May (bold line) and August
(dashed line). Horizontal lines indicate average May and August discharge for the entire 73-year-long period of
record. (From Kresic, N., Groundwater Resources: Sustainability, Management and Restoration. McGraw Hill, New
York, 852 pp., 2009. With permission.)
we know from the record that the spring went dry in August 1956. It should be noted again
that this probability analysis also reflects historic artificial groundwater withdrawals from
the system and, therefore, should not be used alone for any planning purposes. In other
words, such withdrawals may change in the future, and their impact would have to be
accounted for in some quantitative manner.
Similar probability analysis can also be performed for daily spring flows, and various
percentiles of the probability distribution for individual months can be combined to pro-
duce graphs, such as the one shown in Figure 9.16. This graph can be displayed with a
similar plot of historic and current (recent) monthly precipitation, which is very useful
FIGURE 9.15
Extreme value probability distributions of average monthly flows in May and August for Comal Springs. (From
Kresic, N., Groundwater Resources: Sustainability, Management and Restoration. McGraw Hill, New York, 852 pp.,
2009. With permission.)
FIGURE 9.16
Daily flow duration curves for each month. (Modified from U.S. Army Corps of Engineers (USACE), Hydrologic
Frequency Analysis, Engineering manual 1110-2-1415, Washington, DC, 1993. Available at http://140.194.76.129/
publications/eng-manuals/.)
when anticipating likely spring flows in the near future. For example, if the spring discharge is significantly dependent on precipitation, the currently measured spring flow is less than the 10th percentile (i.e., the historically observed flow is higher 90% of the time), and the same is true for the recent precipitation, then it may be necessary to impose some type of restriction on spring-water use until the hydrologic and meteorological conditions improve (Kresic and Bonacci 2010).
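The percentile computation behind graphs such as Figure 9.16, together with a simple low-flow trigger check, can be sketched as follows (the flow values are invented, not the Comal Springs record):

```python
# Sketch of the monthly percentile analysis behind a flow-duration graph,
# with a simple low-flow trigger check. Flow values are invented.
import numpy as np

def monthly_percentiles(daily_flows_by_month, percentiles=(1, 10, 50, 90, 99)):
    """daily_flows_by_month: dict mapping month name -> list of daily flows (cfs)."""
    return {m: {p: float(np.percentile(q, p)) for p in percentiles}
            for m, q in daily_flows_by_month.items()}

def below_low_flow_trigger(current_flow, historic_flows, pct=10):
    """True if the current flow is below the historic pct-th percentile."""
    return bool(current_flow < np.percentile(historic_flows, pct))

flows = {"Aug": [float(5 * i) for i in range(1, 101)]}  # 5, 10, ..., 500 cfs
stats = monthly_percentiles(flows)
trigger = below_low_flow_trigger(25.0, flows["Aug"])    # 25 cfs measured today
```

In an operational setting, the same trigger test would be applied to recent precipitation before any use restriction is recommended.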
Regression between different hydrologic variables that are correlated in some manner
and represented with a sufficient number of data is a simple and efficient quantitative
method for estimating groundwater availability in some cases. In the hydrology of springs,
this refers to finding a simple or multiple regression equation describing spring-flow rate
using an observed time series of variables known to influence it. For example, Figure 9.17
shows plots of daily flows at the Comal and San Marcos springs (both springs are draining
Edwards Aquifer in Texas), daily pumpage from the aquifer in the San Antonio area, the
aquifer levels at the Bexar County Index Well J17, and the precipitation in San Antonio. As
can be seen without any quantitative analysis, Comal Springs, which are closer to well J17 and San Antonio, show a better correlation with the pumpage and the aquifer levels. In
fact, the simple regression model of the Comal Springs discharge based on the J17 water
levels (Figure 9.18) appears almost perfect, judging from the model correlation coefficient
(r = 0.978). This is also the main reason why the aquifer levels measured at index well
J17 are used as the key quantitative threshold parameter for aquifer management; when
this level drops below certain values, successively more stringent restrictions on aquifer
pumpage are imposed in order to protect the flow of the springs.
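A simple regression of this kind can be sketched with NumPy; the paired level-flow observations below are invented and only mimic the general pattern described above, not the actual J17 record:

```python
# Sketch of a simple linear regression of spring flow on an index-well water
# level. The paired observations are invented for illustration.
import numpy as np

def linear_regression(x, y):
    """Return slope, intercept, and correlation coefficient r for y = a*x + b."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    a, b = np.polyfit(x, y, 1)
    r = float(np.corrcoef(x, y)[0, 1])
    return a, b, r

# Hypothetical paired observations: J17 level (ft asl) vs. spring flow (cfs)
level = [645.0, 655.0, 662.0, 670.0, 681.0, 695.0]
flow = [205.0, 260.0, 300.0, 345.0, 410.0, 490.0]
a, b, r = linear_regression(level, flow)
predicted_at_660 = a * 660.0 + b   # expected flow when J17 stands at 660 ft asl
```

Once fitted to a long observed record, such a model lets a manager read likely spring flow directly from the current index-well level, which is exactly how the J17 pumping-restriction thresholds are used.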
FIGURE 9.17
Daily flows at Comal and San Marcos Springs (in cubic feet per second) versus pumpage from the Edwards
Aquifer in the San Antonio area (in million gallons per day), daily aquifer level at the Bexar County Index
Well J17, and daily precipitation in San Antonio. (Reprinted from Groundwater Hydrology of Springs: Engineering,
Theory, Management, and Sustainability, Kresic, N., Modeling, edited by N. Kresic and Z. Stevanović, pp. 166–230,
Copyright 2010, with permission from Elsevier.)
FIGURE 9.18
Simple regression model of the Comal Springs flow on aquifer level at the Bexar County Index Well J17 with
the 95% prediction limits. (Reprinted from Groundwater Hydrology of Springs: Engineering, Theory, Management,
and Sustainability, Kresic, N., Modeling, edited by N. Kresic and Z. Stevanović, pp. 166–230, Copyright 2010, with
permission from Elsevier.)
groundwater restoration is clearly established, it may take years before any measures to
mitigate the situation are taken. One common reason is the high cost of groundwater
remediation, which can prohibit small and large users alike from attempting to solve
the problem on their own. This is the main reason why in some societies, such as the
United States, where the legal rights of both water users and alleged polluters are highly
protected, exorbitant sums of money are spent each year on litigation over groundwater
contamination.
In some complex hydrogeologic environments, aquifer restoration to pristine or near-
pristine conditions (which legally are defined as all contaminants present below their
maximum allowed concentrations) may not be technically feasible (see Chapter 8). This
fact is often not acceptable to some stakeholders, and there are many examples of ground-
water users with false hopes, waiting for someone else to pay for solving their groundwa-
ter contamination problem. In such cases, the "someone else" should be, whenever possible, considered as "something else," including at least two options: (1) groundwater treatment to
drinking-water standards after extraction from the aquifer, and (2) innovative approaches
to overall water management, including water reuse and public outreach (education)
regarding groundwater-resource (aquifer) protection.
Oftentimes a source of aquifer contamination may be unknown to or underestimated by
a public or private entity developing a groundwater resource. One such example is illus-
trated in Figure 9.19. Several years of initial monitoring at a proposed extraction site under
short-term pumping or static conditions showed acceptable water quality, including con-
centrations of constituent #1 at or below its drinking water equivalency level (or guidance
level) and concentrations of constituent #2 below its secondary drinking-water standard.
However, upon full-scale initiation of pumping from the well for water supply, concen-
trations of both constituents increased rapidly as shown in Figure 9.19. In a worst-case
scenario, the well may need to be abandoned if concentrations of constituent #2 eclipse its
secondary standard.
This example highlights the importance of expansive water-quality monitoring under
long-term pumping conditions around a proposed extraction site before full-scale devel-
opment of the resource in addition to rigorous identification of potential contaminants
(including nontraditional contaminants with secondary standards) and quantification of
their mass in the aquifer system. It is also important to be aware of emerging contaminants
that may jeopardize the future usage of a groundwater supply. At any time, previously
unregulated contaminants may be added to state or federal drinking water criteria or
existing criteria may be modified to lower concentration thresholds for already-regulated
contaminants (two example cases for the reader to explore independently are related to
FIGURE 9.19
Concentrations of two chemical constituents in a water-supply well over time. Continuous pumping of the well
for water supply began in 1997 as indicated by the dashed black vertical line.
arsenic and 1,4-dioxane). Either of these actions may compromise a well’s viability and
increase monitoring and operational costs. For this reason, many public and private enti-
ties are focused on identifying pristine resources for groundwater development. However,
as undeveloped, pristine sources are becoming increasingly rare and inaccessible, appro-
priate risk characterization and management of more vulnerable resources are becoming
key elements of sustainable groundwater development.
Nonpoint-source groundwater contamination, which requires both regulatory and local
land-use changes in order to restore aquifers to their natural condition, is caused worldwide
by the use of pesticides and fertilizers. For example, the United Kingdom Environment
Agency reported that pesticides were found in more than a quarter of groundwater moni-
toring sites in England and Wales in 2004—in some cases exceeding applicable drinking-
water limits. Atrazine is a weed killer used mainly to protect maize (corn) crops, and it
was used in the past to maintain roads and railways. It has been a major problem, but since
nonagricultural uses were banned in 1993, concentrations in groundwater have gradually
declined. A complete ban on all use of atrazine (and simazine, another pesticide) in the
United Kingdom was planned to be phased in between 2005 and 2007 but has been delayed.
As noted by the United Kingdom Environment Agency, even when banned, pesticides can
remain a problem for many years after they were last used (Environment Agency 2007).
Some other European countries have also banned the use of atrazine: France, Sweden,
Norway, Denmark, Finland, Germany, Austria, Slovenia, and Italy. In contrast, the United
States Environmental Protection Agency (U.S. EPA) has concluded that the risks from atra-
zine for approximately 10,000 community drinking-water systems using surface water are
low and did not ban this pesticide, which continues to be the most widely used pesticide
in the United States. Incidentally, as stated by the U.S. EPA, 40,000 community drinking-
water systems using groundwater were not included in the related study, and private wells
used for water supply were not mentioned in the U.S. EPA’s decision to allow continuous
use of atrazine (U.S. EPA 2003).
The United Kingdom Environment Agency also reported that in 2004 almost 15% of mon-
itoring sites in England (none in Wales) had an average nitrate concentration that exceeded
50 mg/L, the upper limit for nitrate in drinking water (for comparison, groundwater
naturally contains only a few milligrams per liter of nitrate). Water with high nitrate levels
has to be treated or diluted with cleaner water to reduce concentrations. More than two-
thirds of the nitrate in groundwater comes from past and present agriculture, mostly from
chemical fertilizers and organic materials. It is estimated that more than 10 million tons
of organic material per year is spread on the land in the United Kingdom. More than 90%
of this is animal manure; the rest is treated sewage sludge, green waste compost, paper
sludge, and organic industrial wastes. Other major sources of nitrate are leaking sewers,
septic tanks, water mains, and atmospheric deposition. Atmospheric deposition of nitro-
gen makes a significant contribution of nitrate to groundwater. A study in the Midlands
of England concluded that approximately 15% of the nitrogen leached from soils came
from the atmosphere. The United Kingdom Environment Agency estimates that 60% of
groundwater bodies in England and 11% in Wales are at risk of failing the European Water
Framework Directive objectives because of high nitrate concentrations (Environment
Agency 2007). In general, nitrate is believed to be the most widespread groundwater con-
taminant worldwide (Kresic 2009).
Figure 9.20 shows results of a modeling effort aimed at better understanding the mecha-
nisms of nitrate leaching to groundwater and predicting its future impacts. A GIS and
Microsoft Excel tool, which models the slow movement of historically leached nitrate
down through the unsaturated zone of the Chalk aquifer, has been developed for Wessex
Water Services, Ltd., in the United Kingdom. Seasonal and short-term variations in nitrate
were simulated by linking to groundwater level and bypass recharge variations. Model
predictions closely matched short- and long-term trends and gave confidence in the use of
the model for predicting nitrate concentrations in the years to come assuming a number of
nitrate leaching scenarios. Wessex Water used the findings of the modeling tool to assess
the likely success of active catchment management (i.e., storm water runoff capture and
treatment) as an alternative to blending or treatment in the short and long term.
FIGURE 9.20
Model-simulated (red line) trends in historic groundwater nitrate concentrations used to predict future impacts
assuming a number of nitrate leaching scenarios to the underlying chalk aquifer in England. (Courtesy of
AMEC E&I, Inc.)
concept of overall resource protection in detail (The European Parliament and the Council
of the European Union 2006).
The following discussion on the importance of groundwater resources and their vulner-
ability is based on discussion provided by the U.S. EPA’s Ground Water Task Force (GWTF
2007).
Groundwater use typically refers to the current use(s) and functions of groundwater as well as reasonably expected future use(s). Groundwater use can generally be divided into drinking-water, ecological, agricultural, industrial/commercial, and recreational uses or functions. Drinking-water use includes both public supply and individual (household
or domestic) water systems. Ecological use commonly refers to groundwater functions,
such as providing baseflow to surface water to support habitat and also to the fact that
groundwater (most notably in karst settings) may also serve as an ecologic habitat in
and of itself. Agricultural use generally refers to crop irrigation and livestock watering.
Industrial/commercial use refers to water use in any industrial process, such as for cool-
ing water in manufacturing, or commercial uses, such as car-wash facilities. Recreational
use generally pertains to impacts on surface water caused by groundwater; however,
groundwater in karst settings can be used for recreational purposes, such as cave div-
ing (see Figure 2.92). All of these uses and functions are considered beneficial uses of
groundwater. Furthermore, within a range of reasonably expected uses and functions,
the maximum (or highest) beneficial groundwater use refers to the use or function that
warrants the most stringent groundwater cleanup levels.
Groundwater value is typically considered in three ways: for its current uses, for its future
or reasonably expected uses, and for its intrinsic value. Current use value depends, to a
large part, on need. Groundwater is more valuable where it is the only source of water,
where it is less costly than treating and distributing surface water, or where it supports
ecological habitat. Current use value can also consider the costs associated with impacts
from contaminated groundwater on surrounding media (e.g., underlying drinking-water
aquifers, overlying air—particularly indoor air—and adjacent surface water). Future or
reasonably expected value refers to the value people place on groundwater they expect
to use in the future; this value will depend on the particular expected use or uses (e.g.,
drinking water, industrial). Society also places an intrinsic value on groundwater, which
is distinct from economic value. Intrinsic value refers to the value people place on the fact
that clean groundwater exists and will be available for future generations, irrespective of
current or expected uses. While the value of groundwater is often difficult to quantify, it
will certainly increase as the expense of treating surface water increases and as existing
surface water and groundwater supplies reach capacity with continuing development.
Groundwater vulnerability refers to the relative ease with which a contaminant intro-
duced into the environment can negatively impact groundwater quality and/or quantity.
Vulnerability depends to a large extent upon local conditions including, for example,
hydrogeology, contaminant properties, size or volume of a release, and location of the
source of contamination. Shallow groundwater is generally more vulnerable than deep
groundwater. Private (domestic) water supplies can be particularly vulnerable because
they are generally shallower than public water supplies, regulatory agencies generally
require little or no monitoring or testing for these wells (see arsenic discussion in Chapter
1), and homeowners may be unaware of contamination unless there is a taste or odor
problem. Furthermore, vulnerability can change over time. For example, anthropogenic
activities, such as mining or construction, can remove or alter protective overburden, thus
making underlying aquifers more vulnerable.
The protection of groundwater resources is achieved by prevention of possible contamination.
groundwater is a relative, unmeasurable, dimensionless property, and they make the dis-
tinction between intrinsic (natural) and specific vulnerability. The intrinsic vulnerability
depends only upon the natural properties of an area, such as characteristics of the porous
media, and recharge. It is independent of any particular contaminant. Specific vulnerability
takes into account the fate-and-transport properties of a contaminant. Simplified, this means
that, for example, an aquifer may be vulnerable to an improper disposal or spill of chlori-
nated solvents at the land surface even though the groundwater flow directions and the
presence of a low-permeable overlying aquitard may be protective against nonpoint-source
nitrate contamination. Another example is a thick, unsaturated zone (e.g., > 300 ft) in arid
climates that may be highly protective of the underlying unconfined aquifer simply because
of the insignificant present-day aquifer recharge that cannot facilitate migration of a con-
taminant through such a thick vadose zone all the way down to the water table. However,
if there were some land-use practices, such as waste disposal in artificial ponds, which can
The most critical decision regarding an appropriate residence time is made by the stake-
holders in each individual case although 5-, 10-, and 20-year wellhead capture zones have
been most widely used to delineate certain subzones of various land-use restrictions
within the main wellhead capture zone (for example, see Kraemer et al. 2005).
Wellhead protection zone delineation methods can be divided into the following three
categories: (1) nonhydrogeologic, (2) quasihydrogeologic, and (3) hydrogeologic. The nonhydrogeologic method is the selection of an arbitrary fixed radius or fixed-shape area
around the well(s) in which authorized personnel implement some type of strict protec-
tion such as limited access. This method does not consider the residence-time criterion.
Quasihydrogeologic methods use very simple assumptions, which, in many cases, do
not have much in common with the site-specific hydrogeologic conditions. Because they
include the application of certain equations, it may appear to nonhydrogeologists that
such methods must have some credibility. When involved in the application of quasihy-
FIGURE 9.21
Map showing tracer test results, faults (in black), and modeled flow paths. Injection sites are shown with yellow
circles, and monitored wells are shown with smaller dotted circles. Tracer test velocities range from 80 ft/day
to more than 12,000 ft/day. The groundwater travel velocity between arrows placed on the modeled flow paths,
as calculated from the USGS’s numeric model of Edwards Aquifer, TX, is 1 mi per year. Dye-tracing results are
presented in the work of Schindel et al. (2009). (Courtesy of Geary Schindel, Edwards Aquifer Authority.)
together with the information obtained from a number of dye-tracing tests performed in
the same area. The numeric model was developed using MODFLOW and an assumption
that the karstic Edwards Aquifer can be modeled as an equivalent porous medium (see
also Chapter 5). However, as can be seen from Figure 9.21, the model completely fails to
match either the groundwater flow directions or the velocities observed in the field.
• Capital cost
• Proximity to future users
• Existing groundwater users and groundwater permits
• Hydrogeologic characteristics and depth to different water-bearing zones
(aquifers)
• Required flow rate of the water-supply system and expected yield of individual wells
• Well drawdown and radius of well (well field) influence
• Interference between wells in the well field
• Water treatment requirements
• Energy cost for pumping and water treatment and general O&M costs
• Aquifer vulnerability and risks associated with the existing or potential sources
of contamination
• Interactions with other parts of the groundwater system and with surface water
• Options for artificial aquifer recharge, including storage and recovery
• Societal (political) requirements
• Existence or possibility of an open water market
The above factors are not all-inclusive and are not listed in order of importance; sometimes just one or two factors are all that is needed to proceed with the final design. However, as the development and use of groundwater resources are becoming increasingly regulated in the United States and in many other countries, it is likely that most of these factors will have to be
addressed as part of a well-permitting process. Even in cases where permitting requirements
are absent, it is prudent to consider most, if not all, of the listed factors because they ultimately
define the long-term sustainability of any new well or well field (Kresic 2009).
Wells have been used for centuries for domestic and public water supply throughout the
world. Their depth, diameter, and construction methods vary widely, and there is no such
thing as a “one size fits all” approach to construction of wells.
Well design, installation, and well-construction materials should conform to applicable
standards. In the United States, the most widely used water-well standard is the American
National Standards Institute (ANSI)/American Water Works Association (AWWA) A100 stan-
dard, but the authority to regulate products for use in or contact with drinking water rests
with individual states, which may have their own standard requirements. Local agencies may
choose to impose requirements more stringent than those required by the state (AWWA 1998).
Answers to just about any question regarding well design can be found in the classic
1000-page book Groundwater and Wells by Driscoll (1986). Other exhaustive reference books
on well design are Handbook of Ground Water Development by Roscoe Moss Company (1990)
and Water Well Technology by Campbell and Lehr (1973). Following is a brief discussion
of key concepts for consideration in well design. Various public-domain publications by
United States government agencies provide useful information on the design and installa-
tion of water-supply and monitoring wells (e.g., U.S. EPA 1975, 1991; USBR 1977).
The design elements of vertical water wells include the following: drilling method, bor-
ing (drilling) and casing diameter, depth, well screen, gravel pack, well development, well
testing, and selection and installation of the permanent pump. Whenever possible, a well
design should be based on information obtained by a pilot boring drilled prior to the main
well bore. Geophysical logging and coring (sample collection) of the pilot boring provide
the following information: depth to and thickness of the water-bearing intervals in the
aquifer, grain size, permeability of the water-bearing intervals, and physical and chemical
characteristics of the porous media and groundwater. Unknown geology and hydrogeol-
ogy of the formation(s) to be drilled may result in the selection of an improper drilling
technology, sometimes leading to a complete abandonment of the drilling location because
of various unforeseen difficulties such as flowing sands, collapse of boring walls, or loss of
drilling equipment in karst cavities.
The expected well yield, the well depth, and the geologic and hydrogeologic characteristics
of the porous media (rock) all play an important role in selecting the drilling diameter and the
drilling method. Deep wells or thick stratification of permeable and low-permeable porous
media may require drilling with several diameters and the installation of several casings of
progressively smaller diameter called telescoping casing. This is done to provide stable and
plumb boreholes in deep wells and to bridge difficult or undesirable intervals (e.g., flowing
sands, highly fractured and unstable walls prone to caving, thick sequences of swelling clay).
Ultimately, the expected well capacity is the parameter that will define the last drilling
diameter sufficient to accommodate the screen diameter, including thickness of any gravel
pack for that capacity. The relationship between the two diameters is not linear—doubling
the screen diameter will not result in doubling the well yield. For example, for the same
drawdown and radius of influence, an increase in diameter from a 6-in. well to a 12-in. well
will yield only 10% more water.
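The weak dependence of yield on well diameter follows from the Thiem (equilibrium) well equation, Q = 2πKbs/ln(R/r), in which the well radius enters only through a logarithm. A minimal sketch of the comparison; the radius of influence and well radii below are illustrative assumptions, not values from the text:

```python
import math

def thiem_yield_ratio(r1_ft, r2_ft, R_ft):
    """Ratio of yields of two wells of radius r2 and r1 pumped at the same
    drawdown with the same radius of influence R. From the Thiem equation,
    Q = 2*pi*K*b*s / ln(R/r), so Q2/Q1 = ln(R/r1) / ln(R/r2)."""
    return math.log(R_ft / r1_ft) / math.log(R_ft / r2_ft)

# Doubling the diameter from 6 in. (r = 0.25 ft) to 12 in. (r = 0.5 ft),
# with an assumed radius of influence of 1000 ft:
ratio = thiem_yield_ratio(0.25, 0.5, 1000.0)
print(f"Yield increase: {100 * (ratio - 1):.0f}%")  # roughly 10%, not 100%
```

The logarithm is what makes the result so insensitive: even a very different assumed radius of influence changes the ratio only slightly.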
The well screen is arguably the most important part of a well because this is where
groundwater enters the well and where the efficiency of an otherwise good design may
be compromised, including loss of the entire well. Both the casing and the screen stabilize the formation materials; the screen, in addition to admitting the inflow of water, also allows proper well development. It is generally accepted that the screens of public water-supply wells
should be made of high-quality stainless steel (see Figure 9.22). During the process of well
FIGURE 9.22
Installation of continuous slot (Johnson) well screen and gravel pack into a large-diameter water-supply well in
upstate New York. The well is completed adjacent to a major river in alluvial deposits containing large gravel
and boulders, requiring use of the cable-tool drilling method.
development, the finer materials from the productive water-bearing zones and any fines
introduced by the drilling fluid are removed, so that only the coarser materials are in
contact with the screen. In formations where the porous media grains surrounding the
screen are more uniform in size (homogeneous) and are graded in such a way that the
fine grains will not clog the screen, the developed aquifer materials will form a so-called
natural pack, consisting of grains coarser than those farther away from the well bore. Such wells
are called naturally developed wells. In contrast, when the targeted aquifer (formation)
intervals are heterogeneous and have predominantly finer grains, it may be necessary to
place an artificial gravel pack around the screen intervals (Figure 9.22). This gravel pack
(also called filter material) will allow proper well development and prevent the continuous
entrance of fines and screen clogging by the fines during well operation. The placement of
a gravel pack also makes the zone around the well screen more permeable and increases
the effective hydraulic diameter of the well.
The size of well screen openings depends on the grain size distribution of the natural
porous media. When natural well development is not possible, the size of screen openings
is also dependent on the required gravel pack characteristics (gravel pack grain size and
uniformity). The percentage of openings, the screen diameter, and the screen length should
all be selected simultaneously to satisfy the following criteria: maximize well yield, maxi-
mize well efficiency by minimizing hydraulic loss at the screen, and provide for structural
strength of the screen, i.e., prevent its collapse resulting from formation pressure.
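As a sketch of how these grain-size criteria translate into numbers, the following uses two common rules of thumb (a pack–aquifer multiplier of about 4–6 applied to the formation grain size, and a slot opening that retains roughly 90% of the pack, i.e., equals the pack's 10%-passing size). The multipliers and example grain sizes are illustrative assumptions, not design values; an actual design should follow Driscoll (1986) or the applicable standard:

```python
def target_pack_size(formation_d30_mm, pack_ratio=5.0):
    """Target gravel-pack grain size from an assumed pack-aquifer
    multiplier (commonly 4-6; illustrative only)."""
    return pack_ratio * formation_d30_mm

def slot_size_thousandths(pack_d10_mm):
    """Slot opening sized to retain ~90% of the pack, i.e., set equal to
    the pack's 10%-passing grain size, expressed in thousandths of an
    inch (the usual slot designation)."""
    return round(pack_d10_mm / 25.4 * 1000)

pack_d30 = target_pack_size(0.25)   # fine sand with d30 = 0.25 mm -> 1.25 mm pack
slot = slot_size_thousandths(1.0)   # pack d10 = 1.0 mm -> a "No. 39" slot
```

The point of the sketch is only the chain of reasoning: formation gradation fixes the pack, and the pack gradation fixes the slot, never the other way around.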
Proper well development will improve almost any well regardless of type and size,
whereas without development, an otherwise excellent well may never be satisfactory. As
discussed by U.S. EPA (1975) and Driscoll (1986), in any well drilling technology, the per-
meability around the borehole is reduced. Compaction, clay smearing, and driving fines
into the wall of the borehole occur in the cable tool drilling method. Drilling fluid invasion
into the aquifer and formation of a mud cake on the borehole walls are caused by the direct
rotary method. Silty and dirty water often clog the aquifer in the reverse rotary drill-
ing method. In consolidated formations, compaction may occur in some poorly cemented
rocks, where cuttings, fines, and mud are forced into fractures, bedding planes, and other
openings, and a mud cake forms on the wall of the borehole.
There are various methods of well development, and their selection depends primarily
on the applied drilling technology and the formation characteristics. However, equipment availability and the driller’s preference often play unjustifiably important roles. It is often impossible to anticipate how a well will respond to certain types of
development and how long it will take to achieve adequate development. General methods
of well development are pumping, surging, fracturing, and washing, each of which has
several variations (U.S. EPA 1975). It is always recommended that at least two methods
be applied for best results. Because a lump-sum basis for well development may result in
unsatisfactory work, it is better to provide for development on a unit price per hour basis
and continue until the following conditions have been met (AWWA 1998):
1. Sand content should average no more than 5 mg/L for a complete pumping cycle
of 2-hour duration when pumping at the design discharge capacity.
2. No less than 10 measurements should be taken at equal intervals to permit plot-
ting of sand content as a function of time and production rate and to determine the
average sand content for each cycle.
3. There should be no significant increase in specific capacity during at least 24 hours
of development.
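The three completion criteria above can be expressed as a simple programmatic check. The 2% stabilization threshold used to interpret "no significant increase" in criterion 3 is an assumption for illustration, not a value from AWWA (1998):

```python
def development_complete(sand_mg_L, specific_capacity_24h, stab_threshold=0.02):
    """Check the AWWA (1998) well-development criteria listed above.
    sand_mg_L: sand-content measurements taken at equal intervals over a
               2-hour pumping cycle at design discharge (criterion 2
               requires at least 10 of them).
    specific_capacity_24h: specific-capacity readings (e.g., gpm/ft)
               spanning at least 24 hours of development."""
    enough_samples = len(sand_mg_L) >= 10                       # criterion 2
    sand_ok = sum(sand_mg_L) / len(sand_mg_L) <= 5.0            # criterion 1
    first, last = specific_capacity_24h[0], specific_capacity_24h[-1]
    stabilized = (last - first) / first < stab_threshold        # criterion 3 (assumed 2%)
    return enough_samples and sand_ok and stabilized

samples = [8, 6, 5, 5, 4, 4, 3, 3, 2, 2]    # mg/L over one 2-hour cycle
sc = [21.0, 21.2, 21.3]                      # gpm/ft over 24 h of development
print(development_complete(samples, sc))     # True: avg 4.2 mg/L, ~1.4% change
```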
Figure 9.23 shows discharge of clear water from a properly developed, large-diameter,
deep well designed for public water supply in Phoenix, AZ. Wells like this have enabled an
unprecedented growth of urban areas and agricultural land in many arid environments
around the world. At the same time, however, a relatively small number of people fully
understand the complexity, importance, and cost of a properly designed and constructed
FIGURE 9.23
Large-diameter water-supply well in Phoenix, AZ, during performance of an aquifer pumping test. (Courtesy
of Chris Legg.)
large-capacity well used for public water supply. For nonhydrogeologists and those who
are not in a related water supply profession, a well usually means a nondescript hole
in the ground that somehow produces water. On the other hand, hydrogeologists and
groundwater professionals think of wells in many different contexts, and some of them
spend lifetimes trying to better understand and design them. Unfortunately, the continu-
ous advances in well design, flexibility in selecting the most feasible locations for certain
uses, and general affordability in terms of well installation costs and energy cost for pump
operation are the main reasons why there is a growing concern regarding sustainability of
groundwater use in many countries. The most obvious consequence of an indiscriminate
use of water wells and subsequent lowering of the hydraulic heads in aquifers is cessation
of spring flows documented around the world (Kresic 2010). It is the hope of the authors
that this trend will not continue as younger generations become more and more aware of
many aspects of groundwater sustainability and preserve jewels such as the one shown in
Figure 9.24.
FIGURE 9.24
Spa pool fed by thermal water issuing from the historic Octagon Spring (inset) in The Homestead resort, Warm
Springs, VA, established in 1766. Archeological evidence indicates use of the mineral and thermal springs in the
area dating back to 7000 BC.
and Bredehoeft (2002) provide illustrative discussions about the safe yield concept and the
related water budget myth.
In order to examine the safe yield myth more carefully, an analogy is made comparing
an aquifer and a reservoir behind a dam on a river. If withdrawals from a reservoir equal
inflows, the river below the dam will be dry because there will be no outflow from the reservoir. The same principle can be applied to a groundwater reservoir. If pumping (withdrawal) equals inflow (recharge), the outflows (subsurface flow or discharge to springs,
streams, or wetlands) from the aquifer will decrease and may eventually reach zero,
resulting in some adverse consequence at some point in time. The direct hydrologic effects
will be equal to the volume of water removed from the natural system, but those effects
may require decades to centuries to manifest. Because aquifer recharge and groundwater
withdrawals can vary substantially over time, these changing rates can be critical informa-
tion for developing groundwater management strategies (Anderson and Woosley 2005).
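The budget argument can be made concrete with a lumped, single-cell water balance in which natural discharge is proportional to aquifer storage (a linear reservoir). All rates and the storage-release coefficient below are illustrative, not site values:

```python
def simulate(storage, recharge, pumping, k=0.01, years=500):
    """Lumped aquifer budget: dS/dt = R - Q - D, with natural discharge
    D = k * S (a linear reservoir). Returns the annual discharge series."""
    discharge = []
    for _ in range(years):
        d = k * storage                                   # spring/stream outflow
        storage = max(storage + recharge - pumping - d, 0.0)
        discharge.append(d)
    return discharge

# Pumping set equal to recharge: natural discharge decays toward zero,
# as the reservoir analogy predicts.
d = simulate(storage=1000.0, recharge=10.0, pumping=10.0)
print(f"initial discharge {d[0]:.1f}, final discharge {d[-1]:.2f}")
```

Note how slowly the decay unfolds in the sketch; this mirrors the statement that the hydrologic effects of withdrawal may take decades to centuries to manifest.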
With an increased demand for water and pressures on groundwater resources, the
decades-long debate among water professionals about what constitutes safe withdrawal
of groundwater has now changed into a debate about sustainable use of groundwater.
The difference is not only semantic, and confusion has occasionally resulted. For example,
there are attempts to distinguish between safe yield and sustainable pumping when the
latter is defined as the pumping rate that can be sustained indefinitely without mining or
dewatering the aquifer. Devlin and Sophocleous (2005) provide a detailed discussion of
these and other related concepts.
What appears most difficult to understand is that the groundwater system is a dynamic
one—any change in one portion of the system will ultimately affect its other parts as
well. Even more important is the fact that most groundwater systems are dynamically
connected with surface water. As groundwater moves from the recharge area toward the
discharge area (e.g., a river), it constantly flows through the saturated zone, which is the
groundwater storage (reservoir). If another discharge area (such as a well for water supply)
is created, less water will flow toward the old discharge area (river). This fact seems to be
paradoxically ignored by those who argue that groundwater withdrawals may actually
increase aquifer recharge by inducing inflow from recharge boundaries (such as surface
water bodies!) and, therefore, result in sustainable pumping rates. Although such a groundwater management strategy may be safe or sustainable for the intended use, another question is whether it has any consequences for the sustainable use of the surface water system, which is now losing water to, rather than gaining it from, the groundwater system (Kresic 2009).
Dependence of communities or regions solely on groundwater in storage is a manage-
ment strategy that is not sustainable for future generations. When it is obvious that the
natural aquifer recharge cannot offset the reduction in groundwater storage in any mean-
ingful way over a reasonable time, prudent groundwater management must also consider
strategies that rely on used water for aquifer recharge. Two general groups of groundwater
systems fall into the category of nonrenewable:
Both groups involve the extraction of groundwater that originated as recharge in a distant
past, including during more humid climatic regimes. The volumes of such groundwater
stored in some aquifers are enormous. For example, the total recoverable volume of fresh
water in the Nubian Sandstone Aquifer System (NSAS) in North Africa is estimated at about 15,000 km³, and the present rate of annual groundwater extraction is 2.17 km³. For comparison, the combined volume of water stored in the Great Lakes of North America is 22,684 km³.
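A static depletion horizon implied by these figures is easy to check; the calculation deliberately ignores growth in extraction, limits on recoverability, and water-quality deterioration:

```python
recoverable_km3 = 15_000      # NSAS recoverable fresh water (cited estimate)
extraction_km3_per_yr = 2.17  # present annual extraction (cited)

years = recoverable_km3 / extraction_km3_per_yr
print(f"~{years:,.0f} years at the current extraction rate")  # roughly 6,900 years
```

The enormous nominal horizon is exactly why the sustainability question for such systems is social rather than hydraulic: the constraint is what happens along the way, not when the last cubic kilometer is withdrawn.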
The term groundwater sustainability in the case of nonrenewable systems has an
entirely social rather than physical (engineering, scientific) context. It implies that full con-
sideration must be given not only to the immediate benefits but also to the negative socio-
economic impacts of development and to the “what comes after” question—and thus to time
horizons longer than 100 years (Foster et al. 2002–2005).
There are two general situations under which the utilization of nonrenewable ground-
water occurs: planned and unplanned. In the planned scenario, the management goal is
the orderly utilization of groundwater reserves stored in the system with little preexisting
development. The expected benefits and predicted impacts over a specified time frame
must be specified. Appropriate exit strategies need to be identified, developed, and imple-
mented by the time that the groundwater system is seriously depleted. This scenario must
include balanced socioeconomic choices on the use of stored groundwater reserves and on
the transition to a less water-dependent economy. A key consideration in defining the exit
strategy will be identification of the replacement water resource, such as desalination of
brackish groundwater. In an unplanned situation (Figure 9.25), a rationalization scenario
is needed in which the management goal is to achieve hydraulic stabilization (or recovery)
of the aquifer or utilize groundwater reserves in a more orderly way by minimizing qual-
ity deterioration, maximizing groundwater productivity, and promoting social transition
to a less water-dependent economy (Foster et al. 2002–2005).
Saudi Arabia is a good example of two main stages in exploitation of nonrenewable
groundwater: initially very rapid, large-scale, and unrestricted development for all uses,
subsequently supplemented by desalinated water and treated wastewater. Saudi Arabia
has become the largest producer of desalinated water in the world. The present production represents approximately 50% of the total current domestic and industrial demand, with the rest met from groundwater resources (Abderrahman 2006). However, irrigated
agriculture is still the largest user of the nonrenewable groundwater in Saudi Arabia,
where food security concerns have the highest priority. This is also true in other parts
of the world, including some of the most developed countries, such as the United States
and Australia (Figures 9.26 and 9.27), and some of the poorest sub-Saharan countries in
Africa.
FIGURE 9.25
Targets for nonrenewable groundwater resource management in rationalization scenarios following indis-
criminate and excessive exploitation. (From Foster, S. et al., Utilization of Non-Renewable Groundwater: A
Socially Sustainable Approach to Resource Management, Sustainable Groundwater Management: Concepts
and Tools, Briefing Note Series, Note 11, GW MATE (Groundwater Management Advisory Team), the World
Bank, Washington, DC, 6 pp., 2002–2005. With permission.)
FIGURE 9.26
Center-pivot irrigation using groundwater is widespread throughout the United States, including large areas in
the arid West as shown here.
FIGURE 9.27
Center-pivot irrigation in Colorado using the Low Elevation Spray Application system. This type of application uses less water and reduces evaporation compared to traditional methods. (Courtesy of Gene Alexander, USDA Natural Resources Conservation Service.)
• Regulatory requirements.
• The availability of an adequate source of recharge water of suitable chemical and
physical quality.
• Geochemical compatibility between recharge water and the existing groundwater
(e.g., possible carbonate precipitation, iron hydroxide formation, mobilization of
trace elements).
• The hydrogeologic properties of the porous media (soil and aquifer) must facil-
itate desirable infiltration rates and allow direct aquifer recharge. For example,
existence of extensive low-permeability clays in the unsaturated (vadose) zone may
exclude a potential recharge site from future consideration.
FIGURE 9.28
Possible options for managed aquifer recharge or MAR. (Modified from International Association of Hydrogeologists (IAH), Managing Aquifer Recharge, Commission on Management of Aquifer Recharge, IAH-MAR,
12 pp., 2002.)
• The water-bearing deposits must be able to store the recharged water in a reason-
able amount of time and allow its lateral movement toward extraction locations
at acceptable rates. In other words, the specific yield (storage) and the hydraulic
conductivity of the aquifer porous media must be adequate.
• Presence of fine-grained sediments may have the advantage of improving the
quality of recharged water because of their high filtration and sorption capacities.
Other geochemical reactions in the vadose zone below recharge facilities may also
influence water quality.
• An engineering solution should be designed to facilitate efficient recharge when there
is an available surplus of water and efficient recovery when the water is most needed.
• The proposed solution must be cost-efficient, environmentally sound, and competitive with other water-resource development options.
Aquifers that can store large quantities of water and do not transmit them away quickly are
best suited for artificial recharge. For example, karst aquifers may accept large quantities of
recharged water but, in some cases, tend to transmit them very quickly away from the recharge
area. This may still be beneficial for the overall balance of the system and the availability of
groundwater downgradient from the immediate recharge sites. Alluvial aquifers are usually
the most suited to storage because of the generally shallow water table and vicinity to source
water (surface stream). Sandstone aquifers are, in many cases, very good candidates because of
their high storage capacity and moderate hydraulic conductivity (Kresic 2009).
FIGURE 9.29a
Recharge basins of the Agua Fria Recharge Project in central Arizona. (Courtesy of Philip Fortnam, Central
Arizona Project.)
FIGURE 9.29b
Top: Rivers run to the sea but sometimes do not make it there. The mighty Colorado River runs dry at the
USA/Mexico border in January 2009. Bottom: The Colorado River delta 5 miles north of the Sea of Cortez,
Mexico, also in January 2009. (Photos courtesy of Pete McBride, United States Geological Survey; avail-
able at http://gallery.usgs.gov/photos/10_15_2010_rvm8Pdc55J_10_15_2010_1 and http://gallery.usgs.gov/
photos/10_15_2010_rvm8Pdc55J_10_15_2010_0.)
• The groundwater supply system would continuously operate at the highest per-
mitted monthly average rate of 16.12 MGD.
• Washington County would only receive average rainfall for 4 years followed by a
drought that would occur in the fifth year, which coincides with the anticipated
operation of Plant Washington’s groundwater supply system.
• During the 50-year life expectancy of Plant Washington, a 100-year extreme
drought will occur. The conditions of a 100-year drought are more extreme than
the recent, severe 2007–2008 drought or the drought of record in 1981–1982.
FIGURE 9.30
Hydrograph of daily water-level elevation at the USGS monitoring well 23X027 in Sandersville, GA versus
monthly precipitation in Sandersville and annual permitted groundwater withdrawal from the Cretaceous
Aquifer in the greater Sandersville area. (From Kresic, N. et al., Sustainable Groundwater Use for Power
Generation in Georgia, Case Study. Meeting Competing Demands with Finite Groundwater Resources,
Presented at the Groundwater Protection Council Annual Forum, Atlanta, GA, September 24–28, 2011. Courtesy
of AMEC E&I, Inc.)
• Despite the actual nonincreasing trend, the model assumes a 2% annual population
growth with a proportional increase of domestic water usage from Washington
County’s aquifers.
• Agricultural water demands, that is, the withdrawals from the underlying
Cretaceous aquifer, would increase by 20% during future drought periods. (Note
that for the drought of the year 2000, water consumption was 11% higher than
in 2005, a nondrought year; therefore, a 20% increase in agricultural pumping is
considered conservative.)
As can be seen from Figures 9.30 and 9.31, the model assumptions are overly conservative
because the most obvious aquifer recharge episodes (reflected by the increasing hydraulic
head at the USGS index well) are the result of above-average precipitation and not caused
by any cyclic operation of the existing water wells in the area (Kresic et al. 2011).
The transient numeric model developed for this project is the most complex such model
required by the Environmental Protection Division (EPD) for groundwater permitting in the state of Georgia to date. The model
dimensions are 48.3 mi × 36 mi, and it consists of eight layers comprising 1,008,000 cells.
All permanent surface streams are simulated in the model with fine cell size (Figure 9.32)
to accurately simulate surface water–groundwater interactions. The overly conservative,
low recharge rates and the lowered hydraulic heads at the external model boundaries dur-
ing the simulated 100-year drought are shown schematically in Figures 9.33 and 9.34.
FIGURE 9.31
Cross-correlation between monthly precipitation in Sandersville and average monthly water-level elevation at
the USGS monitoring well 23X027 in Sandersville, GA. (From Kresic, N. et al., Sustainable Groundwater Use for
Power Generation in Georgia, Case Study. Meeting Competing Demands with Finite Groundwater Resources,
Presented at the Groundwater Protection Council Annual Forum, Atlanta, GA, September 24–28, 2011. Courtesy
of AMEC E&I, Inc.)
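A cross-correlation of the kind shown in Figure 9.31 can be computed directly with NumPy. The monthly series below are synthetic, built with an assumed two-month lag purely to illustrate the method:

```python
import numpy as np

def best_lag(precip, water_level, max_lag=12):
    """Lag (in months) at which the correlation between standardized
    precipitation and water-level anomalies is largest; a positive lag
    means the water level responds after the precipitation."""
    p = (precip - precip.mean()) / precip.std()
    w = (water_level - water_level.mean()) / water_level.std()
    corrs = [np.corrcoef(p[:-lag] if lag else p,
                         w[lag:] if lag else w)[0, 1]
             for lag in range(max_lag + 1)]
    return int(np.argmax(corrs)), max(corrs)

rng = np.random.default_rng(0)
precip = rng.gamma(2.0, 2.0, size=240)                 # 20 years of monthly precipitation
level = np.roll(precip, 2) + rng.normal(0, 0.3, 240)   # responds ~2 months later, with noise
lag, r = best_lag(precip, level)
print(lag, round(r, 2))
```

With real records, the lag of the correlation peak is one line of evidence that recharge episodes, rather than pumping cycles, drive the observed head fluctuations.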
FIGURE 9.32
Detail of the numerical groundwater flow model showing surface water features, existing wells, and permit-
ted Plant Washington wells (shown in red). (From Kresic, N. et al., Sustainable Groundwater Use for Power
Generation in Georgia, Case Study. Meeting Competing Demands with Finite Groundwater Resources,
Presented at the Groundwater Protection Council Annual Forum, Atlanta, GA, September 24–28, 2011. Courtesy
of AMEC E&I, Inc.)
FIGURE 9.33
Schematic presentation of simulated conservative future recharge in the groundwater flow model. (From Kresic,
N. et al., Sustainable Groundwater Use for Power Generation in Georgia, Case Study. Meeting Competing
Demands with Finite Groundwater Resources, Presented at the Groundwater Protection Council Annual
Forum, Atlanta, GA, September 24–28, 2011. Courtesy of AMEC E&I, Inc.)
The groundwater modeling results show that Plant Washington’s groundwater sup-
ply system will cause small but reasonable impacts to groundwater as a result of using
this resource for an average of four months once every five years. Under extreme drought
conditions, these impacts would increase but are still considered acceptable. The ground-
water model predicts a steady, small decline in groundwater elevations resulting from
the assumed 2% per year increase in Washington County population. By the year 2063,
without any pumping associated with Plant Washington, the model predicts that the
FIGURE 9.34
Simulated recharge and change in the hydraulic head elevation at the general head boundary prior, during,
and following a 100-year drought. (From Kresic, N. et al., Sustainable Groundwater Use for Power Generation
in Georgia, Case Study. Meeting Competing Demands with Finite Groundwater Resources, Presented at
the Groundwater Protection Council Annual Forum, Atlanta, GA, September 24–28, 2011. Courtesy of AMEC
E&I, Inc.)
groundwater level at the USGS Sandersville well 23X027 would drop by only 13 ft below
its current level (Figure 9.35). When the model includes Plant Washington’s impacts, future
drawdown at the USGS Sandersville well would increase by approximately 1 ft. During a
100-year drought, the model predicts that Plant Washington’s impact would result in an
additional 2 ft of drawdown at this well compared to the impact by all other users during
the drought.
The model also shows that, after cessation of pumping by the Plant Washington wells,
the potentiometric surface recovers before the next simulated drought and continues to
follow the general trend caused by the simulated increasing groundwater withdrawals by
other users (without the influence of Plant Washington wells pumping). As illustrated in
Figure 9.36, during the entire simulated period, including during simulated groundwater
withdrawal by the Plant Washington wells, the predicted potentiometric surface at well
23X027 remains 100 ft above the top of the confined Cretaceous aquifer, thus retaining its confined (artesian) condition.
[Figure 9.35 plots the USGS data-logger record together with three predicted water-level traces (in feet amsl, 1995–2065): no Plant Washington wells pumping, with average recharge and increasing pumping by other users; Plant Washington wells not pumping, under drought conditions; and Plant Washington wells pumping, during a 100-year drought.]
FIGURE 9.35
Modeled potentiometric surface at the USGS monitoring well 23X027 with and without Plant Washington Wells
pumping for the recharge and boundary conditions shown in Figures 9.33 and 9.34. (From Kresic, N. et al.,
Sustainable Groundwater Use for Power Generation in Georgia, Case Study. Meeting Competing Demands
with Finite Groundwater Resources, Presented at the Groundwater Protection Council Annual Forum, Atlanta,
GA, September 24–28, 2011. Courtesy of AMEC E&I, Inc.)
FIGURE 9.36
Modeled potentiometric surface at the USGS monitoring well 23X027 stays well above the top of the confined
Cretaceous Aquifer for the entire simulated time. (From Kresic, N. et al., Sustainable Groundwater Use for
Power Generation in Georgia, Case Study. Meeting Competing Demands with Finite Groundwater Resources,
Presented at the Groundwater Protection Council Annual Forum, Atlanta, GA, September 24–28, 2011. Courtesy
of AMEC E&I, Inc.)
FIGURE 9.37
Detail of the model-predicted maximum additional drawdown in the vicinity of Plant Washington from
pumping of the proposed wells during the simulated 100-year drought. (From Kresic, N. et al., Sustainable
Groundwater Use for Power Generation in Georgia, Case Study. Meeting Competing Demands with Finite
Groundwater Resources, Presented at the Groundwater Protection Council Annual Forum, Atlanta, GA,
September 24–28, 2011. Courtesy of AMEC E&I, Inc.)
Groundwater Supply 563
development and its subsequent use for evaluating numerous groundwater withdrawal
scenarios, third parties did not challenge the permit issued by EPD.
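The maximum additional drawdown mapped in Figure 9.37 comes from the numerical model, but the kind of estimate involved can be sketched analytically with the classical Theis solution for a confined aquifer. The sketch below is only an illustration of the calculation, not the authors' method, and every parameter value in it is hypothetical rather than taken from the Plant Washington model:

```python
import math

def well_function(u, terms=30):
    """Theis well function W(u), evaluated from its convergent series:
    W(u) = -0.5772... - ln(u) + sum_{n>=1} (-1)^(n+1) u^n / (n * n!)."""
    w = -0.5772156649 - math.log(u)
    for n in range(1, terms + 1):
        w += ((-1) ** (n + 1)) * u ** n / (n * math.factorial(n))
    return w

def theis_drawdown(Q, T, S, r, t):
    """Drawdown (ft) at radial distance r (ft) after time t (days)
    for pumping rate Q (ft^3/day), transmissivity T (ft^2/day),
    and dimensionless storativity S."""
    u = r ** 2 * S / (4 * T * t)
    return Q / (4 * math.pi * T) * well_function(u)

# Hypothetical confined-aquifer parameters (NOT the Plant Washington values):
Q = 2.0e5   # pumping rate, ft^3/day (roughly 1.5 mgd)
T = 5.0e3   # transmissivity, ft^2/day
S = 2.0e-4  # storativity
# Additional drawdown at an observation point 5,000 ft away after 10 years:
s = theis_drawdown(Q, T, S, r=5000.0, t=3650.0)
```

Drawdowns from several wells can then be superposed at a monitoring location, which is the same principle a numerical model applies cell by cell, with the added ability to handle heterogeneity and transient boundary conditions.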
FIGURE 9.38
Top: Initial abundance provided by a prolific, nonrenewable aquifer. Bottom: Possible final outcome of groundwater
withdrawal from the nonrenewable aquifer. (Courtesy of Marin Kresic.)
References
Abderrahman, W. A., 2006. Saudi Arabia Aquifers. In Non-Renewable Groundwater Resources: A
Guidebook on Socially Sustainable Management for Water-Policy Makers. IHP-VI, Series on
Groundwater No. 10, edited by S. Foster and D. P. Loucks, UNESCO, Paris, pp. 63–67.
Anderson, M. T., and Woosley, L. H., Jr., 2005. Water Availability for the Western United States–Key
Scientific Challenges. U.S. Geological Survey Circular 1261, Reston, VA, 85 pp.
American Water Works Association (AWWA), 1998. AWWA Standard for Water Wells: American
National Standard. ANSI/AWWA A100-97, AWWA, Denver, CO.
Bredehoeft, J. D., 2002. The water budget myth revisited: why hydrogeologists model. Ground Water
40(4), 340–345.
Bredehoeft, J. D., Papadopulos, S. S., and Cooper, H. H., 1982. Groundwater—The Water Budget
Myth. In Studies in Geophysics, Scientific Basis of Water Resource Management. National Academy
Kresic, N., and Stevanović, Z. (eds.), 2010. Groundwater Hydrology of Springs: Engineering, Theory,
Management, and Sustainability, Elsevier, Amsterdam, 573 pp.
Kresic, N., Kennedy, J., Ledbetter, L., and Alford, D., 2011. Sustainable Groundwater Use for Power
Generation in Georgia, Case Study. Meeting Competing Demands with Finite Groundwater
Resources. Groundwater Protection Council Annual Forum, Atlanta, GA, September 24–28,
2011.
Kruseman, G. P., de Ridder, N. A., and Verweij, J. M., 1991. Analysis and Evaluation of Pumping
Test Data (Completely Revised 2nd edition). International Institute for Land Reclamation and
Improvement (ILRI) Publication 47, Wageningen, The Netherlands, 377 pp.
Leavesley, G. H., Lichty, R. W., Troutman, B. M., and Saindon, L. G., 1983. Precipitation–Runoff
Modeling System—User’s Manual. U.S. Geological Survey Water-Resources Investigations
Report 83-4238, 207 pp.
Markstrom, S. L., Niswonger, R. G., Regan, R. S., Prudic, D. E., and Barlow, P. M., 2008. GSFLOW—Coupled
Ground-Water and Surface-Water Flow Model Based on the Integration of the
Precipitation–Runoff Modeling System (PRMS) and the Modular Ground-Water Flow Model
(MODFLOW-2005). U.S. Geological Survey Techniques and Methods 6-D1, 240 pp.
Meinzer, O. E., 1923. The Occurrence of Ground Water in the United States with a Discussion of
Principles. U.S. Geological Survey Water-Supply Paper 489, Washington, DC, 321 pp.
Neal, L., Ledbetter, L., and Alford, D., 2011. An Integrated Water Management Strategy for Power
Generation: A Central Georgia Case Study. Meeting Competing Demands with Finite
Groundwater Resources. Groundwater Protection Council Annual Forum, Atlanta, GA,
September 24–28, 2011.
Rogers, P., and Hall, A. W., 2003. Effective Water Governance. TEC Background Papers No. 7, Global
Water Partnership Technical Committee (TEC), Global Water Partnership, Stockholm, Sweden,
44 pp.
Roscoe Moss Company, 1990. Handbook of Ground Water Development. John Wiley & Sons, New York,
493 pp.
Schindel, G., Johnson, S., Hoyt, J., Green, R. T., Alexander, E. C., and Kreitler, C., 2009. Hydrology of the
Edwards Group: A Karst Aquifer Under Stress. A Field Trip Guide for the USEPA Groundwater
Forum November 19, 2009, San Antonio, TX, 57 pp.
The European Parliament and the Council of the European Union, 2000. Directive 2000/60/EC of
the European Parliament and the Council of 23 October 2000 Establishing a Framework for
Community Action in the Field of Water Policy (EU Water Framework). Official Journal of the
European Union, 22 December, p. L 327/1.
The European Parliament and the Council of the European Union, 2006. Directive 2006/118/EC
on the Protection of Groundwater Against Pollution and Deterioration. Official Journal of the
European Union, 27 December, pp. L 372/19-31.
Tuinhof, A., Dumars, C., Foster, S., Kemper, K., Garduño, H., and Nanni, M., 2002–2005. Groundwater
Resource Management: An Introduction to Its Scope and Practice. Sustainable
Groundwater Management: Concepts and Tools, Briefing Note Series, Note 1, GW MATE
(Groundwater Management Advisory Team), The World Bank, Washington, DC, 6 pp.
U.S. Army Corps of Engineers (USACE), 1993. Hydrologic Frequency Analysis. Engineering manual
1110-2-1415, Washington, DC. Available at http://140.194.76.129/publications/eng-manuals/.
USBR, 1977. Ground Water Manual. U.S. Department of the Interior, Bureau of Reclamation,
Washington, DC, 480 pp.
U.S. EPA, 1975. Manual of Water Well Construction Practices. EPA-570/9-75-001, Office of Water
Supply, Washington, DC, 156 pp.
U.S. EPA, 1991. Protecting the Nation’s Ground Water: EPA’s Strategy for the 1990s. The final report
of the EPA Ground-Water Task Force, 21Z-1020, Office of the Administrator.
U.S. EPA, 2003. Atrazine Interim Reregistration Eligibility Decision (IRED). Q&As—January.
Available at http://www.epa.gov/pesticides/factsheets/atrazine.htm#q1, accessed January
23, 2008.
U.S. EPA, 2005. National Management Measures to Control Nonpoint Source Pollution from Urban
Areas. EPA-841-B-05-004, United States Environmental Protection Agency, Office of Water,
Washington, DC.
Vrba, J., and Zaporozec, A. (eds.), 1994. Guidebook on Mapping Groundwater Vulnerability, International
Contributions to Hydrogeology Vol. 16. International Association of Hydrogeologists (IAH), Swets
& Zeitlinger Lisse, Munich, 156 pp.