Hydrogeological Conceptual Site Models

Data Analysis and Visualization
Neven Kresic
Alex Mikszewski

Boca Raton London New York

CRC Press is an imprint of the Taylor & Francis Group, an informa business
Esri® ArcGIS® software graphical user interfaces, icons/buttons, splash screens, dialog boxes, artwork, emblems, and associated materials are the intellectual property of Esri and are reproduced herein by permission. Copyright © 2011 Esri. All rights reserved.

Microsoft® Access® screen shots used with permission from Microsoft based on “Use of Microsoft Copyrighted Content”
guidelines.

CRC Press
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742

© 2013 by Taylor & Francis Group, LLC


CRC Press is an imprint of Taylor & Francis Group, an Informa business

No claim to original U.S. Government works


Version Date: 20120502

International Standard Book Number-13: 978-1-4398-5228-6 (eBook - PDF)

This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged please write and let us know so we may rectify in any future reprint.

Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any
form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming,
and recording, or in any information storage or retrieval system, without written permission from the publishers.

For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged.

Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.
Visit the Taylor & Francis Web site at
http://www.taylorandfrancis.com

and the CRC Press Web site at
http://www.crcpress.com

To Angela, Joanne, and Miles


Contents

Preface............................................................................................................................................ xiii
Authors............................................................................................................................................xv

1. Introduction..............................................................................................................................1
1.1 Historical Example......................................................................................................... 3
1.2 Example Uses of This Book..........................................................................................9
References................................................................................................................................ 10

2. Conceptual Site Models....................................................................................................... 11


2.1 Definition...................................................................................................................... 11
2.2 Physical Profile............................................................................................................. 13
2.2.1 Geomorphology (Topography)..................................................................... 14
2.2.2 Hydrology........................................................................................................ 29
2.2.3 Climate (Hydrometeorology)........................................................................ 37
2.2.4 Land Use and Land Cover............................................................................. 40
2.2.5 Water Budget................................................................................................... 46
2.3 Hydrogeology............................................................................................................... 48
2.3.1 Aquifers in Unconsolidated Sediments....................................................... 49
2.3.1.1 Alluvial Aquifers............................................................................. 50
2.3.1.2 Basin-Fill Aquifers.......................................................................... 55
2.3.1.3 Blanket Sand-and-Gravel Aquifers............................................... 61
2.3.1.4 Aquifers in Semiconsolidated Sediments.................................... 62
2.3.1.5 Glacial-Deposit Aquifers................................................................63
2.3.2 Sandstone Aquifers........................................................................................65
2.3.3 Fractured-Bedrock Aquifers......................................................................... 68
2.3.4 Karst Aquifers................................................................................................. 79
2.3.5 Basaltic and Other Volcanic Rock Aquifers.............................................. 104
2.3.6 Aquitards....................................................................................................... 108
References.............................................................................................................................. 112

3. Data Management, GIS, and GIS Modules.................................................................... 117


3.1 Introduction................................................................................................................ 117
3.2 Data Management for GIS........................................................................................ 121
3.2.1 Data Management Failures......................................................................... 121
3.2.1.1 Working with the Wrong Units................................................... 121
3.2.1.2 Working with Unknown or Mixed Coordinate Systems........ 123
3.2.1.3 Erroneous Query Results............................................................. 124
3.2.1.4 Data Entry Inefficiency................................................................. 126
3.2.2 Data Management Systems......................................................................... 126
3.2.2.1 Define the Project Objective........................................................ 127
3.2.2.2 Determine the Quantity and Type of Field and Laboratory Data to Be Collected................................. 129


3.2.2.3 Perform Required Data Collection and Analysis..................... 130


3.2.2.4 Present Study Conclusions.......................................................... 131
3.3 Introducing the Geodatabase................................................................................... 133
3.3.1 Tables.............................................................................................................. 133
3.3.2 Feature Classes.............................................................................................. 135
3.3.3 Rasters............................................................................................................ 135
3.4 Basics of Geodatabase Design.................................................................................. 138
3.4.1 Table Format and Querying........................................................................ 139
3.4.1.1 Select Query................................................................................... 141
3.4.1.2 Cross-Tab Query............................................................................ 143
3.4.1.3 Forms.............................................................................................. 146
3.4.2 Data Linkage with GIS................................................................................. 147
3.4.3 Errors in Geodatabase Design.................................................................... 148
3.4.3.1 Fields with Wrong Data Type...................................... 148
3.4.3.2 Spaces in Data Field Names......................................................... 148
3.4.3.3 Misspellings and Format Discrepancies.................................... 148
3.4.3.4 Cross-Tab Query Difficulties....................................................... 148
3.4.3.5 Inconsistent or Unknown Coordinate Systems........................ 149
3.5 Working with Coordinate Systems......................................................................... 149
3.6 Data Visualization and Processing with ArcGIS.................................................. 154
3.6.1 Why Visualize Data?.................................................................................... 154
3.6.2 Analog versus Digital Data......................................................................... 156
3.6.3 Data Visualization in ArcMap.................................................................... 160
3.6.3.1 Data View....................................................................................... 160
3.6.3.2 Layout View................................................................................... 170
3.6.4 Geoprocessing in ArcMap........................................................................... 170
3.6.4.1 Querying Tools.............................................................................. 170
3.6.4.2 Labeling Tools................................................................................ 173
3.6.4.3 Editing Tools.................................................................................. 174
3.6.4.4 Georeferencing Tools.................................................................... 176
3.6.4.5 Analysis Tools................................................................................ 176
3.6.4.6 Measurement Tools....................................................................... 178
3.6.4.7 Data Management Tools............................................................... 180
3.7 GIS Modules for Hydrogeological Data Analysis................................................. 182
3.7.1 Statistical Analyses....................................................................................... 183
3.7.1.1 Visual Sample Plan....................................................................... 183
3.7.1.2 FIELDS Rapid Assessment Tool Software................................. 184
3.7.1.3 Spatial Analysis and Decision Assistance................................. 185
3.7.1.4 ArcToolbox..................................................................................... 187
3.7.1.5 ProUCL........................................................................................... 187
3.7.1.6 Monitoring and Remediation Optimization System............... 188
3.7.1.7 Other Tools..................................................................................... 190
3.7.2 Geostatistics and Contouring..................................................................... 190
3.7.3 Boring Logs and Cross Sections................................................................. 190
3.7.4 Proprietary Environmental Database Systems........................................ 192
3.7.5 General Notes on Computer Modules....................................................... 193
References.............................................................................................................................. 196

4. Contouring............................................................................................................................199
4.1 Introduction................................................................................................................ 199
4.2 Contouring Methods................................................................................................. 201
4.2.1 Manual Contouring...................................................................................... 201
4.2.2 Contouring with Computer Programs...................................................... 202
4.2.3 Spatial Interpolation Models....................................................................... 208
4.2.3.1 Deterministic Models................................................................... 208
4.2.3.2 Geostatistical Models.................................................................... 212
4.2.3.3 Trend and Anisotropy.................................................................. 213
4.2.3.4 Error and Uncertainty Analysis.................................................. 218
4.3 Kriging.........................................................................................................................222
4.3.1 Variography...................................................................................................225
4.3.1.1 Semivariogram Curve-Fitting..................................................... 229
4.3.1.2 Search Neighborhood................................... 232
4.3.1.3 Modeling Techniques................................................................... 233
4.3.2 Kriging Prediction Standard Error............................................................ 233
4.3.2.1 Nugget Effect and Prediction Standard Error.......................... 237
4.3.3 Types of Kriging............................................................................................ 239
4.3.3.1 Ordinary, Simple, and Universal Kriging................................. 239
4.3.3.2 Cokriging........................................................................................ 240
4.3.3.3 Indicator Kriging........................................................................... 241
4.3.3.4 Point and Block Kriging............................................................... 242
4.4 Contouring Potentiometric Surfaces ...................................................................... 243
4.4.1 Importance of Conceptual Site Model....................................................... 243
4.4.2 Heterogeneity and Anisotropy................................................................... 246
4.4.3 Influence of Hydraulic Boundaries............................................................ 250
4.5 Contouring Contaminant Concentrations............................................................. 256
4.5.1 Importance of Conceptual Site Model....................................................... 257
4.5.2 Example Application.................................................................................... 261
4.5.2.1 Default Parameters........................................................................ 263
4.5.2.2 Data Exploration............................................................................ 266
4.5.2.3 Lognormal Kriging with Anisotropy........................................ 271
4.5.2.4 Lognormal Kriging with Trend Removal.................................. 274
4.5.2.5 Model Comparison....................................................................... 277
4.5.2.6 Advanced Detrending and Cokriging....................................... 281
4.5.2.7 Advanced Uncertainty Analysis................................................. 283
4.5.3 Summary........................................................................................................ 283
4.6 Grid and Contour Conversion Tools....................................................................... 285
4.6.1 Converting Contour Maps to Grid Files.................................................... 285
4.6.2 Converting Grid File Types......................................................................... 287
4.6.3 Extracting Grid Values to Points................................................................. 288
4.6.4 Appropriate Use of Spatial Analyst........................................................... 289
References.............................................................................................................................. 291

5. Groundwater Modeling......................................................................................................293
5.1 Introduction................................................................................................................ 293
5.2 Misuse of Groundwater Models.............................................................................. 294

5.3 Types of Groundwater Models.................................................................................300


5.3.1 Analytical Models......................................................................................... 301
5.3.1.1 BIOCHLOR Case Study................................................................304
5.3.2 Numerical Models........................................................................................ 312
5.4 Numerical Modeling Concepts................................................................................ 314
5.4.1 Initial Conditions, Boundary Conditions, and Water Fluxes................. 316
5.4.2 Dispersion and Diffusion............................................................................ 319
5.5 Model Calibration, Sensitivity Analysis, and Error.............................................. 326
5.6 Modeling Documentation and Standards..............................................................334
5.7 MODFLOW-USG........................................................................................................ 335
5.7.1 Description of Method................................................................................. 336
5.7.2 Input and Output.......................................................................................... 339
5.7.3 Benchmarking and Testing......................................................................... 339
5.8 Variably Saturated Models.......................................................340
5.9 GIS and Numerical Modeling Software.................................................................342
References..............................................................................................................................343

6. Three-Dimensional Visualizations.................................................................................347
6.1 Introduction................................................................................................................ 347
6.2 3D Conceptual Site Model Visualizations..............................................................348
6.2.1 3D Views of Geologic Model....................................................................... 353
6.2.2 4D Views of Groundwater Chemistry....................................................... 357
6.2.3 Views of 3D Plumes and Soil Plumes........................................................ 361
6.2.4 Specialty 3D Visualizations......................................................................... 361
Citations................................................................................................................................. 365

7. Site Investigation.................................................................................................................367
7.1 Data and Products in Public Domain..................................................................... 368
7.1.1 USGS Data and Publications....................................................................... 369
7.1.2 State GIS Data................................................................................................ 370
7.2 Database Coordination.............................................................................................. 371
7.3 Georeferencing........................................................................................................... 372
7.3.1 Georeferencing AutoCAD Data.................................................................. 372
7.3.2 Georeferencing Raster Data........................................................................ 376
7.4 Developing a Site Basemap....................................................................................... 381
7.5 Developing and Implementing Sampling Plans................................................... 382
7.5.1 Developing Sampling Plans........................................................................ 382
7.5.1.1 Systematic Planning to Balance Cost and Risk......................... 382
7.5.1.2 Example Application of Visual Sample Plan............................. 385
7.5.2 Implementing Sampling Plans................................................................... 393
7.5.2.1 Data Collection.............................................................................. 393
7.5.2.2 Real-Time Data Management, Analysis, and Visualization...... 399
7.6 Example Visualizations for Site Investigation Data..............................................404
7.6.1 Plan-View Maps............................................................................................404
7.6.2 Boring Logs and Cross Sections................................................................. 411
7.6.3 Graphs and Charts........................................................................................ 418
7.7 Toxic Gingerbread Men and Other Confounders................................................. 421
References..............................................................................................................................422

8. Groundwater Remediation................................................................................................425
8.1 Introduction................................................................................................................425
8.2 Pump and Treat..........................................................................................................430
8.2.1 Introduction...................................................................................................430
8.2.2 Design Concepts...........................................................................................430
8.2.3 System Optimization....................................................................................442
8.3 In Situ Remediation....................................................................................................444
8.3.1 Introduction...................................................................................................444
8.3.2 In Situ Thermal Treatment...........................................................................445
8.3.2.1 Design Concepts............................................................................445
8.3.2.2 Case Study...................................................................................... 449
8.3.3 In Situ Chemical Oxidation......................................................................... 455
8.3.3.1 Design Concepts............................................................................ 455
8.3.3.2 Case Study...................................................................... 460
8.3.4 Bioremediation and Monitored Natural Attenuation............................. 466
8.3.4.1 In Situ Bioremediation.................................................................. 466
8.3.4.2 Monitored Natural Attenuation.................................................. 471
8.4 Alternative Remedial Endpoints and Metrics....................................................... 479
8.4.1 Motivation...................................................................................................... 479
8.4.2 Technical Impracticability........................................................................... 481
8.4.3 Risk-Based Cleanup Goals........................................................................... 494
8.4.4 Mass Flux....................................................................................................... 497
8.5 The Way Forward: Sustainable Remediation......................................................... 503
References.............................................................................................................................. 511

9. Groundwater Supply.......................................................................................................... 517


9.1 Integrated Water Resources Management............................................................. 517
9.2 Groundwater Supply................................................................................................. 526
9.2.1 Groundwater Quantity................................................................................ 529
9.2.2 Groundwater Quality................................................................................... 537
9.2.2.1 Protection of Groundwater..........................................................540
9.2.3 Groundwater Extraction..............................................................................544
9.3 Groundwater Sustainability..................................................................................... 550
9.3.1 Sustainable Groundwater Use Case Study: Plant Washington.............. 557
9.3.2 Sustainable Groundwater Use: Conclusion............................................... 563
References..............................................................................................................................564
Preface

From their origins, exploration and inquiry in the Earth sciences have been dependent on conceptual models and data visualizations to test theories and convey findings to the general public. One can appreciate the power and importance of conceptual graphics by flipping through the pages of a National Geographic magazine. Data visualization is inextricably linked to quantitative spatial data analysis—the two major forms of which, for the Earth sciences, are statistical interpolation and modeling. Data analysis and visualization are invaluable in assessing the efficacy of current regulatory and consulting practices to ensure that political and technical interventions related to the management of groundwater resources and contaminated sites are evidence based and lead to desirable outcomes.

This book covers conceptual site model development, data analysis, and visual data presentation for hydrogeology and groundwater remediation. While this book is technical in nature, equations and advanced theoretical discussions are minimized with the focus instead placed on key concepts and practical data analysis and visualization strategies. As a result, we believe that nontechnical stakeholders involved in groundwater projects will find this book interesting and relevant as well. We sincerely hope that the reader's academic or professional practice, whatever that may be, benefits from the tips and techniques contained herein. We wish to thank Hisham Mahmoud, Don Chandler, Dave Goershel, Dan Grogan, Allen Kibler, Leonard Ledbetter, Ann Massey, Larry Neal, and Steve Youngs of AMEC for their continuing support and advice, and Ted Chapin and Karl Kasper of Woodard & Curran for their support in the completion of this book.

Authors

Neven Kresic is a hydrogeology practice leader at AMEC Environment and Infrastructure, Inc., an international engineering and consulting firm. He is a professional geologist and professional hydrogeologist working for a variety of national and international clients, including industry, mining, water utilities, government agencies, and environmental law firms. Neven holds a bachelor's degree in hydrogeological engineering, a master's degree in hydrogeology, and a PhD in geology, all from the University of Belgrade. Before coming to the United States as a Senior Fulbright Scholar at the U.S. Geological Survey in Reston, Virginia, and George Washington University in Washington, DC, Neven was a professor of groundwater dynamics and hydrogeology at the University of Belgrade. He serves on the management committee of the Groundwater Management and Restoration Specialty Group of the International Water Association, co-chairs the Karst Commission of the International Association of Hydrogeologists, and is a past vice president for international affairs of the American Institute of Hydrology.

Alex Mikszewski is a licensed professional environmental engineer in the Commonwealth of Massachusetts, where he works for Woodard & Curran, Inc. He holds a bachelor's degree in civil and environmental engineering from Cornell University and a master's degree in environmental engineering and science from The Johns Hopkins University. He was a NNEMS fellow in the U.S. EPA Office of Superfund Remediation and Technology Innovation. As a consultant, Alex has developed groundwater models for clients in the public and private sectors in settings ranging from southeastern New Hampshire to the semiarid groundwater basins of Southern California. His experience in statistics and geostatistics involves the use of computer software to design defensible sampling plans at Superfund sites, delineate contaminant concentrations in soil and groundwater, assess surface water–groundwater interactions, and evaluate the effects of pumping in multiple-aquifer systems. Alex has hands-on experience with a variety of remedial technologies, including in situ chemical oxidation, soil vapor extraction, in situ thermal remediation, monitored natural attenuation, and pump and treat.

1
Introduction

The physics of groundwater flow, geochemistry, contaminant fate and transport, groundwater remediation, and groundwater resources development and management are all subjects that have been covered extensively in innumerable textbooks. Thus, a student or a practicing groundwater professional has access to a wealth of information regarding hydrogeological theory. A strong technical background in hydrogeology and related disciplines, such as fluid mechanics, forms the foundation for a successful career in academia or the public or private sectors. However, this is typically where the education ends, and continued development is generally only possible by obtaining real-world experience in field hydrogeology, quantitative spatial data analysis, and data visualization that includes mapping. The novice groundwater professional may also find that there are critical hydrogeological concepts applicable at varying investigatory scales that are not typically covered in conventional textbooks. The political and regulatory framework that a hydrogeologist must operate within is another area where improved educational materials are desirable but lacking.

The intention of this book is to fill the void in hydrogeological literature through identification and explanation of key concepts in professional hydrogeology and to provide practical guidance and real-life examples related to the following applications:

• Hydrogeological conceptual site models (Chapter 2)


• Data management and geographic information systems (Chapter 3)
• Contouring (Chapter 4)
• Groundwater modeling (Chapter 5)
• Three-dimensional visualization (Chapter 6)
• Site investigation (Chapter 7)
• Groundwater remediation (Chapter 8)
• Groundwater supply (Chapter 9)

Data visualization is underestimated in its importance to the practice of groundwater professionals. Efficient and clear presentation to stakeholders with varying technical backgrounds is essential to the success of any project. The content and design of visual presentations depend on the target audience, which, in the case of professional hydrogeology, may include:

• Regulators such as the United States Environmental Protection Agency (US EPA)
• Commercial and industrial clients
• Attorneys involved in litigation or real-estate transactions
• Juries
• Communities affected by a contaminated site or a water supply project


It is the hope of the authors that this book will be interesting and useful to any of the above
stakeholders involved in groundwater projects. While this book is technical in nature,
equations and advanced theoretical discussions are minimized, with the focus placed on
key concepts, practical data analysis, and visualization strategies.
In addition, concepts are presented throughout this book related to the current state of the hydrogeological practice, focusing on prevailing ideologies and recommendations for improvement. These topics are often controversial, and the authors hope that this book provokes thought and discussion on how we can evolve current policies and practices to achieve better outcomes at a lesser cost to society. The authors have no agenda or underlying motivation in these discussions, and it should be noted that this book was completed without financial support from any public or private entity.
One example of a thought-provoking topic similar to others included in this book is the current regulatory policy related to arsenic (As) in private drinking-water supplies in eastern New England. Arsenic occurs naturally in metasedimentary bedrock units in the region that are extensively tapped by private water-supply wells. In 2003, it was estimated that more than 100,000 people across eastern New England were using private water supplies with arsenic concentrations above the federal maximum contaminant level (MCL) of 10 µg/L (Ayotte et al. 2003). This represents a widespread exposure to a chemical at dangerously high levels.
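
To illustrate the kind of screening calculation behind such estimates, the short Python sketch below compares well records against the 10 µg/L MCL and summarizes the percentage of exceedances by geologic unit, in the spirit of Figure 1.1. All values and unit names are invented for illustration and are not the Ayotte et al. (2003) data:

```python
import pandas as pd

# Hypothetical private-well arsenic results (ug/L) grouped by bedrock unit.
# Values and unit names are illustrative placeholders, not measured data.
wells = pd.DataFrame({
    "geologic_unit": ["metasedimentary", "metasedimentary", "granitic",
                      "granitic", "metasedimentary", "granitic"],
    "arsenic_ug_per_L": [14.0, 3.2, 0.8, 11.5, 22.0, 1.1],
})

MCL_UG_PER_L = 10.0  # federal maximum contaminant level for arsenic

# Flag exceedances, then compute the percentage of wells above the MCL
# within each geologic unit.
wells["exceeds_mcl"] = wells["arsenic_ug_per_L"] > MCL_UG_PER_L
pct_exceeding = (
    wells.groupby("geologic_unit")["exceeds_mcl"].mean().mul(100).round(1)
)
print(pct_exceeding)  # percent of wells above 10 ug/L in each unit
```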
Figure 1.1 is a map of arsenic concentrations measured at private bedrock wells in southeastern New Hampshire during a 2003 study performed by the United States Geological Survey. Despite these alarming data, arsenic is not regulated by the state of New Hampshire in private drinking-water wells, and there are no current requirements to even test existing wells for the contaminant. In 2010, a bill (HB 1685) that would have made it a requirement to test new wells and wells involved in home sales was killed by the New Hampshire Legislature (Susca and Klevens 2011).
In contrast to this policy of allowing arsenic exposure, environmental regulations require the expenditure of millions of dollars to remediate Superfund and state-led contaminated sites where the exposure often constitutes a very low risk (e.g., one in a million excess lifetime cancer risk) or is hypothetical in nature (e.g., potential future consumption of groundwater). For example, at the Visalia Pole Yard Superfund site, well over $20 million was spent to remediate groundwater contamination that was not posing an actual risk (see Chapter 8 for a more detailed discussion). This is a classic example of policy that permits self-inflicted risk while disproportionately targeting externally inflicted risk, ignoring the relative costs and benefits of the overall outcome. One potential declaration of this ideology is

When protecting human health and the environment, it is not our place to address risk
related to naturally occurring contamination or individual lifestyle choices, but we will
act aggressively to remedy any minimal level of risk caused by a third-party agent.

The reader should consider how this logic impedes efforts to protect human health and the environment. Developing sound conceptual models and using effective data analysis and visualization tools can help address problems even at this philosophical scale; practicing groundwater professionals are encouraged to use their expertise to be active agents of change. A historical example of the power of these methods is provided in the following section.

FIGURE 1.1
Arsenic concentrations in private bedrock wells in southeastern New Hampshire and grouped geologic units
showing the percentage of wells with concentrations of arsenic greater than the current MCL of 0.010 mg/L.
(Modified from USGS, 2003. Arsenic Concentrations in Private Bedrock Wells in Southeastern New Hampshire.
US Department of the Interior, USGS Fact Sheet 051-03.)

1.1 Historical Example


It is likely that most hydrogeologists, environmental engineers, epidemiologists, and medical doctors have heard the famous story of Dr. John Snow and the Broad Street pump in mid-19th-century London. Dr. Snow has been voted the greatest doctor of all time by Hospital Doctor magazine (edging out Hippocrates) and is also known as the father of modern epidemiology (Frerichs 2011). The authors have also heard him referred to as the first environmental engineer. While Dr. Snow was a pioneering anesthesiologist, he is best known for his staunch advocacy of the waterborne theory of cholera transmission, and his innovative work in this area led to his great posthumous fame.

In the mid-19th century, cholera outbreaks were common in the cities of the industrial revolution, spreading rapidly through densely settled areas and inflicting frighteningly high mortality rates. At the time, the spread of cholera and most other diseases was blamed on foul inner-city air. This conceptual model for disease transmission by odors was termed the miasmatic theory and was widely accepted by sanitation professionals, public officials, and Parliament in London by the late 1840s (Johnson 2006). Dr. Snow will forever be remembered for his fight against this flawed, superstition-based theory.
Dr. Snow's interest in cholera was likely spurred by the London cholera outbreak of 1848–1849, which killed 50,000 people (Johnson 2006). The doctor became obsessed with the disease and, during that outbreak, developed an original conceptual model for cholera transmission based on his knowledge and experience as a medical doctor. He reasoned that cholera is fundamentally a diarrheic disease of the gut and, therefore, is caused by something ingested rather than inhaled. Where advocates of the miasmatic theory argued that cholera was a poison inhaled and circulated through the blood, causing fever, Dr. Snow argued that the pathology of cholera is caused by dehydration from severe diarrhea (Koch 2011). He further built his argument for waterborne transmission through two population-based studies conducted during the 1848–1849 epidemic. His findings were communicated through a landmark 1849 publication, On the Mode of Communication of Cholera. While Dr. Snow's work garnered much public interest, it was generally concluded, at the time, that his publication failed to provide sufficient evidence linking cholera to water supply. He therefore stewed for an additional five years before getting another chance at conclusively proving the accuracy of his conceptual model. This opportunity came in the form of another cholera outbreak in the Soho neighborhood centered on the famous Broad Street pump (Johnson 2006).
The Soho outbreak was particularly swift and virulent, yet both Dr. Snow and his rival working for the Board of Health, Reverend Henry Whitehead, were able to conduct rigorous, on-the-ground data collection during the outbreak itself. Armed with his correct conceptual model, Dr. Snow collected site-specific data linking the spread of disease to the Broad Street pump. He presented his immediate findings to the Board of Governors of St. James Parish, and the evidence was compelling enough to convince the board to remove the handle from the pump, thereby eliminating public access to the well. The action was met with jeers by the observing public. While the data indicate that the outbreak was already waning by the time of the pump handle removal, Dr. Snow's actions likely contributed to its decline and, at the minimum, prevented a second wave of disease spread (Tufte 1997). The toll of the cholera outbreak was devastating; 90 out of the 896 Broad Street residents died within two weeks (Johnson 2006).
Seizing the opportunity to further promote his theory, Dr. Snow quickly compiled his data on the Soho outbreak for scientific publication. He summarized his findings in a now famous map originally presented to the Epidemiological Society in December 1854 and included as Figure 1.2. Cholera deaths are represented as thick black bars, which are clearly clustered around the Broad Street pump. While many declare that Dr. Snow's map “solved the mystery” of cholera (e.g., FlowingData 2007), it was not used to get the pump handle removed (Dr. Snow's weight of on-the-ground evidence was sufficient to get a desperate board to try anything), and it did not convince the board or the general public of the waterborne theory of cholera transmission. The impact of the map has therefore been somewhat exaggerated. The miasmatic theory persisted for several decades after Dr. Snow's work until it was replaced by the germ theory, and German scientist Robert Koch isolated the cholera microbe in 1883. Ironically, Vibrio cholerae had already been identified in 1854—the same year as the Soho outbreak—by the Italian Filippo Pacini, a finding that was largely ignored by his contemporaries but later acknowledged by the parasite's renaming in 1965 to Vibrio cholerae Pacini 1854 (Johnson 2006).

FIGURE 1.2
Dr. John Snow’s famous map of the 1854 Broad Street cholera outbreak first presented in December 1854 and
later published in 1855. Available at http://www.ph.ucla.edu/epi/snow.html.

This fascinating story is presented in detail in the work of Johnson (2006). It has been summarized here to provide a historical example of how conceptual models, data analysis, and data visualization can be used to tackle even the most difficult scientific and societal problems. Dr. Snow developed a conceptual model based on his professional knowledge, collected and analyzed data quantitatively, and presented his results in an effective visualization (which also served as an additional test of his original theory). However, as previously stated, Dr. Snow unfortunately did not solve the mystery, as his contemporaries remained unconvinced. Koch (2011) proposes that Dr. Snow could have used more detailed quantitative analysis to bolster his study and potentially win over even the most ardent miasma believers. Dr. Snow did not calculate relative mortality rates in the individual pump catchments, a form of quantitative analysis that Koch (2011) asserts was practiced at the time. A rendering of Dr. Snow's data created using modern mapping techniques is presented as Figure 1.3, including clear delineation of the pump catchments and labeling of the number of cholera deaths per 1000 persons in each catchment. The mortality rate in the Broad Street pump catchment (149 deaths per 1000 persons) clearly overwhelms the rates of the adjacent catchments (Koch 2011). It is important to note that Dr. Snow produced a second version of his original map that innovatively used a Voronoi diagram to delineate the area where the Broad Street pump was the closest source of water. This results in an effect similar to the catchment-area delineations presented in Figure 1.3.

FIGURE 1.3
Cholera deaths per 1000 persons for the pump catchments in the area of the 1854 cholera outbreak. (Mortality rates and approximate georeferenced catchment, cholera death, and pump locations from Koch, T., Disease Maps: Epidemics on the Ground, University of Chicago Press, Chicago, 2011, 330 pp.) World Street Map sources: Esri, DeLorme, NAVTEQ, TomTom, USGS, Intermap, iPC, NRCAN, Esri Japan, METI, Esri China (Hong Kong), Esri (Thailand).
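
The catchment analysis described above is straightforward to reproduce with modern tools. The Python sketch below assigns each death location to its nearest pump, which is precisely the partition that a Voronoi diagram of the pump locations defines, and then computes deaths per 1000 persons in each catchment. All coordinates, death locations, and populations are invented placeholders rather than the actual Soho data of Snow (1855) or Koch (2011):

```python
import numpy as np

# Hypothetical pump locations (x, y) and catchment populations; placeholders
# only, not the real 1854 Soho data.
pumps = np.array([[0.0, 0.0], [3.0, 1.0], [1.0, 4.0]])  # index 0 = "Broad St."
population = np.array([600, 1500, 1200])                # persons per catchment

# Hypothetical recorded death locations (x, y)
deaths = np.array([[0.2, 0.3], [0.5, -0.4], [2.8, 1.2], [0.1, 0.0], [1.2, 3.5]])

# Assign each death to its nearest pump. This nearest-neighbor rule is exactly
# the partition of the plane encoded by a Voronoi diagram of the pumps.
dists = np.linalg.norm(deaths[:, None, :] - pumps[None, :, :], axis=2)
nearest = dists.argmin(axis=1)

# Deaths per 1000 persons in each pump catchment
counts = np.bincount(nearest, minlength=len(pumps))
rate_per_1000 = 1000.0 * counts / population
for i, rate in enumerate(rate_per_1000):
    print(f"pump {i}: {counts[i]} deaths, {rate:.1f} per 1000 persons")
```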
If Dr. Snow had performed these mortality calculations and presented them in such a manner, might he have ended the cholera debate once and for all? The authors believe it is highly unlikely. While the addition of mortality rates does enhance the visualization, it often takes generations for entrenched ideologies to be purged from the public mind. In some cases, it takes extreme acts of self-sacrifice, such as self-experimentation, to prove the validity of a scientific concept. While not necessary for cholera, self-experimentation was critical in demonstrating the role of the mosquito in yellow fever transmission. The yellow fever saga is brilliantly chronicled by Crosby (2006). If Dr. Snow had voluntarily consumed cholera-impacted water, or conducted a study using other human subjects, maybe the transition to the waterborne theory would have been expedited. However, apart from martyrdom or unethical experimentation, Dr. Snow contributed as much as humanly possible to the fight against cholera. At the time of this writing, cholera has still not been eradicated, and a deadly outbreak continues in the Caribbean country of Haiti. As of July 31, 2011, there have been more than 400,000 reported cases of cholera associated with the epidemic in Haiti that began in fall 2010 (World Health Organization 2011). The reader may explore how entrenched ideologies have contributed to the persistence of this outbreak.
The John Snow story is relevant to this publication for multiple reasons. For starters, it involves contaminated groundwater and associated impacts on public health. More importantly, though, it outlines the framework for conducting spatial scientific studies that is the fundamental topic of this book. The key elements of this framework are

• Conceptual model development based on education and experience in hydrogeology and available information from historical studies
• Data collection at the site-specific level
• Spatial data analysis to evaluate the original conceptual model
• Data visualization to present study conclusions and refine the conceptual model as
needed

A flow chart illustrating the relationship of these elements is provided in Figure 1.4. Note that this framework is cyclical as it is valuable to perform data visualization or analysis first before focusing on the conceptual model, particularly where historical data are limited or completely absent. However, without a conceptual model, data collection, analysis, and visualization are uninformed and can lead to erroneous interpretations. If Dr. Snow had blindly plotted the cholera deaths on his map without providing substantive technical and conceptual justification for his theory, the map could have just as easily linked cholera to a former plague burial site in the Broad Street area, which would have fit nicely into the miasmatic model (Koch 2011).

FIGURE 1.4
Flow chart of the framework for spatial investigation advanced by Dr. Snow and applicable to modern hydrogeology. Note that data analysis and visualization often occur cooperatively.
The failure to include conceptual models in hydrogeological studies results in the propagation of major errors in professional practice. Examples of such fundamental errors highlighted in this book are

• Failure to identify karst and other predominant geological features (Chapter 2)


• Data management and technical mapping errors (Chapter 3)
• Default contouring with computer programs (Chapter 4)
• Blindly accepting models published by “authorities” or “experts” including gov-
ernment agencies (Chapter 5)
• Performing groundwater remediation without a conceptual basis for the design (Chapter 8)

It is the hope of the authors that this book educates groundwater professionals and stakeholders alike about these major errors. However, more important objectives are to encourage independent thinking about current groundwater issues and to promote the use of conceptual models and advanced data analysis and visualization tools to better solve hydrogeological problems.
The breakdown of independent analysis and the failure to use appropriate conceptual and
quantitative models are symptoms of groupthink, a term discussed further in Chapter
5. Groupthink has led to innumerable engineering failures including such disasters as the
Space Shuttle Columbia accident in 2003. According to the Columbia Accident Investigation
Board (CAIB), foam shedding from space shuttles was originally viewed as a potential safety
issue early in the shuttle program. However, foam shedding occurred so frequently over the
course of 112 missions without major incident that it was eventually accepted as a nuisance
management issue rather than a significant hazard. Even when it became apparent from
analytic evidence that the Columbia accident was caused by damage to the shuttle’s thermal
protection system from a collision with detached foam debris, there remained “lingering
denial” that foam could really be the root cause (CAIB 2003). As a result, the CAIB had to
conduct impact and analysis testing using a real-life physical model to provide irrefutable
proof that foam can inflict potentially catastrophic damage to shuttle paneling.
Volume I of the CAIB report is included on the companion DVD for reference. In addition to the flawed notion that foam shedding was solely a maintenance problem, the report identifies many other factors that contributed to the fatal accident:

• The use of a semiempirical quantitative model beyond its calibration range rather
than a physics-based model
• Poor communication of decision uncertainty and risk to National Aeronautics and
Space Administration (NASA) management (see also Tufte [2006])
• Concern regarding jumping the chain of command
• Fear of being ridiculed for expressing dissenting opinion
• Decision-making processes that were obscured by scheduling metrics and political pressures

All the above factors can similarly affect projects in hydrogeology and groundwater remediation, leading to engineering failure and associated consequences.

1.2 Example Uses of This Book


With the previously stated objectives in mind, it is useful to list several example scenarios
of how this book may be used to assist different entities involved in groundwater projects.

Example 1
A consulting firm has just been awarded a contract for a Phase II/Comprehensive Site
Investigation Assessment at a former industrial facility. The primary component of the
Phase II report is a conceptual site model (CSM), which will dictate where and how
environmental data will be collected and what the significance of the data will be. This
book can help the consultant develop an effective CSM for the site, leading to defensible
characterization strategies and study conclusions. Concepts related to CSMs and site
investigations are presented in Chapters 2 and 7. Data management and contouring are
also key elements of Phase II investigations, which are discussed at length in Chapters
3 and 4, respectively.

Example 2
A hydrogeologist becomes an expert witness in a lawsuit regarding the contamination of several public water-supply wells. The hydrogeologist develops a fate and transport groundwater model that demonstrates the client is not responsible for the contamination. For the upcoming trial, the hydrogeologist has been asked to produce simplified graphics illustrating the principles behind the groundwater model and its overall conclusions. The hydrogeologist can use this book as a resource for producing data tables, graphs, maps, illustrations, and animations of modeling results that may be easily understood by the nontechnical trial jury. The hydrogeologist can also find key insight in this book regarding the use of groundwater models in professional practice. Concepts and visualizations related to groundwater models are presented in Chapter 5. Chapter 6, covering three-dimensional visualizations, may also be useful for this application. Numerous examples including animations are provided on the companion DVD.

Example 3
An environmental engineer is responsible for the design and operation of an in situ chemical oxidation (ISCO) and monitored natural attenuation (MNA) remedy at a high-profile Superfund site. An initial round of ISCO injections at the contaminant source area has been completed. The potentially responsible parties (PRPs) paying for the cleanup have just asked the environmental engineer to demonstrate to the US EPA that the source area remediation has been completed to the extent practicable and that the remedy can fully transition to long-term MNA. Similarly, the US EPA has asked the engineer to verify that MNA processes are occurring at the site to substantiate this transition. The engineer can use this book to learn about key concepts in ISCO, technical impracticability, and MNA and also as a reference for developing compelling visualizations of field data to justify remedial decisions to the US EPA. Groundwater remediation is discussed at length in Chapter 8.

Example 4
A municipality has just completed a long-term pumping test at an extraction well that is being considered for use as a public water supply. The town hydrogeologist needs to present the results of the test at a town hall meeting to local conservation committees, state regulators, and the general public. A major concern of the conservation groups and regulators is the dewatering of a small river located near the extraction well site. The hydrogeologist can use this book to better understand surface water and groundwater interactions, which are described in Chapters 2, 4, and 9. In addition, this book can help the hydrogeologist perform groundwater modeling and contouring that assess the potential for induced infiltration under pumping conditions (Chapters 4 and 5). Lastly, the hydrogeologist can find example visualizations throughout this book and the companion DVD that may be helpful in developing simplified data tables, graphs, maps, and illustrations for the town hall meeting, helping nontechnical stakeholders clearly understand the study conclusions.

References
Ayotte, J. D., Montgomery, D. L., Flanagan, S. M., and Robinson, K. W., 2003. Arsenic in groundwater
in eastern New England: Occurrence, controls, and human health implications. Environ. Sci.
Technol., 37(10), 2075–2083.
Columbia Accident Investigation Board, 2003. The Columbia Accident Investigation Board Report:
Volume I. The National Aeronautics and Space Administration and the Government Printing
Office, Washington, DC, 248 pp.
Crosby, M. C., 2006. The American Plague: The Untold Story of Yellow Fever, the Epidemic that Shaped our
History. The Berkley Publishing Group, New York, 308 pp.
FlowingData, 2007. John Snow's Famous Cholera Map. Available at http://flowingdata.com/2007/09/12/john-snows-famous-cholera-map/. Accessed July 30, 2011.
Frerichs, R. R., 2011. John Snow—A Historical Giant in Epidemiology. UCLA Department of Epide­
miology, School of Public Health. Available at http://www.ph.ucla.edu/epi/snow.html.
Accessed August 21, 2011.
Johnson, S., 2006. The Ghost Map. Riverhead Books, New York, 299 pp.
Koch, T., 2011. Disease Maps: Epidemics on the Ground. The University of Chicago Press, Chicago, 330 pp.
Snow, J., 1855. On the Mode of Communication of Cholera, 2nd Edition. Churchill, London.
Susca, P., and Klevens, C., 2011. NHDES Private Well Strategy Private Well Working Group.
Drinking Water and Groundwater Bureau, N.H. Department of Environmental Services (NHDES).
Available at http://www.dartmouth.edu/~toxmetal/program-resources/research-translation/
arsenic consortium.​html. Accessed August 15, 2011.
Tufte, E. R., 1997. Visual Explanations: Images and Quantities, Evidence and Narrative. Graphics Press,
Cheshire, 156 pp.
Tufte, E. R., 2006. Beautiful Evidence. Graphics Press, Cheshire, 213 pp.
United States Geological Survey (USGS), 2003. Arsenic Concentrations in Private Bedrock Wells in
Southeastern New Hampshire. U.S. Department of Interior, USGS Fact Sheet 051-03.
World Health Organization, 2011. Haiti: Cholera Epidemic Reached a Second Peak and Case
Numbers Now Decreasing. Available at http://www.who.int/hac/crises/hti/en/. Accessed
August 28, 2011.
2 Conceptual Site Models

2.1 Definition
A hydrogeological conceptual site model (CSM) is a description of various natural and
anthropogenic factors that govern and contribute to the movement of groundwater in the
subsurface. Simply put, it is the answer to the following key questions:

• Where is the groundwater coming from?
• Through what type of porous media is it flowing?
• How much of it is there, and how fast is it flowing?
• Where is it going?
• How did the groundwater system behave in the past, and how will it change in the
future based on both natural and anthropogenic influences?

When the groundwater is contaminated, a CSM also includes answers to similar general
questions regarding the contaminant(s). ASTM International (2008), formerly the American
Society for Testing and Materials, defines a conceptual site model for contaminated sites as
follows: “A written or pictorial representation of an environmental system and the biologi-
cal, physical, and chemical processes that determine the transport of contaminants from
sources through environmental media to environmental receptors within the system.”
An accurate CSM is critical in satisfying the ultimate goal of any project, which, in
hydrogeological practice, typically involves a decision regarding water supply, protection
of human health and the environment, or both.
A schematic (qualitative) CSM of an alluvial hydrogeological system consisting of sev-
eral water-bearing zones (aquifers, hydrostratigraphic units) is presented in Figure 2.1
together with major areas of groundwater recharge. Insets show schematic CSMs focused
on groundwater contamination discovered at two sites. Although these three CSMs, one
regional and two local, may have been developed independently, it is obvious that each
would benefit greatly from incorporating information collected for seemingly different
purposes at the other sites. In many instances, the success of a CSM will depend on the
ability of the project team to gather relevant information at different scales and from dif-
ferent sources, and integrate it with the data collected during site-specific investigations.
This concept is discussed in detail in Chapter 7.
The complexity and quantitative aspects of a CSM vary broadly depending on project
goals and the investigative stages of data collection. For example, a preliminary model
developed during early phases of water resource planning on a watershed scale may be
qualitative in nature and limited to general descriptions of underlying aquifers, their likely


FIGURE 2.1
Schematic regional and two local CSMs in an alluvial aquifer. (a) Three-phase contaminant plume developed
from a leaky underground storage tank at a gas station. (Modified from U.S. EPA 2000.) (b) Contaminant plumes
developed from a source of DNAPLs at the land surface.

recharge and discharge areas, and existing groundwater users. This information may be
readily available from various government agencies such as geological surveys in a for-
mat appropriate for direct presentation to applicable decision makers. In contrast, a CSM
adopted to serve as the basis for the design of a groundwater remediation (aquifer restora-
tion) system is, by default, very detailed and quantitative. Such a CSM is usually the result
of lengthy and expensive site-specific investigations and includes quantification of risk,
time-dependent (changing in time) groundwater flow rates and velocities, and fate and
transport of contaminants. CSMs that include quantitative aspects are increasingly relying
on mathematical models (analytic and numeric) during various stages of CSM develop-
ment. Mathematical models enable testing of hypotheses and can make predictions on,
for example, sustainable rates of groundwater extraction for water supply or probable con-
taminant concentrations at points of exposure. Quantitative models also allow evaluation
of the uncertainty associated with management decisions.

The main purpose of a CSM is to provide a single visual product composed of text,
pictures, and, if necessary, animations, where all the information about the site is easily
accessible and can be used for decision making at any stage of project implementation. At
the same time, it is very important to understand that the CSM is a dynamic entity, con-
tinuously refined and updated. At the initial stage of the project, it is possible to have two
or three competing preliminary CSMs because the readily available information may not
lead to a definitive concept. As the project progresses and new information is collected,
the CSM becomes more detailed and quantitative, helps plan additional investigations,
and focuses the project team on feasible solutions. These solutions (such as design of a well
field for water supply or a bioremediation system for aquifer restoration) will be possible
only when there is consensus among all involved parties that the final CSM accurately
represents the hydrogeology of the site at the scale of interest—a concept further discussed
below. The remainder of this chapter focuses on key physical elements of hydrogeologi-
cal conceptual site models. Elements of the CSM related to contaminant exposure (i.e., the
exposure or receptor profile) are discussed in Chapter 8.

2.2 Physical Profile


The most important decision made very early in the process of developing a CSM is selec-
tion of appropriate temporal and spatial scales of data collection, both regional and local
(site-specific). When deciding about the right scales, it helps to understand that it is always
easier to reduce the extent of investigation and drop some of the initially collected informa-
tion than to expand the scale of investigation when the project is nearing the completion
deadline. Whatever the case might be, one common but erroneous approach to describing
the physical profile of a site is to view it as something less important and therefore not
deserving of detailed discussion. Too many CSMs include very dry, cookie-cutter, brief
sections on geomorphology (topography), hydrology, climate, and land cover/land use
while failing to discuss their importance for the site hydrogeology. Often these discus-
sions are blindly copied directly from previous reports or from regional-scale documents
produced by the United States Geological Survey (USGS), the United States Environmental
Protection Agency (U.S. EPA), or other federal or state government agencies. This practice
adds zero value to the CSM, and can result in the propagation of serious conceptual errors
when the copied text is not at all applicable to the scale of interest, or presents grossly
dated or obsolete information. In addition, the key components of a site physical profile
listed above are usually described in a static manner even though most, if not all, of them
change in time as a result of various natural and anthropogenic factors.

2.2.1 Geomorphology (Topography)


One important aspect of geomorphology is that, on a regional scale, natural directions of
groundwater flow can be related to surface topography. Just like surface water, groundwater
flows from a higher hydraulic head toward a lower hydraulic head, that is, from pronounced
topographic highs (hills) to pronounced topographic lows (valleys), respectively. This is
generally true regardless of the underlying geology (rock type) with the occasional excep-
tions of confined aquifers when observed locally and aquifers developed in karstified rocks
(see Section 2.3.4). The concept is shown in Figure 2.2, which is an example from a guidance
manual on developing conceptual hydrogeological models for fractured rock aquifers in the
Piedmont and Mountain regions of North Carolina, USA. This manual of the North Carolina
Department of Environment and Natural Resources explains, in detail, the importance of
topography for determining groundwater recharge and discharge zones and flow directions.
Figure 2.2 shows that the path of natural groundwater movement is relatively short
and almost invariably restricted to the zone underlying the topographic slope extend-
ing from a topographic divide to an adjacent stream. Thus, the concept of a local slope–
aquifer system applies. On the opposite sides of an interstream topographic divide are
two similar slope–aquifer systems as shown by A and B. The region as a whole is a

FIGURE 2.2
Conceptual view of double slope–aquifer systems and its compartments (C). All arrows indicate ground-
water flow directions. (Modified from LeGrand, H. W., Sr., A Master Conceptual Model for Hydrogeological Site
Characterization in the Piedmont and Mountain Region of North Carolina: A Guidance Manual, North Carolina
Department of Environment and Natural Resources, Division of Water Quality, Groundwater Section, 2007.)

network of slope–aquifers where an individual aquifer represents a unit of the ground-
water flow regime that is seemingly separated and free of impact from adjacent, similar
units. Commonly, the slope–aquifer system includes smaller hill-and-dale configurations
that are observed as topographic spurs (ridges branching from a main ridge or mountain
crest). Similar undulations, although of lesser amplitude, may also occur in the underlying
water table and form important natural groundwater flow-control features. The crests of
the water table undulations represent natural groundwater divides within a slope–aquifer
system and may limit the area of influence of wells or the extent of contaminant plumes
located within their boundaries (LeGrand 2007).
Natural topography can be artificially altered in many ways resulting in the creation
of new topographic highs and lows and the redistribution of original groundwater flow
directions. Some well-known examples include landfills, below which there is groundwa-
ter mounding caused by increased infiltration and waste leachate, and quarries and mine
works, which act as new groundwater discharge zones. For example, Figure 2.3 shows the
region of Butte, MT, which had already earned the nickname “The Richest Hill on Earth”

FIGURE 2.3
This image of the Berkeley Pit in Butte, MT, shows many features of the mine workings, such as the terraced
levels and access roadways of the open mine pits (gray and tan sculptured surfaces). A large gray tailings pile
of waste rock and an adjacent tailings pond appear to the north of the Berkeley Pit. Color changes in the tailings
pond result primarily from changing water depth. This astronaut photograph ISS013-E-63766 was acquired
August 2, 2006, with a Kodak 760C digital camera using an 800-mm lens and is provided by the ISS Crew Earth
Observations experiment and the Image Science & Analysis Group, Johnson Space Center. (From NASA, Earth
Observatory, http://earthobservatory.nasa.gov/, accessed March 2011.)

by the end of the 19th century because of mining for gold, silver, and copper. Demand for
electricity increased demand for copper so much that by World War I, the city of Butte was
a boomtown. Well before World War I, however, copper mining had spurred the creation
of an intricate network of underground drains and pumps to lower the groundwater level
and continue the extraction of copper. Water extracted from the mines was so rich in dis-
solved copper sulfate that it was also mined (by chemical precipitation) for the copper it
contained. In 1955, copper mining in the area expanded with the opening of the Berkeley
Pit. The mine took advantage of the existing subterranean drainage and pump network to
lower groundwater until 1982 when a new owner suspended operations. After the pumps
were turned off, water from the surrounding rock basin began seeping into the pit. By the
time an astronaut on the International Space Station took this picture, water in the pit was
more than 275 meters (900 feet) deep. Because its water contains high concentrations of
metals such as copper and zinc, the Berkeley Pit is listed as a federal Superfund site. The
Berkeley Pit receives groundwater flowing through the surrounding bedrock and acts as a
terminal pit or sink for these heavy metal–laden waters, which can be as strong as battery
acid. Ongoing cleanup efforts include treating and diverting water at locations upstream
of the pit to reduce inflow and decrease the risk of accidental release of contaminated
water from the pit into local aquifers or surface streams (NASA 2011).
Another key concept of geomorphology is that specific landforms are closely related
to the underlying geology, that is, rock types and the tectonic fabric that includes folds,
fractures, faults, and other discontinuities in the rock mass. Together, the geology and
the geomorphologic processes shaping the land surface play key roles in the formation of
aquifers and the resulting characteristics of groundwater flow at the site. Thanks to rapid
developments in remote sensing technology and easy access to various Internet (online)
sources of Earth imagery and digital elevation data, it is now possible to perform a very
detailed visual analysis of geomorphologic features without even visiting a site. This, how-
ever, is not recommended by the authors—one should always make every attempt to visit
the site he or she is working on. As emphasized throughout this book, site topography can
be displayed in three dimensions, rotated and viewed from different angles by using a
variety of commercial and public-domain computer programs. Aerial and satellite images,
geologic maps, and other thematic maps can be easily draped over digital 3D topography
and analyzed in the same fashion (Figures 2.4 and 2.5). In addition, every working profes-
sional should have ready access to Google Earth, which is now the default, free platform for
visualization of land surface features. Figures 2.6 through 2.18 illustrate many benefits of
analyzing remote sensing imagery and digital elevation models (DEMs) when developing
hydrogeological CSMs.
The Kunlun fault shown in Figure 2.6 is one of the gigantic strike-slip faults that bound
the north side of Tibet. Left-lateral motion along the 1500-km (932-mi) length of the Kunlun
has occurred uniformly for the last 40,000 years at a rate of 1.1 cm/year, creating a cumula-
tive offset of more than 400 m. In this image, two splays of the fault are clearly seen cross-
ing from east to west. The northern fault juxtaposes sedimentary rocks of the mountains
against alluvial fans. Its trace is also marked by lines of vegetation, which appear red
in the image. The southern, younger fault cuts through the alluvium. Box A shows wet
ground caused by discharge of groundwater from the alluvial fans. The dark linear area in
the outlined box B is wet ground where groundwater has ponded against the fault (NASA
2011).
Songhua River just upstream (west) of the city of Harbin, China, is shown in Figure 2.7.
The main stem of the river and its myriad channels appear deep blue, winding from bot-
tom left toward center right. To the west of the river, shallow lakes appear electric blue.

FIGURE 2.4
Portion of the geologic map from Hydrogeologic Atlas of the Hill Country Trinity Aquifer, Blanco, Hays, and Travis Counties, Central Texas. (Modified from
Wierman, D. A. et al., Hydrogeologic Atlas of the Hill Country Trinity Aquifer, Blanco, Hays, and Travis Counties, Central Texas, 2010.)


FIGURE 2.5
Geologic map shown in Figure 2.4 draped over a DEM. (Courtesy of Gavin Hudgeons, AMEC.)

The surrounding landscape reveals the Manchurian Plain in shades of brown, crossed
by pale lines (roads) and spots representing villages and towns. The extreme flatness of
the Manchurian Plain has caused the river to meander widely over time. The result of the
meandering is that the river is surrounded by a wide plain that is filled with swirls and
curves, showing paths the river once took (NASA 2011). The plain includes classic features
of meandering rivers, such as oxbow lakes—semicircular lakes formed when a meander
is cut off from the main channel by river-deposited sediment. As meandering rivers, such
as this one, shift their positions across the valley bottom, they create a complicated pattern
of heterogeneous sediment deposits of varying grain sizes both laterally and vertically.
Consequently, groundwater flow rates, directions, and velocities are quite convoluted, not
just in the three-dimensional space but also in time because of changing water levels in the
main river and its tributaries, including flooding (see also Figure 4.40).
Analysis of topography is particularly useful when deciphering possible geologic
reasons for a certain type of surface drainage as illustrated schematically in Figure 2.8.
Although topographic maps printed on paper will continue to be utilized for years by
default (and in some parts of the world may still be the only available option), DEMs offer
many advantages as illustrated in Figures 2.9 and 2.10. For example, areas denoted as A
and B have a higher density of surface drainage features and steeper slopes (more closely
spaced contours), which are clearly visible on both the topographic map in Figure 2.9 and
the DEM in Figure 2.10. This difference may be the result of a less permeable rock type, dif-
ferent slopes of sedimentary layers, or some other geologic reason such as local uplifting
(tectonic movement) that promotes vertical erosion. However, more subtle landforms in
the central flood plain such as the oxbow lakes and river terraces that are clearly visible on
the bottom of Figure 2.10 are less obvious on the printed map or not even depicted because
of a relatively coarse contour interval of 20 ft. In contrast, the high-resolution DEM derived
from Light Detection and Ranging (LiDAR) topographic data even shows rows of crops in
some of the flood-plain fields. In terrains like this, a hydrogeologic site visit would focus

FIGURE 2.6
Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) satellite image of the Kunlun
fault, north Tibet. This visible light and near infrared scene was acquired on July 20, 2000. (Courtesy of NASA/
GSFC/MITI/ERSDAC/JAROS and the U.S./Japan ASTER Science Team. From NASA, Earth Observatory, http://
earthobservatory.nasa.gov/, accessed March 2011.)

FIGURE 2.7
ASTER image of the Songhua River upstream (west) of the city of Harbin, China, acquired on April 1, 2002.
(Courtesy of NASA/GSFC/METI/ERSDAC/JAROS and the U.S./Japan ASTER Science Team. From NASA,
Earth Observatory, http://earthobservatory.nasa.gov/, accessed March 2011.)


FIGURE 2.8
Criteria for interpretation of drainage patterns on topographic maps and remote sensing imagery. Top left:
(a) Drainage density is high in less permeable rocks; (b) more permeable rocks have fewer drainage features;
(c) surface drainage is disintegrated or missing in karstified rocks. Top right: (a) Dendritic drainage is character-
istic of homogeneous and isotropic geologic terrains; (b) rectangular drainage is common in folded, stratified
(layered) sedimentary rocks dissected by perpendicular fractures and faults; (c) circular drainage (ring pat-
tern) is characteristic for domes or partially destroyed calderas. Bottom: Faults can be inferred from drainage
features such as long, straight stream segments (a), aligned segments of neighboring streams that change flow
directions abruptly (b), and segments of different streams extending (aligning) over the ridges (c). (Modified
from Dimitrijević, M., Geolosko Kartiranje (Geological Mapping), ICS, Beograd, 1978.)

FIGURE 2.9
Part of printed 1:24,000 USGS topographic quadrangle with a contour interval of 20 ft. Note denser surface
drainage in areas A and B (see also Figure 2.8).

on the river terrace and other escarpments looking for springs, seeps, and rock (sediment)
outcrops. Planning such a visit would greatly benefit from having a “living” 3D image
of the topography. Another advantage of the DEM visualization is that it is likely easier
for nontechnical audiences to understand than 2D contour maps, and it is therefore well
suited for presentations and reports prepared for the general public.
National Elevation Dataset (NED) high-resolution data from the USGS is typically derived
from LiDAR technology or digital photogrammetry and is often breakline enforced to
account for linear relief features. If collected at a ground sample distance no coarser than
5 m, such data may also be available within the NED at a resolution of 1/9 arcsec, now

FIGURE 2.10
Top: View of 3D surface created from high-resolution digital elevation data for the same area shown in Figure
2.9. Bottom: Enlarged view of the 3D surface showing fine detail such as roads, meanders, terrace scarps, and
rows of crops.

downloadable for some portions of the United States (see USGS’s Seamless Data Warehouse
at http://seamless.usgs.gov/ or The National Map Viewer at http://viewer.nationalmap.gov/viewer/).
The new generation of USGS topographic maps at scale 1:24,000 (topographic
quadrangles) is based on these high-resolution elevation data, incorporates high-resolution
photo images, and has various layers of information all available in digital, georeferenced,
pdf format. An example is shown in Figures 2.11 through 2.15. Both the digital map with
all its layers, including the aerial photograph, and the accompanying high-resolution NED
(digital raster file) can be downloaded and analyzed as illustrated. The NED file can be

FIGURE 2.11
Part of the new 2011 edition of the USGS Lewisburg, WV, quadrangle showing contour and hydrography layers
only.

contoured and displayed in 3D with some of the commonly used programs such as Surfer
by Golden Software, Inc. or Esri ArcScene (see Chapter 4).
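For readers who prefer scriptable, open-source tools to the commercial packages named above, the same workflow of hillshading a NED grid and overlaying contours at an arbitrary interval can be sketched in a few lines of Python. This is a minimal illustration, not an endorsement of a particular package; the input file name ned_tile_elevations.npy is hypothetical, and the raster-extraction step (e.g., from a GeoTIFF) is not shown.

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import LightSource

# Hypothetical input: a NED tile already extracted to a 2D elevation array
dem = np.load("ned_tile_elevations.npy")   # elevations in feet

ls = LightSource(azdeg=315, altdeg=45)     # northwest illumination
shaded = ls.hillshade(dem, vert_exag=2.0)  # vertical exaggeration of 2x

fig, ax = plt.subplots(figsize=(8, 6))
ax.imshow(shaded, cmap="gray")
# Contours at a 3-ft interval, comparable to Figure 2.14
levels = np.arange(np.floor(dem.min()), np.ceil(dem.max()), 3.0)
ax.contour(dem, levels=levels, colors="black", linewidths=0.3, origin="upper")
ax.set_axis_off()
plt.show()
```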
Figure 2.11 shows contours and hydrography layers of a part of the new 2011 USGS
Lewisburg, WV, quadrangle illustrating unique features of karst topography. Numerous
sinkholes including deep, uvala-like closed depressions developed in the Greenbrier
Limestone are visible in the left portion of the map and a smaller area in the southeast.
Closed depressions, such as these, and an absence of surface drainage (flowing streams)
are the main characteristics of mature karst terrains. In contrast, less permeable rocks
such as shales of the McCrady Formation and Pocono Group in the central and eastern
portions of the map have densely developed surface drainage. Figure 2.12 shows two
views of the high-resolution NED for another portion of the same quadrangle and some
of the adjacent areas where pronounced lines of sinkholes and other relief lineaments are
clearly visible. These linear features, likely formed by faulting and contrasting lithology,
are main indicators of preferential flow paths within the underlying karst aquifer. The
entire karst area of the Lewisburg quadrangle is draining at Davis Spring, the largest
spring in West Virginia (the spring is located to the southeast on the adjacent Asbury,
WV, quadrangle).
Because of legibility issues, printed topographic maps are limited by the contour
interval even when a finer resolution of elevation data is available. This is illustrated with
Figures 2.13 through 2.15. The maps’ contour interval of 20 ft is not sufficient to depict
smaller sinkholes, which are clearly visible on the aerial photograph. Having the NED file
of the same quadrangle enables contouring at any desired interval including 3 ft (Figure
2.14), which should suffice for identifying virtually all sinkholes visible on the aerial
photograph. Better yet is to use a combination of contours and a color-shaded surface
map (Figure 2.15), which can be rotated, zoomed in, and displayed with different vertical
exaggeration. As a small incentive to our speleological (karst) colleagues, the authors will

FIGURE 2.12
3D view of a portion of the Lewisburg, WV, quadrangle and the adjacent areas to the west. Bottom: Blowup of
the 3D surface showing fine detail depicted by the high-resolution NED.

FIGURE 2.13
Karst features depicted with the 20-ft contours versus all features visible on the aerial photograph of the new
2011 edition of the Lewisburg, WV, quadrangle. The photograph has a resolution of approximately 0.5 m.

FIGURE 2.14
Contours of the same area shown in Figure 2.13 (top), created from the NED file; the contour interval is 3 ft.

FIGURE 2.15
Contours with the 5-ft interval superimposed on the shaded color surface of the same area shown in Figures
2.13 and 2.14.

FIGURE 2.16
Shaded relief map of the greater San Antonio, TX, area available from the National Atlas of the United States
(www.nationalatlas.gov). Arrows indicate some of the lineaments visible on this small-scale map.

donate a signed copy of this book (and possibly some other goodies as well) to the first
speleological team that sends us a nice-looking georeferenced map that shows the spatial
relationship between the land surface features visible on the previous figures and the lon-
gest explored cave in the area.
For various, and sometimes mystifying, reasons, geologic maps do not always show otherwise
obvious features that may have an underlying geologic reason and are of critical
importance to a hydrogeological conceptual site model. One such example is the Balcones
Fault Zone in Texas, which is responsible for the current geometry of the Edwards
Aquifer, one of the most extensive and prolific karst aquifers in the world. Most, if not
all, geologic maps of this large area, at varying scales, only show faults of the northeast–
southwest strike, which are universally interpreted as the most important geologically.
However, even very general relief maps at small scales such as in Figure 2.16 clearly show
long prominent lineaments trending in other directions, including perpendicular to the
main northeast–southwest system of faults. Even a nongeologist would be able to iden-
tify northwest–southeast striking faults in this figure. In addition to being perfect candi-
dates for preferential groundwater flow paths within the Edwards Aquifer, these faults
(deemed unimportant lineaments by some) may be transferring significant quantities of
groundwater from the adjacent Trinity Aquifer to the Edwards Aquifer. Figure 2.17 illus-
trates advantages of DEMs for analyzing topographic lineaments in the midsection of the
Balcones Fault Zone between San Antonio and Austin.

FIGURE 2.17
Two DEM views of the central portion of the Balcones Fault Zone between San Antonio and Austin, TX. Some
topographic lineaments may be more apparent when viewed from different angles. Note the two blue ellipses
for orientation.

2.2.2 Hydrology
Surface water and groundwater are inseparable parts of the same hydrologic cycle and, as
such, influence each other at practical (site-specific) scales for most projects. However, some
working professionals do not always appreciate this simple fact and may, for example, study
a groundwater problem without paying any attention to a surface stream in the vicinity.
Even when a project at hand involves a deep, confined aquifer seemingly separated from the
rest of the world, it is highly recommended to make an attempt to understand its recharge
and discharge areas. For all projects involving unconfined aquifers, it is mandatory to define
likely hydraulic roles of nearby surface water bodies (e.g., streams, lakes, drainage ditches)
with respect to the movement of groundwater. In this sense, the term hydrology refers to
flow rates and stages (water elevation) of surface water bodies and their influence on the
exchange of flow between surface water and groundwater. In the field of water resources
management, groundwater and surface water are increasingly seen as a single intercon-
nected resource that must be managed holistically. Additional discussion regarding inte-
grated management of surface water and groundwater resources, including examples of
combined surface water–groundwater numeric models, is provided in Chapter 9.
The majority of perennial surface streams would not have permanent flow without
groundwater contribution called baseflow. Excessive withdrawal of groundwater may
cause depletion or complete cessation of baseflow and disappearance of wetlands abutting
the surface stream in question. Conversely, changes in land use, such as urban develop-
ment, may alter patterns of surface water runoff, increase average flow in the receiving
streams, and reduce aquifer recharge. Contamination of surface streams adjacent to well
fields used for water supply may threaten groundwater quality if the contaminant enters
the underlying aquifer, and discharge of contaminated groundwater into a surface water
body may have negative impact on human health and the environment.
Examples in Chapter 4 illustrate various hydraulic relationships between surface streams
and underlying unconfined aquifers including their importance for drawing contour maps
of the potentiometric surface and determining groundwater flow directions. However,
depending on local hydrogeologic and climatic conditions and human impacts, such
as stream regulation with dams and locks, the same stream may be losing or gaining
water in different sections (reaches), and this pattern may change in time. Major unregu-
lated meandering streams with large flood plains that are seasonally flooded (see Figure
2.18) may have quite complicated surface water–groundwater relationships, especially if
there are oxbow lakes and buried meanders. In addition, large streams are often regional
groundwater discharge locations for deeper confined aquifers, which complicates things
even further (Figure 2.19). Consequently, trying to determine representative groundwater
flow directions and contaminant plume geometry at a local site based solely on quarterly
water level measurements may be a daunting, if not impossible, task.
Streams without permanent gauges and sufficient data records present special chal-
lenges when determining characteristic flows needed for various calculations. Long-term
minimum baseflow is of particular importance for various applications, such as determin-
ing maximum allowable loading rates of a groundwater contaminant still protective of
in-stream water-quality standards. In the United States, this flow is typically referred to as
7Q10, which means seven-day, consecutive low flow with a 10-year return frequency (i.e.,
the lowest stream flow for seven consecutive days that would be expected to occur once in
10 years). Flow measurement at successive stream segments is a common method of deter-
mining if the stream is losing or gaining water between the segments. However, because
of the variability of flow conditions in the same stream and the associated potential

FIGURE 2.18
Two composite color satellite images of the White River, AR. ASTER on NASA’s Terra satellite captured the top
image on April 7, 2008, while the river was still rising before reaching one of the worst flood levels on record.
The bottom image is from April 14, 2006, when water levels were closer to normal. Bright green vegetation
flanks the river in the 2006 image. Much of the land is forested and preserved in the Cache River National
Wildlife Refuge. Around the forest, agricultural fields create a checkerboard of tan and green. Some of the fields
are flooded in the 2008 image. (NASA image created by Jesse Allen, using data provided courtesy of NASA/
GSFC/METI/ERSDAC/JAROS and U.S./Japan ASTER Science Team; caption by Holli Riebeek. From NASA,
Earth Observatory, http://earthobservatory.nasa.gov/, accessed March 2011.)

FIGURE 2.19
Groundwater flow directions (shown with arrows) in a flood plain of a meandering stream, which also acts as a
groundwater discharge zone for the regional aquifer. (Modified from Kresic, N., Hydrogeology and Groundwater
Modeling, Second Edition, CRC Press, Taylor & Francis Group, Boca Raton, FL, 807 pp., 2007.)

measurement errors, this method should be applied with great care in order to avoid false
conclusions. For example, if only one set of measurements is made at a few locations with
several or more hours separating them and the stream flow is under the influence of recent
precipitation, the results would almost certainly be misleading because the flow wave is
moving rapidly. It is therefore best if the flow measurements are performed after a long
period without precipitation and the applied method is based on a continuous record-
ing of the stream stage at successive stream segments. Flow hydrographs derived in this
way provide information on the actual change of volume of water between the segments,
which is the only real measure of gain or loss.
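As a simple numerical illustration of the 7Q10 definition given above, the statistic can be approximated from a daily gauging record with a short Python script. The sketch below assumes a pandas Series with a DatetimeIndex and takes the empirical 0.1 quantile of the annual 7-day minima; regulatory determinations typically fit a log-Pearson Type III distribution to that annual series instead.

```python
import numpy as np
import pandas as pd

def seven_q_ten(daily_flow: pd.Series) -> float:
    """Approximate 7Q10 from a daily streamflow series (DatetimeIndex).

    7Q10 = lowest 7-day consecutive average flow expected once in 10 years,
    i.e., the annual 7-day-minimum flow with a 10% non-exceedance probability.
    """
    q7 = daily_flow.rolling(window=7).mean()             # 7-day moving average
    annual_min = q7.groupby(q7.index.year).min().dropna()
    return float(np.quantile(annual_min.to_numpy(), 0.1))

# Hypothetical usage with a daily record exported from a gauging station:
# flow = pd.read_csv("gauge.csv", index_col="date", parse_dates=True)["cfs"]
# print(seven_q_ten(flow))
```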
One of the most precise methods for measuring flow of smaller streams is tracer dilution
gauging, which can be performed with a fluorescent dye or, more simply, a salt solution. In
addition, when accompanied with chemical analyses, it is very useful when determining if
and where there is an increased inflow of contaminated groundwater. Figures 2.20 and 2.21
show results of a dye tracer study used to assess the dry-weather baseflow of a stream and
groundwater seepage inflows to the stream segment. The rhodamine dye was injected at the
most upstream location of the study segment at a constant rate and constant known concentra-
tion. A water-quality meter with a rhodamine sensor was placed in-stream at the most down-
stream sampling location and programmed to continuously record rhodamine concentrations.
The dye was injected continuously into the stream until the most downstream dye concentra-
tions reached a plateau. Note that if salt solution were used as the tracer, one would mea-
sure in-stream conductivity at the downstream location. Once dye concentrations along
the stream reached a plateau, surface water samples were collected at each sampling loca-
tion and analyzed for rhodamine concentrations and concentrations of a constituent of
concern (COC). Stream flow rate was determined for each stream sampling location using
the rhodamine concentration based on the observed dilution of the tracer. As can be seen
in Figure 2.21, near the middle of the study reach, the concentration of the COC increases
notably and then decreases gradually, indicating that an influx of impacted groundwater
to the stream is occurring along a preferential flow path through a short stream segment.
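The flow computation behind dilution gauging reduces to a steady-state tracer mass balance once the downstream plateau is reached. A minimal sketch in Python follows; the injection rate and concentrations in the usage example are purely hypothetical.

```python
def stream_flow_from_dilution(q_inj, c_inj, c_plateau, c_background=0.0):
    """Stream discharge upstream of a constant-rate tracer injection.

    Steady-state mass balance at plateau:
        q_inj * c_inj + Q * c_background = (Q + q_inj) * c_plateau
    Solved for the unknown stream flow Q. Use consistent units throughout.
    """
    return q_inj * (c_inj - c_plateau) / (c_plateau - c_background)

# Hypothetical example: 0.01 L/s of a 1 g/L (1.0e6 ug/L) rhodamine solution
# injected; plateau of 100 ug/L above background at the downstream sensor.
q = stream_flow_from_dilution(q_inj=0.01, c_inj=1.0e6, c_plateau=100.0)
print(f"Stream flow ~ {q:.0f} L/s")  # ~100 L/s, roughly 3.5 cfs
```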

FIGURE 2.20
Rhodamine concentration at the downgradient location during an instream dye tracing study used to deter-
mine baseflow in a small stream. (Courtesy of Lisa Pfau, Larry Neal, and Margaret Tanner, AMEC).

Another important aspect of determining baseflow of surface streams is its application
in water budget and groundwater recharge studies (Kresic and Mikszewski 2009). In natu-
ral long-term conditions and the absence of artificial groundwater withdrawal, the rate
of groundwater recharge in a drainage basin of a permanent gaining stream is equal to
the rate of groundwater discharge. Assuming that all groundwater discharges into the
surface stream, either directly or via springs, it follows that the stream baseflow equals
the groundwater recharge in the basin. This simple concept is illustrated in Figure 2.22.




FIGURE 2.21
Contaminant concentration and stream flow rate along a stream segment determined from the in-stream dye
tracer study. (Courtesy of Lisa Pfau, Larry Neal, and Margaret Tanner, AMEC.)


FIGURE 2.22
Estimation of aquifer recharge from surface stream baseflow. (Modified from Kresic, N., Hydrogeology and
Groundwater Modeling, Second Edition, CRC Press, Taylor & Francis Group, Boca Raton, FL, 807 pp., 2007.)
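With hypothetical numbers, the unit conversions behind the Figure 2.22 calculation can be written out as follows. This is a back-of-the-envelope sketch, not a substitute for a proper basin water budget.

```python
# Hypothetical basin: long-term mean baseflow of 10 cfs draining 20 mi^2
baseflow_cfs = 10.0
area_mi2 = 20.0

area_ft2 = area_mi2 * 5280.0**2                       # 1 mile = 5,280 ft
recharge_ft_day = baseflow_cfs * 86400.0 / area_ft2   # cfs -> ft^3/day, / area
recharge_in_yr = recharge_ft_day * 12.0 * 365.25      # ft/day -> inches/year

print(f"Recharge ~ {recharge_in_yr:.1f} in/yr")       # ~6.8 in/yr
# With ~48 in/yr of precipitation, recharge is roughly 14% of precipitation.
```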

However, its application is not always straightforward, and it should be based on a thor-
ough understanding of the geologic and hydrogeologic characteristics of the basin. The
following examples illustrate some situations where baseflow alone should not be used to
estimate actual groundwater recharge (Kresic 2007):

• Surface stream flows through a karst terrain where topographic and groundwater
divides are not the same. The groundwater recharge based on baseflow may be
grossly overestimated or underestimated depending on the circumstances.
• The stream is not permanent, or some river segments are losing water (either
always or seasonally); locations and timing of the flow measurements are not ade-
quate to assess such conditions.
• There is abundant riparian vegetation in the stream floodplain, which extracts a
significant portion of groundwater via evapotranspiration.
• There is discharge from deeper aquifers, which have remote recharge areas in
other drainage basins.
• A dam regulates the flow in the stream.

Most techniques for estimating baseflow are based on graphical separation of surface
stream hydrographs into two major components: flow generated by surface and near-
surface runoff and flow generated by discharge of groundwater. Although some profes-
sionals view this approach as a convenient fiction because of its subjectivity and lack of
rigorous theoretical basis, it does provide useful information in the absence of detailed
(and expensive) data on many surface water runoff processes and drainage basin char-
acteristics that contribute to streamflow generation. Risser et al. (2005) present a detailed
application and comparison of two automated methods of hydrograph separation for
estimating groundwater recharge based on data from 197 streamflow gauging stations in
Pennsylvania. The two computer programs—PART and RORA (Rutledge 1993, 1998, 2000)
developed by the USGS—are in public domain and available for free download from the
USGS Web site. The PART computer program uses a hydrograph separation technique

to estimate baseflow from the streamflow record. The RORA computer program uses
the recession-curve displacement technique of Rorabaugh (1964) to estimate groundwa-
ter recharge from each storm period. The RORA program is not a hydrograph-separation
method; rather, recharge is determined from displacement of the streamflow–recession
curve according to the theory of groundwater drainage.
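Neither PART nor RORA is reproduced here, but the general idea of automated hydrograph separation can be illustrated with the widely used one-parameter recursive digital filter of Lyne and Hollick (1979). The sketch below is that generic filter, not the USGS algorithms, and the parameter value is a commonly cited default rather than a site-specific choice.

```python
import numpy as np

def lyne_hollick_baseflow(q, alpha=0.925):
    """One-parameter recursive digital filter for baseflow separation.

    q: daily streamflow array; alpha of ~0.9-0.95 is typical. Quickflow is
    filtered out recursively and constrained to be non-negative; baseflow
    is the remainder, bounded between zero and total flow.
    """
    q = np.asarray(q, dtype=float)
    quick = np.zeros_like(q)
    for t in range(1, len(q)):
        quick[t] = alpha * quick[t - 1] + 0.5 * (1.0 + alpha) * (q[t] - q[t - 1])
        quick[t] = max(quick[t], 0.0)
    return np.clip(q - quick, 0.0, q)
```

The baseflow index (the ratio of filtered baseflow volume to total flow volume) then follows directly as baseflow.sum() / q.sum().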
Rorabaugh’s method utilized by RORA is a one-dimensional analytical model of ground-
water discharge to a fully penetrating stream in an idealized, homogenous aquifer with
uniform spatial recharge. Because of the simplifying assumptions inherent to the equa-
tions, Halford and Mayer (2000) caution that RORA may not provide reasonable estimates
of recharge for some watersheds. In fact, in some extreme cases, RORA may estimate
recharge rates that are higher than the precipitation rates. Rutledge (2000) suggests that
estimates of mean monthly recharge from RORA are probably less reliable than estimates
for longer periods and recommends that results from RORA not be used at time scales
smaller than seasonal (three months) because results differ most greatly from manual
application of the recession-curve displacement method at small time scales.
A method proposed by Pettyjohn and Henning (1979) includes the effects of riparian
evapotranspiration and, therefore, usually provides lower estimates than the hydrograph-
separation technique based on recession-curve displacement (Rutledge 1992). Groundwater
recharge estimates produced by the Pettyjohn–Henning and similar methods are some-
times called effective (or residual) groundwater recharge because the estimates represent
the difference between actual recharge and losses to riparian evapotranspiration.
As illustrated in Figure 2.23, graphical methods of baseflow separation may not be appli-
cable at all in some cases. A stream with alluvial sediments having significant bank storage


FIGURE 2.23
Stream hydrograph showing flow components after a major rise resulting from rainfall when (a) the stream
stage is higher than the water table; (b) the stream stage is higher than the water table in the shallow aquifer but
lower than the hydraulic head in the deeper aquifer, which is discharging into the stream. The initial stream
stage before rainfall is marked as 1, and the stream stage during peak flow is marked as 2. (Modified from Kresic,
N., Groundwater Resources: Sustainability, Management, and Restoration, McGraw Hill, New York, 235–292, 2009.)

capacity may, during floods or high river stages, lose water to the subsurface so that no
baseflow is occurring (Figure 2.23a). Or a stream may continuously receive baseflow from
a regional aquifer that has a different primary recharge area than the shallow aquifer and
maintains a higher head than the stream stage (Figure 2.23b). Although one may attempt to
graphically separate either of the two hydrographs using some common method, it would
not be possible to make any conclusions as to the groundwater component of the surface
stream flow without additional field investigations. One such field method is hydrochem-
ical separation of the streamflow hydrograph using dissolved chemical constituents or
environmental tracers. It is often more accurate than simple graphoanalytical techniques
because surface water and groundwater usually have significantly different chemical sig-
natures (Kresic and Mikszewski 2009).
The rate of flow exchange between surface water and groundwater depends on two
main factors: the hydraulic gradient between the two and the conductance of the riverbed
(lakebed) sediments. This is schematically illustrated in Figure 2.24. The hydraulic gradi-
ent or the difference between the hydraulic head in the aquifer adjacent to the river and
the river stage (hydraulic head of the river) is the same in all four cases but with different


FIGURE 2.24
River hydraulic boundary represented with a head-dependent flux. K is hydraulic conductivity of the riverbed,
C is riverbed conductance, Q is the flow rate between the aquifer and the river, Δh is the hydraulic gradient
between the aquifer and the river (same in all four cases). (a, b) Gaining stream; (c, d) losing stream. Lower
hydraulic conductivity of the riverbed sediments and their greater thickness result in lower conductance and
a lower flow rate. (Modified from Kresic, N. Groundwater Resources: Sustainability, Management, and Restoration,
McGraw Hill, New York, pp. 235–292, 2009.)

signs. In cases a and b, the hydraulic gradient is toward the river, which therefore gains water,
whereas in cases c and d, the hydraulic gradient is from the river toward the aquifer (the river
loses water). The lower conductance corresponds to more fines (e.g., silt) in the riverbed sedi-
ment and a lower hydraulic conductivity, resulting in a lower water flux between the aquifer
and the river. Thicker low-permeable riverbed sediments will have the same effect as shown
with cases b and d. All other things being equal, an increase in the hydraulic gradient will
result in an increased flux of water between the aquifer and the river.
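The head-dependent flux concept of Figure 2.24 can be written compactly in code. The sketch below uses the same conductance formulation as, for example, the MODFLOW River (RIV) package; all parameter values in the usage example are hypothetical.

```python
def river_aquifer_flow(k_bed, bed_thickness, reach_length, reach_width,
                       river_stage, aquifer_head):
    """Head-dependent flux across riverbed sediments (Figure 2.24).

    Conductance C = K * L * W / M and flow Q = C * (stage - head).
    Q > 0: losing stream (river recharges the aquifer); Q < 0: gaining stream.
    """
    conductance = k_bed * reach_length * reach_width / bed_thickness
    return conductance * (river_stage - aquifer_head)

# Hypothetical gaining reach: silty riverbed (K = 0.1 m/day, 1 m thick),
# 100 m x 10 m reach, aquifer head 0.5 m above the river stage:
q = river_aquifer_flow(0.1, 1.0, 100.0, 10.0, river_stage=20.0, aquifer_head=20.5)
print(q)  # -50 m^3/day, i.e., the river gains 50 m^3/day along this reach
```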
As discussed earlier, one of the most important aspects of surface water stages is that
they change in time. Which time interval will be used for their inevitable averaging
depends upon the goals of each particular study. Seasonal or perhaps annual periods may
be adequate for a long-term water supply evaluation when considering recharge from pre-
cipitation. When a hydraulic boundary is quite dynamic and the required accuracy of pre-
dictions is high, the time interval for describing a changing river (lake) stage may have to
be much shorter. For example, Figure 2.25 shows a comparison of two time intervals used
to model the interaction between a large river and a highly transmissive alluvial aquifer.
The Columbia River stage at this site is dominated by higher frequency diurnal fluctua-
tions that are principally the result of water released at Priest Rapids Dam to match power-
generation needs. The magnitude of these diurnal river-stage fluctuations can exceed the



FIGURE 2.25
River water tracer concentrations at the end of a model simulation. 2D numeric model of interaction between
the aquifer, vadose zone, and the Columbia River in the Hanford 300 area, Washington. Top: Hourly boundary
conditions. Bottom: Monthly boundary conditions. (Modified from Waichler, S. R., and Yabusaki, S. B., Flow
and Transport in the Hanford 300 Area Vadose Zone–Aquifer–River System, Pacific Northwest National Laboratory,
Richland, WA, 2005.)

seasonal fluctuation of monthly average river stages. During the simulation period, the
mean 24-hour change (difference between minimum and maximum hourly values) in
river stage was 0.48 m, and the maximum 24-hour change was 1.32 m. Groundwater levels
are significantly correlated with river stage although with a lag in time and decreased
amplitude of fluctuations. A two-dimensional, vertical, cross-sectional model domain was
developed to capture the principal dynamics of flow to and from the river as well as the
zone where groundwater and river water mix (Waichler and Yabusaki 2005).
Running the model with hourly boundary conditions resulted in frequent direction and
magnitude changes of water flux across the riverbed. In comparison, the velocity fluctua-
tions resulting from averaging the hourly boundary conditions over a day were consid-
erably attenuated, and for the month average, fluctuations were nonexistent. A similar
pattern held for the river tracer, which could enter the aquifer and then return to the river
later. Simulations based on hourly water-level boundary conditions predicted an aquifer–
river water mixing zone that reached 150 m inland from the river based on the river tracer
concentration contours. In contrast, simulations based on daily and monthly averaging of
the hourly water levels at the river and interior model boundaries were shown to signifi-
cantly reduce predicted river water intrusion into the aquifer, resulting in underestimation
of the volume of the mixing zone. The relatively high-frequency river-stage changes asso-
ciated with diurnal release schedules at the dams generated significant mixing of the river
and groundwater tracers and flushing of the subsurface zone near the river. This mixing
was the essential mechanism for creating a fully developed mixing zone in the simula-
tions. Although the size and position of the mixing zone did not change significantly on a
diurnal basis, they did change in response to seasonal trends in the river stage. The largest
mixing zones occurred with the river-stage peaks in May–June and December–January,
and the smallest mixing zone occurred in September when the river stage was relatively
low (Waichler and Yabusaki 2005).
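Before committing to a boundary-condition time step, the attenuation caused by averaging can be checked directly on the stage record itself. A minimal pandas sketch follows, assuming an hourly stage series; the Hanford statistics quoted above appear only as reference points in the comments.

```python
import pandas as pd

def stage_fluctuation_check(hourly_stage: pd.Series) -> None:
    """Compare diurnal stage ranges with daily- and monthly-averaged series.

    hourly_stage: river stage (m) with an hourly DatetimeIndex.
    """
    daily_range = hourly_stage.resample("D").apply(lambda s: s.max() - s.min())
    print(f"mean 24-h range: {daily_range.mean():.2f} m")  # cf. 0.48 m at Hanford
    print(f"max 24-h range:  {daily_range.max():.2f} m")   # cf. 1.32 m
    # Ranges of the averaged series show the attenuation directly:
    for rule, label in (("D", "daily"), ("MS", "monthly")):
        avg = hourly_stage.resample(rule).mean()
        print(f"{label} means span {avg.max() - avg.min():.2f} m")
```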
Urban areas present special challenges for understanding relevant hydrologic factors
and developing an accurate CSM. Examples include rerouted streams and filled stream-
beds resulting in complex groundwater flow patterns that may not be easily deciphered
based on current land surface topography. Considering these complicating factors is
particularly important for interpretation of present shapes of groundwater contaminant
plumes emanating from old historic source(s). A list of possible artificial hydrographic
features often only marginally (or not at all) described in CSMs includes storm water and
drainage ditches, storm water collection basins, leaky sewer and water lines, culverts, and
infrastructure tunnels. All of them can cause either local or regional effects and can sig-
nificantly influence groundwater recharge, discharge, and flow directions.

2.2.3 Climate (Hydrometeorology)


Climate is defined as an aggregate of weather conditions, representing a general pattern of
weather variations at a location or in a region. It includes average weather conditions and the
variability of elements and information on the occurrence of extreme events (Lutgens and
Tarbuck 1995). The nature of both weather and climate is expressed in terms of basic elements,
the most important of which are the temperature of the air, the humidity of the air, the type
and amount of cloudiness, the type and amount of precipitation, the pressure exerted by the
air, and the speed and direction of the wind. These elements constitute the variables by which
weather patterns and climatic types are characterized (Lutgens and Tarbuck 1995). They also
all influence the water budget of an area primarily by affecting natural processes of aquifer
recharge and aquifer discharge via precipitation and evapotranspiration.

The main difference between weather and climate is the time scale at which these basic
elements change. Weather is constantly changing, sometimes from hour to hour, and these
changes create an almost infinite variety of weather conditions at any given time and place.
In comparison, climate changes are more subtle and were, until relatively recently, considered
important for time scales of hundreds of years or more and usually only discussed in aca-
demic circles. A broader definition of climate is that it represents the long-term behavior
of the interactive climate system, which consists of the atmosphere, hydrosphere, lithosphere,
biosphere, and cryosphere, or the ice and snow accumulated on the Earth's surface
(Lutgens and Tarbuck 1995). At a minimum, and regardless of the project scale and scope, long-
term precipitation data and air temperatures for the closest available climate-gauging station
should be analyzed as they relate to the site’s groundwater. It is also generally recommended to
include a rain gauge at hydrogeological study sites as precipitation can vary greatly over small
distances. Barometric pressure is another parameter typically monitored by hydrogeologists at
the site level, as many pressure transducers placed in monitoring or production wells are not
vented to the atmosphere, which means that the barometric pressure must be subtracted from
all measurements in order to obtain the water-level measurement.
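The compensation itself is a one-line calculation once the units are consistent. A minimal sketch follows, assuming pressures in kilopascals and a surveyed sensor elevation; the function name and the example readings are hypothetical.

```python
def compensated_water_level(abs_pressure_kpa, barometric_kpa, sensor_elev_m):
    """Water-level elevation from a non-vented (absolute) transducer.

    Subtract barometric pressure, convert the remaining pressure to meters
    of water column (1 kPa ~ 0.10197 m of fresh water), and add the result
    to the sensor elevation.
    """
    return sensor_elev_m + (abs_pressure_kpa - barometric_kpa) * 0.10197

# Hypothetical reading: 111.3 kPa absolute, 101.3 kPa barometric,
# transducer hanging at an elevation of 95.00 m:
print(compensated_water_level(111.3, 101.3, 95.00))  # ~96.02 m
```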
An example illustrating interconnections between recharge from precipitation and
water table fluctuations in a fractured rock aquifer is given by Harned (1989) and shown
in Figure 2.26. In this example, the main period of groundwater recharge results from
heavy rains in late winter when evapotranspiration is low. It is reflected by a peak in the
water table hydrograph appearing a few days after heavy rainfall in late March. The time




FIGURE 2.26
Response of well water level change to rainfall. (From Cressler, C. W. et al., Ground Water in the Greater Atlanta
Region, Georgia, Georgia Department of Natural Resources, U.S. Environmental Protection Agency, and The
Georgia Geologic Survey, in cooperation with U.S. Geological Survey, Information Circular 63, 1983.)

after a storm when the peak appears in the water level is directly related to the vertical
hydraulic conductivity of the material in the unsaturated zone and the depth-to-water
table. The hydrograph also shows that little recharge took place during the growing sea-
son (April through September) even though the area received significant rainfall during
these months. The declining water level indicates continuing groundwater discharge that
is not equaled or exceeded by recharge until fall when evapotranspiration is low.
While the above example represents a very useful form of analysis, it is important to
remember that water-level fluctuations in a well are not entirely caused by aerial recharge
(i.e., recharge from infiltration of precipitation immediately above the well). Local and
regional recharge from up-gradient areas will cause a water table rise across an aquifer
independent of aerial recharge in some locations. For example, it is often counterintuitive
to the nonhydrogeologist that a paved area can experience a significant water table rise.
For groundwater remediation applications, areas are often paved to limit aerial recharge
to support dewatering; however, recharge in other areas of the watershed must also be
accounted for to predict the relative impact of the paving on water table fluctuations.
Failure to do so may result in the compromise of a dewatering system from unacceptable
water table rise. Additional related discussion is provided in Section 2.3.1.2.
One common limiting factor for many groundwater projects is a relatively short period
of site-specific data on the hydraulic head (water table, potentiometric surface) fluctua-
tions. Even when the record spans several years, it often contains only quarterly water-level measurements, which do not allow for a more detailed analysis of aquifer
recharge and its natural cycles. In order to determine where the site-specific data belong
in terms of natural hydrologic and climatic cycles, it is very helpful to analyze long-term
data from wells or springs in the same or similar hydrogeologic and climate settings
that may be available from government agencies. Figure 2.27 shows the daily discharge





[Figure 2.27 graphic: flow (cfs) and precipitation (in.) versus year.]

FIGURE 2.27
Hydrograph of daily flows of Annie Spring near Crater Lake, OR, versus daily precipitation at Crater Lake. Red line represents a sixth-order polynomial trend.
rate of Annie Spring in Oregon versus daily precipitation at Crater Lake gauging sta-
tion for 28 years. The graphs illustrate the presence of both long-term and annual natu-
ral cycles and, in this case, delay in the spring’s response to precipitation of about six
months. The highest precipitation, mainly in the form of snowfall, is in November and
December, whereas the discharge peaks in June following a complete snowmelt. The
Annie Spring’s hydrologic system behaves like a perfect clock without any visible long-
term trend unrelated to natural periodicity. However, in some other settings, such a
trend may be present, indicating possible anthropogenic influences as illustrated in the
next section.
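Separating such long-term trends from natural periodicity is straightforward to automate. The sketch below fits a sixth-order polynomial trend, analogous to the trend line in Figure 2.27, to a daily discharge series; the series `flow_cfs` is a hypothetical pandas Series with a daily datetime index.

```python
# Fit a long-term polynomial trend to a daily discharge record (cf. the
# sixth-order trend line in Figure 2.27). `flow_cfs` is hypothetical.
import numpy as np
import pandas as pd

def polynomial_trend(flow_cfs: pd.Series, degree: int = 6) -> pd.Series:
    # Elapsed time in years, to keep the polynomial fit well conditioned.
    t = (flow_cfs.index - flow_cfs.index[0]).days.astype(float) / 365.25
    coeffs = np.polyfit(t, flow_cfs.to_numpy(), degree)
    return pd.Series(np.polyval(coeffs, t), index=flow_cfs.index)

# A near-flat trend suggests natural periodicity only; a persistent slope
# may point to anthropogenic influences, as discussed in the next section.
```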

2.2.4 Land Use and Land Cover


The quantity and quality of groundwater are profoundly affected by the historic and cur-
rent land use and land cover at both regional and local scales. Understanding historic
land use/land cover and comparing it with current and projected land use/land cover is
therefore of critical importance to the project at hand. Three main trends in land use and
the associated human-induced changes in land cover have been taking place worldwide
and disrupting natural hydrologic cycles:

1. Conversion of forests into agricultural land, mostly a practice in undeveloped countries
2. Rapid urbanization in undeveloped and developing countries converting all other
land uses into urban land
3. Rapid decentralization of cities (i.e., suburbanization) and reforestation of former
agricultural land, particularly in the United States and other developed countries

Urban development and the creation of impervious surfaces beyond a city core inevita-
bly result in increase in runoff and soil erosion. In turn, this reduces infiltration poten-
tial and groundwater recharge. Increased sediment load carried by surface streams
often results in the formation of fine-sediment deposits along stream channels, which
reduces hydraulic connectivity and exchange between surface water and groundwater
in river flood plains. Clear-cutting of forests also alters the hydrologic cycle and results
in increased erosion and sediment loading to surface streams. Conversion of low-lying
forests into agricultural land may increase groundwater recharge, especially if it is fol-
lowed by irrigation. (It is important to remember, however, that such recharge is often irrigation return whose origin may be the underlying aquifer itself, and irrigation return may be only a small fraction of the groundwater originally pumped.) In con-
trast, cutting of forests in areas with steeper slopes will generally decrease groundwater
recharge because of the increased runoff except in cases of very permeable bedrock such
as karstified limestone.
In general, urban development results in decreased infiltration rates and increased
surface runoff because of the increasing area of various impervious surfaces (rooftops,
asphalt, concrete). However, the infiltration rate varies significantly within an urban area
based on actual land use. This is particularly important when evaluating fate and trans-
port of contaminant plumes, including development of groundwater models for such
diverse areas. For example, a contaminant plume may originate at an industrial facility
with a high percentage of impervious surfaces, resulting in negligible infiltration, and then
migrate toward a residential area where infiltration rates may be rather high because of the
open space (yards) and watering of lawns. By eliminating infiltration as a driving force,
paving and rooftops may play a positive role in preventing further downward migration
of contaminants through the vadose zone. In undeveloped countries, where the booming
of megacities and associated slums causes many societal and environmental problems,
groundwater recharge rates may actually increase as a result of leaky water lines and
unregulated sewage disposal, contaminating groundwater resources (particularly shal-
low and dug wells) and adding yet another problem to the list.
Agricultural activities have had direct and indirect effects on the rates and compositions
of groundwater recharge and aquifer biogeochemistry. Direct effects include dissolution
and transport of excess quantities of fertilizers and associated materials and hydrologic
alterations related to irrigation and drainage. Some indirect effects include changes in
water-rock reactions in soils and aquifers caused by increased concentrations of dis-
solved oxidants, protons, and major ions. Agricultural activities have directly or indirectly
affected the concentrations of a large number of inorganic chemicals in groundwater, such as NO₃⁻, N₂, Cl, SO₄²⁻, H⁺, P, C, K, Mg, Ca, Sr, Ba, Ra, and As, as well as a wide variety of pesticides and other organic compounds (Böhlke 2002; see also Figure 2.28).
Figures 2.29 through 2.35 are example visualizations of land use and land cover created
with satellite imagery, aerial photography, groundwater data, and georeferenced (GIS)
maps available from various federal and state agencies, local governments, and water agen-
cies such as water management districts. As explained in more detail in Chapter 7, various
sources of historic aerial photographs and maps can be georeferenced and subsequently
utilized for the purpose of land use/land cover analysis at both local and regional scales.
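As a numerical complement to such maps, land-cover class areas can be tallied directly from a gridded product such as the National Land Cover Database. A minimal sketch, assuming the `rasterio` package and a hypothetical NLCD raster already clipped to the study area:

```python
# Tally land-cover class percentages from a (hypothetical) NLCD raster.
import numpy as np
import rasterio

with rasterio.open("nlcd_study_area.tif") as src:
    classes = src.read(1)  # single band of NLCD class codes
    nodata = src.nodata

valid = classes[classes != nodata] if nodata is not None else classes.ravel()
codes, counts = np.unique(valid, return_counts=True)
for code, pct in zip(codes, 100.0 * counts / counts.sum()):
    print(f"NLCD class {code}: {pct:.1f}%")
# Summing the developed classes (NLCD codes 21-24) for two dates gives a
# simple measure of urban encroachment on a recharge zone.
```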

FIGURE 2.28
Pesticide application on leaf lettuce in Yuma, AZ. (Courtesy of Jeff Vanuga, Natural Resources Conservation Service.)
Figure 2.29 is a false color image of central Florida east-northeast of Orlando created
from data collected by the Enhanced Thematic Mapper Plus (ETM+) instrument on the
Landsat 7 satellite. The image combines ETM+ bands 5, 4, and 3. Green shows vegetation,
exposed land is pink, lakes and water are deep blue, and concrete-based structures (roads,
towns) appear in various tones of purple and gray. The area on the image is part of the
main recharge zone of the Floridan Aquifer, which is increasingly under stress because
of clear-cutting of natural vegetative cover for residential and agricultural development,
both of which increase groundwater extraction from the aquifer. A new major subdivi-
sion, about 3 mi across, is noted with a circle; this subdivision, being developed north of
Lacoochee and State Road 50, east of U.S. Interstate 75, is just one of many similar ones
changing the Florida landscape forever.
Figure 2.30 shows land cover/land use change in the area of Rainbow Springs near
Dunnellon, FL, between 1995 and 2007. Discharge of this first magnitude spring exhibits
a decreasing linear trend as illustrated in Figure 2.31. Likely explanations for this trend
include reduced aquifer recharge because of increased residential, industrial, commercial,
and transportation developments (i.e., significantly more red color on the 2007 map north
and northeast of the spring) and possibly larger groundwater withdrawals via wells for
water supply and irrigation. In any case, as can be seen in Figure 2.31, the precipitation
pattern did not change notably during the same period, indicating a cause other than reduced natural recharge.
Figure 2.32 illustrates land cover/land use change in the greater San Antonio, TX, area
between 1992 and 2006 where urban development is encroaching on the recharge zone

FIGURE 2.29
False color Landsat ETM+ image of central Florida east-northeast of Orlando. Explanation in text.
FIGURE 2.30
Land use/land cover maps for the area of Rainbow Springs, FL. These maps are simplified versions of the
original shapefiles provided by the Southwest Florida Water Management District. Both maps were categorized
according to the Florida Land Use and Cover Classification System. The 1995 features (top) were photoin-
terpreted from 1:12,000 USGS color infrared (CIR) digital orthophoto quarter quadrangles by the Mapping and
GIS Section, Southwest Florida Water Management District. The 2007 features (bottom) were photointerpreted
at 1:8000 using 2007 1-ft CIR digital aerial photographs.





[Figure 2.31 graphic: flow (cfs) and precipitation (in.) versus year.]

FIGURE 2.31
Hydrograph of the daily flow of Rainbow Springs near Dunnellon, FL, versus daily precipitation at Blitchton
Tower (October 1975 to July 1996) and Rainbow Springs (August 1996 to January 2009) gauging stations. Red
lines represent the linear trend.

of the Edwards Aquifer north of the city. The maps were derived from satellite imagery
using supervised classification of different reflective signatures of the land surface (cover).
The 2006 map is based on newer satellite imagery and appears sharper, providing ground
resolution of 30 × 30 m. For comparison, land use/land cover maps of the Rainbow Springs
area in Figure 2.30 are based on high-resolution aerial photographs and their direct visual
interpretation resulting in depiction of fine detail, sometimes showing features smaller
than 10 × 10 ft.
For hazardous waste site investigation and remediation, the current and historical indus-
trial/commercial land uses at a site comprise what is typically termed the facility profile
of the CSM. The nature of manufacturing processes and historical waste handling prac-
tices must be understood to identify potential contaminants and their disposal locations.
Historical aerial photographs are often useful in identifying former burn pits, infiltration
ponds, surface water discharge points, and lagoons or cesspools. In New England, which
has long been a manufacturing center, legacy contamination at hazardous waste sites may
date back to the 1700s. In many cases, these sites have been converted for different indus-
trial uses over time, leading to a wide variety of potential contaminants that could be
found in soil or groundwater. A classic example is the use of mill buildings dating from
the 1700s to 1800s for electronics or other manufacturing purposes in the 1950s through
the 1970s. Metal contamination may be caused by the former use, and chlorinated solvent
contamination may be caused by the latter. The presence of urban fill material at such sites
from different eras, involving various workings and reworkings of the surface topography,
may also complicate the CSM as contaminant origins may be unclear.
FIGURE 2.32
Top: Greater San Antonio, TX, land cover map for year 1992. Bottom: Land cover map for year 2006. Shades of
red denote developed areas of different intensity (deep red = highest intensity); shades of green denote forests;
shades of ocher, yellow, and light gray are shrubs/scrubs and grassland, pasture, and barren land, respectively;
brown is cultivated crops; open water is blue. Note that the 2006 map is derived from higher-resolution satel-
lite imagery and appears sharper. This is based on the National Land Cover Database, available for download
at http://seamless.usgs.gov. Explanation of land-use classes, color legend, and various related publications are
available at http://www.mrlc.gov/nlcd_definitions.php.
2.2.5 Water Budget


Typically, only a few of the general water budget components shown in Figure 2.33 would
be of interest at any specific site (project) with the possible exception of some groundwater
availability studies at (sub)continental scales. Common to most of them, including ground-
water recharge, is that they cannot be measured directly and are estimated from measure-
ments of related quantities (parameters) and estimates of other components. Exceptions
are direct measurements of precipitation, stream flow, spring discharge rates, and well-
pumping rates. Other important quantities that can be measured directly and used in
water budget calculations as part of various equations are the hydraulic head (water level)
of both groundwater and surface water and soil moisture content.
Water budget terms are often used interchangeably, sometimes causing confusion. In
general, infiltration refers to any water movement from the land surface into the subsur-
face. This water is sometimes called potential recharge, indicating that only a portion of it
may eventually reach the water table (saturated zone). The term actual recharge is being
increasingly used to avoid any possible confusion; it is the portion of infiltrated water that
reaches the aquifer, and it is confirmed based on groundwater studies. The most obvi-
ous confirmation that actual groundwater recharge is taking place is a rise in water table
(hydraulic head). Effective (net) infiltration or deep percolation refers to water movement
below the root zone and is often equated to actual recharge. In hydrologic studies, the term
effective rainfall describes the portion of precipitation that reaches surface streams via
direct overland flow or near-surface flow (interflow). Rainfall excess describes the compo-
nent of rainfall that generates surface runoff, and it does not infiltrate into the subsurface.

[Figure 2.33 graphic: water budget schematic showing precipitation (P), snow pack, sublimation, evaporation, evapotranspiration (ET), surface runoff (SR, SRsp, SRres), infiltration (I, Isr, Ires, Isp), effective infiltration, recharge (R), interflow (Ifl), soil moisture deficit (SMD), water table, change in storage (ΔS), unconfined aquifer, confining layer with leakance (L), confined aquifer, lateral groundwater inflows (Qin), discharges (Qout), and well pumping.]

FIGURE 2.33
Elements of the water budget of a groundwater system. (Modified from Kresic, N., Groundwater Resources:
Sustainability, Management, and Restoration, McGraw Hill, New York, 852 pp., 2009.)
Interception is the part of rainfall intercepted by vegetative cover before it reaches the
ground surface, and it is not available for either infiltration or surface runoff. The term net
recharge (synonymous with actual recharge) is being used to distinguish between the follow-
ing two water fluxes: recharge reaching the water table via vertical downward flux from the
unsaturated zone and evapotranspiration from the water table, which is an upward flux (nega-
tive recharge). Aerial (or diffuse) recharge refers to recharge derived from precipitation and
irrigation that occurs fairly uniformly over large areas, whereas concentrated recharge refers
to loss of stagnant (ponded) water from playas, lakes, and recharge basins or loss of flowing
stream water to the subsurface via sinks.
The complexity of the water budget determination depends on many natural and anthro-
pogenic factors present in the general area of interest:

• Climate
• Hydrography and hydrology


• Geologic and geomorphologic characteristics
• Hydrogeologic characteristics of the surficial soils and subsurface porous media
• Land cover/land use
• Presence and operations of artificial surface water reservoirs
• Surface water and groundwater withdrawals for consumptive use and irrigation
• Wastewater management

The most general equation of water budget that can be applied to any water system has the
following form:

water input – water output = change in storage. (2.1)

Water budget equations can be written in terms of volumes (for a fixed time interval),
fluxes (volume per time, such as cubic meters per day or acre-feet per year), and flux densi-
ties (volume per unit area of land surface per time, such as millimeters per day). Following
are some of the relationships between the components shown in Figure 2.33 that can be
utilized in quantitative water budget analyses:

$$
\begin{aligned}
I &= P - SR - ET\\
I &= I_{sr} + I_{res} + I_{sp}\\
R &= I - SMD - ET_{wt}\\
P_{ef} &= SR + I_{fl}\\
Q_{ss} &= P_{ef} + Q_{out}^{ua} + Q_{out}^{ca}\\
Q_{out}^{ua} &= R + Q_{in}^{ua} - L\\
Q_{out}^{ca} &= Q_{in}^{ca} + L - Q_{out}^{w}\\
\Delta S &= R + Q_{in}^{ua} - L - Q_{out}^{ua}
\end{aligned}
\tag{2.2}
$$
where $I$ is the infiltration in general, $SR$ is the surface water runoff, $ET$ is the evapotranspiration, $I_{sr}$ is the infiltration from surface runoff, $I_{res}$ is the infiltration from surface water reservoirs, $I_{sp}$ is the infiltration from snow pack and glaciers, $R$ is the groundwater recharge, $SMD$ is the soil moisture deficit, $ET_{wt}$ is the evapotranspiration from the water table, $P_{ef}$ is the effective precipitation, $I_{fl}$ is the interflow (near-surface flow), $Q_{ss}$ is the surface stream flow, $Q_{out}^{ua}$ is the direct discharge of the unconfined aquifer, $Q_{out}^{ca}$ is the direct discharge of the confined aquifer, $Q_{in}^{ua}$ is the lateral groundwater inflow to the unconfined aquifer, $L$ is the leakage from the unconfined aquifer to the underlying confined aquifer, $Q_{in}^{ca}$ is the lateral groundwater inflow to the confined aquifer, $Q_{out}^{w}$ is the well pumpage from the confined aquifer, and $\Delta S$ is the change in storage of the unconfined aquifer. If the area is irrigated, two more components would be added to the list: infiltration and runoff of the irrigation water.
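The bookkeeping implied by these relationships is simple to script. The sketch below evaluates two of the relations in Equation 2.2 with illustrative flux values (all in consistent units, e.g., mm/yr over the basin area); the numbers are assumptions, not data from any of the examples above.

```python
# Illustrative evaluation of two relations from Equation 2.2.
def recharge(I: float, SMD: float, ET_wt: float) -> float:
    """R = I - SMD - ET_wt"""
    return I - SMD - ET_wt

def storage_change(R: float, Q_in_ua: float, L: float, Q_out_ua: float) -> float:
    """dS = R + Q_in_ua - L - Q_out_ua (unconfined aquifer)."""
    return R + Q_in_ua - L - Q_out_ua

R = recharge(I=300.0, SMD=40.0, ET_wt=20.0)                    # 240.0
dS = storage_change(R, Q_in_ua=50.0, L=30.0, Q_out_ua=270.0)   # -10.0
print(R, dS)  # negative dS: aquifer storage is being depleted
```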
Ideally, all applicable relationships at a given site would have to be established to fully
quantify the processes governing the water budget, including volumes of water stored in,
and flowing between, three general reservoirs—surface water, the vadose zone, and the
saturated zone. By default, change in one of the many water budget components causes a
chain reaction and influences all other components. These reactions take place with more
or less delay, depending on both the actual physical movement of water and the hydraulic
characteristics of the three general reservoirs.

2.3 Hydrogeology
For reasons of simplicity or feasibility, one may decide that the groundwater system under con-
sideration, including any of its parts (e.g., individual aquifers, aquitards, layers and lenses of
different permeability), could be represented by a volume that includes all important aspects
of heterogeneity and anisotropy of the porous media present. Such volume is sometimes called
representative elementary volume (REV) and is defined by only one value for each of the many
quantitative parameters describing groundwater flow and fate and transport of contaminants.
The REV concept is considered by many to be rather theoretical because it is not independent
of the nature of the practical problem to be solved. For example, less than 1 m³ (several cubic feet) of
rock may be more than enough for quantifying the phenomena of contaminant diffusion into
the rock matrix, whereas this volume would be completely inadequate for calculating ground-
water flow rate in a fractured rock aquifer where major transmissive fractures are spaced more
than 1 m apart. In contrast, dimensionless parameters commonly used in fluid mechanics,
such as the Reynolds number, are independent of the nature of the problem to be solved (e.g.,
the Reynolds number is applicable to a conduit or a channel of any size).
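For readers who want to verify this for a groundwater problem, a back-of-the-envelope Reynolds number check is sketched below; the flux and grain-size values are illustrative assumptions, not site data.

```python
# Order-of-magnitude Reynolds number for groundwater flow, using the
# specific discharge (Darcy flux) and a characteristic grain diameter.
def reynolds(q_m_per_s: float, d_m: float,
             rho: float = 1000.0, mu: float = 1.0e-3) -> float:
    """Re = rho * q * d / mu (water at ~20 degrees C by default)."""
    return rho * q_m_per_s * d_m / mu

# Sand aquifer: q = 1 m/d (about 1.16e-5 m/s), d10 = 0.5 mm
print(reynolds(1.16e-5, 5.0e-4))  # ~0.006, far below ~1-10, so laminar
```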
Deciding on the representative volume will also depend on the funds and time available
for collecting field data and performing laboratory tests. Extrapolations and interpolations
based on data from several borings or monitoring wells will be very different from those
using data from tens of wells. Another related difficulty, which always presents a major
challenge, is upscaling. This term refers to assumptions made when using parameter val-
ues obtained from small volumes of porous media, such as laboratory samples, to solve
larger, field-scale problems. The problem of upscaling has led to mathematical constructs
such as dispersivity, which is described in detail in Chapter 5. Whatever the final choice
for each quantitative parameter may be, every attempt should be made to fully describe
and quantify the associated uncertainty and sensitivity of that parameter.
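One common way to describe such uncertainty is a Monte Carlo sweep over the parameter in question. The sketch below propagates an assumed lognormal hydraulic-conductivity distribution into average linear (seepage) velocity; the distribution parameters, gradient, and porosity are illustrative, not measured statistics.

```python
# Monte Carlo propagation of hydraulic-conductivity uncertainty into
# seepage velocity; all parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
K = rng.lognormal(mean=np.log(1e-4), sigma=1.0, size=10_000)  # m/s
gradient, effective_porosity = 0.005, 0.25
velocity = K * gradient / effective_porosity  # average linear velocity, m/s

p5, p50, p95 = np.percentile(velocity, [5, 50, 95])
print(f"5th={p5:.2e}  median={p50:.2e}  95th={p95:.2e} m/s")
```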
Regardless of the scale at which a CSM is being developed, the first step is to identify the
presence of any aquifers and low-permeable porous media in the area of interest. Even when
the site is relatively small, which is often the case when investigating point sources of shallow
groundwater contamination, such as a leaky underground storage tank at a gas station, the
CSM should include all major water-bearing zones underlying the site at different depths.
Their description will be more general in nature when based on regional studies performed by
government agencies such as geologic surveys. When field investigations, including drilling,
are conducted at the site, the associated boring logs, aquifer tests, and laboratory data enable
detailed quantitative evaluation of the porous media underlying the site and, therefore, pro-
vide for accurate hydrogeological classification of aquifers and aquitards.
An aquifer, the focal point of any hydrogeologic CSM, is defined as a geologic formation, or
a group of hydraulically connected geologic formations, storing and transmitting significant
quantities of potable groundwater. However, two key terms in this definition, significant and
potable, are not easily quantifiable. The common understanding is that an aquifer should pro-
vide more than just several gallons or liters per minute to individual wells and springs, and
that the water should have less than 1000 mg/L of dissolved solids. However, in many parts of the
world, these rules of thumb do not apply. Examples include saturated fractured rock forma-
tions that can reliably provide one or two gallons per minute to individual wells or springs and
are thus often referred to as fractured rock aquifers (bedrock aquifers). The issue of ground-
water quality is similarly relative. For example, if the groundwater has naturally elevated total
dissolved solids of 1000–4000 mg/L, it is traditionally disqualified from consideration as a
significant source of potable water in water-rich regions regardless of the groundwater quan-
tity. At the same time, such water is routinely utilized for both human and livestock consump-
tion in water-scarce areas. Moreover, advanced water treatment technologies, such as reverse
osmosis, enable development of aquifers containing brackish groundwater, which are increas-
ingly considered integral parts of water-resources management around the world.
When developing a detailed, site-specific CSM, the terms hydrostratigraphic unit and water-
bearing unit are sometimes used to differentiate between more or less transmissive zones
within an aquifer or aquifer system containing lenses or thin layers of low-permeable porous
media.
Aquitards, like aquifers, do store water and are capable of transmitting it but at a much slower
rate, so they cannot provide significant quantities of potable groundwater to wells and springs.
Determining the nature and the role of aquitards in groundwater systems is very important in
both water supply and groundwater contamination studies. When the available information
suggests there is a high probability for water and contaminants to move through an aquitard
within a practical timeframe of, say, less than 100 years, such an aquitard is called leaky (semi-
confining bed is often used as a synonym). When the potential movement of groundwater and
contaminants through an aquitard is estimated in hundreds or thousands of years, such an
aquitard is called competent, of high integrity, or nonleaky.
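The leaky/competent distinction can be screened with a back-of-the-envelope vertical travel time through the aquitard. The sketch below uses Darcy's law with illustrative values; a real assessment would also consider fractures and other preferential pathways.

```python
# Screening-level vertical travel time through an aquitard (Darcy's law);
# all input values are illustrative assumptions.
def aquitard_travel_time_years(b_m: float, Kv_m_per_s: float,
                               dh_m: float, n_e: float) -> float:
    """v = Kv * (dh / b) / n_e; travel time = b / v."""
    v = Kv_m_per_s * (dh_m / b_m) / n_e
    return (b_m / v) / (365.25 * 24 * 3600)

# 10-m clay aquitard, Kv = 1e-9 m/s, 3 m head difference, n_e = 0.05
print(aquitard_travel_time_years(10.0, 1e-9, 3.0, 0.05))
# ~53 years -> "leaky" under the less-than-100-years rule of thumb above
```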
Aquiclude is another related term, generally much less used today in the United States but
still in relatively wide use elsewhere (the Latin word claudo means to confine, close, make
inaccessible). An aquiclude is equivalent to an aquitard of very low permeability, which, for all
practical purposes, acts as an impermeable barrier to groundwater flow. (Note that there still
is some groundwater stored in an aquiclude, but it moves very, very slowly.) Aquicludes and
aquitards are often referred to as confining beds. As opposed to lenses of low-permeable mate-
rials, they extend over relatively large areas and have major impact on the regional groundwa-
ter flow directions.

2.3.1 Aquifers in Unconsolidated Sediments


Aquifers developed in unconsolidated sediments, which are composed of various mixtures
of grains of varying size and shape such as clay, silt, sand, and gravel, are called intergranular
aquifers. Depending on the predominance of certain grain fraction, such aquifers may be
called sand aquifers or sand-and-gravel aquifers, for example. It is also common to call a par-
ticular intergranular aquifer by the depositional process that created it. One such classification
by the USGS groups unconsolidated sand-and-gravel aquifers into four broad categories: (1)
stream-valley, or alluvial aquifers located beneath channels, floodplains, and terraces in the
valleys of major streams; (2) basin-fill aquifers, also referred to as valley-fill aquifers because
they commonly occupy topographic valleys; (3) blanket sand-and-gravel aquifers; (4) aquifers
in semiconsolidated sediments; and (5) glacial-deposit aquifers. In many cases, more than just
one process is responsible for creating unconsolidated deposits (e.g., stratified glacial-drift
aquifer systems in stream valleys), and every attempt should be made to at least understand
the most important depositional mechanisms. This is because important characteristics of the
intergranular porous media, such as anisotropy and heterogeneity, are a direct result of deposi-
tional processes. For example, an aquifer developed in thick aeolian sands (former sand dunes)
should be very prolific, given enough historic or current natural recharge, because of the high
storage capacity and effective porosity of uniform (homogeneous) clean sands. On the other
hand, alluvial deposits around a stream in a drainage area consisting of many different rock
types may create very heterogeneous local flood plain aquifers.

2.3.1.1 Alluvial Aquifers


Alluvial aquifers usually consist of various proportions of gravel, sand, silt, and clay
deposited as layers and lenses of varying thicknesses. When gravel and sand dominate,
with finer fractions forming thin interbeds and lenses, the aquifer may be considered
as one continuum, providing water to a pumping well through its entire screen. It is
therefore not uncommon to present alluvial aquifers using generalized cross sections as
shown in Figure 2.34. Because of fluvial depositional mechanisms, however, all alluvial

[Figure 2.34 graphic: west-east section through the alluvial and estuarine valley of the Potomac River; altitude in meters above/below sea level; units include silt and fine sand, sand and gravel, sand, and the Potomac Group; generalized groundwater flow toward regional discharge; water table shown; vertical scale exaggerated.]

FIGURE 2.34
Generalized hydrogeologic section showing idealized flow through Potomac River alluvial deposits near
Washington, DC. (Modified from Ator, S. W. et al., A Surficial Hydrogeologic Framework for the Mid-Atlantic
Coastal Plain, U.S. Geological Survey Professional Paper 1680, 44 pp., 2005.)
FIGURE 2.35
Alluvial aquifers often consist of layers and lenses of gravel, sand, silt, and clay mixed in various proportions
and arranged spatially in many different ways depending on mechanisms of sediment deposition.

aquifers show some degree of heterogeneity and stratification as illustrated in Figure 2.35.
This is particularly important at contaminated sites where dissolved contaminants may
move faster through layers of more permeable porous media, creating convoluted pref-
erential pathways intersecting a well at discrete intervals (Figure 2.36). Detecting such
pathways, although difficult, is often the key for successful groundwater remediation,
whereas it may not be of much importance when quantifying groundwater flow rates

[Figure 2.36 graphic: pumping well, contaminant leak, water table, and plume; predominantly sand and gravel in various mixtures over predominantly silt and clay.]

FIGURE 2.36
Aquifer consisting of predominantly gravel and sand provides water to a well through the entire screen length.
At the same time, dissolved contaminants may enter the well through just a few discrete intervals following
more permeable (preferential) pathways.
for water supply. As a result, regulators often require vertical profiling of groundwater
at hazardous waste sites (for example, collecting a groundwater sample every 5 or 10 ft
of depth) to ensure that the screened interval of a monitoring well does not miss a lens
of preferential contaminant transport. Notoriously difficult to characterize are aquifers
developed in deposits left by braided streams. Such deposits exhibit rapid vertical and
horizontal changes in sediment type and therefore should not be represented as continuous layers of sand, gravel, or clay. This is illustrated in Figure 2.37,
which shows a draft cross section prepared as part of the subsurface characterization at a
site in the Los Angeles basin.
The areal extent and thickness of an alluvial aquifer depend on the size of the parent
stream and the aquifer’s location in the drainage area. Aquifers in flood plains of smaller
streams and in higher upstream areas are of limited extent, rarely exceeding 10 m in thick-
ness (Figure 2.38). On the other hand, alluvial aquifers developed in flood plains of major
rivers (Figure 2.39) are among the most prolific and widely used for water supply through-
out the world. In addition to thick, extensive deposits of sand and gravel, they are typi-
cally in direct hydraulic connection with the river, which provides for abundant aquifer

FIGURE 2.37
Draft cross section based on data from geotechnical borings completed at a site in the Los Angeles basin.
Relatively more permeable sediments, including poorly graded sand, silty sand, and gravel, are colored in red.
(Courtesy of Tony Marino, AMEC.)
FIGURE 2.38
Floodplain of a small perennial river near Culpeper, VA. Fine alluvial sands with darker bands (indicating
the likely presence of organic material and/or different chemical composition) are exposed in the river cut.
Determining the presence and contents of natural soil organic carbon is important for quantifying sorption of
most organic contaminants.

FIGURE 2.39
Merging of two large floodplains at the confluence of the Sava and Danube rivers at Belgrade, Serbia, provides
favorable conditions for groundwater supply. A recharge basin serving one portion of the city’s well field is in
the lower center.
[Figure 2.40 graphic: collector well schematic with the Sava River, levee, and road; layers of sand, sand and gravel, sand/clay/silt, and clay.]

FIGURE 2.40
Schematic of one of the numerous Ranney (collector) wells in Makiš, the Sava River floodplain, serving the city of
Belgrade, Serbia. (From Milojević, N., Hidrogeologija (Hydrogeology), Univerzitet u Beogradu, Zavod za izdavanje
udzbenika Socijalisticke Republike Srbije, Beograd, 1967.)

recharge. Large well fields for public and industrial water supply are often designed to
induce additional recharge from the river by creating increased hydraulic gradients from
the river to the aquifer as a result of pumping (Figures 2.40 and 2.41). This is a good exam-
ple of why surface water and groundwater should be managed as a singular resource.
Note that induced infiltration to alluvial aquifers from surface water is not always permis-
sible as dewatering of small streams may occur during low-flow conditions that severely

FIGURE 2.41
Three of many Ranney wells lining the shores of the recharge basin shown in Figure 2.39.
threatens the ecological health of the stream and/or affects downstream human consump-
tive uses. In some cases, alluvial water supply wells that have been operating for decades
are now under increased regulatory scrutiny for inducing infiltration during low-flow
conditions.

2.3.1.2 Basin-Fill Aquifers


Basins between mountains, filled with unconsolidated and semiconsolidated sediments,
are another major group of aquifers present on all continents. The thickness of such depos-
its may sometimes exceed several thousand meters as a result of constant tectonic lower-
ing of the basin via boundary faults and abundant supply of sediments eroded by streams
from the rocks in the mountains adjacent to the basins. The basin deposits may locally
include windblown sand, coarse-grained glacial outwash, and fluvial sediments depos-
ited by streams that flow through the basins. Coarser sediment (boulders, gravel, sand) is
deposited near the basin margins, and finer sediment (silt, clay) is deposited in the central
parts of the basins. This depositional pattern occurs because finer particles have longer
settling times and, therefore, do not fall out of suspension until more quiescent flow is
achieved in areas of flatter topography further from erosional sources. Some basins con-
tain lakes or playas (dry lakes) at or near their centers; it is not uncommon for very thick
clay deposits (e.g., up to 100 ft or more in thickness) to accumulate beneath these current
or historical (e.g., glacial) lake centers. Windblown sand might be present as local beach
or dune deposits along the shores of lakes. Deposits from mountain, or alpine, glaciers
locally form permeable beds where the deposits consist of outwash transported by glacial
meltwater. Sand and gravel of fluvial origin are common in and adjacent to the channels of
through-flowing streams. Basins in arid regions might contain deposits of salt, anhydrite,
gypsum, or borate produced by evaporation of mineralized water in their central parts
(Miller 1999).
Basin and range provinces in the western United States, basins in Southern California
(see Figure 2.42), and the northern Rocky Mountain basins are examples of basin-fill aqui-
fers that are heavily utilized for drinking water supply and irrigation. Large-scale ground-
water extraction is usually from deeper portions of basins, which can potentially cause
such unwanted effects as induced upconing (vertical upward migration) of highly miner-
alized saline groundwater. This water resides at greater depths where there is no flushing
by fresh meteoric water. Another negative effect of groundwater extraction from basin-fill
aquifers in arid climates is aquifer mining because of the lack of significant present-day
natural aquifer recharge.
Where precipitation is significant on the surrounding mountains, concentrated recharge
occurs when surface water flowing from the mountains infiltrates into the permeable
coarse-fill deposits, such as colluvial/alluvial fans along mountain fronts (Figure 2.43).
In many cases, however, this recharge, called mountain-front recharge, is intermittent
because the streamflow that enters the basins is also mostly intermittent. As the streams
exit their bedrock channels and flow across the surface of the alluvial fans, infiltration
occurs through the permeable deposits on the fans and moves downward to the water
table. In arid or semiarid basins, much of the infiltrating water is lost by evaporation or as
transpiration by riparian vegetation (plants on or near stream banks).
Open basins contain through-flowing streams and commonly are hydraulically con-
nected to adjacent basins. Some recharge may occur in an open basin as streamflow infil-
tration from the through-flowing stream and as underflow (groundwater that moves in
the same direction as streamflow) from an upgradient basin. Before development, water
[Figure 2.42 graphic: west-east section with elevation in feet (msl); surface features include the L.A. River, Alameda St, V. View Ave, and S. Gabriel R.; named aquifers include the Gaspur, Exposition, Gage, Gardena, Hollydale, Lynwood, Silverado, Jefferson, and Sunnyside.]

FIGURE 2.42
Generalized cross section of Central Basin, Los Angeles, CA, showing different aquifers in shades of blue;
brown color indicates low-permeable aquitards in between. (Modified from Santa Ana Watershed Project
Authority, Chapter IV, Groundwater Basin Reports, Los Angeles County Coastal Plain Basins, http://www.sawpa.org,
accessed October 2007. From Water Replenishment District of Southern California (WRD), Technical Bulletin—​
An Introduction to the Central and West Coast Groundwater Basins, 2004. Modified from California Department of
Water Resources, Bulletin 104: Planned Utilization of the Ground Water Basins of Coastal Plain of Los Angeles County,
Appendix A, Ground Water Geology, 1962, Plate 4.)

is discharged from basin-fill aquifers largely by evapotranspiration within the basin,


but also as surface flow and underflow into downstream basins. During initial stages of
development, water wells are often flowing or artesian (Figure 2.44), and in later phases
of development, most discharge is by withdrawals from wells that cease to flow because
of general lowering of the hydraulic head in the aquifer.
In basins that contain thick sequences of deposits, the sediments become increasingly
more compacted (consolidated), cemented, and less permeable with depth. Depending on
the local ongoing sedimentary mechanisms, some basins may contain semiconsolidated
sediments close to the land surface and may even exhibit some characteristics of fractured
aquifers with well-defined fractures and fracture and fault zones. It is therefore important
for a hydrogeologist developing a CSM not to rely solely on basin sediment descriptions
obtained from well drillers, which are often recirculated in various official reports by oth-
ers. This is illustrated in Figure 2.45, which shows rock core samples collected from two
different borings completed in basin deposits previously generally described as sand and
gravel. It is the experience of the authors and other professionals working in different
hydrogeologic settings that well cuttings briefly described by water-supply well drillers
(often using rotary methods that break down consolidated or semiconsolidated mate-
rial) are vastly different from the rock cores obtained during site-specific hydrogeologic
characterizations. A similar example is provided in Chapter 7 related to rotary versus
coring drilling methods at bedrock sites. It is therefore puzzling that some profession-
als in the western United States still have a hard time conceptualizing three-dimensional
FIGURE 2.43
Perspective block diagram of the Gallatin Local Water Quality District, Montana. (Modified from Taylor, C. J.,
and Alley, W. M., Ground-Water-Level Monitoring and the Importance of Long-Term Water-Level Data, U.S.
Geological Survey Circular 1217, Denver, CO, 2001; Kendy, E., Magnitude, Extent, and Potential Sources of
Nitrate in Ground Water in the Gallatin Local Water Quality District, Southwestern Montana, 1997–98, U.S.
Geological Survey Water-Resources Investigations Report 01-4037, 2001.)

groundwater movement in the basins as something other than simple intergranular flow
through sand and gravel.
Figure 2.46 illustrates another related difficulty caused by faults in different portions of
the basins and at varying scales. Because it is hard to associate faults with loose sand and
gravel, and even harder to consider them as barriers to groundwater flow within deposits
of loose (real) sand and gravel, any information collected in the field similar to the situa-
tion presented in Figure 2.46 may be misinterpreted because of an erroneous preconceived concept.
Faults often form hydraulic boundaries for groundwater flow in both consolidated and
unconsolidated rocks. In general, however, they may have one of the following three roles:
(1) conduits for groundwater flow, (2) storage of groundwater because of increased porosity
within the fault (fault zone), and (3) barriers to groundwater flow because of decrease in
porosity within the fault. The following discussion by Meinzer (1923) illustrates this point:

“Faults differ greatly in their lateral extent, in the depth to which they reach, and in the
amount of displacement. Minute faults do not have much significance with respect to
ground water except, as they may, like other fractures, serve as containers of water. But
the large faults that can be traced over the surface for many miles, that extend down to
great depths below the surface, and that have displacements of hundreds or thousands
FIGURE 2.44
Flowing artesian well at the Bonham Ranch, southern Smoke Creek Desert, NV. (Courtesy of Terri Garside.)

FIGURE 2.45
Rock core samples collected from a fault zone in an aquifer developed in semiconsolidated quaternary sedi-
ments near Phoenix, AZ (core diameter approximately 6 cm). Top left photo depicts conglomerate clasts verti-
cally oriented along their long axis, likely a post-depositional deformational feature. Top right photo illustrates
potential groundwater circulation pathways through sediments partially cemented with calcium carbonate.
Bottom photo depicts clear evidence of brittle deformation (faulting) with the observation of slickensides within
the rock core. The cores are from borings completed in sediments routinely described as sand and gravel by
water supply–well drillers and in various published reports. (Courtesy of Jeff Manuszak.)
FIGURE 2.46
Hydrogeologic cross section at a basin margin near Phoenix, AZ.

of feet are very important in their influence on the occurrence and circulation of ground
water. Not only do they affect the distribution and position of aquifers, but they may
also act as subterranean dams, impounding the ground water, or as conduits that reach
into the bowels of the earth and allow the escape to the surface of deep-seated waters,
often in large quantities. In some places, instead of a single sharply defined fault, there
is a fault zone in which there are numerous small parallel faults or masses of broken
rock called fault breccia. Such fault zones may represent a large aggregate displacement
and may afford good water passages.”

The impounding effect of faults is caused by the following main mechanisms:

• The displacement of alternating permeable and impermeable beds in such a manner that the impermeable beds are made to abut against the permeable beds
• Presence of clayey gouge along the fault plane produced by rubbing and mashing
during displacement of the rocks (the impounding effect of faults is most common
in formations that contain considerable clayey material)
• Cementation of the pore space by precipitation of material, such as calcium car-
bonate, from the groundwater circulating through the fault zone (see Figure 2.45)
• Rotation of elongated flat clasts parallel to the fault plane so that their new arrange-
ment reduces permeability perpendicular to the fault (see Figure 2.45)

Mozley et al. (1996) discuss reduction in hydraulic conductivity associated with high-angle
normal faults that cut poorly consolidated sediments in the Albuquerque Basin in New
Mexico. Such fault zones are commonly cemented by calcite, and their cemented thickness
ranges from a few centimeters to several meters as a function of the sediment grain size
on either side of the fault. Cement is typically thickest where the host sediment is coarse
grained and thinnest where it is fine grained. In addition, the fault zone is widest where
it cuts coarser-grained sediments. Extensive discussion on deformation mechanisms and
hydraulic properties of fault zones in unconsolidated sediments is given in the work of


Bense et al. (2003). Various aspects of fluid flow related to faults and fault zones are dis-
cussed in the work of Haneberg et al. (1999).
An example of major faults in alluvial-fill basins in Southern California acting as imper-
meable barriers for groundwater flow is shown in Figure 2.47 (see also Figure 7.10). The
FIGURE 2.47
Groundwater basins in Southern California separated by impermeable faults developed in alluvial-fill sedi-
ments. White lines are contours of hydraulic head; white arrows are general directions of groundwater flow;
dashed lines are surface streams; bold black lines are major faults. Also shown are state highways 10, 15, and
215. (Modified from Danskin, W. R. et al., Hydrology, Description of Computer Models, and Evaluation of
Selected Water-Management Alternatives in the San Bernardino Area, California, U.S. Geological Survey Open-
File Report 2005-1278, Reston, VA, 178 pp., 2006.)
Rialto-Colton basin, which is heavily pumped for water supply, is almost completely sur-
rounded by impermeable fault barriers and receives negligible recharge from precipitation
and very little lateral inflow in the far northwest from the percolating Lytle Creek waters.
In contrast, the Bunker-Hill basin to the north, which is also heavily pumped for water
supply, receives most of its significant recharge from numerous losing surface streams and
runoff from the mountain front. As a result, the hydraulic heads in the Rialto-Colton basin
are tens of feet lower than in the Bunker-Hill basin.
One important concept that often confounds the nonhydrogeologist is the impact of
focused recharge on groundwater levels in portions of a basin that receive limited or no
infiltration from directly overlying soils, such as sites with paved surfaces or very thick
vadose zones (>100 ft). For example, in semiarid and arid groundwater basins in the
American southwest, aerial recharge above a well is often negligible, yet over the course
of the year, significant seasonal water-level fluctuations may be seen in a well even in the
absence of nearby groundwater pumping. The rise in potentiometric surface is often a


result of focused recharge via washes that become active during snowmelt events on the
mountains surrounding a basin. This recharge creates pressure transients that propagate
through the fractured portions of the aquifer faster than the theoretical rate of groundwater
flow applicable to unconsolidated (loose) sand and gravel. A well may experience a rapid
rise in water level elevation in the spring when the wash is running despite being several
miles away from the wash. In contrast, linear transport of water over such distances, if
interpreted as Darcian flow through intergranular porous media (sand and gravel), could
take several or more years.
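A quick calculation makes the point. Using Darcy's law with illustrative basin-fill parameters (assumptions, not site data), physical transport of water over a few miles takes years, so a water-level response within weeks must be a pressure transient:

```python
# Darcian travel time over a basin-scale distance; illustrative values only.
def darcy_travel_time_years(distance_m: float, K_m_per_d: float,
                            gradient: float, n_e: float) -> float:
    v = K_m_per_d * gradient / n_e  # average linear velocity, m/d
    return distance_m / v / 365.25

# ~3 mi (4800 m), K = 30 m/d, i = 0.005, n_e = 0.2
print(darcy_travel_time_years(4800.0, 30.0, 0.005, 0.2))  # ~17.5 years
```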

2.3.1.3 Blanket Sand-and-Gravel Aquifers


Thick, widespread, sheet-like deposits that contain mostly sand and gravel from uncon-
solidated and semiconsolidated aquifers are called blanket sand-and-gravel aquifers. They
largely consist of alluvial deposits brought in from mountain ranges and deposited in low-
lands. However, some of these aquifers, such as the High Plains aquifer in the United States
(Ogallala aquifer), include large areas of windblown sand, whereas others, such as the
surficial aquifer system of the southeastern United States, contain some alluvial deposits
but are largely composed of beach and shallow marine sands (Miller 1999). The principal
water-yielding geologic unit of the High Plains aquifer, which extends over approximately
174,000 mi2 in parts of eight states, is the Ogallala Formation of Miocene age, a heteroge-
neous mixture of sand, gravel, silt, and clay that was deposited by a network of braided
streams, which flowed eastward from the ancestral Rocky Mountains. Permeable dune
sand is part of the aquifer in large areas of Nebraska and smaller areas in the other states.
The Ogallala aquifer is principally unconfined and in direct hydraulic connection with the
alluvial aquifers along the major rivers that flow over it.
The origin of water in the Ogallala aquifer is mainly from the last ice age, and the rate
of present-day recharge is much lower. This has resulted in serious long-term water table
decline in certain portions of the aquifer resulting from intensive groundwater extraction
for water supply and irrigation. Decreases in saturated thickness result in a decrease in
well yields and an increase in pumping costs because the pumps must lift the water from
greater depths. These conditions occur over much of the Ogallala aquifer with the greatest
impact in the southern part of the system (see Figures 3.1 through 3.3).
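The link between declining saturated thickness and pumping cost is direct, because hydraulic power scales linearly with lift. A minimal sketch with illustrative numbers:

```python
# Hydraulic power required to lift water, P = rho * g * Q * H / efficiency;
# the discharge, lifts, and wire-to-water efficiency are illustrative.
def pump_power_kw(Q_m3_per_s: float, lift_m: float,
                  efficiency: float = 0.65) -> float:
    return 1000.0 * 9.81 * Q_m3_per_s * lift_m / efficiency / 1000.0

# A 1000-gpm (~0.063 m3/s) irrigation well at 30 m vs. 60 m of lift
print(pump_power_kw(0.063, 30.0))  # ~28.5 kW
print(pump_power_kw(0.063, 60.0))  # ~57.0 kW -- energy cost roughly doubles
```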
Other major blanket sand-and-gravel aquifers in the United States include the Seymour
aquifer of Texas, which, like the High Plains aquifer, was deposited by braided, eastward-
flowing streams but has been dissected into separate pods by erosion; the Mississippi River
Valley alluvial aquifer, which consists of sand and gravel deposited by the Mississippi
River as it meandered over an extremely wide floodplain; and the Pecos River Basin allu-
vial aquifer, which is mostly stream-deposited sand and gravel but locally contains dune
sands (Miller 1999).

2.3.1.4 Aquifers in Semiconsolidated Sediments


Sediments that primarily consist of semiconsolidated sand, silt, and clay interbedded with
some carbonate rocks underlie the coastal plains that border the Atlantic Ocean and the Gulf
of Mexico. The sediments extend from Long Island, NY, southwestward to the Rio Grande
in Texas and generally form a thick wedge of strata that dips and thickens seaward from
a featheredge at its updip limit. Coastal plain sediments are water-laid and were depos-
ited during a series of transgressions and regressions of the sea. Depositional environments
ranged from fluvial to deltaic to shallow marine, and the exact location of each environment
depends upon the relative position of landmasses, shorelines, and streams at given points
in geologic time. Consequently, the position, shape, and number of the bodies of sand and
gravel that form aquifers in these sediments vary greatly from place to place (Miller 1999).
A good example of the complexity of coastal plain aquifers is the Potomac aquifer, described
by McFarland and Bruce (2006) as a heterogeneous aquifer composed of sediments deposited
by braided streams, meandering streams, and deltas that exhibit sharp contrasts in texture
across small distances as a result of highly variable and frequently changing depositional envi-
ronments (Figure 2.48). The Potomac aquifer is hydraulically continuous on a regional scale but
locally exhibits discontinuities where flow is impeded by fine-grained interbeds. Designation
of Potomac Formation sediments as composing a single Potomac aquifer or the equivalent

[Figure 2.48 graphic: homogeneous aquifer, heterogeneous aquifer, confining unit, and confining zone; legend distinguishes coarse-grained sediment, fine-grained sediment, flow through aquifer, and leakage between aquifers; scale bar 0-5 mi; vertical scale greatly exaggerated.]

FIGURE 2.48
Simplified cross section showing conceptualized flow relations among homogeneous and heterogeneous aqui-
fers, confining units, and confining zones in the Virginia Coastal Plain. (Modified from McFarland, E. R., and
Bruce, T. S., The Virginia Coastal Plain Hydrogeologic Framework, U.S. Geological Survey Professional Paper
1731, 118 pp., 25 pls., 2006.)
was made in earlier studies of part or all of the Virginia Coastal Plain by the USGS and by the
Virginia Division of Mineral Resources. As more field information became available, subse-
quent studies in Virginia by the USGS subdivided the Potomac aquifer into upper, middle, and
lower aquifers separated by intervening confining units.

2.3.1.5 Glacial-Deposit Aquifers


Large areas of the north-central and northeastern United States are covered with sediments
that were deposited during several advances and retreats of continental glaciers. The mas-
sive ice sheets planed off and incorporated soil and rock fragments during advances and
redistributed these materials as ice-contact or meltwater deposits or both during retreats.
Thick sequences of glacial materials were deposited in former river valleys cut into bed-
rock, whereas thinner sequences were deposited on the hills between the valleys. Figure
2.49 illustrates typical sediment types associated with river valley glacial deposits that are
still being formed in glaciated terrains worldwide (Figure 2.50). Some of these deposits are
formed at the ice-bedrock contact, and some fill cracks or crevasses in the ice. As the ice
melts, outwash deposits of sand and gravel form deltas at the ice front or in glacial lakes
and fluvial valley-train deposits downstream from the ice front.
The glacial ice and meltwater derived from the ice deposit several types of sediments,
which are collectively called glacial drift. Till, which consists of dense, unsorted, and
unstratified material that ranges in size from boulders to clay, is deposited directly by the

[Figure 2.49 graphic: crevasse fillings, ice-contact deposits, fluvial valley-train deposits, delta deposits, collapsed ice-contact deposits, lake-bottom fine-grained deposits, and bedrock.]

FIGURE 2.49
Perpendicular (a) and longitudinal (b) cross sections of typical glacial deposits formed in bedrock valleys.
(From Trapp, H., Jr., and Horn, M. A., Delaware, Maryland, New Jersey, North Carolina, Pennsylvania, Virginia,
West Virginia. Ground Water Atlas of the United States, U.S. Geological Survey, HA 730-L, 1997. Modified from
Lyford, F. P., In Regional Aquifer–System Analysis Program of the US Geological Survey–Summary of Projects, 1978–
1984, edited by R. J. Sun, U.S. Geological Survey Circular 1002, 162–167, 1986.)
FIGURE 2.50
Glaciated terrain in Alaska with bedrock valleys filled with glacial deposits. (Courtesy of Jeff Manuszak.)

ice (Figure 2.51). Outwash, which is mostly stratified sand and gravel (Figure 2.52), and
glacial-lake deposits consisting mostly of clay, silt, and fine sand are deposited by meltwa-
ter. Ice-contact deposits consisting of local bodies of sand and gravel are deposited at the
face of the ice sheet or in cracks in the ice.
The distribution of the numerous sand-and-gravel beds that make up the glacial-deposit
aquifers and the clay and silt confining units that are interbedded with them is extremely
complex. The multiple advances of lobes of continental ice originated from different direc-
tions, and different materials were eroded, transported, and deposited by the ice, depend-
ing upon the predominant rock types in its path. When the ice melted, coarse-grained

FIGURE 2.51
Glacial till of the Harbor Hill terminal moraine (Wisconsin interglacial period) exposed on Long Island Motor
Parkway, half a mile north of Creedmoor, Hempstead quadrangle, Queens County, NY, October 29, 1917.
(Courtesy of USGS Photographic Library 2007.)

FIGURE 2.52
Stratified glacial sand and gravel, 1 mi north of Asticou, northeast of Lower Hadley Pond, Acadia National Park,
ME, September 14, 1907. (Courtesy of USGS Photographic Library 2007.)

sand-and-gravel outwash was deposited near the ice front, and the meltwater streams
deposited successively finer material farther and farther downstream. During the next
ice advance, heterogeneous deposits of poorly permeable till might be laid down atop the
sand-and-gravel outwash. Small ice patches or terminal moraines dammed some of the
meltwater streams, causing large lakes to form. Thick deposits of clay, silt, and fine sand
accumulated in some of the lakes, and these deposits form confining units where they
overlie sand-and-gravel beds. The glacial-deposit aquifers are either localized in bedrock
valleys or are in sheet-like deposits on outwash plains (Miller 1999).
The glacial sand-and-gravel deposits form numerous local but highly productive aquifers.
Yields of wells completed in aquifers formed by continental glaciers are as much as 3000 gallons
per minute (1000 gallons per minute equals 63 L/s) where the aquifers consist of thick sand and
gravel. Locally, yields of 5000 gallons per minute have been obtained from wells completed in
glacial-deposit aquifers that are located adjacent to rivers and can obtain recharge from sur-
face water. Aquifers that were formed by mountain glaciers yield as much as 3500 gallons per
minute in Idaho and Montana, and wells completed in mountain-glacier deposits in the Puget
Sound, WA, area yield as much as 10,000 gallons per minute (Miller 1999).
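The quoted conversion makes these yields easy to restate in metric units. The following minimal Python sketch is an illustration only; the yield values are those cited above from Miller (1999), and the conversion factor 1 gal/min = 0.0630902 L/s is standard:

```python
# Restate the well yields quoted above in liters per second.
# 1 US gallon per minute = 0.0630902 L/s, so 1000 gal/min ~ 63 L/s, as in the text.
GPM_TO_LPS = 0.0630902

yields_gpm = {
    "continental-glacier aquifers (thick sand and gravel)": 3000,
    "glacial-deposit aquifers adjacent to rivers": 5000,
    "mountain-glacier aquifers, Idaho and Montana": 3500,
    "mountain-glacier deposits, Puget Sound, WA": 10000,
}

for setting, gpm in yields_gpm.items():
    print(f"{setting}: {gpm:,} gal/min = {gpm * GPM_TO_LPS:,.0f} L/s")
```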

2.3.2 Sandstone Aquifers


Sandstone aquifers worldwide are more widespread than those in all other types of con-
solidated sedimentary rocks. Although generally less permeable and usually with a lower
natural recharge rate than unconsolidated sand-and-gravel aquifers exposed at the land
surface, sandstone aquifers in large sedimentary basins are one of the most important
water supply sources worldwide. Loosely cemented sandstones retain significant pri-
mary (intergranular) porosity (Figure 2.53), whereas secondary fracture porosity may be
as important for well-cemented and older sandstones (Figure 2.54). In either case, storage

FIGURE 2.53
Arenaceous sandstone, showing aeolian cross bedding, Sheridan County, MT, 1915. (Courtesy of USGS Photo­
graphic Library 2007.)

FIGURE 2.54
One of the canyons cut deep into the sandstones of the Permian De Chelly Formation, Canyon De Chelly
National Monument, AZ.
capacity of such deposits is high because of the considerable thickness of major sandstone
basins.
Sandstone aquifers are highly productive in many places and provide large volumes of
water for all uses. The Cambrian–Ordovician aquifer system in the north-central United States
is composed of large-scale, predominantly sandstone aquifers that extend over parts of seven
states. The aquifer system consists of layered rocks that are deeply buried where they dip into
large structural basins. It is a classic confined, or artesian, system and contains three aquifers
(Figure 2.55). In descending order, these are the St. Peter-Prairie du Chien-Jordan aquifer (sand-
stone with some dolomite), the Ironton-Galesville aquifer (sandstone), and the Mount Simon
aquifer (sandstone). Confining units of poorly permeable sandstone and dolomite separate
the aquifers. Low-permeability shale and dolomite compose the Maquoketa confining unit
that overlies the uppermost aquifer and is considered part of the aquifer system.
[Figure 2.55 diagram features: Cannon River and Mississippi River; hydraulic potential contours from about 650 to 900 ft; vertical scale from −200 to 1200 ft above sea level (greatly exaggerated); horizontal scale 0–15 mi. Legend: surficial aquifer system; St. Peter–Prairie du Chien–Jordan aquifer; St. Lawrence–Franconia confining unit; Ironton–Galesville aquifer; Eau Claire confining unit; Mount Simon aquifer; crystalline-rock aquifer; water table; line of equal hydraulic potential, interval 50 ft; direction of groundwater movement.]

FIGURE 2.55
Regional hydrogeologic cross section through part of the Cambrian–Ordovician aquifer system showing
groundwater flow toward the Mississippi River, the main discharge area for several aquifers. (Modified from
Miller, J. A., Introduction and National Summary. Ground-Water Atlas of the United States, United States
Geological Survey, A6, 1999, http://capp.water.usgs.gov/gwa/index.html.)
Wells that penetrate the Cambrian–Ordovician aquifer system commonly are open to all three aquifers,
which are collectively called “the sandstone aquifer” in many reports.
The rocks of the sandstone aquifer system are exposed in large areas of northern
Wisconsin and eastern Minnesota. Regionally, groundwater in the system flows from
these topographically high recharge areas eastward and southeastward toward the
Michigan and Illinois Basins. Subregionally, groundwater flows toward major streams,
such as the Mississippi and Wisconsin rivers, and toward major well-pumping centers,
such as those at Chicago, IL, and Green Bay and Milwaukee, WI. One of the most dra-
matic effects of groundwater use known in the United States was caused by withdrawals
from the Cambrian–Ordovician aquifer system, primarily for industrial use in Milwaukee
and Chicago. This excessive sandstone aquifer pumping caused declines in water levels
of more than 375 ft in Milwaukee and more than 800 ft in Chicago from 1864 to 1980 with
the pumping influence extending over 70 mi. Beginning in the early 1980s, withdrawals
from the aquifer system decreased as some users, including the city of Chicago, switched
to Lake Michigan as a supply source. Water levels in the aquifer system began to rise in
1985 as a result of decreased withdrawals (Miller 1999).
The chemical quality of the water in large parts of the sandstone aquifer system is suit-
able for most uses. The water is not highly mineralized in areas where the aquifers crop
out or are buried to shallow depths, but mineralization generally increases as the water
moves downgradient toward the structural basins. The deeply buried parts of the aquifer
system contain saline water.
Other large layered sandstone aquifers in the United States that are exposed adjacent
to domes and uplifts or that extend into large structural basins or both are the Colorado
Plateau aquifers; the Denver Basin aquifer system; the Upper and Lower Cretaceous aqui-
fers in North and South Dakota, Wyoming, and Montana; the Wyoming Tertiary aquifers;
the Mississippian aquifer of Michigan; and the New York sandstone aquifers (Miller 1999).

2.3.3 Fractured-Bedrock Aquifers


This category includes aquifers developed in metamorphic and igneous crystalline rocks.
Examples include the crystalline rocks of the Blue Ridge and Piedmont regions of the eastern
United States (Figure 2.56), northern Minnesota, and northeastern Wisconsin. Metamorphic
and igneous crystalline rocks underlie most of the Blue Ridge and Piedmont and range in
composition from felsic to ultramafic and range in age from Middle Proterozoic for gra-
nitic rocks in the Blue Ridge to Triassic–Jurassic for the unmetamorphosed dikes and sills of
mafic composition that intrude older Piedmont rocks. Rocks that crop out in the Piedmont
underlie parts of the Atlantic Coastal Plain at depth (Daniel and Dahlen 2002).
Where present, a layer of unconsolidated, highly weathered rock with a clayey residue
of low permeability called regolith, saprolite, or residuum is considered an integral part of
the bedrock aquifer system (Figure 2.57). Below this zone, the bedrock becomes progres-
sively less weathered and more consolidated, transitioning into fresh fractured bedrock.
The soil–saprolite zone has many features of low permeability clay deposits. However, where
present, the transition zone underlying the saprolite has a moderately high hydraulic conduc-
tivity because of incomplete weathering (Nutter and Otton 1969; see Figure 2.58). This occurs
because chemical alteration of the bedrock has progressed to a stage of minute fracturing of
the crystalline rock, yet it has not progressed so far that the rock minerals have been altered to
clays, which would clog small fractures (LeGrand 2007). It is not uncommon at fractured bed-
rock sites that a zone of concentrated groundwater flow and a preferential flow of contami-
nants is associated with the transition zone (Figure 2.59) with the saprolite and the underlying

FIGURE 2.56
Crystalline metamorphic and magmatic rocks (fractured-rock aquifer type) are predominant in the Piedmont
and Blue Ridge provinces, whereas carbonate rocks and sandstone are predominant in the Valley and Ridge
province where there are fully developed karst aquifers in limestone and dolomite. (Modified from Daniel, C. C.,
III et al., Hydrogeology and Simulation of Ground-Water Flow in the Thick Regolith-Fractured Crystalline Rock
Aquifer System of Indian Creek Basin, North Carolina, Chapter C, Ground-Water Resources of The Piedmont-
Blue Ridge Provinces of North Carolina, U.S. Geological Survey Water-Supply Paper 2341, C1–C137, 1997.)

bedrock much less or not at all affected some distance from the contaminant source zone. A
similar phenomenon can occur in glaciated fractured rock environments where weathered,
rotten bedrock acts as a zone of preferential flow between overlying, low-permeability till and
underlying competent bedrock.
Because regolith has a much higher storage capacity than bedrock, the regolith can be
thought of as a groundwater reservoir or sponge that feeds the underlying bedrock dis-
continuities. Joint concentrations, fractures enhanced by dissolution, and other disconti-
nuities in bedrock and combinations of these features also can store a substantial quantity

FIGURE 2.57
Residuum on Piedmont crystalline rocks, North Carolina, showing the preserved texture of the parent rock,
such as vertical fractures and horizontal layering. Bottom left: evidence of a partially dissolved portion of the
bedrock, now part of saprolite. Bottom right: a diffusion stain around vertical fracture left by percolating fluids.

of water. The storage capacity of the regolith/bedrock system is mainly influenced by dif-
ferences in the weathering characteristics of various rock types. Thin saprolite is devel-
oped on more resistant, quartz-rich rock types, whereas thick saprolite is developed on
less resistant rock types rich in potassium feldspar. The biotite gneiss unit is particularly
susceptible to deep weathering and typically has a thick saprolite cover. Mafic rocks (such
as amphibolite) typically are characterized by a thin saprolite cover because of the gen-
eral lack of potassium feldspar. In compositionally layered rocks, saprolite may develop
between layers of more competent rock. This weathering profile is common where less
[Figure 2.58 profile labels, top to bottom: soil zone; regolith (clay, silt, and sand; residual quartz vein); transition zone (weathered boulders); bedrock. Clay fraction and degree of weathering increase upward; relative permeability reaches its maximum at about 30–40 ft depth.]

FIGURE 2.58
Idealized weathering profile through the regolith showing relative permeability. (From Nutter, L. J., and
Otton, E. G., Ground-Water Occurrence in the Maryland Piedmont, Maryland Geological Survey Report of
Investigations no. 10, 56 pp., 1969.)

chemically resistant rock (such as biotite gneiss) is interlayered with more chemically
resistant rock such as amphibolites (Williams et al. 2005).
The bedrock, in which fractures typically decrease in number with increasing depth,
can be generally considered as a zone of low permeability. This is illustrated in Figure 2.60,
which shows gneiss exposed in a road cut in Atlanta, GA. The rock has a limited number
of fine fissures and an absence of any major fractures for tens of feet at this particular site.
For all practical purposes, portions of bedrock such as this one would be considered as
barriers to groundwater flow. In general, however, understanding the hydrogeology of
crystalline rocks is complicated because of complex structure and porosity that is almost
exclusively secondary. As a result, the hydraulic conductivity of fractured bedrock aqui-
fers is extremely variable and not easily defined for a particular geologic formation or even
a particular rock type (Daniel and Dahlen 2002). Consequently, the distinction between
aquifers and confining units, which is the usual approach for describing the hydrogeologic
framework of a CSM, is not applicable in fractured-bedrock aquifers.
When present, fractures typically occur in sets, which are often composed of two sets of
nearly vertical fractures at approximately right angles to each other and a third, nearly hori-
zontal, set (Figure 2.61). As discussed by LeGrand (1967, 2007), in gneiss and schist, the orien-
tation of some joints and fractures tends to parallel the foliation and compositional layering,
which are rarely horizontal. In massive rocks, particularly granite, nearly horizontal ten-
sion joints often occur in the upper 100 ft of bedrock. Many nonhorizontal fracture patterns
can be traced by observing their topographic expression on the ground or on topographic
maps. Almost invariably, fractures that are not horizontal are represented by depressions
in the topography or by an alignment of topographic features such as stream segments (see
Figure 2.8 in Section 2.2.1). LeGrand (1954) demonstrated that many fractures are enlarged

FIGURE 2.59
Photograph taken during the excavation phase of a coal tar remediation project shows distinctive reddish-
brown mottling in a weathered siltstone where oil seeps occur along shallow dipping (3°–4°) bedding plane
fractures. (Courtesy of Peter Thompson, AMEC; photograph by Leo E. Arbaugh, Jr., of the U.S. Army Corps of
Engineers.)

by dissolution, especially in gneiss and schist containing silicates of calcium. Many of these
enlarged fractures underlie draws or linear depressions in surface topography.
Because fractures in the bedrock decrease in size and abundance with depth, con-
tamination of these aquifers is difficult to remediate, especially if the contaminants are
heavier than water and have low solubility in water (dense, nonaqueous phase liquids or
DNAPLs). Contaminants that settle or move into deeper parts of fractured-rock aquifers
tend to become trapped as fracture widths become narrower and groundwater velocities
diminish (Daniel and Dahlen 2002).
Prediction of the natural direction of groundwater flow in fractured rock aquifers can be
related to surface topography. Groundwater moves continuously from uphill areas toward
streams where it discharges as small springs and as bank channel seepage into streams. Small
springs and seeps are also common in draws and other topographic depressions, especially
near the base of valleys (Figure 2.62). Springs and seeps at higher elevations are commonly of
the wet-weather type and may suggest poorly fractured rocks below (LeGrand 2007).
As shown earlier in Figure 2.2, the perennial-stream drainage basin is a complete flow-
system cell, similar to and yet generally separate from surrounding basins (LeGrand 1958,
2007). Described differently, the hydraulic head beneath upland areas decreases with
depth, resulting in the overall downward movement of groundwater and providing the

FIGURE 2.60
Gneiss exposed in a road cut in Atlanta, GA (note lens cap for scale).

FIGURE 2.61
Granitic rocks in the Acadia National Park, ME, observed at two different scales (note lens cap on the bottom
photo for scale).

FIGURE 2.62
Two small springs issuing from fractured crystalline aquifers in the Piedmont physiographic province, north-
central Virginia. Top: a simple capture of a low-yielding spring tapped for the water supply of a farmhouse
before the Civil War. Many similar springs are still used as sources of potable water, although drilled wells are
now the main form of water supply in the region. Bottom: Small family farms in Virginia have been relying on
springs like this for their continuing operation.

mechanism for recharge to the aquifer. For example, a well 75 ft deep is likely to have a higher
water level than a well 300 ft deep at the same site. The hydraulic head beneath lowland areas
increases with depth, indicating upward movement of groundwater. For example, a well 300
ft deep is likely to have a higher water level than a well 75 ft deep at the same site. Although
this concept has been verified in the field countless times, every so often a regulatory agency
will insist that site-specific data, including installation of expensive, deep bedrock wells, be
collected to verify the same concept yet one more time. Figure 2.63 shows one such example
[Figure 2.63 diagram: wells PZ-1, PZ-2, PZ-3, and G-8 along the Unnamed Branch; water elevations in deep bedrock wells (671.92 and 674.12 ft) and shallow wells (665.73–667.58 ft); well bottoms at 611.0 and 614.5 ft; water table; groundwater flow directions; wells shown with screened intervals.]

FIGURE 2.63
Cross section showing monitoring wells and groundwater flow directions at a fractured rock site in Athens,
GA. The wells were installed to prove the concept of groundwater discharge into the stream from both sides
and to demonstrate an absence of risk for groundwater users located thousands of feet from the site, across
several small stream drainages like this one.

from the Piedmont physiographic province in north-central Georgia where installation of well clusters on either side of a small stream was required to convince regulators that
contamination from an industrial site would not negatively impact water-supply wells
located thousands of feet away across several drainage basins (flow cells). Unfortunately, some stakeholders still occasionally take the simple presence of fractures to mean that groundwater can flow anywhere, regardless of the above-described conceptual model and the general hydraulic principles of groundwater flow [groundwater cannot move upgradient, i.e., against the hydraulic gradient (uphill), in any media, including fractures].
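The upland and lowland well pairs described earlier in this section (75-ft versus 300-ft wells) reduce to a simple vertical-gradient calculation. The sketch below is a hedged illustration: the two depths follow the example in the text, but the head values are hypothetical, and using total well depths in place of screen midpoints is a simplification:

```python
# Vertical hydraulic gradient between a shallow and a deep well at the same site.
# Sign convention: positive = head decreases with depth = downward flow
# (recharge area); negative = head increases with depth = upward flow
# (discharge area).
def vertical_gradient(head_shallow_ft, head_deep_ft, depth_shallow_ft, depth_deep_ft):
    return (head_shallow_ft - head_deep_ft) / (depth_deep_ft - depth_shallow_ft)

# Upland (recharge) case: the 75-ft well has the higher water level.
i_upland = vertical_gradient(652.0, 648.0, 75.0, 300.0)

# Lowland (discharge) case: the 300-ft well has the higher water level.
i_lowland = vertical_gradient(601.0, 604.5, 75.0, 300.0)

print(f"upland:  i = {i_upland:+.3f} (downward flow)")
print(f"lowland: i = {i_lowland:+.3f} (upward flow)")
```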
The following example from the North Carolina Department of Environment and
Natural Resources guidance manual on hydrogeologic characterization of fractured rock
aquifers (LeGrand 2007) illustrates how these general principles can be used in a site-
specific setting to avoid unnecessary investigative costs and help focus remedial efforts
where they are needed. The setting consists of a waste disposal site on a topographic slope,
halfway between a ridge top and a small creek (Figure 2.64). It is a common granite gneiss
setting, having no rock outcrops and presumably a thick soil–saprolite zone underlying
the slope and the adjacent slope across the creek. Mr. Smith, the owner of the facing prop-
erty across the creek, plans to drill a well, which would be about 1000 ft horizontally from
the waste-disposal site. The waste site is about 30 ft higher in elevation than the creek, and
the elevation of Mr. Smith’s well on the opposite slope is also about 30 ft above the creek.
Mr. Smith is concerned about the possibility of his well water being contaminated by leak-
age from the waste-disposal site.
Based on the surface topography and groundwater flow generalizations described
earlier, groundwater naturally flows toward the creek from opposite directions, thereby
eliminating a continuous gradient from the waste site to the well. Each is in a different
slope–aquifer system from the other (see Figure 2.2). There is no deep aquifer system to
connect the systems as fractures decrease in size and number with increasing depth. The
[Figure 2.64 labels: landfill; leachate; Mr. Smith's well; regolith; transition zone; bedrock; water table; fracture; stream; arrows indicate direction of groundwater flow.]

FIGURE 2.64
Concept of contaminant migration in fractured rock aquifers with a stream receiving groundwater discharge
from both sides. Based on the hydraulics of groundwater flow, Mr. Smith’s well is the unlikely receptor of
contaminated groundwater. (Modified from LeGrand, H. W., Sr., A Master Conceptual Model for Hydrogeological
Site Characterization in the Piedmont and Mountain Region of North Carolina: A Guidance Manual, North Carolina
Department of Environment and Natural Resources, Division of Water Quality, Groundwater Section, 40 pp.,
2007.)

creek is a hydraulic boundary for natural flow, and it is extremely unlikely that the
water table during pumping of Mr. Smith’s well would be depressed to the level of the
creek. Even if that were possible, his well should theoretically dry the creek before a
true hydraulic gradient from the waste site to the well would be possible. The contami-
nated plume from the waste site may reach the creek, where the discharging contami-
nated groundwater would mix with downstream creek flow. Based on this CSM, it seems
unlikely that Mr. Smith’s well water would be contaminated by the waste disposal site
(LeGrand 2007).
Well yields in igneous and metamorphic fractured rock aquifers are generally low com-
pared to those of sedimentary rock terrains, so use of groundwater as a supply has usually
been restricted to domestic wells or small municipal and industrial supplies. However, in
the Piedmont physiographic province of the United States, there is a growing interest in
the use of bedrock groundwater for larger supplies, especially during emergencies such
as prolonged droughts. This is mainly because most favorable surface-water sites have
already been developed and because of increased concerns regarding the environmental
impacts of reservoir construction, interbasin transfer of water, and declining surface-water
quality.
As discussed by Harned (1989) and Lindsay et al. (1991), although crystalline rocks in
the Piedmont and Blue Ridge physiographic provinces are typically described as yield-
ing only small quantities of groundwater to wells, this description is based upon large
numbers of shallow wells drilled for domestic supplies requiring 2–10 gallons per min-
ute. Unfortunately, the reported yield of a well drilled for a domestic supply is rarely an
indication of the full aquifer potential at that site. Thus, if many of these domestic wells
were drilled deeper, it is probable that additional water-producing fractures within an
aquifer would be intersected. In addition, most homes and their wells are located on
ridges, hilltops, or slopes, which are the least favorable areas for obtaining large yields of
groundwater.
Results of studies in several areas of the Piedmont physiographic province show that
the aquifers can sometimes provide significant amounts of water to wells. Daniel and
Sharpless (1983) identify more than 300 wells in an eight-county area of central North
Carolina with yields at or above 50 gallons per minute. Cressler et al. (1983) report that a
significant number of wells in the Piedmont physiographic province in Georgia yielded
more than 100 gallons per minute, and some wells yielded nearly 500 gallons per minute.
They also identify 66 wells used primarily for industrial and municipal supplies, at flow
rates significantly above those of domestic consumption, that had been in use for periods
of 12 to more than 30 years without declines in yield. Similarly, Cederstrom (1972) reports
that well yields of 100–300 gallons per minute were common for bedrock wells in the
Piedmont and Blue Ridge physiographic provinces from Maine to Virginia. Williams et al.
(2005) provide detailed discussion of water-bearing characteristics of various crystalline
rocks in the Lawrenceville, GA, area where more than a dozen high-yielding wells are
utilized for public and industrial water supply (see Figure 2.65).
Partially because of an increased demand for water supply, state and federal environ-
mental regulatory agencies in the United States have put additional pressure on both
the groundwater users and various parties potentially responsible for groundwater con-
tamination to better characterize bedrock aquifers. Figures 2.66 through 2.68 present just
a few of the many hydrogeological aspects and field investigation tools that can be used
to develop CSMs for fractured rock sites. These figures illustrate how graphics and photo-
graphs can simply and effectively convey main aspects of a fractured-rock CSM to nonge-
ologists and nonhydrogeologists alike, which is critical because of the high complexity of

FIGURE 2.65
One of more than a dozen high-yielding wells in the Lawrenceville, GA, area completed in the crystalline rock aquifer.
[Figure 2.66 graph: borehole flow rate, in gallons per minute (−0.08 to 0.12; ambient = down flow, pumping = up flow), versus depth below ground surface, 0–300 ft.]

FIGURE 2.66
Left: Heat-pulse flowmeter for characterization of fracture-specific groundwater flows in open bedrock bore-
holes under ambient and pumping conditions (shallow pump intake ~20 ft pumping at +0.1 gallons per minute).
Right: Graph of ambient versus pumping flow rates at discrete intervals in an open bedrock borehole. (Courtesy
of Peter Thompson and Scott Culkin, AMEC.)

[Figure 2.67 log, 250–350 ft below land surface: lithology (a-w/bg, predominantly amphibolite with some biotite gneiss; bg-w/a, predominantly biotite gneiss with some amphibolite; bg-bs, equal amounts of biotite gneiss and button schist; bg, biotite gneiss); water-bearing zones; major foliation openings; tadpole plot showing azimuth directions of foliation and joints; downview camera image at 266 ft shows a fracture opening along foliation-compositional layering in the biotite gneiss unit.]

FIGURE 2.67
Subsurface lithologic characteristics, water-bearing zones, a structural tadpole plot, and borehole camera images
for part of the geophysical log of well 14FF59 in Lawrenceville, GA. Images A and B show a subhorizontal frac-
ture at 297 ft below land surface formed parallel to compositional layering; high-angle joints (black arrows)
terminate into the fracture from below; the aperture is 3–4 in. (Modified from Williams, L. J. et al., Influence
of Geologic Setting on Ground-Water Availability in the Lawrenceville Area, Gwinnett County, Georgia, U.S.
Geological Survey Scientific Investigations Report 2005-5136, Reston, VA, 50 pp., 2005.)
[Figure 2.68 map: bedrock boreholes MW101R, MW103R, MW104R, MW105R, MW109R, MW121R, MW122R, MW123SR, W80, W125, and W156 with single and multiple orientations of transmissive fractures; approximate landfill boundary; power lines; Celeron Square Apartment Complex; North Hillside Road; water pollution control facility; scale 0–1200 ft.]

FIGURE 2.68
Orientation of transmissive fractures in bedrock boreholes at the University of Connecticut landfill study
area, Storrs, CT. (Modified from Johnson, C. D. et al., Borehole-Geophysical Investigation of the University of
Connecticut Landfill, U.S. Geological Survey Water-Resources Investigations Report 01-4033, Storrs, CT, 42 pp.,
2002.)

fractured-rock aquifers. Additional visualizations, including animations of characterization and remediation of several fractured-bedrock sites, are provided on the accompanying DVD courtesy of Peter Thompson, Scott Culkin, and Rod Rustad of AMEC.

2.3.4 Karst Aquifers


Developing a CSM for a karst site is the most difficult task in hydrogeology. Countless
examples of erroneous concepts that resulted in wasted efforts and funds abound for just
about every aspect of hydrogeologic practice in karst terrains. This includes construction
of dams and reservoirs (see Figure 2.69), tunnels, and other infrastructure, development
of water-supply sources, and groundwater remediation. In addition to the natural com-
plexities of karst aquifers, there are two other main reasons for the ongoing difficulties in
developing accurate CSMs for karst sites:

FIGURE 2.69
Montejaque Dam in the Sierra de Grazalema, Spain, was abandoned before the Second World War because of
large losses of water from the reservoir and at the dam site through numerous sinks and other karst features.
This arc dam on the River Compobuche-Guadares, a tributary of the Guadiaro River, is 83.75 m high with an
84-m-long crest. After heavy rains, the reservoir behind the dam fills with water only to lose it soon after rains
stop. (Courtesy of Dr. Petar Milanović.)

• Attempts to treat karst as an equivalent porous medium (EPM) and develop hydro-
geologic concepts only applicable to intergranular aquifers (sand and gravel).
This leads to numeric modeling of karst aquifers with inappropriate models (see
Chapter 5).
• Regulatory policies and politics that purposely or inadvertently result in mis-
classifying a site as nonkarstic. Typical examples include use of terms such as
carbonate aquifer or fractured-rock aquifer while ignoring clear evidence of
karstification.

Unfortunately, both unnatural reasons listed above are still very common in hydrogeologic practice despite significant advances in characterization, quantification, and manage-
ment of karst hydrogeologic systems made over the last several decades worldwide. While
there is no excuse for a working professional hydrogeologist to treat karst as an EPM or
fractured rock, some stakeholders may be less critical when it is government regulatory
agencies engaging in this erroneous practice. It is suspected that regulators often take
this stance because the very presence of karst may mean that a task at hand is infeasible
(a related term is impracticable; see discussion in Chapter 8). In many parts of the world,
however, the general public living in karst environments understands and can appreciate
this simple fact and is given enough credit by various government agencies when there is
a common problem to solve. This, however, is not the case in some highly litigious societ-
ies, such as the United States, where there are examples of continuous, misguided, and
expensive efforts in trying to deal with groundwater contamination in karst without even
acknowledging the nature of the problem. More discussions on various difficulties associ-
ated with groundwater remediation in karst, including technical impracticability, are given in
Chapter 8.
As discussed in Section 2.3.3, it is not uncommon that at fractured bedrock sites a zone
of concentrated groundwater flow and preferential flow of contaminants exists within
the weathered transition zone between the overlying clayey saprolite and the competent
unweathered rock (see Figure 2.59). In such cases, the saprolite and the underlying bed-
rock are much less or not at all affected downgradient of the contaminant source zone.
Unfortunately, although this concept is not applicable to karst, it has been applied indis-
criminately at some contaminated sites in a misguided effort to localize the problem and
justify various attempts of active groundwater remediation at any cost. Such approaches,
aimed at pleasing certain stakeholders and/or avoiding hard decisions, have a high likeli-
hood of failure in karst. This is simply because the karstification process creates preferen-
tial flow paths deep into the underlying bedrock (such as limestone and dolomite). These
preferential flow paths can develop and interconnect at various depths and can be vertical,
horizontal, and anything in between. They also share one common element that may be
the reason for all the controversy: Their formation is often unpredictable in orientation
(Figure 2.70). This is true for karst aquifers worldwide: in Texas, Tennessee, Kentucky,
Virginia, or Florida in the United States; in France, China, Russia, or Jamaica; and in clas-
sic karst of the Dinarides (the area in the Balkans in Europe where the word karst comes
from). The reader can extend the list to include all areas, large and small, that are under-
lain by carbonate and other soluble rocks.
What some still find hard to understand, however, is that the terms karst and karst
aquifer do not necessarily imply the presence of sinkholes at the land surface and/or caves
that can be easily accessed by a passerby or a spelunker (Figures 2.71 through 2.74). In
other words, in an aquifer developed in limestone, dolomite, and other soluble rocks, there
will always be conduits and voids of varying size and (unknown) spatial extent deep in
the subsurface that even the most skillful caver cannot access (Figure 2.75) or the most
advanced technology cannot discover. Consequently, the role of a karst hydrogeologist in
developing CSMs is to cost-effectively use multiple investigative tools applicable to karst,
narrow down the inevitable uncertainties, and minimize risk associated with management

FIGURE 2.70
Image of surveyed cave passages of the Wind Cave created by the WinKarst computer program. (Courtesy of
Resurgent Software, available at http://www.resurgentsoftware.com/winkarst.html.)

FIGURE 2.71
Two sinkholes formed in surficial sand overlying the karstic Floridan aquifer in Florida. Left: Dairy Farm Sink
southeast of Bartow. Right: Sink in an orchard in south Hillsborough County. (Courtesy of Francis Sowers;
photographs by George Sowers.)

FIGURE 2.72
Convoluted pattern of sinkholes and residual limestone hills in an aerial photograph taken by George Sowers
in the Dominican Republic. Note the few houses on the right, near patches of brown bare soil. (Courtesy of
Francis Sowers.)

FIGURE 2.73
Naturally formed sinkholes of varying sizes and hydrologic function in different parts of the world all have one
thing in common: they are the main signatures of the underlying karst aquifers.

FIGURE 2.74
Hydrogeologist Radisav Golubović in front of the easily accessible Petnica Cave near Valjevo, Serbia.

FIGURE 2.75
Small karst conduits entering a larger passage in a cave developed in Edwards Limestone, TX. Note the camera
lens for scale. (From Kresic, N., Groundwater Resources: Sustainability, Management, and Restoration, McGraw Hill,
New York, 852 pp., 2009. With permission.)

decisions. Understanding that the risk can never be eliminated and conveying this to all
interested parties is arguably the most important aspect of karst hydrogeology regardless
of the specific project goal.
As discussed by Worthington and Ford (2009) and based on extensive research, includ-
ing laboratory and field experiments and geochemical modeling, the invariable result of
flow through limestone aquifers is that the positive feedback between increasing flow and
increasing dissolution results in a high-permeability, self-organized network of channels
(conduits). If the flux of water through an aquifer is minimal, such as in some confined
aquifers with limited recharge, then the formation of channel networks will be retarded.
Conversely, in confined aquifers where there is a substantial flux of water and in uncon-
fined aquifers, channel/conduit networks are likely to convey most of the flux of water
through the aquifer after periods of 10³–10⁶ years following the onset of substantial flow
through the aquifer. This range of times is short in comparison to the time that most
unconfined limestone aquifers have been functioning, so it is reasonable to infer that most
such aquifers should have well-developed channel/conduit networks.
The density, size, and geometry of individual conduits in the karst aquifer network can
vary by many orders of magnitude depending on site-specific characteristics such as min-
eral composition, stratigraphy, tectonic fabric, recharge mechanisms, and position of the
erosional base. One of the most common errors made by hydrogeologists not experienced
in karst is to rely solely on investigative borings and monitoring wells when developing a
CSM for a limestone (carbonate) aquifer site. Declaring that the aquifer (site) is nonkarstic
based on three, five, or even ten borings that did not encounter a large void is a typical
example. Similarly, a nonkarst hydrogeologist may conclude that the aquifer is not karstic
because test wells cannot produce large yields that, somehow, are expected from a karst
aquifer. Figure 2.76 illustrates these points. One can easily imagine a boring advanced
into the large cavity shown in the top photograph or the smaller cavity in the bottom

FIGURE 2.76
Cavities of varying sizes at a deep construction site in Paleozoic limestone, Hartsville, TN. The measured matrix
porosity is 2%. (Courtesy of Francis Sowers; photographs by George Sowers.)

photograph. In such a case, hardly any hydrogeologist would argue that the limestone is
not karstified (an inexperienced driller not equipped for drilling in karst may even lose
some of his/her equipment when this happens). One could also imagine another boring
completed in solid rock adjacent to either cavity that could be used, by some, as evidence of
nonkarst. Figure 2.77 shows that, unfortunately, surprises can happen even after extensive
and expensive investigations performed by professionals well versed in the nature of karst.
[Figure 2.77 plan view: dam, reservoir, boreholes #1–#4, cave, limestone, and axis of grout curtain; scale 0–50 m.]

FIGURE 2.77
Cave in the left abutment of Sklope Dam, Croatia, discovered with an exploration gallery excavated after exten-
sive grouting failed to achieve backpressure. (Modified from Božičević, S., Primjena Speleologije Pri Injektiranjima
u Krsu (Application of Speleology in Grouting of Karst Terranes), First Yugoslav Symposium on Hydrogeology
and Engineering Geology, Herceg Novi, 1971.)

Borings #1 and #4 at a future dam site encountered single small voids, whereas borings #2 and
#3 were completed in solid limestone. A grout curtain subsequently designed to deal with lim-
ited karstification failed to achieve results even after injecting disproportionately more grout
than what was designed. A 20-m-long exploration gallery was then excavated into the dam
abutment, and a cave was discovered with a large hall up to 20 m high. The cave was surveyed
in detail, resulting in a complete redesign of the grout curtain. As can be seen, parts of the cave
almost completely surround boring #3. Many similar examples illustrating the elusive nature
of karst aquifers, including a detailed discussion of various risks associated with engineering
projects in karst, are provided in an excellent book by Milanović (2004).
The situations shown in Figures 2.76 and 2.77 can happen at any depth within a karst
aquifer including hundreds or even thousands of feet below the ground surface and below
the water table, provided, of course, that the carbonate sediments are that thick. The depth
to which karstification has progressed, the so-called “base of karstification,” is not uni-
form and varies from site to site depending on usual factors—rock (mineral) composi-
tion and stratigraphy, tectonic fabric, and recharge/discharge mechanisms. The degrees
of karstification and fracture density generally decrease with depth but can locally reach
much greater depths in fault zones as shown in Figure 2.78 (Milanović 2004). Groundwater discharge, which acts as the erosional base for karstification, is not necessarily at a static elevation and may be progressively lowered as a result of surface stream incision, sea-level regression, or other factors. The depth of karstification and the depth to the water table would conse-
quently increase in time as well. For example, in some areas of the Dinaric carbonate plat-
form in Europe, the depth to the water table exceeds 1000 meters (Figure 2.79), and cavities
and voids of varying size have been discovered with investigative drilling at depths of
several thousand meters. Numerous large ascending and submarine springs along the

FIGURE 2.78
Schematic representation of the depth of karstification: (1) relationship between karstification and depth;
(2) zone of base flow; (3) zone of water table fluctuation; (4) distribution of active karst porosity; (5) base of
karstification; (6) curve of electrical sounding; (7) nonkarstified limestone; (8) level of water table. (Modified
from Milanović, P. T., Water Resources Engineering in Karst, CRC Press, Boca Raton, FL, 312 pp., 2004.)

Mediterranean coast such as the one shown in Figure 2.80 testify to deep karstification
below the current sea level. Similarly, along the coast of Florida in the United States and
Yucatan in Mexico there are numerous submarine springs and submerged cave passages
testifying that the sea level in the past was also lower (Figure 2.81).
An additional complicating factor in analyzing karst aquifer characteristics is the pos-
sible presence of paleokarst. This term usually describes karst features, now buried below
noncarbonate rocks, which were developed during periods when carbonates were exposed
at the surface. The same general area with carbonate rocks may have also been subject to
multiple periods of karstification, depending on the depositional and tectonic history and
fluctuations of sea level. In all these cases, it is possible to find very transmissive zones,
together with karst conduits, at varying aquifer depths and/or below overlying noncar-
bonates. It is therefore not advisable to make generalizations, based on some karst litera-
ture examples, as to the expected depth of karstification at any particular site.
The ill-defined term epikarst has recently gained popularity, bringing more confusion
into the already complicated task of developing hydrogeological CSMs in karst. For many
karst hydrogeologists involved in solving practical engineering problems throughout the
world, this term simply refers to the shallow, more weathered (more karstified) portion
of the subsurface. Higher karstification is a result of a higher density of fractures at and
near the land surface (which is typical for any solidified rock) and an abundant supply of
carbon dioxide from the atmosphere and the soil layer, where present (dissolution of CO2
in water creates carbonic acid, the main agent of carbonate rock dissolution and karstifica-
tion). Figure 2.82 illustrates one such highly weathered zone, equivalent to epikarst, with
an abundance of unconsolidated residuum and large infilled dissolutional features.
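The dissolution chemistry summarized in the preceding paragraph can be written as two standard reactions (a textbook simplification of the full carbonate equilibria):

$$\mathrm{CO_2 + H_2O \rightleftharpoons H_2CO_3}$$

$$\mathrm{CaCO_3 + H_2CO_3 \rightleftharpoons Ca^{2+} + 2\,HCO_3^{-}}$$

Dissolved carbon dioxide forms carbonic acid, and carbonic acid in turn dissolves calcite, progressively enlarging fractures into features such as those in Figure 2.82.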
Detailed description of the epikarst concept is given by Ford and Williams (2007), among
others. Common to this and other descriptions is that epikarst is viewed as a perched
aquifer sitting well above the main saturated zone of the karst aquifer (the main, per-
manent water table). From this perched aquifer (epikarst), water percolates to the main,

FIGURE 2.79
Top: Deep jamas (potholes) on the Velebit Mountain, Croatia. As of August 2010, the depth of Lukina Jama, the
deepest Croatian pothole, is 1421 m including 40 m of the submerged passage at the bottom. This part of the
Dinaric karst is drained by a number of coastal and submarine springs in the Adriatic Sea, such as Jablanac
(shown schematically in the left). (Courtesy of Croatian Speleological Server, www.speleologija.hr, drawing
by Darko Bakšić.) Bottom: The entrance to the 1421-m-deep Lukina Jama in the Velebit National Park, Croatia.
(Courtesy of Vlado Božić.)

deeper aquifer via isolated vertical drains (shafts). Because of the contrast in permeability
between the epikarst (permeability is higher in the shallower, more weathered rock) and
the underlying competent rock, as well as limited transfer capacity of the isolated drains
(which are also narrowing down with depth), the epikarst is underdrained. The result of
this is formation of a permanent or semipermanent perched saturated zone (perched aqui-
fer), the characteristics of which can vary based on recharge pattern (climate of the site)
and the degree of weathering and presence of soil/residuum.

FIGURE 2.80
Submarine spring in the Adriatic Sea near Brela, Croatia.

FIGURE 2.81
Cave divers in submerged passages of the Nohoch Nah Chich in the Yucatan Peninsula. The abundant speleo-
thems, stalactites, stalagmites, flowstone, and columns were formed prior to cave submergence. (Courtesy of
David Rhea, Global Underwater Explorers.)

FIGURE 2.82
Highly weathered and karstified carbonates (epikarst) exposed in a road cut near Knoxville, TN. (Courtesy of
Francis Sowers; photograph by George Sowers.)

Unfortunately, the above-described concept of epikarst is being applied indiscriminately, including at sites where there is clear evidence of its complete absence. In fact, many
karst areas worldwide show the absence rather than the presence of perched aquifers sit-
ting above some main, deeper, permanently saturated karst aquifer zone. It also would be
erroneous, at any karst site, to assume that transfer of recharge (percolating) water from
the land surface to some deeper, permanently saturated karst aquifer zone is happening
only via vertical shafts, whatever their number and density may be. Countless worldwide
examples provided by monitoring wells, borings, surveyed cave passages, tunnels, and
galleries at various depths and groundwater tracing tests show that the concept of epikarst
as explained above does not hold water in many site-specific settings (Kresic 2012).
There are various examples provided by completely dry caves with horizontal, vertical,
and otherwise oriented passages located very close to the land surface and with completely
dry shallow monitoring wells/piezometers completed above the cave ceilings. People in
many parts of the world cannot obtain drinking water from shallow drilled or dug wells in karst terrains for any reasonable period because the subsurface is completely dry for hundreds of feet of depth. All precipitation that falls over the karst terrain quickly finds its
way deep into the subsurface through fractures and dissolution features without creation
of any perched epikarst aquifer. Trickles of water or waterfalls that appear in dry cave pas-
sages at various depths or in accessible, observable vertical shafts may last for some time
but will disappear after cessation of rainfall as quickly as they appeared. Deep sinkholes
may never show semipermanent or permanent, however small, seeps or springs coming
out of their walls from the shallow epikarst zone. Figures 2.83 through 2.85 illustrate the
absence of epikarst water in terrains as geographically diverse as Croatia, China, and the
Dominican Republic.
Notwithstanding that epikarst conditions may be applicable at some sites, the authors
caution novice karst hydrogeologists to carefully design field investigations and continu-
ously monitor groundwater characteristics at their site. This will enable them to come to
informed conclusions as to the presence of perched water in epikarst at their particular
site.

FIGURE 2.83
Red Lake near Imotski, Croatia. Total depth of this deepest sinkhole in the world is 518 m, and the depth of
water in the sinkhole during average low conditions is about 250 m. The lake water level, which is also the level
of the water table in the karst aquifer, fluctuates approximately 22 m. (Courtesy of the Croatian National Tourist
Board, www.croatia.hr.)

FIGURE 2.84
Stone Forest in Shilin County, 90 km from Kunming, China. (Courtesy of Francis Sowers; photograph by George
Sowers.)

FIGURE 2.85
Completely dry railroad cut in limestone, Dominican Republic, showing no signs of perched epikarst zone.
(Photograph by George Sowers; printed with kind permission of Francis Sowers.)

As discussed in detail by Milanović (1979, 1981, 2004, 2006) and Kresic (2007, 2009, 2010,
2012), groundwater level measurements in monitoring wells (piezometers) in karst must be
designed, performed, and interpreted with great care. The most common conceptual error
in karst hydrogeology is to draw maps of the potentiometric surface (water table, piezomet-
ric surface, and hydraulic head map are all interchangeably used terms) and use them to
estimate groundwater flow directions and hydraulic gradients (Figure 2.86; see also Figure
4.32 and Figure 4.33). Preferential flow paths create local troughs (linear depressions) in
the potentiometric surface that may be detected only if a sufficient number of piezometers
(monitoring wells) screened at the right depths and on either side of the conduit are avail-
able. This, however, is not the case at many sites because of the costs and difficulties associ-
ated with drilling in karst. In addition, the preferential flow path(s) of greatest importance
to the specific project may never be found because of the many uncertainties discussed
previously. In any case, extensive drilling in karst should be performed only after some
preliminary investigations using noninvasive techniques such as geophysics, remote sens-
ing, field mapping, and dye tracing are conducted to locate possible preferential flow paths
(see Figures 2.87, 2.88, 2.89a, and 2.89b).
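To make this caution concrete, the sketch below applies the classic three-point (plane-fit) estimate of gradient magnitude and flow direction to three wells. The calculation itself is standard for intergranular aquifers; the point of the passage above is that in karst a conduit trough between the wells can make the smoothly interpolated result meaningless. All coordinates and heads are invented for illustration:

```python
# Three-point estimate of the water-table gradient: fit the plane
# h = a*x + b*y + c through three (x, y, head) measurements.
import numpy as np

# Hypothetical wells: (x in meters, y in meters, head in meters)
wells = np.array([[  0.0,   0.0, 100.0],
                  [500.0,   0.0,  99.2],
                  [250.0, 400.0,  99.6]])

A = np.c_[wells[:, 0], wells[:, 1], np.ones(3)]
a, b, c = np.linalg.solve(A, wells[:, 2])

magnitude = np.hypot(a, b)                       # |grad h|, dimensionless
azimuth = np.degrees(np.arctan2(-a, -b)) % 360   # direction of decreasing head

print(f"apparent gradient = {magnitude:.4f}, apparent flow azimuth = {azimuth:.0f} deg")
# In karst, a conduit between these wells could carry most of the flow in a
# direction unrelated to this smooth, three-point interpolation.
```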
Karst conduits filled with water partially or fully (under pressure) may act as strong
hydraulic head–dependent sources of water (so-called equipotential boundaries), just like

FIGURE 2.86
Groundwater flow and its map presentation in a karst (left) and an intergranular (right) aquifer. (1) Preferential
flow path, such as a fracture, fault zone, or karst conduit/channel; (2) fracture/fault; (3) local flow direction;
(4) general flow direction; (5) position of hydraulic head (water table); (6) hydraulic head contour line; (7)
groundwater divide. (Modified from Kresic, N., Kvantitativna hidrogeologija karsta sa elementima zastite podzemnih
voda (Quantitative Karst Hydrogeology with Elements of Groundwater Protection; in Serbo-Croatian), Naucna
Knjiga, Belgrade, 192 pp., 1991.)

FIGURE 2.87
Alignment of sinkholes marked by arrows may indicate a preferential flow path in the underlying karst aqui-
fer. This color-shaded surface map of an area near Ocala, FL, was created from high-resolution LiDAR digital
elevation data.

FIGURE 2.88
Example of electrical resistivity imaging in karst. Two parallel resistivity inversion sections (20-ft line separa-
tion) show large low resistivity zones (blue), indicating the location, depth, and approximate dimensions of the
dissolution and large cavity structures. (Courtesy of Finn Michelsen, AMEC.)

FIGURE 2.89
(a) This 3D apparent resistivity model shows high-resistivity zones (orange-red) representing competent and
weathered dolomitic limestone. The low-resistivity zones (blue) represent the locations of cavities and solution
zones. (Courtesy of Finn Michelsen, AMEC.)

FIGURE 2.89 (Continued)


(b) The 3D model shown in Figure 2.89a can be sliced horizontally at any desired depth to create maps of likely cavities and solution zones. Shown is a depth slice at 20 ft below ground surface. (Courtesy of Finn Michelsen, AMEC.)

surface streams do when hydraulically connected with the underlying aquifer. Conduits
can also be connected with surface streams, providing for an even stronger equipotential
boundary as illustrated by the following example. Figures 2.90 and 2.91 show the general
setup and time-drawdown data for a karst aquifer test performed in the Sabana Seca/Vega
Baja area in Puerto Rico (Torres and Diaz 1984). There is little doubt that the drawdown
recorded at the pumping Ceiba well was almost immediately influenced by a strong equipoten-
tial boundary (after less than 10 minutes) and then remained unchanged for the remainder of
the test. Just a few hours after the test was initiated, freshwater shrimp and fish started flowing
out of the test well, clearly indicating a cavernous hydraulic connection to Rio Cibuco located
approximately 500 ft southwest of the test well (Arturo Torres, personal communication).
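The flattening of the curves in Figure 2.91 is the classic signature of an equipotential (constant-head) boundary. One conventional way to reproduce that behavior, sketched below with entirely hypothetical parameters rather than the actual test data, is the Theis solution with an image recharge well mirrored across the boundary; scipy.special.exp1 evaluates the Theis well function W(u):

```python
# Theis drawdown with an image recharge well representing a constant-head
# boundary (e.g., a conduit-connected river). All parameter values hypothetical.
import numpy as np
from scipy.special import exp1  # exp1(u) = W(u), the Theis well function

Q  = 0.113   # pumping rate, m^3/s (roughly 1800 gal/min)
T  = 0.05    # transmissivity, m^2/s
S  = 1e-3    # storativity
r  = 25.0    # pumping well to observation point, m
ri = 300.0   # observation point to image recharge well, m

t = np.logspace(0, 5, 6)  # time, seconds
u, ui = r**2 * S / (4 * T * t), ri**2 * S / (4 * T * t)
s = Q / (4 * np.pi * T) * (exp1(u) - exp1(ui))  # drawdown, m

for ti, si in zip(t, s):
    print(f"t = {ti:8.0f} s   s = {si:.3f} m")
```

As time grows, the bracketed difference approaches ln(ri²/r²), so the computed drawdown stabilizes at a constant value, mimicking the early flattening recorded at the Ceiba well.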
Karst conduits and voids do not have to be of such size to allow passage and provide
habitat for fish or other creatures, small or large (Figure 2.92). Regardless of their size,
however, they are the main reason for the unique characteristics of karst aquifers, that is,
groundwater flow under pressure akin to pipe flow in conventional fluid mechanics. This
pressure can be propagated quickly and to great distances (Figure 2.93) following major
recharge episodes. The hydraulic head can sometimes rise tens of meters, resulting in a
complete flooding of all interconnected voids (channels, conduits) in a matter of hours
(Figure 2.94). This buildup of pressure in the hydraulic system is typical for karst aquifers
[Figure 2.90 cross section, south to north: Rio Cibuco, the pumping Ceiba well, and observation wells OW-1A, OW-1, OW-2, and OW-3 completed through alluvium (sand and gravel) and surficial clay deposits into limestone that is cavernous at the site; vertical scale 50 to −250 ft above sea level, horizontal distance 0–1800 ft.]

FIGURE 2.90
Vertical cross section of observation wells and a pumping well of the Cibuco aquifer test near Vega Baja, Puerto
Rico. (From Torres, A. G., and Diaz, J. R., Water Resources of the Sabana Seca to Vega Baja Area, Puerto Rico, U.S.
Geological Survey Water Resources Investigations Report 82-4115, 53 pp., 1984.)

developed in compacted older carbonates of Paleozoic and Mesozoic age having a low
matrix porosity of a few percent. A monitoring well located near and hydraulically con-
nected to the conduit system (preferential flow paths) and/or land surface will have a fast
response time to recharge episodes as illustrated in Figure 2.95. At the same time, a well
completed in less fractured or solid rock may show very little or no response. In

[Figure 2.91 log-log plot: drawdown, in feet (0.01–10), versus time, in minutes (0.01–10,000), for the Ceiba well (Q = 1800 gallons per minute) and observation wells OW-1A (80 ft), OW-1 (104 ft), OW-2 (230 ft), and OW-3 (860 ft from the pumping well).]

FIGURE 2.91
Time-drawdown curves at a Ceiba well and four observation wells recorded during the Cibuco aquifer test near
Vega Baja, Puerto Rico. (From Torres, A. G., and Diaz, J. R., Water Resources of the Sabana Seca to Vega Baja
Area, Puerto Rico, U.S. Geological Survey Water Resources Investigations Report 82-4115, 53 pp., 1984.)

FIGURE 2.92
Karst underground is inhabited and visited by various creatures that may have competing interests regarding
development of a CSM. (Courtesy of Marin Kresic.)

contrast, monitoring wells in young carbonates with high matrix porosity (see Figure 2.96)
will often have much less pronounced water-level fluctuations regardless of their location
in the aquifer. This is because groundwater storage and flow rates are equally significant
in both the rock matrix and the conduits.
As discussed by Kresic (2007, 2012), Darcy’s law for calculation of groundwater velocity
and flow rate in porous media is not valid for karst. For example, the total energy line of
the flow can only decrease from the upgradient cross section toward the downgradient


FIGURE 2.93
Formation and movement of a groundwater wave caused by a localized recharge event. Velocity of the wave is
C0 at time t0, C1 at time t1, and C2 at time t2, where C0 > C1 > C2 because of decreasing hydraulic gradients. A is the
volume of old water stored in the aquifer and discharged under pressure at the spring because of the recharge
event. The newly infiltrated water will start discharging at the spring with delay. A hypothetical conduit sys-
tem representing the flow under pressure is depicted in yellow. (Modified from Yevjevich, V. M., Karst Water
Research Needs, Water Resources Publications, Littleton, CO, 1981.)

cross section of the same pipe (conduit) because of energy losses. On the other hand,
the hydraulic head may go up and down along the same pipe as the cross-sectional area
increases or decreases, respectively. The total energy of the flow, which includes the flow
velocity component, can be directly measured only by the Pitot device, the installation of
which is not feasible in most field conditions. Monitoring wells and piezometers, on the
other hand, only record the hydraulic head, which does not include the flow velocity com-
ponent. It is therefore conceivable that two piezometers in or near the same karst conduit
with rapid flow may not provide useful information for calculation of the real flow velocity
and flow rate between them and may even falsely indicate the opposite flow direction. In
fact, as discussed by Bögli (1980, p. 87), it is possible that water rising through a tube in an
enlargement passage can flow backward over the main flow conduit and into another tube
that begins at a narrow passage in the same main conduit.
In addition, even when one attempts to apply the Bernoulli equation for real viscous flu-
ids, many uncertainties associated with the geometry of karst conduits make calculations
of groundwater velocity in the conduit network not feasible.
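For reference, the distinction drawn above can be stated compactly. The total (energy) head at a conduit cross section is given by the standard Bernoulli expression (a textbook relation added here for clarity, not an equation from site data):

$$H = z + \frac{p}{\rho g} + \frac{v^2}{2g}$$

where z + p/(ρg) is the hydraulic (piezometric) head that a monitoring well or piezometer records, while v²/(2g) is the velocity head that only a Pitot-type device can capture; for a real viscous fluid, an additional head-loss term must be carried between cross sections, and it is there that the poorly known conduit geometry enters.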
There are additional complicating factors when attempting to calculate flow through
natural karst conduits using the pipe approach, even when Pitot tubes are correctly used:

1. Flow through the same conduit may be under pressure (i.e., full) or as an open
channel (i.e., with a free surface) at different locations.
2. Because pipe/conduit walls are more or less irregular (rough), the related coeffi-
cient of roughness has to be estimated and inserted into the general flow equation
(see the calculation sketch following this list).
3. Conduit cross section may vary significantly over short distances and in the same
general area.
4. The flow may be both laminar and turbulent in the same conduit, depending on
the flow velocity, cross-sectional area, and wall roughness.

FIGURE 2.94
Encounter with a summer storm in Bat River Cave, Minnesota Cave Preserve. (1) Ladder extending from the
newly drilled vertical entrance to the cave; visible is the normal, typically low and calm water flow; (2) looking up
from the cave; (3) after a sudden intense storm on August 18, 2007, a waterfall occurred where a dry ceiling existed
before (4:03 pm); (4) the water begins to rise and becomes turbulent (4:17 pm); (5) cavers round the bend and
struggle against the current to retreat (4:22 pm); (6) seconds after this photo was taken, it was almost impossible
to stand in the passage without getting swept away (4:59 pm). (Courtesy of John Ackerman; available at http://
www.karstpreserve.com/index.html.)

5. More than one conduit is usually responsible for transferring groundwater in the
aquifer from one general area to another. The difficulties described earlier mul-
tiply when attempting to calculate groundwater flow rates through a network of
conduits. While appropriate fluid mechanics solutions exist for pipe networks,
the major challenge is accurately identifying and characterizing all the disparate
branches in the field.
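To illustrate factor 2, the following minimal Python sketch (our addition, not from this text; the function name and all dimensions are hypothetical) applies the Darcy–Weisbach equation with the Swamee–Jain explicit friction-factor approximation to a full circular conduit. Even with everything else held fixed, the computed velocity changes severalfold across plausible roughness values, underscoring why unmeasurable conduit geometry defeats such calculations in real karst.

```python
# Minimal sketch (not from this book): sensitivity of conduit flow velocity
# to wall roughness via the Darcy-Weisbach equation, with the Swamee-Jain
# explicit approximation of the friction factor for turbulent pipe flow.
# All dimensions are hypothetical.
import math

def conduit_velocity(head_loss_m, length_m, diameter_m, roughness_m,
                     nu=1.3e-6, iterations=50):
    """Mean velocity (m/s) in a full circular conduit by fixed-point
    iteration: guess v -> Reynolds number -> friction factor -> new v."""
    g = 9.81
    v = 1.0  # initial guess, m/s
    for _ in range(iterations):
        re = v * diameter_m / nu
        f = 0.25 / math.log10(roughness_m / (3.7 * diameter_m)
                              + 5.74 / re ** 0.9) ** 2
        # Darcy-Weisbach solved for v:  h_L = f (L/D) v^2 / (2g)
        v = math.sqrt(2 * g * head_loss_m * diameter_m / (f * length_m))
    return v

# Same hypothetical conduit (1 m diameter, 500 m long, 2 m of head loss),
# three plausible roughness heights:
for eps in (0.0001, 0.005, 0.05):  # m
    print(f"roughness {eps} m -> v = "
          f"{conduit_velocity(2.0, 500.0, 1.0, eps):.2f} m/s")
```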

[Figure 2.95 graphic: two panels (a and b) showing hydraulic head and rainfall versus time for flow dominated by conduits, fractures, or matrix.]

FIGURE 2.95
Response of the hydraulic head in monitoring wells to different types of flow in karst aquifers. (a) Rapid
conduit flow after major recharge events and no significant storage in the matrix; (b) delayed and dampened
response of the aquifer matrix. Flow dominated by fractures may include any combination of these two
extremes, whereas monitoring wells completed in solid rock may show no response at all. (From Kresic, N.,
Hydrogeology and Groundwater Modeling, Second Edition, CRC Press, Taylor & Francis Group, Boca Raton, FL,
807 pp., 2007.)

Whatever the method of estimating groundwater velocity and flow rate in a karst aquifer
may be, one should be very careful when making a (surprisingly common) statement such
as “groundwater velocity in karst is generally very high.” Although this may be true for
fast confined flow taking place in karst conduits, a disproportionately larger volume of any
karst aquifer has relatively low groundwater velocities (laminar flow) through small frac-
tures or fissures and the rock matrix. One should therefore always keep in mind that karst
aquifers, unlike any other aquifer type, have three types of porosity (matrix, fracture, and
solutional or conduit/channel), and the groundwater velocity can vary over many orders
of magnitude in the same aquifer.
One common method for determining groundwater flow directions and apparent flow
velocities in karst is dye tracing (Figure 2.97). However, most dye tracing tests in karst
are designed to analyze possible connections between known (or suspect) locations of

FIGURE 2.96
Miami oolitic limestone of the Biscayne aquifer, Florida, with hydraulic conductivity >1000 ft/day. High pri-
mary porosity of 40%–50% is further increased by karstification. (Courtesy of Francis Sowers; photograph by
George Sowers.)

surface water sinking and locations of groundwater discharge (springs). Because such
connections involve some kind of preferential flow paths (sink-spring type), the appar-
ent velocities calculated from the dye tracing data are usually biased toward the high
end. Based on results of 43 tracing tests in karst regions of West Virginia (Jones 1997), the
median groundwater velocity is 716 m/day, while 50% of the tests show values between
429 and 2655 m/day (25th and 75th percentile of the experimental distribution, respec-
tively). It is interesting that, based on 281 dye tracing tests, the most frequent velocity
(14% of all cases) in the classic Dinaric karst of Herzegovina (Europe), as reported by
Milanović (1979), is quite similar: between 864 and 1,728 m/day. Twenty-five percent of
the results show groundwater velocity greater than 2655 m/day in West Virginia and
greater than 5184 m/day in Herzegovina. The West Virginia data do not show any obvi-
ous relationship between the apparent groundwater velocity and the hydraulic gradient,

FIGURE 2.97
Dye tracing test with uranine in the karst system of the Alta Cadena range near Malaga, Spain. (Courtesy of
Dr. Bartolome Andre Navarro, CEHIUMA.)

again disproving the notion that Darcian EPM techniques can be used to characterize or
quantify flow.
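Because the straight-line distance is usually all that is known, the apparent velocity from a trace is simply distance over travel time. A minimal Python sketch of the calculation and of the percentile summaries quoted above (our addition; all numbers are hypothetical, not the West Virginia or Herzegovina data):

```python
# Minimal sketch (not from this book): the apparent velocity from a trace is
# straight-line distance over travel time; the true along-conduit velocity
# is higher because the actual flow path is longer than the straight line.
# All numbers are hypothetical.
sink_to_spring_m = 3200.0   # straight-line sink-to-spring distance, m
first_arrival_days = 1.5    # injection to first dye detection, days

print(f"apparent velocity = "
      f"{sink_to_spring_m / first_arrival_days:.0f} m/day")

# Percentile summary of a set of such tests (illustrative values only):
import statistics
v = [320, 430, 716, 1500, 2655, 5200]   # m/day
print("median:", statistics.median(v),
      "quartiles:", statistics.quantiles(v, n=4))
```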
Confined karst aquifers, which do not have major concentrated discharge points in the
form of large springs, generally have significantly lower groundwater flow velocities. This
is regardless of the predominant porosity type because the whole system is under pres-
sure, and the actual displacement of old aquifer water with the newly recharged one is
rather slow. For example, Hanshaw and Back (1974) discuss the results of groundwater flow
velocity estimates using carbon-14 isotope dating for the confined portion of the Floridan
aquifer in central Florida. The average groundwater velocity based on 40 measurement
points is 6.9 m/year or 0.019 m/day.
One very important consequence of karstification is that the conduit/channel network
continues to expand both laterally and vertically. As a result, karst aquifers can often
develop across topographic drainage divides where deposits are sufficiently thick and
extensive. The two most obvious results of this inconsistency between topographic and
groundwater divides in karst terrains are sinking streams and large springs (Cvijić 1893,
1918, 1924). In fact, karst aquifers give rise to the largest springs in the world (Kresic and
Stevanović 2010; see also Figure 2.98) and are, in general, drained by only one or several
springs because of the self-organized conduit network (Figure 2.99). Dye tracing, the only
reliable method for determining drainage areas of karst springs and karst groundwater
drainage areas in general, often results in unexpected connections between surface water
and groundwater (Figure 2.100). These connections also often vary in time depending on

FIGURE 2.98
Buna Spring near Mostar, Herzegovina, the largest in the Dinaric karst and one of the largest in the world, with
a maximum flow rate >300 m³/s. This ascending spring issues from a siphonal cave explored by cave divers
to a depth of 68 m. Note the oblique fault and the former, higher spring cave, now dry.

FIGURE 2.99
Karst spring in the Cirque de Consolation in the French Jura Mountains during high-flow conditions. The water
emerges from conduits in Jurassic karst limestone and flows into a cave-like hall located in a steep cliff. Shortly
after emerging at the spring, the water forms an impressive waterfall. (Courtesy of Dr. Nico Goldscheider.)

FIGURE 2.100
Results of dye-tracing investigations that successfully identified a hydrologic connection between a sinkhole
or a losing stream reach and a major spring in south-central Missouri. Dye traces are in pink; different color
polygons represent various land uses. (Modified from Imes, J. L. et al., Recharge Area, Base-Flow and Quick-
Flow Discharge Rates and Ages, and General Water Quality of Big Spring in Carter County, Missouri, 2000–04,
U.S. Geological Survey Scientific Investigations Report 2007-5049, 80 pp., 2007.)

the season, so dye tracing tests must be repeated with different dyes and for different
hydrologic conditions to accurately delineate surface and subsurface areas contributing
water to a spring or a stream. Assessing temporal variation is a critical step at any site for
which a CSM is being developed.
More detail on various challenging aspects of karst hydrogeology and hydrology can be
found in Milanović (1979, 1981, 2004), Ford and Williams (2007), and Kresic (2012).

2.3.5 Basaltic and Other Volcanic Rock Aquifers


Few if any aquifer types can compete with karst in terms of the generation of large, first
magnitude springs worldwide. However, in the United States, a dozen or so such springs,
of equal economic and utilization importance, are issuing from Pliocene and younger
basaltic rocks present chiefly in the Snake River drainage area in Idaho, Wyoming, and
the Cascade Range in Oregon. Younger basic lavas also provide water to large springs in
Hawaii. Because of their thickness, transmissivity, and large groundwater reserves, these
aquifers are extensively utilized for water supply and irrigation using drilled wells.

The Snake River Plain regional aquifer system in southern Idaho and southeastern
Oregon is a large graben-like structure that is filled with basalt of Miocene and younger
age. The basalt consists of a large number of flows, the youngest of which was extruded
about 2000 years ago. The maximum thickness of the basalt, as estimated by using electri-
cal resistivity surveys, is about 5500 ft (Miller 1999). The permeability of basaltic rocks is
highly variable and depends largely on the following factors: the cooling rate of the basal-
tic lava flow, the number and character of interflow zones, and the thickness of the flow.
The cooling rate is most rapid when a basaltic lava flow enters water. The rapid cooling
results in pillow basalt, in which ball-shaped masses of basalt form with numerous inter-
connected open spaces at the tops and bottoms of the balls. Large springs that discharge
thousands of gallons per minute issue from pillow basalt along the walls of the Snake
River Canyon in the general area of Twin Falls, ID.
In general, highly permeable but relatively thin rubbly or fractured lavas act as excellent
preferential flow paths but have only limited storage (Figure 2.101). Overlying and under-
lying, thick, porous but poorly permeable volcanic ash may act as the storage medium for
this dual system. In glaciated terrains, permeable gravels and sands of outwash plains
may be found interbedded with multiple lava flows and volcanic ash deposits resulting in
thick, prolific, heterogeneous aquifer systems (Figure 2.102).
As discussed by Whitehead (1994), Pliocene and younger basaltic rocks are mainly flows,
but in many places in the Cascade Range, the rocks contain thick interbeds of basaltic ash
as well as sand-and-gravel beds deposited by streams and glaciers. Most of the Pliocene
and younger basaltic rocks were extruded as lava flows from numerous vents and fissures

FIGURE 2.101
Columnar basalt near Gardiner, MT. Inset shows individual columns, approximately 2 ft wide.

FIGURE 2.102
Columnar basalt topping Yellowstone River canyon in Wyoming. Glacial gravels and volcanic ash are beneath
the basalt. Bottom photo shows contact between the basalt and highly permeable glacial gravels.

concentrated along rift or major fault zones in the Snake River Plain. The lava flows spread
for as much as 50 mi from some vents and fissures. Overlapping shield volcanoes that
formed around major vents extruded a complex of basaltic lava flows in some places.
Thick soil, much of which is loess, covers the flows in many places. Where exposed at
the land surface, the top of a flow typically is undulating and nearly barren of vegetation.
The barrenness of such flows contrasts markedly with those covered by thick soil where

agricultural development is intensive. The thickness of the individual flows is variable; the
thickness of flows of Holocene and Pleistocene age averages about 25 ft, whereas that of
Pliocene-age flows averages about 40 ft.
In some shield-volcano eruptions, basaltic lava pours out quietly from long fissures
instead of central vents and floods the surrounding countryside with lava flow upon lava
flow, forming broad plateaus. Lava plateaus of this type can be seen in Iceland, southeast-
ern Washington, eastern Oregon, and southern Idaho. Along the Snake River in Idaho and
the Columbia River in Washington and Oregon, these lava flows are beautifully exposed
and measure more than a mile in total thickness. This geologic environment has, in many
ways, acted hydrogeologically as karst because of the existence of sinking streams and the
development of integrated networks of overlapping and intersecting lava flows that drain
at some of the most spectacular large springs in the United States (Figure 2.103). Often
present in lava flows are interconnected lava tubes at various depths below the water table,
which may act similarly to karst conduits, thus feeding springs of variable discharge rate

FIGURE 2.103
Niagara Falls Springs issuing from basalts in the Snake River valley near Twin Falls, ID. (Courtesy of Clear
Foods, Inc.)

that react quickly to rainfall events. For this reason, some practitioners describe such an
environment as pseudokarst.
Silicic volcanic rocks in the United States are present chiefly in southwestern Idaho
and southeastern Oregon where they consist of thick flows interspersed with unconsoli-
dated deposits of volcanic ash and sand. Silicic volcanic rocks also are the host rock for
much of the geothermal water in Idaho and Oregon. Big Springs in Fremont County, ID, is
the source of the South Fork of the Henrys Fork River. Designated as a National Natural
Landmark in 1980, it is the only first magnitude spring in the United States that emanates
from rhyolitic lava flows of the Madison Plateau.

2.3.6 Aquitards
Although aquitards play a very important role in groundwater systems, in many cases, they
are still evaluated qualitatively rather than quantitatively. Field and laboratory research
studies have only recently started including the role of aquitards in the fate and transport
of various contaminants in the subsurface. A similar effort has yet to materialize in the
evaluation of the role of aquitards in the storage of groundwater available for water sup-
ply. Aquitards can release significant volumes of water to adjacent aquifers that are being
stressed by pumping; they can also transfer water from one aquifer to another, both under
natural conditions and as a result of artificial groundwater withdrawal. Understanding
various roles aquitards can play in a hydraulically stressed groundwater system is espe-
cially important when designing artificial aquifer recharge systems and predicting long-
term exploitable reserves of groundwater (Kresic 2010).
One usually thinks of an aquitard, when continuous and thick and when overlying a
highly productive confined aquifer, as a perfect protector of the valuable groundwater
resource. Various sedimentary, magmatic, and metamorphic rocks can act as aquita-
rds, depending on their mineral composition and effective porosity. Clay-rich rocks are
always good candidates provided they are relatively extensive (Figure 2.104). However,
some professionals would argue that every aquitard leaks, and it is only a matter of time
before existing shallow groundwater contamination could enter the confined aquifer and
threaten the source. Of course, it does not help anyone (e.g., interested stakeholders) if
such professionals rely only on their best professional judgment and are much less specific
in terms of the reasonable amount of time after which the contamination would break
through the aquitard. If confronted with some field-based data, such as the thickness and
the hydraulic conductivity of the aquitard porous material, they may have the best answer
ready in hand: “But the measurements did not include flow through the fractures, and
we all know that all rocks and sediments comprising an aquitard, including clay, do have
some fractures, somewhere.”
The truth is, as always, somewhere in between. There are perfectly protective, competent,
thick aquitards of high integrity, which would not allow migration of shallow contamina-
tion to the underlying aquifer for thousands of years or more. Good examples are regional
aquitards stretching for hundreds of miles along and off the Atlantic coast of the eastern
United States. They prevent downward vertical migration of seawater into the freshwater
aquifers utilized for more than 150 years as main sources of water for millions of people. If
viewed in terms of hundreds of thousands of years or more, this fact may not entirely con-
tradict statements made by some that any aquitard will leak given enough time. However,
such statements, often made because of some underlying agenda such as a lawsuit, should
be ignored by working hydrogeologists. Instead, when applicable to the CSM, full atten-
tion should be given to site-specific aquitard evaluations.

FIGURE 2.104
Outcrop in eastern Alaska illustrating interbedded lithified submarine fine sands (light color) and clay (darker
color). The sediments comprising the rock were deposited by successive turbidity flows, which form repeti-
tive graded sequences often capped by substantial clay layers. Individual stratigraphic horizons are laterally
continuous over tens to hundreds of meters. Continuous layers like these can often form effective aquitards for
circulating fluids. (Courtesy of Jeff Manuszak.)

For example, Hendry and Wassenaar (2010) used high-resolution, one-dimensional pro-
files of naturally occurring environmental tracers (³H, δD, δ¹⁸O, ¹⁴C-DIC, ¹⁴C-DOC, ³⁶Cl,
⁴He, and major ions) and hydraulic data to study residence times, transport mechanisms,
and sources of pore water and solutes in an aquitard system in Saskatchewan, Canada.
The aquitard system consisted of 80 m of plastic, clay-rich Battleford till unconformably
overlying 77 m of Late Cretaceous plastic marine clay. Individual tracers independently
revealed molecular diffusion as the dominant transport mechanism in the unoxidized,
nonfractured till and clay. This study showed that solute transport in clay-rich aquitards
is typically slow with groundwater velocities less than 1 m per 1000 years (Figure 2.105),
proving that such aquitards are suitable for long-term isolation of most wastes.
There also are leaky aquitards of low integrity, which do not prevent such migration for
more than several decades or so. Of course, if an aquitard is not continuous, or is only a
few feet thick in places, all bets are off. In such cases, the site-specific conditions in the adja-
cent aquifers would play the key role in contaminant transport. These conditions, in a worst
case, may include large regional drawdowns caused by pumping in the underlying confined
aquifer and the resulting steep hydraulic gradients between the two aquifers (the shallow
and the confined) separated by a discontinuous aquitard (Figure 2.106). Contamination with
DNAPLs, which are denser than water, is especially difficult to assess or predict because they
can move irrespective of the groundwater hydraulic gradients in an aquifer–aquitard–aqui-
fer system. However, it is surprising how many investigations in contaminant hydrogeology
fail to collect more (or any) field information on the aquitard, even though determining its
role may be crucial for the success of a groundwater remediation project.
The only direct method for determining if actual flow through an aquitard is taking
place is dye tracing, but it is of little practical use because of the typically very long travel times
through aquitards. Several indirect methods, which utilize hydraulic head measurements
in the system and chemical and isotopic analyses of water residing in an aquitard, can

[Figure 2.105 graphic: profile of depth (0–40 m) versus ¹⁴C-DOC (0–100 pmC) through oxidized and unoxidized till, with simulated transport-time curves of 9 ka and 11 ka.]

FIGURE 2.105
Contents of ¹⁴C-DOC versus depth in an aquitard system in Saskatchewan, Canada. Best-fit simulated diffusive
transport + radioactive decay profiles are presented as lines. The red lines represent a transport time of 11,000
years, and the blue line represents a transport time of 9000 years. pmC denotes percent of modern carbon.
(From Hendry, M. J., and Wassenaar, L. I., Water Resources Research, 41, W02021, doi:10.1029/2004WR003157, 2005.
Reproduced with permission of the American Geophysical Union.)

[Figure 2.106 graphic: particle release zone, high-conductivity zone within the aquitard, particle flowpaths, and well screens below the aquitard.]

FIGURE 2.106
Results of a 3D particle-tracking model showing the effects of a high-conductivity zone in an aquitard on par-
ticle flowpaths. (Modified from Chiang, W. H. et al., 3D Master—A Computer Program for 3D Visualization and
Real-Time Animation of Environmental Data, Excel Info Tech, Inc., 146 pp., 2002.)

be used to reasonably accurately assess the rates of groundwater movement through it.
However, caution should be used when relying on hydraulic head data collected from
monitoring wells that are not completed in the aquitard itself. A difference in the hydraulic
heads measured in the overlying and underlying aquifers does not necessarily mean that
groundwater is moving between them at any appreciable rate. The existence of the actual
flow can be indirectly confirmed only by hydraulically stressing (pumping) one of the
aquifers and confirming the obvious related hydraulic head change in the other two units
(i.e., including the aquitard itself). When interpreting the hydraulic head changes (fluctua-
tions) caused by pumping, all possible natural causes such as barometric pressure changes
or tidal influences should be accounted for.
Figure 2.107 illustrates how possibly misleading conclusions can result from measuring
the hydraulic heads at only one depth in the surficial aquifer (say, at MP-4A, where the
head is 180.07 ft) and only one depth in the confined aquifer (at MP-4F, the head is 61.77 ft).
The vertical difference between these two hydraulic heads is 118.3 ft, which may lead one
to believe that there must be a significant vertical flow downward through the aquitard
caused by such a strong vertical hydraulic gradient (incidentally, the confined aquifer is

[Figure 2.107 graphic: cross section (vertical axis in feet, 0–200; horizontal scale bar 500 ft) showing multiport wells MP-1, MP-4, and MP-7 (adjacent to wells MW-3, MW-5, and MW-9), with ports A–D in the unconfined aquifer and ports E–F in the confined aquifer below the aquitard. Hydraulic heads in feet:

Port   MP-1     MP-4     MP-7
A      184.12   180.07   173.84
B      179.47   174.12   170.21
C      171.03   169.34   167.80
D      170.98   169.31   167.82
E       76.61    72.54    67.38
F       65.39    61.77    56.22]

FIGURE 2.107
Measurements of the hydraulic head at multiport monitoring wells screened above and below an aquitard. The
confined aquifer is being pumped for water supply with an extraction well located approximately 4600 ft from
MP-7. (From Kresic, N., Hydrogeology and Groundwater Modeling, Second Edition, CRC Press, Taylor & Francis
Group, Boca Raton, FL, 807 pp., 2007. With permission.)

being pumped for water supply). However, the head difference between the last two ports
in the aquifer above the aquitard, at all multiport wells, is absent for all practical purposes:
it is within 1/100 of 1 ft, upward or downward. The flow is strictly horizontal, indicating an
absence of advective flow (free gravity flow) of groundwater from the unconfined aquifer
into the underlying aquitard. The higher downward vertical gradients at shallow depths
in the unconfined aquifer may be the result of recharge, possibly combined with the influ-
ence of some lateral pumping (boundary) in the unconfined aquifer. When measurements
of the hydraulic head are available at various depths within an aquitard, a more defini-
tive conclusion as to the probable rates and velocities of groundwater flow through it can
be made, including the presence of possibly varying hydraulic head inside the aquitard
caused by heterogeneities.
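When the head measurements bracketing an aquitard are judged reliable, a simple Darcy screening calculation bounds the possible advective flux and travel time. The sketch below (our addition) uses the MP-4 heads at the ports interpreted as bracketing the aquitard in Figure 2.107; the aquitard thickness, vertical hydraulic conductivity, and effective porosity are hypothetical values inserted only to show the arithmetic. As the discussion above cautions, a large computed gradient by itself does not prove that flow at this rate is actually occurring.

```python
# Minimal sketch (not from this book): upper-bound Darcy screening of flow
# through an aquitard, using the MP-4 port D and E heads from Figure 2.107.
# Thickness, vertical conductivity, and effective porosity are hypothetical.
h_above = 169.31   # head at deepest port above the aquitard, ft (port D)
h_below = 72.54    # head at first port below the aquitard, ft (port E)
b = 40.0           # aquitard thickness, ft (assumed)
K_v = 1e-5         # vertical hydraulic conductivity, ft/day (assumed)
n_e = 0.05         # effective porosity (assumed)

i = (h_above - h_below) / b   # vertical hydraulic gradient
v = K_v * i / n_e             # average linear velocity, ft/day
print(f"i = {i:.2f}, v = {v:.1e} ft/day, "
      f"travel time ~ {b / v / 365.25:,.0f} years")
```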
A detailed discussion on the hydrogeologic role of aquitards including various methods of
their characterization is given in the works of Cherry et al. (2006) and Bradbury et al. (2006).
References
ASTM International, 2008. Standard Guide for Developing Conceptual Site Models for Contaminated
Sites. E 1689-95, West Conshohocken, PA, 8 p.
Ator, S. W., Denver, J. M., Krantz, D. E., Newell, W. L., and Martucci, S. K., 2005. A Surficial
Hydrogeologic Framework for the Mid-Atlantic Coastal Plain. U.S. Geological Survey
Professional Paper 1680, 44 pp.
Bense, V. F., Van den Berg, E. H., and Van Balen, R. T., 2003. Deformation mechanisms and hydrau-
lic properties of fault zones in unconsolidated sediments; the Roer Valley Rift System, The
Netherlands. Hydrogeol. J., 11, 319–332.
Božičević, S., 1971. Primjena speleologije pri injektiranjima u krsu (Application of speleology in
grouting of karst terranes; in Croatian). First Yugoslav Symposium on Hydrogeology and
Engineering Geology, Herceg Novi.
Bögli, A., 1980. Karst Hydrology and Physical Speleology. Springer-Verlag, New York.
Böhlke, J. K., 2002. Groundwater recharge and agricultural contamination. Hydrogeol. J., 10(1),
153–179.
Bradbury, K. R., Gotkowitz, M. B., Hart, D. J., Eaton, T. T., Cherry, J. A., Parker, B. L., and Borchardt,
M. A., 2006. Contaminant Transport through Aquitards: Technical Guidance for Aquitard
Assessment. American Water Works Association Research Association (AwwaRF), Denver,
Colorado, 144 pp.
California Department of Water Resources (CADWR), 1962. Bulletin 104: Planned Utilization of the
Ground Water Basins of Coastal Plain of Los Angeles County. Appendix A, Ground Water
Geology.
Cederstrom, D. J., 1972. Evaluation of Yields of Wells in Consolidated Rocks, Virginia to Maine. U.S.
Geological Survey Water-Supply Paper 2021, 38 pp.
Cherry, J. A., Parker, B. L., Bradbury, K. R., Eaton, T. T., Gotkowitz, M. B., Hart, D. J., and Borchardt,
M. A., 2006. Contaminant transport through aquitards: A state of the science review. American
Water Works Association Research Association (AwwaRF), Denver, Colorado, 126 pp.
Chiang, W. H., Chen, J., and Lin, J., 2002. 3D Master—A Computer Program for 3D Visualization and
Real-Time Animation of Environmental Data. Excel Info Tech, Inc., 146 pp.
Cressler, C. W., Thurmond, C. J., and Hester, W. G., 1983. Ground Water in the Greater Atlanta Region,
Georgia. Georgia Department of Natural Resources, U.S. Environmental Protection Agency,
and The Georgia Geologic Survey, in cooperation with U.S. Geological Survey, Information
Circular 63, 143 pp.

Cvijić, J., 1893. Das Karstphänomen. Versuch einer morphologischen Monographie. Geographische
Abhandlungen herausgegeben von Prof. Dr A. Penck, Wien, Bd. V. Heft. 3, pp. 1–114.
Cvijić, J., 1918. Hydrographie souterraine et évolution morphologique du karst. Recueil des Travaux
de l’Institut de Géographie alpine, Grenoble, t. VI, fasc. 4, pp. 1–56.
Cvijić, J., 1924. Geomorfologija (Morphologie Terrestre). Knjiga druga (Tome Second). Beograd, 506 pp.
Daniel, C. C., III, and Sharpless, N. B., 1983. Ground-Water Supply Potential and Procedures for
Well-Site Selection Upper Cape Fear River Basin. Cape Fear River Basin Study 1981–83, North
Carolina Department of Natural Resources and Community Development and U.S. Water
Resources Council in cooperation with U.S. Geological Survey, 73 pp.
Daniel, C. C., III, Smith, D. G., and Eimers, J. L., 1997. Hydrogeology and Simulation of Ground-Water
Flow in the Thick Regolith-Fractured Crystalline Rock Aquifer System of Indian Creek Basin,
North Carolina, Chapter C, Ground-Water Resources of The Piedmont-Blue Ridge Provinces of
North Carolina. U.S. Geological Survey Water-Supply Paper 2341, pp. C1–C137.
Daniel, C. C., III, and Dahlen, P. R., 2002. Preliminary Hydrogeologic Assessment and Study Plan for
a Regional Ground-Water Resource Investigation of the Blue Ridge and Piedmont Provinces of
North Carolina. U.S. Geological Survey Water-Resources Investigations Report 02-4105, 60 pp.
Danskin, W. R., McPherson, K. R., and Woolfenden, L. R., 2006. Hydrology, Description of Computer
Models, and Evaluation of Selected Water-Management Alternatives in the San Bernardino
Area, California. U.S. Geological Survey Open-File Report 2005-1278, Reston, VA, 178 pp.
Dimitrijević, M., 1978. Geolosko kartiranje (Geological Mapping, in Serbian). ICS, Beograd, 486 pp.
Ford, D., and Williams, P., 2007. Karst Hydrogeology and Geomorphology. John Wiley & Sons, Chichester,
West Sussex, England, 562 pp.
Halford, K. J., and Mayer, G. C., 2000. Problems associated with estimating ground-water discharge
and recharge from stream-discharge records. Ground Water, 38(3), 331–342.
Haneberg, W., Mozley, P., Moore, J., and Goodwin, L. (Eds.), 1999. Faults and Subsurface Fluid Flow
in the Shallow Crust. American Geophysical Union Monograph, 113, 51–68.
Hanshaw, B. B., and Back, W., 1974. Determination of Regional Hydraulic Conductivity through Use
of 14C Dating of Groundwater. Memoires, Tome X, 1. Communications. 10th Congress of the
International Association of Hydrogeologists, Montpellier, France, pp. 195–198.
Harned, D. A. 1989. The Hydrogeologic Framework and a Reconnaissance of Ground-Water Quality
in the Piedmont Province of North Carolina, with a Design for Future Study. U.S. Geological
Survey Water-Resources Investigations Report 88-4130, 55 pp.
Hendry, M. J., and Wassenaar, L. I., 2005. Origin and migration of dissolved organic carbon frac-
tions in a clay-rich aquitard: ¹⁴C and δ¹³C evidence. Water Resources Research, 41, W02021, doi:
10.1029/2004WR003157.
Hendry, M. J., and Wassenaar, L. I., 2010. Millennial-scale diffusive migration of solutes in thick
clay-rich aquitards: evidence from multiple environmental tracers. Hydrogeol. J., DOI 10.1007/
s10040-010-0647-4.
Imes, J. L., Plummer, L. N., Kleeschulte, M. J., and Schumacher, J. G., 2007. Recharge Area, Base-Flow
and Quick-Flow Discharge Rates and Ages, and General Water Quality of Big Spring in Carter
County, Missouri, 2000–04. U.S. Geological Survey Scientific Investigations Report 2007-5049,
80 pp.
Johnson, C. D., Haeni, F. P., Lane, Jr. J. W., and White, E. A., 2002. Borehole-Geophysical Investigation
of the University of Connecticut Landfill, Storrs, Connecticut. U.S. Geological Survey Water-
Resources Investigations Report 01-4033, 42 pp.
Jones, W. K., 1997. Karst Hydrology Atlas of West Virginia. Karst Waters Institute, Special Publication
4, Charles Town, WV, 111 pp.
Kendy, E., 2001. Magnitude, Extent, and Potential Sources of Nitrate in Ground Water in the Gallatin
Local Water Quality District, Southwestern Montana, 1997–98. U.S. Geological Survey Water-
Resources Investigations Report 01-4037.
Kresic, N., 1991. Kvantitativna hidrogeologija karsta sa elementima zastite podzemnih voda (Quantitative
Karst Hydrogeology with Elements of Groundwater Protection; in Serbo-Croatian). Naucna Knjiga,
Belgrade, 192 pp.

Kresic, N., 2007. Hydrogeology and Groundwater Modeling, Second Edition. CRC Press, Taylor & Francis
Group, Boca Raton, FL, 807 pp.
Kresic, N., 2009. Groundwater Resources: Sustainability, Management, and Restoration. McGraw Hill,
New York, 852 pp.
Kresic, N., 2010. Chapter 2, Types and Classification of Springs. In: Kresic, N., and Stevanovic, Z.,
eds., Groundwater Hydrology of Springs; Engineering, Theory, Management, and Sustainability.
Elsevier, Butterworth-Heinemann, Amsterdam, pp. 31–85.
Kresic, N., 2012. Water in Karst: Management, Vulnerability, and Restoration. McGraw Hill, New York,
in press.
Kresic, N., and Mikszewski, A., 2009. Chapter 3, Groundwater Recharge. In: Kresic, N., Groundwater
Resources. Sustainability, Management, and Restoration. McGraw Hill, New York, pp. 235–292.
Kresic, N., and Stevanovic, Z., eds., 2010. Groundwater Hydrology of Springs; Engineering, Theory,
Management, and Sustainability. Elsevier, Butterworth-Heinemann, Amsterdam, 573 pp.
LeGrand, H. E., 1954. Geology and Ground Water in the Statesville Area, North Carolina. North
Carolina Department of Conservation and Development Bulletin 68, 68 pp.
LeGrand, H. E., 1958. Chemical character of water in the igneous and metamorphic rocks of North
Carolina. Econ. Geol., 53(12), 179–189.
LeGrand, H. E., 1967. Ground Water of the Piedmont and Blue Ridge Provinces in the Southeastern
States. U.S. Geological Survey Circular 538.
LeGrand, H. E., Sr., 2007. A Master Conceptual Model for Hydrogeological Site Characterization in
the Piedmont and Mountain Region of North Carolina: A Guidance Manual. North Carolina
Department of Environment and Natural Resources, Division of Water Quality, Groundwater
Section. 40 pp. + Appendices.
Lindsay, A. S., Mesko, T. O., and Hollyday, E. F., 1991. Summary of the hydrogeology of the Valley
and Ridge, Blue Ridge, and Piedmont physiographic provinces in the eastern United States.
Regional Aquifer System Analysis—Appalachian Valley and Piedmont. U.S. Geological Survey
Professional Paper 1422-A, Reston, VA, 23 pp.
Lutgens, F. K., and Tarbuck, E. J., 1995. The Atmosphere. Prentice-Hall, Inc., Englewood Cliffs, NJ,
462 pp.
Lyford, F. P., 1986. Northeast Glacial Regional Aquifer-System Study. In: Sun, R. J. (ed.), Regional
Aquifer–System Analysis Program of the U.S. Geological Survey—Summary of Projects, 1978–
1984, U.S. Geological Survey Circular 1002, pp. 162–167.
McFarland, E. R., and Bruce, T. S., 2006. The Virginia Coastal Plain Hydrogeologic Framework. U.S.
Geological Survey Professional Paper 1731, 118 pp., 25 pls.
Meinzer, O. E., 1923. The Occurrence of Ground Water in the United States with a Discussion of
Principles. U.S. Geological Survey Water-Supply Paper 489, Washington, DC, 321 pp., reprinted
1959.
Milanović, P., 1979. Hidrogeologija Karsta i Metode Istraživanja (Karst Hydrogeology and Methods of
Investigations, in Serbian). HE Trebišnjica, Inst. za korištenje i zaštitu voda na kršu, Trebinje,
302 pp.
Milanović, P. T., 1981. Karst Hydrogeology. Water Resources Publications, Littleton, CO, 434 pp.
Milanović, P. T., 2004. Water Resources Engineering in Karst. CRC Press, Boca Raton, FL, 312 pp.
Milanović, P., 2006. Karst istocne Hercegovine i dubrovackog priobalja (Karst of Eastern Herzegovina and
Dubrovnik Litoral, in Serbian). ASOS, Belgrade, 362 pp.
Miller, J. A., 1999. Introduction and National Summary. Ground-Water Atlas of the United States.
United States Geological Survey, A6. http://caap.water.usgs.gov/gwa/index.html.
Milojević, N., 1967. Hidrogeologija (Hydrogeology, in Serbian). Univerzitet u Beogradu, Zavod za izda-
vanje udzbenika Socijalisticke Republike Srbije, Beograd, 279 pp.
Mozley, P. S., Goodwin, L. B., Heneykamp, M., and Haneberg, W. C., 1996. Using the Spatial Distribution
of Calcite Cements to Infer Paleoflow in Fault Zones: Examples from the Albuquerque Basin, New
Mexico [abstract]. American Association of Petroleum Geologists 1996 Annual Meeting.
NASA, 2011. Earth Observatory. Images available at http://earthobservatory.nasa.gov/. Accessed
in March 2011.

Nutter, L. J., and Otton, E. G., 1969. Ground-Water Occurrence in the Maryland Piedmont. Maryland
Geological Survey Report of Investigations no. 10, 56 pp.
Pettyjohn, W. A., and Henning, R., 1979. Preliminary Estimate of Regional Effective Ground-Water
Recharge Rates, Related Streamflow, and Water Quality in Ohio. Ohio State University Water
Resources Center Project Completion Report No. 552, Columbus, OH, 333 pp.
Risser, D. W., Conger, R. W., Ulricj, J. E., and Asmussen, M. P., 2005. Estimates of Ground-Water
Recharge Based on Streamflow-Hydrograph Methods: Pennsylvania. U.S. Geological Survey
Open File Report 2005-1333, Reston, VA, 30 pp.
Rorabaugh, M. I., 1964. Estimating Changes in Bank Storage and Ground-Water Contribution to
Streamflow. Intl. Assoc. Sci. Hydrol. Publ. 63, 432–441.
Rutledge, A. T., 1992. Methods of Using Streamflow Records for Estimating Total and Effective
Recharge in the Appalachian Valley and Ridge, Piedmont, and Blue Ridge Physiographic
Provinces. In: Hotchkiss, W. R., and Johnson, A. I., eds., Regional Aquifer Systems of the
United States—Aquifers of Southern and Eastern States. American Water Resources Association
Monograph Series no. 17, Bethesda, MD, pp. 59–73.
Rutledge, A. T., 1993. Computer Programs for Describing the Recession of Ground-Water Discharge
and for Estimating Mean Ground-Water Recharge and Discharge from Streamflow Records.
U.S. Geological Survey Water-Resources Investigations Report 93-4121, 45 pp.
Rutledge, A. T., 1998. Computer Programs for Describing the Recession of Ground-Water Discharge
and for Estimating Mean Ground-Water Recharge and Discharge from Streamflow Records–
Update. U.S. Geological Survey Water-Resources Investigations Report 98-4148, 43 pp.
Rutledge, A. T., 2000. Considerations for Use of the RORA Program to Estimate Ground-Water
Recharge from Streamflow Records. U.S. Geological Survey Open-File Report 00-156, Reston,
VA, 44 pp.
Torres, A. G., and Diaz, J. R., 1984. Water Resources of the Sabana Seca to Vega Baja Area, Puerto Rico.
U.S. Geological Survey Water Resources Investigations Report 82-4115, 53 pp.
Trapp, H., Jr., and Horn, M. A., 1997. Delaware, Maryland, New Jersey, North Carolina, Pennsylvania,
Virginia, West Virginia Ground Water Atlas of the United States. U.S. Geological Survey, HA
730-L.
United States Geological Survey (USGS), 2007. USGS Photographic Library. Available at http://libraryphoto.cr.usgs.gov.
Waichler, S. R., and Yabusaki, S. B., 2005. Flow and Transport in the Hanford 300 Area Vadose Zone-
Aquifer-River System. Pacific Northwest National Laboratory, Richland, WA.
Water Replenishment District of Southern California (WRD), 2004. Technical Bulletin–An Introduction
to the Central and West Coast Groundwater Basins.
Whitehead, R. L., 1994. Ground Water Atlas of the United States–Segment 7: Idaho, Oregon,
Washington. U.S. Geological Survey Hydrologic Investigations Atlas HA–730–H, 31 pp.
Wierman, D. A., Broun, A. S., and Hunt, B. B., 2010. Hydrogeologic Atlas of the Hill Country Trinity
Aquifer, Blanco, Hays, and Travis Counties, Central Texas. Prepared by the Hays-Trinity, Barton
Springs/Edwards Aquifer, and Blanco Pedernales Groundwater Conservation Districts, July
2010, 17 Plates + DVD, Austin, TX.
Williams, L. J., Kath, R. L., Crawford, T. J., and Chapman, M. J., 2005. Influence of Geologic Setting
on Ground-Water Availability in the Lawrenceville Area, Gwinnett County, Georgia. U.S.
Geological Survey Scientific Investigations Report 2005-5136, Reston, VA, 50 pp.
Worthington, S. R. H., and Ford, D. C., 2009. Self-organized permeability in carbonate aquifers. In:
Kresic, N. (guest editor), Theme Issue Ground Water in Karst, Ground Water, vol. 47, no. 3,
pp. 319–320.
Yevjevich, V. M., 1981. Karst Water Research Needs. Water Resources Publications, Littleton, CO.
3
Data Management, GIS, and GIS Modules

3.1 Introduction
The computer age has revolutionized the fields of hydrogeology and environmental engi-
neering. Advanced mapping and numerical modeling software are readily available to
modern-­day hydrogeologists, complete with user-friendly preprocessing and postprocess-
ing interfaces. The practicing professional is no longer burdened by hard-copy data storage,
hand calculations, and hand mapping, or even by numerical method formulation and com-
puter coding. The implications of this transformation are far-reaching. Most importantly,
hydrogeological data, which are inherently spatial in nature, can easily be stored and visu-
alized through geographic information systems (GIS). The United States Geological Survey
(USGS) defines GIS as “computer system(s) capable of capturing, storing, analyzing, and
displaying geographically referenced information; that is, data identified according to
location” (USGS 2007). In a sense, GIS and associated software create an environment for
the hydrogeologist’s data that is a simulation of the real world in three dimensions: lon-
gitude, latitude, and elevation. All data in the field of hydrogeology possess these three
defining dimensional features, which can be accurately represented in GIS. However, the
real-world representation offered by GIS is temporally discrete in nature; the data stored
and visualized in GIS represent snapshots in time. Whether a GIS map is being used to
visualize a water table surface, the structure of a river channel, or land surface elevations
in a watershed, the hydrogeologist is depicting a discrete representation of these data. In
other words, the hydrogeologist can show the average potentiometric surface of the High
Plains Aquifer in 1930, the river channel geometry of the Mississippi River in 2005, or the
predevelopment elevation surface of the watershed 100 years ago.
The addition of computer-assisted numeric models to the GIS environment completes
the four-dimensional simulation of the natural world, adding the critical element of time.
Numeric models can simulate the transient (time-dependent) physical, chemical, and
biological processes of Earth, all of which can be integrated with the three-dimensional
structure provided by GIS. The hydrogeologist can now represent the fluctuation of a
water-table surface, the evolving geomorphology of a watershed, or the creation or erosion
of mountain ranges. To continue with the examples previously listed, the hydrogeologist
can use numeric models and GIS to demonstrate the dewatering of the High Plains Aquifer
in the 20th century, predict changes in the Mississippi River caused by dam decommis-
sioning, or determine the rate of erosion in the Appalachian Mountains. The role of GIS
and numeric models is to provide the hydrogeologist with the means to store and visual-
ize four-dimensional data collected in the real world and to simulate the behavior of fun-
damental transient processes, which dictate the creation of data collected in the past and
of data to be collected in the future. Through GIS and numerical modeling software, the


hydrogeologist can quantitatively and visually represent the conceptual site model (CSM)
and better communicate results and recommendations to others.
Figures 3.1 through 3.3 illustrate the concepts of discrete and transient data visualization.
Figures 3.1 and 3.2 are discrete visualizations of depth-to-water measurements across the
High Plains Aquifer predevelopment (Figure 3.1) and in 2007 (Figure 3.2). Data collected
from the two data sets are treated separately and form two distinct snapshots in time
of hydrogeological conditions. Figure 3.3 depicts the change in depth-to-water measure-
ments between the two years, illustrating the effects of intensive aquifer pumping for irri-
gation purposes. Figure 3.3 connects the predevelopment and 2007 data sets and illustrates
the transient processes at work during the years in between when significant groundwater
pumping occurred. The ability to quantify transient processes through monitoring data is

FIGURE 3.1
Contour map of water-table depth in feet below ground surface measured prior to groundwater development
between the 1930s and 1980s. (Data from USGS, USGS High Plains Aquifer WLMS: Water-Level Data by Water
Year (October 1 to September 30). US Department of the Interior, US Geological Survey, 2010. Available at http://
ne.water.usgs.gov/ogw/hpwlms/data.html, accessed October 2, 2010.)

FIGURE 3.2
Contour map of water-table depth in feet below ground surface measured in 2007. (Data from USGS, USGS
High Plains Aquifer WLMS: Water-Level Data by Water Year (October 1 to September 30). US Department of the
Interior, US Geological Survey, 2010. Available at http://ne.water.usgs.gov/ogw/hpwlms/data.html, accessed
October 2, 2010.)

limited by the frequency of measurement. Conversely, numerical models can represent data
that are nearly continuous in time, filling in the gaps and, most importantly, allowing predic-
tion of future conditions. Discrete, disjointed measurements can thus be turned into smooth
data animations using any available video creation software such as Windows Movie Maker.
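Computationally, a change map such as Figure 3.3 is just a cell-by-cell difference of two gridded snapshots. A minimal sketch of that operation (our addition, using numpy with hand-built toy arrays; a real workflow would read georeferenced rasters with a library such as rasterio or arcpy):

```python
# Minimal sketch (not from this book): a drawdown grid as the cell-by-cell
# difference of two water-level snapshots, the operation behind Figure 3.3.
# Real workflows would read georeferenced rasters (e.g., with rasterio or
# arcpy) instead of these hand-built toy arrays.
import numpy as np

predev = np.array([[50.0, 55.0], [60.0, 65.0]])    # depth to water, ft
year2007 = np.array([[80.0, 90.0], [95.0, 70.0]])  # depth to water, ft

drawdown = year2007 - predev  # positive values = water-table decline
print(drawdown)
```

Writing one such grid per time step and rendering each as a map frame yields the smooth time-lapse animations described above.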
With the widespread availability of GIS software and numeric models integrated in a
GIS environment, the practicing professional faces increasing demands from clients and a
higher quality standard for technical deliverables. Hand-drawn figures and calculations
are no longer appropriate or feasible in most cases, and the expectation is that, with an
equivalent amount of financial resources, the hydrogeologist must deliver a superior prod-
uct. Use of three-dimensional visualizations of site data, animations of modeling output,
color-rich graphics at myriad scales, and other products are now considered to be standard

FIGURE 3.3
Contour map of the drawdown in water-table depth (feet) between predevelopment and 2007 conditions.

practice. Unfortunately, there are few professional standards or official technical guidance
documents regarding the use of GIS and numeric models in hydrogeology. The ones that
do exist, such as those produced by ASTM International, are often too broad to offer much
help on specific projects. Hence, the professional is faced with a surplus of computer tools
without sufficient direction for how they should be used. The aim of this chapter is to
identify tools and methods for using databases, GIS, and spatial data analysis programs
(termed GIS modules) to produce high-quality deliverables for projects in hydrogeology
and environmental engineering. This section is not a tutorial in GIS software; rather, this
is a guide for developing the structure of geodatabase and GIS systems, producing GIS
graphics for hydrogeological applications, and integrating GIS with GIS modules.

3.2 Data Management for GIS


Any practicing hydrogeologist will confirm that his or her number one grievance is the
absence of an ideal amount of data with which to make a decision. The hydrogeologist
can always use more data thanks to the following simple fact: Most data are collected
from discrete (point) locations and have to be interpolated to describe spatially continuous
hydrogeologic characteristics. This process of interpreting data is often not unique and
involves subjectivity (professional judgment) by default. At the same time, as environmen-
tal regulations have advanced and become more rigorous, the amount of data typically
collected during water resources and hazardous waste projects has increased significantly.
Technological improvements in field and laboratory instrumentation have also driven an
increase in the volume and diversity of data. Even though the hydrogeologist may not
always have enough of the right type of data, often he or she is completely inundated
by physical, chemical, and biological sampling data associated with site investigations.
Further complicating the issue, data in different formats (e.g., units, coordinate systems)
are often received from previous investigations conducted by different consultants and
public government agencies, such as the USGS.
To avoid costly and embarrassing mistakes, the hydrogeologist must have a transparent,
well-organized data management system in place for each and every project. Fatal errors
such as working with the wrong units can be avoided with appropriate data management.
Case studies of data management failures and suggestions for their avoidance are pre-
sented in the next section.

3.2.1 Data Management Failures


The following case studies represent common errors in professional hydrogeological prac-
tice. While it may seem otherwise, data management failures are not solely attributable to
gross negligence or incompetence. Quite often, the project delivery system of a company
is poorly suited to effective data management, resulting in inefficiency and unavoidable
errors. For example, a company may not develop standard procedures for data man-
agement, resulting in systems that vary widely between projects and company offices.
Companies may not invest in the requisite computer software for data management or fail
to train employees properly. Most commonly, project managers may not see immediate
benefits to investing in data management at the beginning of a project when the quantity
of existing data is small. Oftentimes these projects rapidly balloon into extensive field
investigations and remediations. Once this has happened, it is too late to retroactively
develop data management procedures, and the flood of incoming data is stored haphaz-
ardly in Microsoft Excel spreadsheets, portable document format (PDF) files, or even in
hard copies. The common remedy for all the data management failures is the development
of an effective data management system at the very beginning of projects when appropriate
staff and computer software can be selected to meet project objectives.

3.2.1.1 Working with the Wrong Units


In a high-profile litigation case, a hydrogeologist is required to develop and calibrate a
groundwater model to evaluate potential groundwater plume concentrations based on soil
analytical data from a former waste disposal area. The hydrogeologist is provided with a
Microsoft Access geodatabase containing all available site data for use in model calibration.

The data are used to calibrate the model, and all required simulations are executed. During
deposition, the hydrogeologist is questioned about the model, and it is ultimately exposed
that the data used in the model calibration were in the wrong units. The hydrogeologist had
not cross-checked the database against raw laboratory reports and, consequently, made
major errors in the model calibration, which led to erroneous future simulations.
Figures 3.4 and 3.5 demonstrate how the confusion in units led to flawed modeling and
an incorrect conclusion by the hydrogeologist. Figure 3.4 displays the modeling output of
the hydrogeologist, in which contaminant concentrations for calibration were assumed
to be in milligrams per kilogram of soil (mg/kg). Figure 3.5 depicts what the modeling
output would look like in the correct units, micrograms per kilogram of soil (µg/kg). As

FIGURE 3.4
Output from modeling software when wrong units are used to represent the contaminant source term.

FIGURE 3.5
Output from modeling software when correct units are used to represent the contaminant source term.

shown by the figures, the use of incorrect units results in the erroneous prediction that a
private water-supply well (Johnson well) and a surface water feature (Little River) will be
impacted by the resultant groundwater plume.
To avoid this mistake, databases should clearly denote both the units and the source of
the data, commonly accomplished by listing the analytical laboratory and an associated
sample delivery group number. When provided with a foreign database, the hydrogeolo-
gist should always check the accuracy of the data against the original sources. While check-
ing every single piece of data may not be practical, a more efficient quality assurance/
quality control (QA/QC) program, such as spot-checking 1%–10% of all data, can be quite
effective in catching global errors, such as using incorrect units.
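As a minimal illustration of such a check (our addition; the table contents and column names are hypothetical), a few lines of pandas can flag records whose units deviate from the project standard and draw a reproducible random subsample for manual comparison against the laboratory reports:

```python
# Minimal sketch (not from this book): flagging unit inconsistencies and
# drawing a reproducible QA/QC subsample with pandas. Table contents and
# column names are hypothetical.
import pandas as pd

db = pd.DataFrame({
    "sample_id": ["SB-1", "SB-2", "SB-3", "SB-4"],
    "result": [120.0, 45.0, 0.12, 88.0],
    "units": ["ug/kg", "ug/kg", "mg/kg", "ug/kg"],
    "lab_sdg": ["SDG001", "SDG001", "SDG002", "SDG002"],
})

# Flag any record whose units deviate from the project standard
print(db[db["units"] != "ug/kg"])

# Draw a reproducible random record for manual comparison against the raw
# laboratory report (1%-10% of all records in practice; n=1 for this toy set)
print(db.sample(n=1, random_state=1))
```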

3.2.1.2 Working with Unknown or Mixed Coordinate Systems


A hydrogeologist is provided with the existing geodatabase for a newly inherited site.
After making initial plots of the data with GIS software, it becomes apparent that some
of the data are not being displayed in the correct location. After locating these misplaced
data in the database, the hydrogeologist realizes that they are most likely in a different
coordinate system from the rest of the data. Unfortunately, there is no field in the database
identifying the coordinate system of these data, and it is not feasible to contact the origi-
nal architect of the database. As a result, the hydrogeologist spends many trial-and-error
hours reprojecting the data until they align properly.
To avoid this problem, a coordinate system field/label should be provided with all
location coordinates in a database or spreadsheet table. Furthermore, while modern GIS
software, such as ArcMap by Esri, is capable of displaying layers in different coordinate
systems on the same map (termed projecting on the fly), external quantitative data analysis
programs may not have that capability (Ormsby et al. 2004). If at all possible, all data in a
database should be in the same coordinate system to facilitate data export to coordinate-
dependent GIS modules, such as contouring or groundwater modeling software. Even
ArcMap cannot display data properly when different coordinate systems are used within
the same layer as demonstrated by Figures 3.6 and 3.7.
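
When the source coordinate system must be identified by trial and error, scripting the
candidate reprojections is far faster than repeatedly editing layer properties by hand.
The sketch below uses the open-source pyproj library; the EPSG codes are assumptions for
illustration (4326 is WGS 84 latitude/longitude, and 2249 is the NAD 83 Massachusetts
State Plane Mainland zone in feet).

    # Hedged reprojection sketch using pyproj; EPSG codes are illustrative.
    from pyproj import Transformer

    transformer = Transformer.from_crs("EPSG:4326", "EPSG:2249", always_xy=True)
    lon, lat = -71.06, 42.36                      # suspect point in latitude/longitude
    x_ft, y_ft = transformer.transform(lon, lat)  # candidate state plane coordinates
    print(x_ft, y_ft)                             # compare against correctly located data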

FIGURE 3.6
Map of soil boring locations at a hypothetical site shown at the extent of the site limits. Without further exami-
nation, it appears that all boring locations are shown.

FIGURE 3.7
When zooming to the full extent of the soil boring layer, it is apparent that some of the boring locations are
in the wrong coordinate system as they are shown over a million feet away from the data correctly overlying
the site.

3.2.1.3 Erroneous Query Results


A hydrogeologist is tasked with contouring contaminant concentrations in shallow soils at
a Superfund site to delineate areas requiring excavation and off-site transport and disposal.
The data are provided to the hydrogeologist by the database manager, and numerous hours
are spent creating the contours and presenting the results in formal figures. Upon review, the
project manager notices that the data set does not look quite right and checks the contours
against previous data plots of soil results. It turns out that all nondetect observations were
excluded from the contouring data set by the database manager’s query. The query is fixed
and rerun, and the hydrogeologist reproduces all deliverables over the weekend.
Had this error not been discovered by the project manager, the erroneous contours would
have led to excavation of significantly more soil than was necessary, resulting in higher
costs for the client. Figures 3.8 and 3.9 demonstrate how the exclusion of nondetect data
resulted in erroneous contours that would have increased the extent of soil excavation.
This error occurred because the database manager did not understand the concept of
nondetections and thought that the nondetect results would be useless to the contouring
process. The database manager may have confused nondetect data with rejected data, for exam-
ple. Regardless of the source of confusion, this mistake highlights a common problem in
professional hydrogeology; database managers are typically information technology (IT)
or computer-science professionals without an educational background or extensive profes-
sional experience in hydrogeology, geology, or environmental engineering. Similarly, the
hydrogeologist who is performing the required data analysis (i.e., the contouring in this
example) often has limited or no exposure to database technology. These hydrogeologists
are incapable of querying data for themselves and, as a result, must ask database manag-
ers or IT staff to design and execute necessary queries. As the database manager or IT
professional does not understand hydrogeological data fields, query requests are often
misinterpreted, and data are added or omitted erroneously. Furthermore, a severe bottle-
neck develops under these circumstances as the number of data users (e.g., hydrogeologists
and engineers) is significantly greater than the number of database users (e.g., the database

FIGURE 3.8
Contour map of contaminant concentrations in shallow soils with nondetect results erroneously excluded.

FIGURE 3.9
Contour map of contaminant concentrations in shallow soils with nondetect results included. Note significant
increase in interpolated area less than 1 mg/kg. This would have significant cost implications if the action level
for remediation were 1 mg/kg. The reporting limit of 0.5 mg/kg was substituted for nondetect results to create
contour maps.

manager or IT professional). It is not uncommon for a database manager to have a weeklong


backlog of data query requests. Therefore, hours of productivity are lost as nondatabase-
savvy hydrogeologists and engineers wait for the singular database manager to provide
them with data output. Even more frustrating is the fact that if there is a problem with the
data output, the hydrogeologist must again wait for the database manager to debug and
rerun the query (after first completing the other queries on his or her to-do list).
The specific solution to this example of data-management failure is for hydrogeologists
to be familiar enough with the data to realize when major things look wrong and to check
the output of database queries for accuracy before using the data for quantitative analysis.
However, the broader solution is for hydrogeologists to learn to develop their own queries
or for database managers and IT professionals to increase their understanding of hydroge-
ology. While the hydrogeologist cannot always be as much of an expert as an educated GIS
or database professional (and vice versa), developing database literacy and the ability to
perform simple queries can greatly increase project delivery efficiency. At the minimum,
increasing technical literacy on both sides of the equation will reduce the incidence of
communication breakdowns when the hydrogeologist does not understand what instruc-
tion the database manager needs to design a query and the database manager does not
understand what data the hydrogeologist needs to perform the analysis in question.
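
As a concrete illustration of the nondetect problem, the short Python/pandas sketch below
contrasts a query that silently drops nondetects (the error behind Figure 3.8) with one
that retains every location by substituting the reporting limit, as was done for Figure
3.9. The column names (result, detected, reporting_limit) are hypothetical.

    # Hypothetical soil results table with columns: location, result,
    # detected (True/False), and reporting_limit.
    import pandas as pd

    soil = pd.read_csv("soil_results.csv")

    # Wrong: drops nondetect locations entirely, as in Figure 3.8
    detects_only = soil[soil["detected"]]

    # Better: keep every location, substituting the reporting limit for
    # nondetects before contouring, as in Figure 3.9
    soil["contour_value"] = soil["result"].where(soil["detected"],
                                                 soil["reporting_limit"])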

3.2.1.4 Data Entry Inefficiency


A client provides historic sampling data to a hydrogeologist at the beginning of a new
project. After collecting new data and analyzing the results, the hydrogeologist begins to
prepare a summary report for submittal to the applicable regulatory agency. When com-
piling historic and recent sampling results into a comprehensive table, the hydrogeologist
realizes that his or her client did not provide the data electronically but only in hard copy
in a file box. Because the report is due the following day, there is not sufficient time to
work with the client and the client’s laboratory to issue an electronic report with report-
ing limits. As a result, the hydrogeologist has to spend significant time at the last minute
manually entering reporting limits into a table for inclusion in the report. The lesson is
that hydrogeologists should always request to obtain data electronically (not as hard copy
or PDF file) as early as possible in any project.

3.2.2 Data Management Systems


Beyond helping to avoid the critical errors described in Section 3.2.1, data management
systems enable rapid computational querying and ultimately lead to powerful data visu-
alization in GIS. Fundamentally, data management systems ensure that the hydrogeolo-
gist spends most of his or her time performing important technical analyses related to the
decision at hand rather than searching aimlessly for missing data and/or working with a
flawed data set. This section outlines appropriate data management techniques for hydro-
geological investigations with an emphasis on process design and linking data manage-
ment systems with GIS. Key database functions such as querying are also introduced for
hydrogeological applications.
We begin our discussion by offering a real-world framework for data management sys-
tems. The term system is used because data management is a holistic, multifaceted process
that extends well beyond a traditional Microsoft Access database. In fact, a well-designed
data management system encompasses nearly all aspects of hydrogeological project work.
The Data Management Association (DAMA) defines data management as the “development

and execution of architectures, policies, practices, and procedures that properly manage
the full data lifecycle needs of an enterprise” (DAMA 2007). This rigorous, comprehen-
sive mindset is necessary because hydrogeologists, as technical professionals, base all their
decisions on raw data collected in the field or on quantitative inferences made with those
field data. In other words, data quality and accessibility are of the utmost importance to
any practicing hydrogeologist. Quite simply, data management can make or break a project.
It is somewhat tedious and of minimal value to the reader to offer technical standards
for a data management system. As noted above, the concept of a data management system
extends beyond a geodatabase and covers tasks high-level professionals may associate
with entry-level personnel. It is more useful to outline the steps for creating a successful
data management system for a typical hydrogeological project in chronological order. The
necessary steps are as follows:

• Define the project objective


• Determine the quantity and type of field and/or laboratory data to be collected
• Perform the required data collection and analysis
• Present the study conclusions

The steps illustrate how data management is involved at each stage of project execution
and how data management, collection, and analysis are all synergistic processes.

3.2.2.1 Define the Project Objective


Every assignment in professional or research-based hydrogeology has a specific objec-
tive. Clear definition of the objective will help focus the project team by answering this
question: What will we use our data for, and what will our data mean? There is increas-
ing emphasis on systematic planning in hydrogeology; the best example of which is the
United States Environmental Protection Agency (U.S. EPA) Data Quality Objective (DQO)
process (U.S. EPA 2006), which is discussed in greater detail in Chapter 7. Elements of
the U.S. EPA’s systematic planning process are listed in Figure 3.10. One way to interpret
systematic planning is that the hydrogeologist knows what his or her data mean before
they are even collected. This may seem confusing, but with more thought, this concept is
intuitive. If the hydrogeologist has clearly defined the project objective, every piece of data
collected by that hydrogeologist has a distinct purpose, the meaning of which is already
known (the exact outcome is what remains to be determined).
For example, if performing an assessment of cadmium contamination in sediment abut-
ting a landfill, the hydrogeologist may define the following project objective: determine
if cadmium concentrations in the sediment pose significant risk to ecological receptors.
This objective would then lead to the following questions: How will we evaluate cad-
mium concentrations in the sediment? and How do we determine if it poses a significant
risk to ecological receptors? The answer statement is as follows: We will collect sediment
samples in the top foot of sediment for laboratory analysis of cadmium concentration and
then compare the 95% upper confidence limit (UCL) of the mean concentration estimate to
established regulatory standards for ecological health. By going through this process, the
hydrogeologist now knows that the data he or she collects will mean that average sediment
concentrations are either above or below the applicable regulatory standard, indicating
the presence or nonpresence of risk to ecological receptors. Figure 3.11 is a site plan for the
cadmium investigation project used as an example in this section.

FIGURE 3.10
Systematic planning table. (From United States Environmental Protection Agency (U.S. EPA). Guidance on Systematic
Planning Using the Data Quality Objectives Process, Office of Environmental Information, Washington DC, 2006.)

FIGURE 3.11
Site plan for hypothetical landfill site with proposed sediment sampling locations in the wetlands downgradi-
ent of the landfill, abutting a small river. USGS Color Ortho Imagery (2008/2009) downloaded from MassGIS
(http://www.mass.gov/mgis/colororthos2008.htm).

3.2.2.2 Determine the Quantity and Type of Field and Laboratory Data to Be Collected
This is arguably the most important step of any project and can be quite difficult with the
budgetary constraints commonly faced. Too often sacrifices are made on the scope of data
collection efforts in order to lower costs and increase the likelihood of winning projects
or to keep existing clients happy. Regardless of political and budgetary concerns, this step
will drive the data management system and lead to the formulation and answering of key
questions. In keeping with the cadmium example from step 1, the hydrogeologist might
develop the following questions (Q) and answers (A) to determine data requirements:

• Q1: How much data do we need to meet our project objective?


A1: In the sediment area abutting the landfill, internal field staff will collect at
least 10 randomly placed samples (the amount required by regulations) of the
upper foot of sediment.


• Q2: What field methods will be required to collect these data?
A2: Samples will be collected using hand augers. Hand augers must be decon-
taminated between sampling locations using stainless-steel bowls, brushes, and
Alconox and clean water solutions.
• Q3: What level of data accuracy and precision is required? This includes, but is not
limited to, field instrument accuracy, laboratory instrument accuracy, laboratory
analytical reporting limits, and QA/QC.
A3: Samples will be sent to a laboratory for analysis of total cadmium by U.S. EPA
Method 6010B with a minimum reporting limit of 2.5 mg/kg to enable comparison
with the regulatory standard of 10 mg/kg. Field staff will collect one duplicate
sample for QA/QC purposes. The laboratory must perform all required QA/QC
to maintain compliance with applicable regulations, including laboratory control
sample analysis and matrix spike.
• Q4: In what format will field personnel and/or laboratory service present the data?
This includes, but is not limited to, sample identifiers, sample locations, date/time
of sample collection, units of measurement, and field note format.
A4: Field personnel will take notes in standard field books according to company
protocol. They will classify the sediment at each location using the United Soil
Classification System (USCS) and note the date and time of each sample in the field
book. Field personnel will also record the approximate X and Y coordinates of
each sampling location using a handheld Global Positioning System (GPS) device.
The coordinate system shall be U.S. State Plane in feet using the North American
Datum of 1983 (NAD 83). The sample nomenclature SED-X shall be used. Field
personnel will submit samples to the designated laboratory for total cadmium
analysis under a chain of custody following laboratory direction regarding sam-
ple preservation, sample hold times, and sample storage. The laboratory must
present/report the data as milligrams per kilogram and submit the final data in
electronic spreadsheet format.
• Q5: How will the data be managed electronically?
A5: Sediment classification notes will be transcribed into gINT, the company’s stan-
dard boring log–generation software. Upon receipt of the data from the labora-
tory, internal office personnel will validate the data and upload the data into a
Microsoft Access database for storage and querying. Sample location GPS data will

be differentially corrected and uploaded from the handheld device into the project
database to enable data linkage with GIS software for visual data presentation.

3.2.2.3 Perform Required Data Collection and Analysis


Once the data have been collected and transferred to a format that can be easily queried
and manipulated, the required conceptual and quantitative analyses can be performed. In
keeping with the above example, the hydrogeologist exports queried data into ProUCL,
a statistical software package distributed by the U.S. EPA available in the public domain.
With ProUCL, the hydrogeologist would determine if the 95% UCL of the cadmium con-
centration exceeded the applicable regulatory standard.
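
For orientation, one of the several UCL methods implemented in ProUCL is the Student’s-t
UCL, which assumes approximately normally distributed data. The sketch below computes it
in Python with SciPy for a purely hypothetical set of cadmium results; ProUCL should
still be used in practice because it tests distributional assumptions and selects an
appropriate method automatically.

    # Student's-t 95% UCL of the mean; cadmium values are hypothetical.
    import numpy as np
    from scipy import stats

    cd = np.array([3.2, 5.1, 4.4, 6.8, 2.9, 5.5, 4.1, 7.2, 3.8, 4.9])  # mg/kg
    n = cd.size
    ucl95 = cd.mean() + stats.t.ppf(0.95, n - 1) * cd.std(ddof=1) / np.sqrt(n)
    print(f"95% UCL = {ucl95:.1f} mg/kg")  # compare to the 10 mg/kg standard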
Data verification and validation is a key component of this step, during which the qual-
ity of the collected data is assessed to ensure it meets project objectives. Typically,
data verification and validation occurs after initial compilation of laboratory and field
data and before final import of the data into a geodatabase. This ensures that all data in
the geodatabase meet quality standards. Different projects will have different levels of
data verification and validation requirements, often driven by applicable regulations. For
example, the Massachusetts Department of Environmental Protection (MassDEP) requires
that Representativeness Evaluations and Data Usability Assessments be conducted in support
of hazardous-waste site cleanups in Massachusetts (MassDEP 2007).
As stated in MassDEP (2007),

“The Representativeness Evaluation determines whether the data set in total suffi-
ciently characterizes conditions at the disposal site and supports a coherent Conceptual
Site Model. The Representativeness Evaluation determines whether there is enough
information from the right locations, both spatially and temporally, to support the (site
closure).”

Key to the above definition is the interconnection of the data collection and evaluation
process with the CSM as described in Chapter 2. Data usability is evaluated through the
results of QA/QC samples collected in the field, such as blanks, spiked samples, and/or
duplicates. This is a field-based QA/QC component, in which the accuracy and precision of
the sampling methods are assessed. Data usability is also evaluated through analysis of
raw laboratory data to ensure that reported sampling results are qualified appropriately
(MassDEP 2007). This is an analytical-based QA/QC component, in which the accuracy and
precision of the laboratory analysis are assessed through examination of parameters, such
as initial instrument calibration results, surrogate spike recoveries, matrix spike/matrix
spike duplicate recoveries, and laboratory control sample recovery (MassDEP 2007).
When performing data verification and validation, it is easy to get bogged down in labora-
tory analytical minutiae and lose sight of the big picture, forgetting why the process is being
conducted in the first place. The authors recommend that focus be placed on the following
areas to ensure that project decisions are made with reliable, accurately quantified data:

• Comparison between primary and duplicate samples to ensure that the sample
technique and matrix do not exhibit unacceptable variability.
• Evaluation of blank samples to ensure that cross-contamination is not occurring
between samples.
• Evaluation of the chain of custody to ensure that all samples that were supposed
to be analyzed were indeed analyzed.

• Evaluation of sample hold times to ensure that samples did not sit idle in a refrig-
erator too long prior to analysis.
• Evaluation of the attained reporting limits for individual constituents to ensure
that a sample is not listed as nondetect at a reporting limit above the applicable
regulatory standard. For example, if the cleanup level for a contaminant in soil is
5 mg/kg, a nondetect result at a reporting limit of 10 mg/kg is inadequate (a scripted
version of this check appears after this list).
• Evaluation of surrogate recoveries to quantify potential bias. For example, if low
surrogate recoveries were observed for an individual constituent, it is likely that
the reported result is biased low. This may be significant if the reported concentra-
tion is just below the applicable regulatory standard. Qualifiers such as J, to repre-
sent estimated concentration, may be appropriate in these circumstances.
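
The reporting-limit evaluation in the list above lends itself to automation. The minimal
sketch below flags nondetect results whose reporting limits exceed the applicable
standard; the column names and the standards lookup are hypothetical assumptions.

    # Flag nondetects reported at limits above the regulatory standard.
    # Columns and standards below are illustrative only.
    import pandas as pd

    standards = {"cadmium": 10.0, "lead": 5.0}  # mg/kg, hypothetical values

    results = pd.read_csv("validated_results.csv")
    nd = results[~results["detected"]].copy()
    nd["standard"] = nd["analyte"].map(standards)

    inadequate = nd[nd["reporting_limit"] > nd["standard"]]
    print(inadequate[["sample_id", "analyte", "reporting_limit", "standard"]])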

As the hydrogeologist is often not an expert at laboratory analytical methods, it is impor-


tant to designate a more qualified person to perform this analysis and ensure that data
quality is acceptable. Regardless, the hydrogeologist must be literate in the data verifica-
tion and validation process so that he or she understands big picture concepts and how
data quality can impact project decisions.

3.2.2.4 Present Study Conclusions


After data collection and analysis have been completed, the hydrogeologist presents the
study conclusions based on the collected data in a clear and scientific manner. Data in the
Access database can be imported into GIS software to assess the spatial correlation of data,
if necessary, and to produce high-quality figures illustrating sampling results. Figure 3.12
presents an example data visualization for the project that clearly displays the sampling
results and the study conclusion.

FIGURE 3.12
Results of the hypothetical sediment investigation at the landfill site with sample identifications (SED-1
through SED-10) labeled and results depicted through color coding; the 95% UCL of 6.1 mg/kg is below the
action level of 10 mg/kg. USGS Color Ortho Imagery (2008/2009) downloaded from MassGIS
(http://www.mass.gov/mgis/colororthos2008.htm).
The four data management steps presented in this section are summarized in the flow
chart shown in Figure 3.13. The steps effectively demonstrate the life cycle of data manage-
ment issues in hydrogeology. All aspects of project execution, from data collection in the
field to graphics generation in the office, must be mindful of data characteristics, such as
sample ID nomenclature, coordinates and coordinate system selection, units, data preci-
sion, and accuracy, and other important characteristics, such as the date/time of collection
(and the format of the date/time). A successful data management system would define
standard protocol for all of the above data fields to ensure seamless integration of field
and laboratory data with databases capable of performing queries and exporting results to
other programs (e.g., groundwater models) and GIS.

[Figure 3.13 flow chart: Step 1, define the project objective (what are the problems to be solved, the
deliverables to be produced, and the next steps for the different outcomes?); Step 2, determine the type and
quantity of data to be collected (laboratory, historic/public, and field data; determine data collection
methods, including QA/QC); Step 3, perform data collection and analysis (perform data verification and
validation; import data into the geodatabase; query, sort, and export data as needed; perform data analysis
and answer questions); Step 4, present the study conclusions.]

FIGURE 3.13
Data management flow chart created by the authors.

3.3 Introducing the Geodatabase


A data management system is a holistic, qualitative approach to integrating field, laboratory,
and interpolated/extrapolated data. However, the heart of any successful data management
enterprise is a functional, robust geodatabase. Esri, Inc., developer of the ArcGIS family of GIS
software, broadly defines the term geodatabase as a “collection of geographic data sets of vari-
ous types held in a common folder.” This folder can be a personal Microsoft Access database
(e.g., one user) or a more advanced multiuser database constructed with Oracle or Microsoft
SQL Server (Esri 2010a). The only distinguishing factor between a geodatabase and a common
Microsoft Access database is that a geodatabase contains geographic, or spatial, data with asso-
ciated spatial features (i.e., X, Y, Z coordinates) in the real world.
When working in a GIS environment, however, the geodatabase becomes much more pow-
erful. A geodatabase provides the underlying data structure for visualization in ArcMap by
Esri. In other words, a GIS translates raw data in a geodatabase into visual data depicted on
maps. One can think of GIS as a visual processor for a geodatabase, and without the geodata-
base, GIS would not exist. Computer graphics would be no more than electronic renderings
of hand drawings. Because of the interdependence of GIS and the geodatabase, designing
functional geodatabases is of the utmost importance to practicing hydrogeologists.
For most hydrogeological applications, a personal Microsoft Access or Esri file geoda-
tabase is sufficient. An Esri file geodatabase has the same basic definition as a standard
geodatabase and is best thought of as a proprietary Esri geodatabase (Esri 2010b). Esri
file geodatabases are more efficient at storing spatial data than a personal geodatabase
and are, therefore, becoming the predominant geodatabase in professional practice for
applications with extensive mapping. However, in order to view or manipulate a file geo-
database, the user must have an ArcGIS license. Therefore, Microsoft Access will remain
an important database engine for many projects in hydrogeology. The practicing hydroge-
ologist should be literate in both programs and be able to design and operate geodatabases
for project work. Some very large projects with multiple database users may benefit from
more advanced (and complex) systems run through Oracle or Microsoft SQL Server. The
operation of these advanced programs is commonly beyond the training of the typical
hydrogeologist; hence, external help from database specialists would be required.
A geodatabase is used to store spatial data. The three primary data sets in a geodatabase
are

• Tables
• Feature classes
• Rasters

3.3.1 Tables
Tables in a geodatabase are similar to what one might find in a nonspatial database. A
table is simply a series of rows with unique data fields specified by a column header. An
example of a table of monitoring well-location information is provided in Figure 3.14.
In a geodatabase, tables may specify X, Y, and Z coordinates to identify the spatial posi-
tion of data in the real world. Esri (2010c) defines a table as a storage vehicle for attributes
based on the following relational concepts:

FIGURE 3.14
Table of monitoring well–location information. The aquifer field corresponds to the screened interval of the
monitoring wells.

• Tables contain rows.


• All rows in a table have the same columns.
• Each column has a data type, such as integer, decimal number, character, and date.
• A series of relational functions and operators is available to operate on the tables
and their data elements (i.e., to design and execute queries).

Nearly all hydrogeological data are stored in tables; it is therefore very important for the
hydrogeologist to develop standard table formats that are compatible with the following
data source examples:

• Electronic data deliverables from analytical laboratories


• Field-based measurements, such as water-level elevations and soil/bedrock
classifications
• Survey or GPS locational data

FIGURE 3.15
Feature class types.

3.3.2 Feature Classes


Data from spatial data tables may be converted into feature classes, which are homoge-
neous collections of features composed of points, lines, polygons, or annotations (labels)
with common attributes (Esri 2010d). For example, a feature class may be a point file con-
taining the locations and construction properties of monitoring wells, a polyline file speci-
fying an isocontour line, or a polygon file detailing the orientation and properties of a
building. Feature classes are generally used to specify vector data or data sets defined by
discrete points in space. Features in a feature class have both shape and size (Ormsby et al.
2004). A slightly different definition of the term vector as applied in a GIS environment is
“data that are comprised of lines or arcs, defined by beginning and end points, which meet
at nodes. The locations of these nodes and the topological structure are usually stored
explicitly” (de Smith et al. 2006–2011). Figure 3.15 depicts the common feature class types.
Feature classes are the most widely used elements in hydrogeological GIS applications
as they enable visualization of data collected in the field or in a laboratory. Point, polygon,
and polyline feature classes are digital elements with numerically defined locations and
orientations. The digital structure of feature classes is the characteristic trait of a GIS as
opposed to a manually drawn map or a drawing in a different computer program.
A shapefile is a single feature class that exists outside of a geodatabase (Ormsby et al.
2004). As defined by Esri, “A shapefile is a simple, nontopological format for storing the
geometric location and attribute information of geographic features” (Esri 2010l). Shapefiles
are efficient ways to export and share data from a map or geodatabase as access to the
entire geodatabase file is not required. However, many GIS practitioners continue to use
shapefiles as a primary means of storing and manipulating spatial data. This practice is
outdated as Esri file geodatabases can efficiently store multiple feature classes in addition
to annotations and data tables, decreasing storage requirements and facilitating the query-
ing and display of spatial data in the ArcGIS environment (Ormsby et al. 2004). Therefore,
the use of a singular file geodatabase is highly recommended over disparate shapefiles
for the mapping and analysis of spatial data. Shapefiles should only be used to transfer
data between users and between different computer programs (e.g., between ArcGIS and
groundwater models).

3.3.3 Rasters
Unlike vector data, rasters are used to represent continuous geographic data and are cre-
ated by dividing the spatial domain into gridded squares or rectangles (Esri 2010e). Rasters
do not have shape and instead have numeric values (Ormsby et al. 2004). They are matri-
ces of identically sized square cells and have at least one value associated with each cell
position (Ormsby et al. 2004; de Smith et al. 2006–2011). The most common raster data elements
are aerial images (photographs), such as that depicted in Figure 3.16. Rasters are therefore

FIGURE 3.16
Aerial photograph raster with 30-cm resolution (~1 ft). USGS Color Ortho Imagery (2008/2009) downloaded
from MassGIS (http://www.mass.gov/mgis/colororthos2008.htm).

often used as base maps, or underlying background layers, for all types of figures. They
are of particular importance to hydrogeology as they are required to generate various
contour maps. Further discussion is provided in Chapter 4. Rasters are often termed grids
in hydrogeological applications and in some contouring programs such as Surfer (Golden
Software 2004). They may also be called surfaces.
Prior to creating contour lines, data must be interpolated to a continuous raster across the
X, Y extents of the data. The result is a raster file with a discrete value corresponding to each
grid cell. Figure 3.17 is a raster interpolated from surface soil concentrations of pesticides,
in which each value, and thereby each 10 ft × 10 ft raster cell, has its own color. This form
of raster display cannot be readily used to demonstrate results as there are far too many
individual colors to label clearly. Therefore, raster values must be grouped to create mean-
ingful figures. The pesticide concentration raster is grouped in concentration intervals in
Figure 3.18; however, the display is still discrete such that the individual cells remain visible.
Finally, the individual raster grid cell values can be smoothly interpolated for display in con-
tour form as in Figure 3.19, creating the illusion of a truly continuous surface. It is important
to remember that the raster used in Figure 3.19 is exactly the same as that used in Figure
3.17; the only difference is the grouping and smoothness of the display. One can consider the
resolution of this raster to be 10 ft. While digital elevation models (e.g., topographic surfaces
or water-table surfaces) and imagery (e.g., aerial photos) are the primary data represented
through rasters, myriad other applications exist. As rasters are, by definition, digital data
with values stored in grid cells, they can be efficiently stored in geodatabases for easy trans-
mittal and linkage with visual GIS processors, such as ArcMap by Esri, Inc.
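
For readers who prefer to see the interpolation step itself, the following minimal sketch
uses SciPy’s griddata function to interpolate scattered point values onto a regular 10-ft
grid, loosely paralleling the pesticide raster in Figure 3.17. All coordinates and
concentrations are hypothetical, and production contouring is normally done in dedicated
software such as Surfer or ArcGIS.

    # Interpolate scattered samples onto a 10 ft raster grid (hypothetical data).
    import numpy as np
    from scipy.interpolate import griddata

    x = np.array([0, 100, 250, 400, 480])    # sample X coordinates (ft)
    y = np.array([0, 300, 150, 420, 60])     # sample Y coordinates (ft)
    z = np.array([0.2, 4.5, 1.1, 7.8, 0.4])  # concentrations (mg/kg)

    gx, gy = np.meshgrid(np.arange(0, 490, 10), np.arange(0, 430, 10))
    grid = griddata((x, y), z, (gx, gy), method="linear")  # the raster array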
The above description of data sets in a geodatabase is merely an introduction and does
not cover the many variations and extensions made possible in a GIS environment [e.g., the

FIGURE 3.17
Raster with unique value for each 10 ft × 10 ft cell.

FIGURE 3.18
Raster with grouped/classified values with unsmoothed display.

FIGURE 3.19
Raster with grouped/classified values with smoothed/interpolated display.

incorporation of annotations (labels), attribute domains, and dimension features in a geo-


database]. However, for most applications in hydrogeology, a simple approach is usually
the best, and adhering to the basic feature class and raster data formats is more than ade-
quate. The following sections offer suggestions for how to organize and relate the data in
a geodatabase for use in hydrogeological applications. The broader subject can be referred
to as geodatabase design.

3.4 Basics of Geodatabase Design


Similar to the development of a data management system, the design of the geodatabase
for a project requires an understanding of the life-cycle data needs and the broader CSM.
The geodatabase must be designed to accommodate all types of data needed for a project
and organize data in an efficient manner to facilitate creation of GIS figures and analysis
of data in external programs.
The majority of data in a hydrogeological geodatabase will be stored in tables. Rather
than try to house all laboratory or field-collected data in a single table, it is much more prac-
tical to develop multiple tables with common themes to make importing and exporting
data more streamlined. For example, it is good practice to have a single table that provides
the spatial location of all point features that have associated laboratory or field-collected
data. This location table may contain the geographic coordinates of monitoring wells, soil
borings, staff gauges, and any other number of hydrogeological points in the real world.

Separate tables should then be used to store data associated with those points, split again by
data type. For example, separate tables may be used to house well-construction information,
water-level elevation data, groundwater analytical data from laboratories (e.g., chemical
analyses results), soil analytical data from laboratories, and soil lithology data from boring
logs. When dealing with a project involving a large volume of laboratory analysis, it may
also be beneficial to have a table containing the description information for each sample,
including but not limited to, sample date/time; sample analysis parameters, such as vola-
tile organic compounds and metals; sample type (primary, duplication, equipment blank);
and sample depth or vertical interval.
Once tables are separated in a logical manner, it will be easier to import data from the
field or the laboratory as consistent table structures can be used. In Microsoft Access, a
database program included in the Microsoft Office package, data can be added to a table
from an exterior source (such as a spreadsheet or text file) as long as the column headers
are exactly the same. Once the table structure of a geodatabase is established, the hydro-
geologist can communicate with the laboratory and field personnel to ensure that all data
are formatted properly for easy upload into the geodatabase.

3.4.1 Table Format and Querying


It is critical that all tabular data in a geodatabase be entered in database format. Database
format uses one row for every unique piece of information in a table. This concept can be
difficult to grasp and is best explained by Figures 3.20 and 3.21.

Well ID    Date         Depth to Water (feet)
MW-01      10/1/2009    7.36
MW-01      10/1/2010    5.68
MW-02      10/1/2009    6.04
MW-02      10/1/2010    12.81
MW-03      10/1/2009    5.08
MW-03      10/1/2010    4.64
MW-04      10/1/2009    5.06
MW-04      10/1/2010    5.18
MW-05      10/1/2009    9.89
MW-05      10/1/2010    13.04

FIGURE 3.20
Table in correct database format.

Well ID    Depth to Water (feet)    Depth to Water (feet)
           10/1/2009                10/1/2010
MW-01      7.36                     5.68
MW-02      6.04                     12.81
MW-03      5.08                     4.64
MW-04      5.06                     5.18
MW-05      9.89                     13.04

FIGURE 3.21
Table in incorrect format for database applications.

Figure 3.20 is in correct database format. All possible data elements have their own unique
column. Conversely, Figure 3.21 is incorrectly formatted. Gauging dates are provided as col-
umn headers. While this may seem like a good idea, it precludes effective querying by sample
date. If a table is desired with gauging dates as column headers (with results below it in the
column itself), a cross-tab query should be used to make a new data export table. The basics of
querying in Microsoft Access are described in Sections 3.4.1.1 and 3.4.1.2.
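
When a table arrives in the incorrect wide format of Figure 3.21, it can also be restored
to database format programmatically rather than retyped. A minimal Python/pandas sketch
(the column names are assumptions mirroring the figures):

    # Convert the wide layout of Figure 3.21 back to the database format
    # of Figure 3.20 using pandas.melt.
    import pandas as pd

    wide = pd.DataFrame({"Well_ID": ["MW-01", "MW-02"],
                         "10/1/2009": [7.36, 6.04],
                         "10/1/2010": [5.68, 12.81]})

    long = wide.melt(id_vars="Well_ID", var_name="Date",
                     value_name="Depth_to_Water_ft")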
The most critical element of geodatabase design is selecting the common data field that will
link tables in the geodatabase. Typically, this would be a field containing the name of the data
points in question, such as a well or soil boring identifier (e.g., MW-100 or SB-100). This identify-
ing field must appear in each of the tables in question to allow the hydrogeologist to establish
relationships and is often a primary key for at least one of the related tables. A primary key is a
field that uniquely identifies records (rows) in the table. In other words, values in the primary
key field cannot be repeated and cannot be null values (empty).

The development of table relationships is the first step in performing data queries or
commands that select data from multiple tables and export the selection as a separate
table. A well-conceived relationship diagram for a real-life project geodatabase is pre-
sented in Figure 3.22.
A relationship is established by selecting the common field between two tables and then
joining these tables based on that field. Using these relationships, the hydrogeologist can
pull data from any table through a select or cross-tab query. Relationships are not required
elements of a geodatabase; however, without relationships, a database is no different from
a collection of flat files. A flat file is a single table containing all relevant information in a
series of records (Databasedev.co.uk 2007). Flat-file geodatabases have obvious limitations,
as information from different tables cannot be joined and queried and there is often gross
redundancy in data storage.

FIGURE 3.22
Example relationship diagram from a real-life project geodatabase. Fields used for the relationships include
“loc_id,” “samp_num,” and “con_id.” (Courtesy of Ted Chapin, Woodard & Curran.)

3.4.1.1 Select Query


A select query is used to select data from any number of tables and apply filters as needed.
For example, a hydrogeologist may create a query designed to extract water-level eleva-
tions from a subset of wells gauged semiannually in 2007 and 2008. To perform this query,
the hydrogeologist may pull information from a well-location table and a water level–
elevation table. The Well ID would likely be the unifying data field making this query
possible. A schematic of screen shots illustrating the execution of this query, and the origi-
nal and resulting tables, is provided in Figures 3.23 through 3.26 with the selected subset
representing intermediate bedrock aquifer measurements from 2007.
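
The same select query can also be scripted outside of the Access interface. The hedged
sketch below uses Python’s pyodbc package; the table and field names are assumptions
chosen to match Figures 3.23 through 3.26, and the driver string and file path will vary
by installation.

    # Run the select query against the Access geodatabase via ODBC.
    # Table and field names are illustrative assumptions.
    import pyodbc

    conn = pyodbc.connect(r"DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};"
                          r"DBQ=C:\Project\Database.mdb")
    sql = """
        SELECT w.Well_ID, w.X_Coordinate, w.Y_Coordinate, e.Water_Level_Elevation
        FROM Well_Locations AS w
        INNER JOIN Water_Level_Elevations AS e ON w.Well_ID = e.Well_ID
        WHERE w.Aquifer = 'Intermediate Bedrock' AND e.Gauge_Date = #10/12/2007#;
    """
    for row in conn.cursor().execute(sql):
        print(row)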

FIGURE 3.23
Table of monitoring well–location information. Aquifer field corresponds to the screened interval of the moni-
toring wells.

FIGURE 3.24
Table of water-level elevation data. Note that this table does not have well-coordinate information or spatial
attributes and, therefore, cannot be used independently in external spatial analysis programs.

FIGURE 3.25
Screen shot of select query design, establishing a relationship between the two tables through the “Well_ID”
field. Only data measured on October 12, 2007, from wells screening the intermediate bedrock will be returned.

FIGURE 3.26
Table of queried water-level elevation data with well coordinates. This table can be used in external spatial
analysis programs to create groundwater contour maps.

3.4.1.2 Cross-Tab Query


A secondary major query type in Microsoft Access is a cross-tab query, which can be used to
perform simple mathematical operations on data from different tables in a geodatabase. Cross-
tab queries are similar to pivot tables in Microsoft Excel, where data fields are shifted from
being observations in a column to being column headers with underlying data (similar to the
incorrect database format example in Figure 3.21). The output from a cross-tab query is used
for data analysis in table format or in external spatial analysis programs. It is not a raw data
table in the geodatabase, and, therefore, the revised formatting is acceptable.
For the above water-level elevation querying example, a cross-tab query can be used to
compute an average annual elevation for each well. A schematic of screen shots illustrating
the execution of this query and the resulting table is provided in Figures 3.27 through 3.32.

FIGURE 3.27
Screen shot of cross-tab query table selection in Microsoft Access. Note that other queries can also be included
in the cross-tab.

FIGURE 3.28
Well IDs are selected as the row headings for the cross-tab query.

FIGURE 3.29
Dates are selected as the column headings for the cross-tab query.

As hydrogeological studies are often concerned with the maximum, minimum, and
average values of myriad data fields, cross-tab queries are valuable tools and can sig-
nificantly reduce time spent manually organizing data in spreadsheet programs such as
Microsoft Excel or in text files. Groundwater modeling, in particular, is a discipline requir-
ing data averaging for steady-state calibration and the establishment of boundary condi-
tions. Therefore, mastery of the cross-tab query will serve the hydrogeologist well in his
or her project work.
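
For comparison, the equivalent of this cross-tab can be expressed in a few lines of
Python with pandas; the column names are assumptions mirroring the figures, and the
overall average is computed from the raw records, as in the Access row sum of Figure 3.32.

    # Cross-tab equivalent in pandas: average water levels by well and year.
    import pandas as pd

    df = pd.read_csv("water_levels.csv", parse_dates=["Date"])
    df["Year"] = df["Date"].dt.year

    xtab = df.pivot_table(index="Well_ID", columns="Year",
                          values="Water_Level_Elevation", aggfunc="mean")
    overall = df.groupby("Well_ID")["Water_Level_Elevation"].mean()  # row-sum average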

FIGURE 3.30
Dates will be grouped by year for the query such that all results within one calendar year will be consolidated.

FIGURE 3.31
Average water-level elevation at each well will be displayed for each calendar year. Row sums will be included
to provide an overall annual average water-level elevation.

The results of select and cross-tab queries can be easily exported into Excel tables or text
files for import into data analysis programs (or for incorporation into report tables and
figures). For the above listed example queries, the hydrogeologist may export the results
as Excel tables for import into a computer program used for generating surfaces and con-
tour maps. Select and cross-tab queries can also be converted into tables in a geodatabase
through a make-table query. This is necessary if one wishes to display the results of a
query in ArcMap, for example.

FIGURE 3.32
Table of cross-tab query results showing the average water-level elevations for 2007 and 2008 by well. The
“Total of Water_Level_Elevations” field represents the average water-level elevation for 2007–2008. The term
“Total” instead of “Average” is used in the column heading because the field is termed a row sum in cross-tab
nomenclature.

3.4.1.3 Forms
While often not essential to project work, forms can be created in Microsoft Access to help
nonexperts enter data into or export data from a personal geodatabase. On the data-entry
side, a one-page form with easy-to-understand instructions and input fields can be used to
automatically populate data into multiple different geodatabase tables. This is particularly
useful for field applications such as borehole logging or water-quality monitoring, where
data can automatically and easily be entered into a database rather than written manually
into a field book and subsequently transcribed into multiple different database tables in
datasheet view, which is much less transparent. On the data export side, a one-page form
with simple buttons, selection boxes, or input fields can be used to execute complex queries
that would take considerable time and effort to perform individually. An example data
export form for a real-life geodatabase is presented in Figure 3.33.

Because of their complexity, geodatabase forms are often created by GIS/database pro-
fessionals for nonexperts to use. The initial investment in easy-to-use forms is well worth
it, considering well-designed forms save considerable time and can prevent crippling data
entry and export errors. Furthermore, forms can be linked with Microsoft Access reports
such that, when prompted by the user, the form automatically populates a formatted table
for inclusion in a formal report. The use of this approach saves even more time typically
spent manually adjusting cell formats in Excel or other spreadsheet or word processing
software.

FIGURE 3.33
Example Microsoft Access form used to simply execute complicated queries through lists, buttons, and pull-
down menus. (Courtesy of Ted Chapin, Woodard & Curran.)

3.4.2 Data Linkage with GIS


The biggest advantage of designing, constructing, and using a geodatabase is the ability
to link data directly to visual GIS processors such as ArcMap by Esri. No other data pre-
sentation mechanism works as well as the visual GIS processor/geodatabase system. To
reiterate an earlier point, visual GIS processors such as ArcMap visually translate digital
data stored in a geodatabase. The addition of a geodatabase to an ArcMap file could
not be any easier: it can be imported through the Add Data button or simply dragged
into the layer window from ArcCatalog, Esri’s application for managing geographic
data (Ormsby et al. 2004). Once in the ArcMap document, the user can display any of
the contained feature classes or rasters or display points from tables that have specified
coordinates (using the Display X, Y Values command). The power of this system is only
truly appreciated by those who have attempted to analyze spatial data using alternative
mechanisms such as hand-drawn maps, manually edited site plans, or verbal discus-
sions. Once a geodatabase is set up, the display of data is the icing on the cake and makes
for rapid visualizations that save time and money and lead to engaging discussion and
informed decisions. Figure 3.34 presents a screen shot illustrating the inclusion of a geo-
database in ArcMap.
The relationship between the geodatabase and ArcMap is not a one-way street. Edits
can be made within ArcMap to the geodatabase itself. This is especially powerful when
plotting locations manually in ArcMap (often done during work-plan generation for pro-
posed sampling locations). These locations can be stored in the geodatabase for future use.
Queries can also be performed within ArcMap, which will be briefly discussed subse-
quently. In summation, all projects in hydrogeology would benefit tremendously by using
the geodatabase-GIS system. It is rapidly becoming (and arguably already is) the industry
standard for data management and visualization.
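
For users who prefer scripting, the Display X, Y Values operation has a geoprocessing
equivalent in Esri’s arcpy site package. The sketch below is a hedged example; the table
path, field names, and spatial reference code are assumptions, and the exact tool name
may vary between ArcGIS versions.

    # Create a point layer from X/Y fields stored in a geodatabase table.
    # Paths, field names, and the EPSG code (2249, MA State Plane feet)
    # are illustrative assumptions.
    import arcpy

    arcpy.MakeXYEventLayer_management(
        table=r"C:\Project\Database.mdb\Well_Locations",
        in_x_field="X_Coordinate",
        in_y_field="Y_Coordinate",
        out_layer="Well_Points",
        spatial_reference=arcpy.SpatialReference(2249))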

FIGURE 3.34
Screen shot of ArcMap table of contents showing the inclusion of the Microsoft Access database entitled
Database.mdb. Well Locations layer is displayed directly from the geodatabase. Esri® ArcGIS ArcMap graphical
user interface. Copyright © Esri. All rights reserved.

3.4.3 Errors in Geodatabase Design


While very simple to implement, there are many pitfalls the novice user will undoubtedly
encounter while working with geodatabases and linking them to GIS. This list of things to
avoid is compiled largely from the authors’ experience as formal documentation of these
trials and tribulations is hard to come by.

3.4.3.1 Fields with Wrong Data Type


When designing a table, it is of the utmost importance to assign the correct data type (or
format) to each data field (or column). For example, numeric results should be contained in
fields formatted as numbers, and text should be contained in fields formatted as text. This
may seem obvious, but all practicing professionals have encountered “numbers stored as
text” errors and their associated frustrations. This error will also lead to a breakdown in
numeric queries, such as cross-tabs. A good way to circumvent this problem is to double
check field formats in the design view of tables.
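
A quick scripted check can also catch these errors before data are loaded. The sketch
below, with a hypothetical Result column, uses pandas to flag entries that cannot be
parsed as numbers:

    # Flag "numbers stored as text" in a hypothetical Result column.
    import pandas as pd

    df = pd.read_csv("import_batch.csv", dtype={"Result": str})
    numeric = pd.to_numeric(df["Result"], errors="coerce")  # unparseable -> NaN
    bad_rows = df[numeric.isna() & df["Result"].notna()]
    print(bad_rows)  # e.g., values such as "<0.5" or "5..1" entered as text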

3.4.3.2 Spaces in Data Field Names


A novice user may look at a geodatabase and ask, “Why all the underscores?” The answer to
this question is that ArcMap cannot properly read data fields (or columns) with spaces in
the name. For example, “X Coordinate” is an improper name, but “X_Coordinate” would
work just fine. In general, it is best never to use spaces in any file names or folders contain-
ing geodatabase and GIS material.

3.4.3.3 Misspellings and Format Discrepancies


When working with a geodatabase system that relies on common identifying fields that join
different tables, it is paramount that consistent spelling and formatting be used each and every
time a specific value is used. For example, if a well ID is misspelled or formatted differently for
an individual sample result, that result will never be picked up by a query for all results for that
particular well. When looking for “MW-100” the query will never return “MW100,” “MW-100,”
or “MW-100 ” (note the extra space after the last zero before the end of the string). This problem
is often caused by field personnel who assign incorrect sample IDs when collecting samples
and sending them to a laboratory. However, this problem is rarely the fault of field personnel as
often they are not included in the discussion of the data management framework and are not
provided with a consistent methodology for assigning sample IDs. This breakdown illustrates
the importance of a life-cycle approach to data management.
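
Where historic data must be salvaged, identifier cleanup can be scripted. The rules in
the minimal sketch below are illustrative assumptions; the real fix is to adopt and
document one naming convention project-wide before samples are ever collected.

    # Normalize inconsistent well identifiers (illustrative rules only).
    def normalize_id(raw: str) -> str:
        cleaned = raw.strip().upper().replace(" ", "")
        return cleaned if "-" in cleaned else cleaned.replace("MW", "MW-")

    for messy in ["MW100", " mw-100", "MW-100 "]:
        print(normalize_id(messy))  # each prints MW-100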

3.4.3.4 Cross-Tab Query Difficulties


All users of Microsoft Access, at one point in time, will have a mental roadblock when
attempting to design and execute a cross-tab query. Often the cause is the absence of a
data field/column that provides the common identifier used to calculate the minimum,
maximum, or average of the data in question. For example, if trying to calculate average
water-level elevations by season from a table of water-elevation data, the hydrogeologist
would need a data field that only contains the season of the measurement in question.
Therefore, the hydrogeologist may need to add a field labeled “Season” and populate it
with “fall” or “spring” according to the date of the gauge (which is likely reported in the

table). Sometimes a hydrogeologist may need to use a dummy field (populated by “1,” for
example) to create a dummy header for the field to be averaged in the query. This would
be necessary if a table only had two columns, for example, Well_ID and Water_Level_
Elevation. To perform a cross-tab query, the hydrogeologist needs a column header, and
with only two fields, this is not possible. Inserting the dummy column enables Well ID to
be on the left-hand side reported in succeeding rows and the dummy value to be the col-
umn header for the average water-level elevations calculated for each well.
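
Deriving such helper fields is also straightforward outside of Access. A minimal pandas
sketch adding the Season field from the measurement date (the month ranges are
assumptions to be matched to the actual gauging program):

    # Add a Season helper field derived from the gauging date.
    import pandas as pd

    df = pd.read_csv("water_levels.csv", parse_dates=["Date"])
    df["Season"] = df["Date"].dt.month.map(
        lambda m: "spring" if 3 <= m <= 5 else
                  "fall" if 9 <= m <= 11 else "other")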
Query breakdowns also occur when a data field is present in multiple different units
(e.g., mg/L and µg/L). The query has no way of detecting this. The easy solution to this prob-
lem is to use consistent units throughout an entire table and to (please!) label units in each
table.

3.4.3.5 Inconsistent or Unknown Coordinate Systems


There have been countless instances in the authors’ careers when they have been pro-
vided with a database with inconsistent or unknown coordinate systems. This is a major
challenge to reconcile as data must be assessed in an exploratory manner when imported
into GIS to determine what coordinate system everything is in. Tables with inconsistent
coordinates may have to be separated. Even worse, often no coordinate system identify-
ing column (or label) is provided when a custom, not easily discernible projection is used.
To circumvent this problem, always have a text field for Coordinate System, which labels
the coordinates of any X, Y values stored in a table. Notes with the correct coordinate sys-
tem can also be placed elsewhere in metadata. In general, a consistent coordinate system
should be used for all project data. Coordinate systems and projections are described in
more detail in the following section.

3.5 Working with Coordinate Systems


The most important attribute of any piece of hydrogeological data collected in the field
is its three-dimensional spatial location. The real-world location of data is arguably the
most critical variable in the CSM. This may seem like an obvious fact, but unfortunately
in many cases, the hydrogeologist is faced with the lack of accurate and/or meaningful
spatial information for key data. It follows that these limitations often lead to incorrect
decisions and/or project failure. The required level of accuracy of location data is entirely
dependent on the scale of the study in question. For example, for a basin-scale ground-
water model, the location of a well with water-level and geological information may only
need to be accurate to the nearest hundred feet. Figure 3.35 illustrates this fact. However,
for a detailed remedial investigation regarding multiple different properties, precise loca-
tion data with less than a foot of error for each and every soil sample may be critical in
determining which properties are contaminated and require remedial action (this exam-
ple has considerable legal implications). Figure 3.36 is an example remedial investigation
figure where exact sample locations are of the utmost importance. Even for the coarser
basin-scale example, the hydrogeologist would benefit from data that are located as accu-
rately as possible given the budgetary constraints of the project.
Numeric spatial locations are assigned to real-world hydrogeological data through use
of coordinate systems. A coordinate system is best defined as a system of points, lines,
and/or surfaces with an associated set of rules to define spatial positions in two or three
dimensions.

FIGURE 3.35
Map showing boundary of a hypothetical groundwater flow model domain. At this scale, the accuracy of the
well location does not need to be to the nearest foot, for example.

There are two primary types of coordinate systems: geographic coordinate
systems and projected coordinate systems. A geographic coordinate system uses latitude
and longitude to define a position on a three-dimensional spherical surface (Esri 2010f).
Latitude and longitude are angles, not distances, and are therefore commonly presented
in units of degrees, minutes, and seconds (Ormsby et al. 2004). There are innumerable geo-
graphic coordinate systems used in different places around the world with varying levels
of accuracy, but most assume the Earth is a spheroid, and they all have these common ele-
ments (Ormsby et al. 2004; Esri 2010f):

• Datum: A frame of reference defining the origin and orientation of latitude
and longitude lines. The most commonly used datums are NAD 83 and the World
Geodetic System of 1984 (WGS 84).
• Prime meridian: The zero-degree line of longitude that defines east and west (typ-
ically the prime meridian is the line of longitude intersecting Greenwich, U.K.)
• Angular unit: Typically degrees or grads.

FIGURE 3.36
Map showing soil boring locations in a hypothetical residential area. As the soil sampling results will be used
to determine which residential properties are part of the hazardous-waste site and require remediation, boring
locations need to be as accurate as possible. Locations will have significant implications with respect to residen-
tial property values—a common occurrence in site remediation.

Common geographic coordinate systems used in modern mapping include the NAD 83
coordinate system for North America and the WGS 84 coordinate system for the world.
One important thing to remember when working with geographic coordinate systems is
that latitude and longitude measurements can be negative. Latitude values are zero at the
equator, increase to 90° moving to the north pole, and decrease to –90° moving to the south
pole (i.e., the northern hemisphere is positive, whereas the southern hemisphere is nega-
tive). Longitude values are zero at the prime meridian (passing through Greenwich, U.K.),
increase to 180° moving eastward, and decrease to –180° moving westward (180° and –180°
are the same meridian; Ormsby et al. 2004). Any location in the continental United States
will therefore have a positive latitude and a negative longitude.
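Because these sign conventions are a frequent source of data-entry errors, a simple conversion routine is a useful check. The following minimal Python sketch (with hypothetical example coordinates) applies the rules described above:

def dms_to_decimal(degrees, minutes, seconds, hemisphere):
    # Convert degrees/minutes/seconds to signed decimal degrees.
    # Southern latitudes and western longitudes are negative.
    dd = degrees + minutes / 60.0 + seconds / 3600.0
    return -dd if hemisphere.upper() in ("S", "W") else dd

# A location in the continental United States has a positive latitude
# and a negative longitude (example values are hypothetical).
lat = dms_to_decimal(42, 21, 30, "N")  # 42.3583...
lon = dms_to_decimal(71, 3, 37, "W")   # -71.0603...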
The second type of coordinate system, the projected coordinate system, defines positions
on a flat, two-dimensional projection of the earth’s surface, akin to a grid with uniform lengths
and angles across its extent. As defined by the USGS, “A map projection is a systematic rep-
resentation of all or part of the surface of a round body, especially the Earth, on a plane”
(USGS 1987). In a projected coordinate system, points in the grid system are represented by

X coordinates (eastings) along latitude lines (parallels) and Y coordinates (northings) along
longitude lines (meridians; USGS 1987). Projected coordinate systems are always based on
a geographic coordinate system (what is being projected), and therefore, knowledge of the
underlying geographic coordinate system and its datum is of critical importance (Esri 2010g).
Projected coordinate systems are more commonly used in hydrogeological practice than
raw geographic coordinate systems because of the advantages of the grid system. Data can
be projected onto a grid in distance units of meters or feet, which has major practical advan-
tages when making distance/area/volume calculations and performing geostatistical analy-
ses. Longitude and latitude are not uniform units of measure because the ground distance
spanned by a degree of longitude shrinks moving toward the poles. Projections make
advanced quantitative analysis of spatial data, such as finite-difference modeling, feasible.
Unfortunately, distortion of shape, distance, and direction is inherent to any map projec-
tion because of the impossibility of perfectly flattening a spheroid surface. Esri uses this
common analogy to describe projection inaccuracy: “A spheroid can’t be flattened to a plane any
more easily than a piece of orange peel can be flattened—it will rip” (Esri 2010h). This phe-
nomenon also explains why Greenland may appear gigantic on flat maps of the world as in
the plate carrée projection in Figure 3.37. Similarly, the Mercator projection makes Greenland
appear much larger than Brazil, which is not the case in reality (Ormsby et al. 2004). The
Robinson projection as shown in Figure 3.38 does a better job projecting the world near
the poles and has been used by Rand McNally since the 1960s and by National Geographic
between 1988 and 1998 for general and thematic world maps (Esri 2010i).
In the authors’ experience, the most common projected coordinate systems used in the United
States are the Universal Transverse Mercator (UTM) grid and the State Plane Coordinate System
(SPCS). UTM zones for the United States are presented in Figure 3.39. In general, boundaries
between grid zones within a coordinate system are set such that distortion errors within each
distinct zone are held below some fixed threshold (USGS 1987). The SPCS further minimizes
errors by using different projections by state based on the shape of the state in question. For
example, the ellipsoidal transverse Mercator projection is used for states with predominant
north–south extents, such as Vermont, whereas the Lambert Conformal Conic projection is
predominantly used for other continental states (USGS 1987).

FIGURE 3.37
Map of the North Atlantic using the plate carrée projection. Greenland is unrealistically large because of distortion near the poles. Data layers from Esri®, ArcWorld Supplement.

FIGURE 3.38
Map of the North Atlantic using the Robinson projection. While distortion still exists, the degree of error is more acceptable, and this projection is widely used for general and thematic maps. Data layers from Esri®, ArcWorld Supplement.
To reinforce the above concepts and minimize confusion, it is helpful to dissect the spa-
tial reference properties of data layers typically used in common hydrogeological mapping.
As stated in the above paragraph, the UTM grid and SPCS are the two most common X, Y
coordinate systems used in mapping. For a hypothetical site in southeast New Hampshire,
the most common projected coordinate systems would therefore be the NAD 83 UTM zone
19N projection in meters or the NAD 1983 StatePlane New Hampshire FIPS 2800 projection
in feet or meters. One advantage of the state-plane projections is the option to use units of
either meters or feet. Both of these coordinate systems are transverse Mercator projections
of the NAD 83 geographic coordinate system.
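For script-based work, these coordinate systems can be inspected and constructed with the arcpy site package included with ArcGIS. The following minimal sketch assumes a recent arcpy release; the layer path is hypothetical, and the factory codes are the EPSG identifiers for the two systems named above:

import arcpy

# Inspect the spatial reference of an existing layer (hypothetical path).
sr = arcpy.Describe(r"C:\GIS\site.gdb\monitoring_wells").spatialReference
print(sr.name, sr.linearUnitName)

# Construct the two systems discussed above from well-known factory codes:
# EPSG 26919 = NAD 83 UTM zone 19N (meters)
# EPSG 3437  = NAD 83 State Plane New Hampshire FIPS 2800 (US feet)
utm_19n = arcpy.SpatialReference(26919)
spcs_nh = arcpy.SpatialReference(3437)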

FIGURE 3.39
Map of UTM zones in the United States. (From USGS, The Universal Transverse Mercator (UTM) Grid, USGS
Fact Sheet 077-01, US Department of the Interior, US Geological Survey, 2001.)

In summary, years and years of cartographic research and development have created
a system of projections that can be used to make maps and spatial calculations; however,
every map projection has some degree of embedded error. How does the hydrogeologist,
in many cases a relative novice cartographer, navigate the pitfalls of projection errors? As
stated by the USGS (1987), “The cartographer must choose the characteristic which is to
be shown accurately at the expense of others, or a compromise of several characteristics.”
In other words, accuracy should be maximized for those measurements most important
to the objectives of the analysis. In most modern-day applications conducted over relatively
small areas (e.g., watersheds for tributaries of major rivers), projection error is often
ignored. As with most forms of qualitative and quantitative analysis, distortions and inaccuracies
increase as the mapped area grows. Common-sense rules therefore apply when mapping
large areas. For example, one should not depict multiple states on a map when using the
coordinate system for one singular state. Similarly, one should not perform coordinate-
based calculations in UTM coordinates for one singular UTM zone when data inputs to
the calculation are found across multiple UTM zones. Simply put, the selected coordinate
system for mapping and associated analysis should match the scale of that analysis.
The previous discussion on coordinate systems is applicable to positions on the surface of
the Earth with associated latitude (or Y coordinate) and longitude (or X coordinate) values.
Similar to horizontal coordinate systems, vertical coordinate systems are needed to con-
vey the elevation (or Z value) of data relative to the surface of the Earth. There are two
primary types of vertical coordinate systems: spheroidal or gravity-related (Esri 2010j). The
most widely used systems are gravity-related (geoidal) and use as a zero value a bench-
mark such as mean sea level. The most commonly seen vertical coordinate system is the
National Geodetic Vertical Datum of 1929.
The most important thing when working with coordinate systems is simply to know and
document the coordinate system associated with any X, Y, and Z data that one possesses.
It is hard to convey to nonprofessionals the frequency with which hydrogeologists obtain
data in an unknown/unspecified coordinate system. A decision must be made at the very
beginning of a project regarding which coordinate system will be used for every piece of
spatial data collected during project execution. When used properly, coordinate systems
endow the hydrogeologist with great power in terms of presenting and analyzing spatial
data. The hydrogeologist now has the ability to assign any number of time-dependent
hydrogeologic characteristics to X, Y, Z coordinates representing a position in the real
world. Such characteristic data include water-table elevations, contaminant concentrations,
well yields, and many others. To reiterate, all hydrogeological data are spatially oriented
with positions in the real world. Coordinate systems make the transition from the real
world to the numerical electronic world possible, which, in turn, creates the opportunity
for quantitative spatial analysis and advanced data visualization.

3.6 Data Visualization and Processing with ArcGIS


3.6.1 Why Visualize Data?
When faced with the task of analyzing spatial data, the hydrogeologist has an innate
desire to plot those data on a map. The ability to visualize the results of field investigations
and associated data interpolations and extrapolations is essential to any hydrogeological
decision-making process. We may not always recognize this cartographic impulse, but we
invariably find ourselves plotting data by hand in a field book, on printouts of site plans,
or even on the back of an envelope. In order to truly understand the associations intrinsic
to spatial data, we must see a visualization of data as they exist in the real world.
Data visualizations are invaluable when developing CSMs because they enable render-
ings of field conditions at different times, scales, and levels of detail. Tasks, such as delin-
eating potentially productive aquifer materials or assessing a groundwater contaminant
plume, would simply be impossible without data maps. Text can help describe the rationale
for a decision, but it lacks the obvious truths inherent to a map clearly illustrating the data
justifying the decision. For the above-listed example of delineating potentially productive
aquifer materials, one could explain in text, “Geologic exploration data indicate that the
high-yield aquifer encompasses the stratified drift deposits around the river in question.”
Yet this description is far too general on its own and must be refined in a somewhat absurd
manner to be useful in a technical sense. For example,

“Geologic exploration data indicate that the high-yield aquifer is located along the river
east of the shopping mall and the sports complex, is approximately 1000 ft wide and
8000 ft long, is approximately bisected by Highway 99, and is approximately 4000 ft
west of the former landfill at its easternmost location near the unnamed pond.”

However, if the phrase “as demonstrated in the attached figure” is added to those sentences,
detailed results can be communicated to the target audience without long-winded, con-
fusing text that may confound even the most experienced hydrogeologist. Figure 3.40 is a
simple visualization that clearly and concisely demonstrates the delineation of potentially
productive aquifer materials. No confusing, long-winded text is necessary.

FIGURE 3.40
Simple visualization of productive aquifer materials in a watershed. USGS Color Ortho Imagery (2008/2009) downloaded from MassGIS (http://www.mass.gov/mgis/colororthos2008.htm).
Maps help communicate spatial data in an intangible way to technical and nontech-
nical audiences alike, activating a part of the brain that cannot be reached through text
alone. Furthermore, relatively recent advances made in computer GIS technology have
vastly improved the quality and reliability of hydrogeological maps, making them even
more useful in formulating and presenting study conclusions. This section provides an
overview of GIS tools available to the hydrogeologist and offers some mapping-standard
guidelines in the field of hydrogeology.

3.6.2 Analog versus Digital Data


Historically, mapping was limited to what could be transcribed on paper. A hand- or
machine-drawn map can be related to an analog model of real-world data in which the
medium for data transmission is quite simply ink and paper. We, as humans, translate
these ink-and-paper renderings of data through our visual and cognitive abilities. The
obvious limitation of analog maps is that they are extremely difficult to reproduce by a
third party or even by the original author (especially before the advent of copy machines).
Furthermore, the underlying data represented in the ink and paper cannot be easily trans-
mitted to another party without the broader analog model (e.g., the map is often the only
means of data transmission). These shortcomings are caused by the absence of an under-
lying numerical structure. GIS has resolved this problem by replacing the analog model
with a digital model based on the binary code of zeroes and ones. This universal language
cannot be erroneously translated, and visual renderings of digital data are easily repro-
duced with computer software. Digital data transmission is seamless, particularly because
the Internet has become ubiquitous and essential to daily world operations.
Most importantly, the transition from the analog data model to the digital data model
has greatly reduced the incidence and extent of data translation and transmission errors.
Whenever data exist in analog form, such as hard-copy laboratory reports, there are count-
less opportunities for human error in the use of those data. For example, errors commonly
occur in the manual entry of hard-copy data into spreadsheets or text files, where results
can be mistyped, duplicated, or omitted entirely. The data-entry errors are compounded
when a hydrogeologist uses the flawed data set for quantitative analyses such as contour-
ing or numerical modeling. At some point, the house-of-cards system built on analog data
will collapse (typically when the errors are discovered) with catastrophic consequences for
the affected project.
The use of analog data methods persists in modern environmental consulting practice
despite widespread availability of better alternatives. Engineering-design drawing pro-
grams, such as AutoCAD, a computer-aided design (CAD) program produced by Autodesk,
Inc., are often used as stand-alone visual processing programs for projects in hydrogeology
and environmental science. Without linkage to an underlying geodatabase or a referenced
coordinate system, CAD does not constitute a true GIS in the opinion of the authors, and
hydrogeological maps created in CAD are not too dissimilar from analog maps created in
hard copy. In the opinion of the authors, CAD is best suited to detailed engineering appli-
cations and should be reserved for that purpose.
When using CAD for hydrogeological mapping, data such as monitoring-well locations
and groundwater-sampling results are typically manually placed into drawings, creat-
ing a whole separate layer of human-error potential. The failure to use a GIS for project
mapping also results in gross inefficiency as hydrogeologists must communicate back
and forth with CAD technicians for hours on end. Under this system, the hydrogeologist
would first hand-draw a sketch for the CAD technician to copy, including the data to be
manually placed and/or labeled in the CAD drawing. The manual placement of data and
labels obviously takes longer in CAD than in a GIS system where symbols and labels are
automatically created by accessing the geodatabase. After the first draft of the figure is
complete, the hydrogeologist must then review it for accuracy and inevitably return to the
CAD technician with additional edits and revisions. The do loop of edits and revisions
continues as CAD technicians do not understand principles of hydrogeological mapping,
and hydrogeologists do not understand how drawings are created in CAD. For the record,
however, the authors are well aware of similar experiences with endless do loops involv-
ing GIS operators that are not trained hydrogeologists and hydrogeologists that do not
understand GIS capabilities. The inefficiency of interactions between hydrogeologists and
their support professionals again speaks to the importance of educating hydrogeologists
in the use of computer software such as ArcMap and AutoCAD.


As described in Section 3.3, digital data models in GIS are divided into two major cat-
egories: feature classes representing vector data and rasters representing continuous data.
The connection of the geodatabase with a GIS visual processor, such as ArcMap, is the
engine that drives computer mapping. Feature classes, rasters, and their many varieties—
Triangular Irregular Networks for surface morphology, terrain surfaces, dimensions,
multipoints, and multipatches—are visualized in ArcMap and turned into computer rep-
resentations of the real world. Most importantly, the seamless integration of visual pro-
cessor and geodatabase enables real-time labeling, editing, and annotating of data. The
following sections provide an introduction to data visualization processes in ArcMap
with emphasis on typical mapping activities required in professional hydrogeological
practice.
Before moving on to the next section, it is useful at this point to clarify ArcGIS terminol-
ogy. ArcGIS Desktop is a GIS software product line used for data creation, management,
and visualization (Ormsby et al. 2004). ArcGIS Desktop is not a singular program but
rather a suite of applications that includes ArcMap (used to display and analyze spatial
data and create maps), ArcCatalog (used to manage spatial data), ArcGlobe and ArcScene
(used in conjunction with the 3D Analyst extension to create, analyze, and visualize data
in three dimensions), ArcToolbox (used to process spatial data, termed geoprocessing,
in ArcMap or ArcCatalog), and ModelBuilder (a graphic design tool used to combine
geoprocessing tools into an executable model)—all of which have varying functionality
based on three user license levels: ArcView (lowest functionality), ArcEditor, and ArcInfo
(highest functionality) (Ormsby et al. 2004). Therefore, ArcGIS Desktop (the software
product line), ArcMap (the mapping application), and ArcView (the license level) are not
synonymous.
As of summer 2011, to help avoid the confusion described above, Esri has made sev-
eral name changes within ArcGIS. For example, ArcView is now termed “ArcGIS for
Desktop Basic.” Other terminology changes are described at the Esri Web page
(www.esri.com).
Figure 3.41 summarizes the relationships between spatial data, the geodatabase, the
above-described ArcGIS applications, and modules for spatial data analysis. Note that
some modules are found within the ArcGIS environment, such as the Spatial Analyst
and Geostatistical Analyst extensions described in detail in Chapter 4, and others are
external standalone programs, such as the groundwater models discussed in Chapter 5.
An example of a three-dimensional visualization created using ArcScene is provided in
Chapter 8.

FIGURE 3.41
Flow chart produced by the authors depicting relationships and processing tools in the ArcGIS environment.



3.6.3 Data Visualization in ArcMap


3.6.3.1 Data View
An ArcMap file (with the .mxd extension) is divided into two primary view screens: data
view and layout view. In data view, the data layers in the map occupy the entire computer
screen, and the user may explore and query data in real-world coordinates and measure-
ments (Esri 2010k). Data can be added as layers to the map, visible on the left-hand side of
the computer screen. Figures 3.42 and 3.43 present screen shots of how layers are added
to the data view. Aside from individual data layers, basemap and/or Web data can be
added to ArcMap documents, including aerial photographs, topographic maps, and any
number of other useful mapping elements. Figures 3.44 and 3.45 illustrate the addition of
basemap data to a map, and Figures 3.46 through 3.48 illustrate the addition of Web data
to a map.

FIGURE 3.42
Add Data button in ArcMap. Esri® ArcGIS ArcMap graphical user interface. Copyright © Esri. All rights reserved. USGS Color Ortho Imagery (2008/2009) downloaded from MassGIS (http://www.mass.gov/mgis/colororthos2008.htm).

FIGURE 3.43
Data sets and layers, such as shapefiles (.shp), can be added to a map from folders in a GIS project directory on one’s hard drive or network drive.

FIGURE 3.44
Add Basemap button in ArcMap. Esri® ArcGIS ArcMap graphical user interface. Copyright © Esri. All rights reserved. USGS Color Ortho Imagery (2008/2009) downloaded from MassGIS (http://www.mass.gov/mgis/colororthos2008.htm).

FIGURE 3.45
Different types of basemaps that can be instantly added to maps in ArcMap. Esri® ArcGIS ArcMap graphical user interface. Copyright © Esri. All rights reserved.
Zooming in and out on the screen changes the scale of the map in real time—akin to
changing the bird’s-eye viewer’s elevation. Layers added to the map can include feature classes
(e.g., well locations) and rasters (e.g., aerial photograph) from a geodatabase, standalone
shapefiles, and CAD features.

FIGURE 3.46
Add Data from ArcGIS Online button in ArcMap. Esri® ArcGIS ArcMap graphical user interface. Copyright © Esri. All rights reserved. USGS Color Ortho Imagery (2008/2009) downloaded from MassGIS (http://www.mass.gov/mgis/colororthos2008.htm).

FIGURE 3.47
Few of the many data layers available on ArcGIS Online after searching for “geology.” Esri® ArcGIS Online graphical user interface. Copyright © Esri. All rights reserved.

FIGURE 3.48
Example bedrock geology layer (Kentucky Geology and Faults) obtained from ArcGIS Online. Source: Kentucky Geological Survey.

Layers can be displayed using any number of symbols sup-
ported by the given shape type (e.g., points, polylines, polygons). Example symbols com-
monly used in hydrogeological maps are shown in Figure 3.49 for point data types and
Figure 3.50 for polygon data types. Hundreds of other symbols are available, all of which
can be selected and applied to data at the click of a button.
Different feature class data types and example symbols are also shown in Figure 3.51
to demonstrate their incorporation into a groundwater contour map, one of the most com-
mon hydrogeological figures.
For each data layer in a map, features can be symbolized with unique colors, markers,
shapes, sizes, and other properties (Ormsby et al. 2004). This is performed through the
Symbology tab of the layer properties dialogue. Different symbols can be used for every
unique value, categories of unique values, or quantities in a field. This is advantageous when
displaying different types of wells on a map, for example. An illustration of this capability is
provided in the screen shots in Figures 3.52 and 3.53. An example application in site remedia-
tion would be the use of different symbols and/or colors to represent soil or groundwater
sampling locations based on the sampling results. In other words, wells could be displayed
as blue circles if the concentration of a relevant constituent were below the maximum con-
taminant level (MCL) or as red triangles if the concentration were above the MCL. This can
be used to prevent excessive labeling that renders maps illegible and is described further in
Chapter 7. Symbology settings such as these are automatic in the ArcMap environment; no
manual changes are required. Displaying hundreds of symbols with properties based on
concentration results in CAD may take many hours of manual manipulation.
Rasters can be displayed in many different ways using black and white or colored shading
techniques to depict elevation or any other feature modeled by a raster. The shading intervals
can be specified by the user and do not have to be uniformly incremented (a major advantage
compared to the raster display functionality of other computer programs). Figure 3.54 is an
example raster display using color shading to represent surface topography in a watershed.

FIGURE 3.49
Example of “Environmental” point symbols in ArcMap. Esri® ArcGIS ArcMap graphical user interface. Copyright © Esri. All rights reserved.

FIGURE 3.50
Example of “Geology 24K” polygon symbols in ArcMap. Esri® ArcGIS ArcMap graphical user interface. Copyright © Esri. All rights reserved.

FIGURE 3.51
Example data types and symbols used in a hypothetical groundwater contour map.

FIGURE 3.52
Symbology tab of the Layer Properties dialogue. Different symbols can be selected for values in the Name field, which represents a well type in this example. Esri® ArcGIS ArcMap graphical user interface. Copyright © Esri. All rights reserved.

FIGURE 3.53
Data view screen shot of the symbology assignments. Esri® ArcGIS ArcMap graphical user interface. Copyright © Esri. All rights reserved. USGS Color Ortho Imagery (2008/2009) downloaded from MassGIS (http://www.mass.gov/mgis/colororthos2008.htm).
It is important to reiterate that in the current version of ArcMap (version 10), data in
multiple different coordinate systems can be shown in the correct positions on the same
map (projected on the fly) as long as the coordinate systems are specified in each indi-
vidual piece of data. This is an improvement over earlier versions, in which all data had
to be in the same coordinate system to exist in the same location on a map. If a data layer
does not have a defined coordinate system, ArcMap will warn the user and prompt him or
her to define a projection. As described in Section 3.4.3.5, this capability can lead to errors
in quantitative data analysis, such as contouring or modeling, if database managers are
not cognizant of the need to use one consistent coordinate system for these applications.
Therefore, it is best to select one coordinate system and use it for all data associated with a
specific project (note that many GIS operators increasingly neglect to do so, relying instead
on the ability to project on the fly).
Data from a shapefile or geodatabase can be added to the map through the Add Data
button or by dragging files into the layer list from ArcCatalog. ArcCatalog is a sepa-
rate Esri program used to manage spatial data that also functions as an efficient GIS
browser or explorer (Esri 2010m). File geodatabases and shapefiles can be easily created
in ArcCatalog. ArcCatalog can also be used to copy, move, delete, and preview (visualize)
data (Ormsby et al. 2004). More importantly, ArcCatalog allows hydrogeologists to create a
well-organized GIS folder for each project on their company network. Data layers can also
easily be converted between ArcGIS and CAD formats in ArcCatalog, most commonly
using the Export to CAD tool. A GIS directory with folder names is depicted in Figure 3.55
as viewed in ArcCatalog.

FIGURE 3.54
Map containing a raster representing surface topography in a watershed. Color-shaded topographic maps are often easier for nontechnical audiences to understand than contour lines.
Standalone graphics (points, polylines, polygons, text, etc.) can be added to the data
view, where they inherit the scaled spatial properties of the map. In other words, if a
50-ft length line is drawn in data view, that line will always be 50 ft in length regardless
of the chosen scale for viewing. Graphical additions are useful for marking up maps and
quickly adding elements that do not need permanent storage in a shapefile or geodatabase.
Additionally, any point, polyline, polygon, or annotation graphic created with the drawing
toolbar can be converted into a feature using the Convert Graphics to Features command.
This is a very powerful tool as the hydrogeologist can instantly convert hand-drawn
shapes into a shapefile or a geodatabase feature class. In other words, a simple drawing
can be converted into a digitally determined feature with prescribed spatial and other
attributes. A common use of this tool is to convert proposed monitoring-well locations
(drawn as point graphics) into a shapefile so that attributes such as well identifications,
construction information, and X, Y coordinates can be linked to that location.

FIGURE 3.55
Example GIS directory as viewed in ArcCatalog. ArcCatalog is a better GIS browser than Windows Explorer as ArcGIS files are consolidated for display, and data can be created, edited, or previewed. Esri® ArcGIS ArcCatalog graphical user interface. Copyright © Esri. All rights reserved.
A hypothetical example use of the Convert Graphics to Features command is illustrated
in Figures 3.56 through 3.58 to create a spatially referenced layer for a tailings pond at an
old industrial site.

FIGURE 3.56
As a first step, a red-hatched polygon is drawn around the tailings pond using the draw toolbar. Esri® ArcGIS
ArcMap graphical user interface. Copyright © Esri. All rights reserved. World Imagery Source: Esri, i-cubed,
USDA, USGS, AEX, GeoEye, Getmapping, Aerogrid, IGN, IGP, and the GIS User Community.

FIGURE 3.57
After selecting the polygon, the Convert Graphics to Features option can be selected. Esri® ArcGIS ArcMap
Graphical user interface. Copyright © Esri. All rights reserved. World Imagery Source: Esri, i-cubed, USDA,
USGS, AEX, GeoEye, Getmapping, Aerogrid, IGN, IGP, and the GIS User Community.

FIGURE 3.58
Screen shot of the Convert Graphics to Features option window, where the drawings will be converted into a
shapefile (.shp extension). Note that the shapefile may assume the coordinate system for the entire data frame
or for any individual layer within the data frame. Esri® ArcGIS ArcMap graphical user interface. Copyright ©
Esri. All rights reserved.

3.6.3.2 Layout View


Layout view is the platform for creating formal figures with a title block, legend, scale bar,
and direction indicator. In layout view, map elements are placed on a page in page space,
sized according to the paper to be used for printing (Esri 2010k). In general, the hydroge-
ologist should perform all data manipulation in data view and then select the final scale
for those data in layout view while editing the title block. The central component of layout
view is the data frame, which displays all the map layers and essentially functions as a
window displaying the data view (note that multiple data frames can be used in the same
layout to create a zoom box, for example). In layout view, one can zoom in and out of the
data frame (changing the scale of the map) or zoom in and out on the map itself—akin to
holding a map printout at different distances from one’s face. The scale and display win-
dow of layout view, and the overall coordinate system of the map, can be fixed in the Data
Frame Properties dialogue. To reiterate, individual layers can be in different coordinate


systems. The chosen coordinate system of the data frame is applied as default to imported
or exported data that is devoid of a specified projection.
There are many user-friendly properties of the layout view that are not immediately
appreciated by the GIS novice. For example, scale bars and text for a map are automati-
cally adjusted when the user zooms in and out inside the data frame. This means that any
map printout will be in the correct scale as long as No Scaling is selected during the print
setup. Improper map scales are common problems with printouts from CAD as scale bars
and text are often manually inserted and not automatically linked to the scale of the data
frame. Data frames are easily rotated in layout view with automatic adjustment of north
arrows, and multiple data frames are easily added to the view to show a completely differ-
ent set of layers or a zoomed-in view of a specified section of the map.
Hydrogeologists should establish common layout templates by project with different sizes
and orientations (e.g., letter, legal, portrait, landscape, etc.). These templates can easily be
imported into a map document to standardize the appearance of maps. An example portrait
layout with requisite map elements (title block, legend, etc.) is presented in Figure 3.59.
Graphics can also be added to layout view, but their size will remain unchanged regardless
of the scale of the map. Layout graphics are similar to hand-markings on a printout of a map.
In summary, the process of adding and displaying data in ArcMap is surprisingly easy, and a
beginner will be able to make high-quality maps very quickly with a little bit of practice.

3.6.4 Geoprocessing in ArcMap


A wealth of tools for data digitizing, display, and manipulation exists within ArcMap,
many of which are provided by ArcToolbox. Even experienced users can stumble across
a previously unknown tool that makes a formerly complicated task as simple as pushing
a button. The full list of tools and their capabilities cannot easily be presented, and the
reader is referred to the robust and easy-to-use Esri help file for further reading. However,
the most commonly used ArcGIS tools in the field of hydrogeology are presented and
briefly described in the following sections for the reader’s benefit.

3.6.4.1 Querying Tools


In addition to the querying capabilities offered within an external database program,
such as Microsoft Access or Microsoft SQL Server, queries can also be constructed
within ArcMap to further refine display selections. In a sense, a query made in Microsoft
Access can be considered a front-end query, and an ArcMap query can be considered a
back-end query. Oftentimes they can accomplish the same thing, and it is left to the user
to decide where a query is most appropriate for the application at hand. The most com-
mon location for querying within ArcMap is in the Definition Query tab of the Layer
Property dialogue. In this window, the user can specify which data in the selected layer
should be displayed and which data should be hidden from the mapping. Complicated
queries involving math and logical operations can easily be created in the Query Builder
module. One highly beneficial feature of the Definition Query tool is that its queries
carry through to any form of spatial data analysis subsequently conducted on the queried
layer.

FIGURE 3.59
Example completed figure for a groundwater contour map at a hypothetical hazardous-waste site.
Figure 3.60 is a screen shot illustrating the use of the Definition Query and Query Builder
features in ArcMap. In this example, the only wells to be shown in the map are those in the
shallow bedrock (BRS). In addition, well BRW-2U will be excluded. If this layer is used to
generate a groundwater contour map in ArcMap, the only wells included in the contouring
algorithm will be those in the shallow bedrock.

FIGURE 3.60
Example use of the Definition Query tab of the Layer Properties dialogue. Note that buttons on the bottom of the Query Builder box can be used to verify query language logic, to load queries from other layers, or to obtain help regarding query language. Esri® ArcGIS ArcMap graphical user interface. Copyright © Esri. All rights reserved.

Complex queries using multiple data layers can be created through the Join and Relate
tools, which connect the attributes of different tables (similar to a select query in Microsoft
Access). A nonspatial data table can be linked to the attribute table of a spatial data layer
using the Join and Relate tools. The Join tool combines the separate tables into one new,
larger table, and the Relate tool links attributes but keeps the tables separate (Ormsby et
al. 2004).
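The Figure 3.60 filter can also be applied programmatically. A minimal sketch using the arcpy.mapping module follows; the map document path and field names are hypothetical:

import arcpy

# Open a hypothetical map document and retrieve the wells layer by name.
mxd = arcpy.mapping.MapDocument(r"C:\GIS\site_map.mxd")
wells = arcpy.mapping.ListLayers(mxd, "Monitoring Wells")[0]

# Show only shallow-bedrock wells and exclude BRW-2U, as in Figure 3.60.
# Subsequent analyses on this layer honor the definition query.
wells.definitionQuery = "ZONE = 'BRS' AND WELL_ID <> 'BRW-2U'"
mxd.save()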

3.6.4.2 Labeling Tools


One of the many advantages to using a geodatabase-linked visual processor such as ArcMap
is the availability of automatic labeling engines. Label properties in ArcMap are gener-
ally specified in the Properties dialogue of each map layer. The label size, font, placement
relative to the data point, and border format can all be modified by the user. Queries and
custom expressions can be used to label individual data categories differently and with user-
specifie­d notes or additions. Labels are all placed automatically, such that no manual data
entry is required. However, it may be advantageous to convert labels into graphics in instances
of label overflow or overlapping. Advanced GIS users may consider using the labeling tool-
bar or the Maplex extension for highly labeled maps. In general, it is recommended to avoid
the use of excessive labeling to keep maps simple and focus on the most important elements.
Figure 3.61 is a screen shot illustrating the use of the labeling toolbar.

FIGURE 3.61
Labels tab of the Layer Properties dialogue. Note that different labels can be used for different queried data sets
by changing the setting on the Method tab. In this example, the Well_ID field is labeled with the resulting labels
visible in the map view behind the dialogue box. Esri® ArcGIS ArcMap graphical user interface. Copyright
© Esri. All rights reserved. World Imagery Source: Esri, i-cubed, USDA, USGS, AEX, GeoEye, Getmapping,
Aerogrid, IGN, IGP, and the GIS User Community.
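For finer control, label expressions can themselves be written as small scripts. The following is a minimal sketch of a Python label expression, entered in the Label Expression dialogue with the parser set to Python; both field names are hypothetical, and the expression stacks the well ID above its water-level elevation:

# Runs inside ArcMap's label engine, not as a standalone script;
# bracketed names are layer fields and arrive as strings.
def FindLabel([Well_ID], [WL_Elev]):
    return [Well_ID] + "\n" + [WL_Elev] + " ft"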

3.6.4.3 Editing Tools


The editor toolbar is instrumental in populating geodatabases with manually entered data
and in modifying the location and size of spatial features. A hydrogeologist might use the
editor toolbar to populate a shapefile or feature class with proposed sampling locations for
an upcoming investigation. Using editor, the hydrogeologist would place the sample on
a map and then populate its associated data fields with the sample ID and other relevant
information, such as the sampling depth and planned laboratory analyses.
Figures 3.62 through 3.64 illustrate the use of the editing toolbar with dimensioning.

FIGURE 3.62
Screen shot illustrating the creation of a Water Supply Well feature using the editor toolbar (new feature
denoted by the small aqua circle). Esri® ArcGIS ArcMap graphical user interface. Copyright © Esri. All rights
reserved. USGS Color Ortho Imagery (2008/2009) downloaded from MassGIS (http://www.mass.gov/mgis/
colororthos2008.htm).

FIGURE 3.63
Attribute definition using the editor toolbar. The new Water Supply Well is named EW (for “extraction well”)
and assumes the display properties for that symbol class. Note that if the layer being edited is stored in a geo-
database, the geodatabase itself is being edited. Esri® ArcGIS ArcMap graphical user interface. Copyright ©
Esri. All rights reserved. USGS Color Ortho Imagery (2008/2009) downloaded from MassGIS (http://www.mass.gov/mgis/colororthos2008.htm).

FIGURE 3.64
Dimensioning features can be created in editor for display purposes or to help place features in their correct locations.
Snapping tools can also help ensure that features are placed in exact locations, such as the intersection of two lines.
Esri® ArcGIS ArcMap graphical user interface. Copyright © Esri. All rights reserved. USGS Color Ortho Imagery
(2008/2009) downloaded from MassGIS (http://www.mass.gov/mgis/colororthos2008.htm).

3.6.4.4 Georeferencing Tools


Georeferencing is the process by which map coordinates are assigned to layers in a map.
Most often, features requiring georeferencing are scanned maps from historic investiga-
tions or imagery data where coordinates are unavailable. Scanned files should be saved
in .jpg or .tiff format prior to being added as layers in a map document. To georeference
a layer, three or more points are selected from that layer and then linked to correspond-
ing points in the map document in the spatially correct location. The layer is then trans-
formed into the corrected scale and spatial location based on these three linking points.
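The minimum of three points follows from the mathematics involved: a first-order (affine)
transformation, x′ = Ax + By + C and y′ = Dx + Ey + F, contains six unknown coefficients,
and each control point supplies two equations. Three points therefore determine the
transformation exactly, and additional points allow a least-squares fit that averages out
digitizing error.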
Alternatively, if the layer to be georeferenced has elements with known coordinates in the
real world, a text file called a world file (with the .wld extension) can be created with those
coordinates to automatically georeference the layer in question. Elements of the scanned
map can then be traced and added to feature classes and/or shapefiles with real spatial
attributes. Practicing hydrogeologists mostly use georeferencing to trace historic contour


maps or to approximately place historic sampling locations in a map.
Example georeferencing exercises are provided in Chapter 7 in regards to the incorpora-
tion of historic and public-domain data.

3.6.4.5 Analysis Tools


Analysis tools are used to perform numerous graphical operations of significant value,
including data extraction, overlaying, and buffering. Extracting commands include clip-
ping graphics from one layer based on features in a different layer. This is useful when
clipping topographic or groundwater contour maps based on physical hydrogeological
boundaries or the absence of constraining data, as illustrated in Figures 3.65 and 3.66.

FIGURE 3.65
Input features for the clip tool. The hydrogeologist needs to clip the topographic contour lines to the watershed
boundary for incorporation into a groundwater model.

FIGURE 3.66
After successful execution of the clip tool, a new layer is created with topographic contours limited to the extent
of the watershed.

Overlaying commands compute and create features representing the intersection, union,
or spatial join of separate layers. In hydrogeological practice, one example of overlaying
tool usage is calculating areas of soil impacted by multiple different contaminants. This
is demonstrated with the Intersection tool in Figure 3.67. Buffering tools calculate and
create polygon or polyline buffer rings around selected features. This is useful to the
hydrogeologist when needing to show buffer zones around water-supply wells to protect
against contamination, as illustrated in Figure 3.68.

FIGURE 3.67
Example use of the intersection tool to create a feature (red) representing the area where the two plumes (tan and blue) overlap. Note that the intersect tool is considered an overlay tool, and the clip tool is considered an extract tool. Esri® ArcGIS ArcMap and ArcToolbox graphical user interface. Copyright © Esri. All rights reserved. USGS Color Ortho Imagery (2008/2009) downloaded from MassGIS (http://www.mass.gov/mgis/colororthos2008.htm).

FIGURE 3.68
Example use of the buffer tool to create 1000-ft-radius circles around two water-supply wells. Note that there is also a multiple ring buffer tool, which can automatically generate multiple circles with different radii around individual features. Esri® ArcGIS ArcMap and ArcToolbox graphical user interface. Copyright © Esri. All rights reserved. USGS Color Ortho Imagery (2008/2009) downloaded from MassGIS (http://www.mass.gov/mgis/colororthos2008.htm).
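All three operations are also exposed as geoprocessing tools that can be scripted. A minimal arcpy sketch follows; the geodatabase path and layer names are hypothetical:

import arcpy

arcpy.env.workspace = r"C:\GIS\project.gdb"  # hypothetical geodatabase

# Extract: clip topographic contours to the watershed boundary
# (the operation shown in Figures 3.65 and 3.66).
arcpy.Clip_analysis("topo_contours", "watershed", "topo_contours_clip")

# Overlay: intersect two plume polygons to delineate the co-contaminated
# area (the operation shown in Figure 3.67).
arcpy.Intersect_analysis(["plume_a", "plume_b"], "plume_overlap")

# Proximity: 1000-ft buffers around water-supply wells (Figure 3.68).
arcpy.Buffer_analysis("supply_wells", "wellhead_buffers", "1000 Feet")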

3.6.4.6 Measurement Tools


Several rapid measurement tools exist in ArcMap that can greatly reduce calculation times
and improve calculation accuracy. The Calculate Geometry tool can be used to instantly
calculate the area, perimeter, length, X coordinate centroid, and Y coordinate centroid for
polygon or polyline features (as applicable). After executing this tool, the selected geo-
metric parameter measurements are added to the attribute table of the feature class in
question. Figure 3.69 illustrates the use of the calculate geometry tool to calculate geo-
metric properties of interest for the polygon representing the Middle Reach of the Muddy
River displayed in Figure 3.51 (note that this polygon is named swamp for the calculation
screen).
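The same computation can be scripted with a geometry token in the field calculator. A minimal arcpy sketch follows; the feature class path and field name are hypothetical:

import arcpy

# Add a numeric field and populate it with each polygon's area in acres,
# mirroring the Calculate Geometry example above.
fc = r"C:\GIS\project.gdb\swamp"
arcpy.AddField_management(fc, "Area_ac", "DOUBLE")
arcpy.CalculateField_management(fc, "Area_ac",
                                "!shape.area@acres!", "PYTHON")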
In addition to automated calculations, the user can manually measure geometric fea-
tures using the measure toolbar. The user defines the area of measurement manually by
drawing lines or polygons, and the results are outputted to the dialogue box as shown in
Figure 3.70.

FIGURE 3.69
Example use of the Calculate Geometry tool to calculate the area, perimeter, and centroid coordinates. The dia-
logue box shown is to calculate the area, which can be done using either the coordinate system of the layer or of
the data frame (in this case they are the same, as should always be attempted). Also note that numerous unit options are
available. Esri® ArcGIS ArcMap graphical user interface. Copyright © Esri. All rights reserved.

FIGURE 3.70
Example output of the measure features toolbar when drawing a polygon around the boundary of the swamp
(Middle Reach of Muddy River) to measure its area in acres. Note that the measurement is a good approxima-
tion of the exact calculation shown in Figure 3.69. Esri® ArcGIS ArcMap graphical user interface. Copyright ©
Esri. All rights reserved.

3.6.4.7 Data Management Tools


As the name implies, data management tools are primarily concerned with the management
of geodatabase content and offer several useful utilities (a scripted example follows the list):
• Populating data layers with extracted X, Y coordinates as shown in Figures 3.71
through 3.73
• Projecting data layers into different coordinate systems as shown in Figures 3.74
and 3.75

FIGURE 3.71
Attribute table of monitoring-well feature class that does not have X, Y coordinates. Esri® ArcGIS ArcMap
graphical user interface. Copyright © Esri. All rights reserved. USGS Color Ortho Imagery (2008/2009) down-
loaded from MassGIS (http://www.mass.gov/mgis/colororthos2008.htm).

FIGURE 3.72
Selection of the add X, Y coordinates tool. Esri® ArcGIS ArcMap and ArcToolbox graphical user interface.
Copyright © Esri. All rights reserved.

FIGURE 3.73
Following execution of the tool, X (POINT_X) and Y (POINT_Y) coordinates have been added to the attri-
bute table in the feature class’s projected coordinate system. Esri® ArcGIS ArcMap graphical user interface.
Copyright © Esri. All rights reserved.

FIGURE 3.74
Screen shot of input parameters for the project tool. This tool can be used to define a projection for features
that do not have an associated projection or to reproject features from one coordinate system to another. In this
example, monitoring-well features in Massachusetts are being reprojected from the SPCS to the UTM coor-
dinate system. In addition, a geographic transformation is being used to convert the underlying geographic
coordinate system from NAD 83 to NAD 27. The reprojected feature class can be saved as a different file so
the original data layer is preserved. Esri® ArcGIS ArcMap and ArcToolbox graphical user interface. Copyright
© Esri. All rights reserved. USGS Color Ortho Imagery (2008/2009) downloaded from MassGIS (http://www.mass.gov/mgis/colororthos2008.htm).

FIGURE 3.75
Attribute table of the reprojected wells layer Wells_Reproject after using the add X, Y coordinates tool to popu-
late the table with the new UTM coordinates. Note the significant differences between the old coordinates
(SPCS) and the new UTM coordinates (POINT_X and POINT_Y). Esri® ArcGIS ArcMap graphical user interface.
Copyright © Esri. All rights reserved.

• Converting feature types from one format to another, such as converting a poly-
line to a polygon
• Miscellaneous data management tasks for tables, feature classes, and rasters,
including creating, copying, exporting, and compressing files
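A minimal arcpy sketch of the first two utilities listed above follows; the paths are hypothetical, and the output coordinate system and datum transformation mirror the Figure 3.74 example:

import arcpy

wells = r"C:\GIS\project.gdb\monitoring_wells"

# Append POINT_X and POINT_Y fields in the layer's own coordinate system
# (the operation shown in Figures 3.71 through 3.73).
arcpy.AddXY_management(wells)

# Reproject from State Plane to UTM, applying a geographic transformation
# between the underlying datums (compare Figures 3.74 and 3.75).
arcpy.Project_management(
    wells,
    r"C:\GIS\project.gdb\wells_reproject",
    arcpy.SpatialReference(26719),  # EPSG 26719 = NAD 27 UTM zone 19N
    "NAD_1927_To_NAD_1983_NADCON",
)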

3.7 GIS Modules for Hydrogeological Data Analysis


As computer technology has steadily advanced, a large number of computer programs
have been specifically created to help the hydrogeologist perform technical work. These
programs are better termed modules as they are often integrated into larger computer
software programs designed for data visualization (e.g., ArcMap). These modules are
often companion programs to ArcMap and the geodatabase, with the common links being
X, Y coordinates and other data attributes. Through use of these modules, the hydrogeolo-
gist can rapidly analyze spatial data in complex ways and produce high-quality graphi-
cal deliverables in a GIS environment. Simply put, the modern hydrogeologist must have
familiarity with these modules in order to satisfy clients and be competitive in the tech-
nological marketplace. This section briefly describes various modules for common GIS
programs that are developed by independent vendors specifically for displaying geologic/
hydrogeologic data. Emphasis is placed on outlining the types of analyses and graphics
production that can be accomplished through use of each software product. Hence, this
section is divided by hydrogeological task rather than by the module names themselves.
As groundwater modeling has become such an important aspect of modern hydrogeology,
an entire chapter is devoted to the subject (Chapter 5), and it is not introduced below.

3.7.1 Statistical Analyses


Numerous modules exist within various programs to help the hydrogeologist calculate statis-
tical parameters for spatial data of interest. While the field of statistics is not limited to spatial
data, the simple calculation of parameters such as mean, maximum, minimum, standard
deviation, and range, together with the generation of histograms, is an essential component
of any hydrogeological investigation.
The utility of GIS-based statistical modules is the ease with which these computations can be
performed on data in geodatabase format. For example, rather than manually extracting the
data of interest from a database and then pasting it into Excel or another program, GIS mod-
ules allow direct querying of data followed by automated computation of the desired parame-
ters. More importantly, results are automatically exported and sorted into a tabular GIS object.

3.7.1.1 Visual Sample Plan



One important statistical module commonly used in hazardous waste–site investigations


is Visual Sample Plan (VSP). VSP is a computer program in the public domain designed to
produce statistically defensible sampling plans to meet U.S. EPA DQOs. Specifically, VSP
helps the hydrogeologist determine how many samples are needed to meet prescribed
levels of decision error (e.g., type I and type II errors when evaluating a hypothesis test
determining whether or not the mean concentration of the parameter of interest exceeds
an action limit/threshold value). A type I error for a hypothesis test is the probability of
falsely rejecting the null hypothesis, and a type II error for a hypothesis test is the prob-
ability of falsely accepting (or failing to reject) the null hypothesis.
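The flavor of the calculation can be conveyed with the textbook normal-theory approximation for a one-sided test of a mean; VSP implements more refined parametric and nonparametric versions, so the following Python sketch is illustrative only:

import math
from scipy.stats import norm

def samples_needed(sigma, delta, alpha=0.05, beta=0.20):
    # sigma: estimated standard deviation of the parameter of interest
    # delta: smallest difference from the action level worth detecting
    # alpha: type I error rate; beta: type II error rate
    z = norm.ppf(1 - alpha) + norm.ppf(1 - beta)
    return math.ceil((z * sigma / delta) ** 2)

print(samples_needed(sigma=2.0, delta=1.0))  # 25 samples in this example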
Numerous parametric and nonparametric methods of sample-size determination are
available in VSP, including the robust one-sample, nonparametric Multi-Agency Radiation
Survey and Site Investigation Manual sign test (Matzke et al. 2010). VSP easily links to
GIS software, and GIS data can be imported to and exported from the program. A screen
shot of proposed sample locations developed in VSP is presented in Figure 3.76. The major
advantage of this is the ability to export exact VSP-determined sample locations into an
ArcMap document. VSP also has its own visual processor for display without requiring
access to ArcMap, and more recent versions of VSP have incorporated additional data
analysis tools, such as geostatistical analysis and contouring (described in greater detail
in Chapter 4). An example exercise of calculating the number of samples needed for a site
investigation using VSP is presented in Chapter 7.

FIGURE 3.76
Example hypothetical sampling design output from VSP. Green dots are historic shallow soil sampling locations, and yellow dots are additional sampling locations proposed by VSP to satisfy the decision error thresholds prescribed by the user.

3.7.1.2 FIELDS Rapid Assessment Tool Software


Similar to VSP, the Rapid Assessment Tool (RAT) software developed by the U.S. EPA
Field Environmental Decision Support Team (FIELDS) is a standalone program in the
public domain designed to help environmental professionals design and implement sci-
entifically defensible sampling strategies. As with VSP, RAT software can be used to create random or grid-based sampling designs and is easily linked with Microsoft Access
and ArcMap to import basemaps and export data. RAT software can also be used to
perform simple statistical calculations, generate histograms and trend plots, and produce
data contour maps from rasters created with natural neighbor interpolation methods (U.S.
EPA 2009). RAT software goes beyond VSP in its utility as a data collection interface in
the field. The program can receive GPS and field-monitoring data and immediately store
them in a Microsoft Access database for presentation in its visual processor as depicted
in Figure 3.77. For this reason, FIELDS promotes RAT software’s use in real-time continu-
ous mapping applications during large-scale site investigations. The ability to efficiently
store, visualize, and analyze data in real time is particularly useful in contaminant delin-
eation or excavation projects where decisions are made continuously regarding where
next to sample or dig. Figure 3.78 shows an example visualization screen in RAT. The
use of data visualizations as a real-time decision tool is described further in Chapters 7
through 9.

FIGURE 3.77
RAT attempts to streamline the transmission of field data to the project database by interacting directly with field
equipment such as an X-ray fluorescence instrument used to screen soils for heavy metals, a GPS unit used to obtain
coordinates for sampling locations, and a multigas meter used to screen the breathing zone of the sampling area
(shown from top to bottom). (Modified from U.S. EPA, 2009b, Introduction to RAT Presentation. “RAT Introduction.
ppt.” Available at http://www.epaosc.org/site/doc_list.aspx?site_id=5208, accessed February 10, 2011.)

FIGURE 3.78
RAT visualization screen with a sampling grid. (From U.S. EPA, Rapid Assessment Tool (RAT) User Guide
Version 3.02.07. SOP NO. C-ERT-O-004, Revision No. 0, 95 pp., 2009a.)

3.7.1.3 Spatial Analysis and Decision Assistance


Spatial Analysis and Decision Assistance (SADA) software produced by the University
of Tennessee greatly expands the data visualization and exploration capabilities of VSP
and FIELDS. SADA supports a robust two-dimensional visual processor in addition to
a fully functional three-dimensional viewer, which is unique for programs available in
the public domain. SADA’s two- and three-dimensional visual processing capabilities are
illustrated in Figures 3.79 and 3.80, respectively. SADA provides a platform to perform the

FIGURE 3.79
Image of contour map produced in SADA. (From University of Tennessee, Spatial Analysis and Decision
Assistance Documentation, 2008, www.tiem.utk.edu/~sada/documentation.shtml, accessed February 14, 2011.)

FIGURE 3.80
Example three-dimensional visualization of contamination over bedrock produced in SADA. (From University
of Tennessee, Spatial Analysis and Decision Assistance Software–Visualization, 2007a, www.tiem.utk.edu/~sada/visualization.shtml, accessed February 14, 2011.)

full life-cycle statistical evaluation of data, incorporating the following steps (University
of Tennessee 2008):

• Initial visualization of historic data


• Development of a statistically based sampling strategy
• Exploratory statistical analysis of sampling results
• Human health and ecological risk assessment
• Geostatistical analysis and simulation
• Decision and cost-benefit analysis

The holistic, self-contained approach of SADA presents great opportunities to the hydro-
geologist to efficiently manage, analyze, and present data. Often in professional practice,
multiple different computer programs are used for each of the above steps, which can cre-
ate confusion and inefficiency. The GIS professional appreciates SADA because results and
conclusions from each site assessment stage can be seamlessly visualized. For example,
risk assessment conclusions are rarely visualized on a map as the calculations are typically
performed in external spreadsheets devoid of location data. An example risk assessment
visualization in SADA is presented in Figure 3.81. Clear presentation of risk assessment
results is particularly important because the majority of decisions made in the remediation
of hazardous waste sites are based on risk-assessment results.
The integrated, comprehensive nature of SADA also enables the hydrogeologist to effec-
tively blend the assessment stages such that decisions can be made in an iterative fash-
ion after revisiting earlier analyses. For example, SADA can be used to create sampling
designs integrating the results of earlier risk-assessment calculations.

FIGURE 3.81
Image of contour map of cumulative risk produced in SADA. (From University of Tennessee, Spatial Analysis
and Decision Assistance Software–Risk Assessment, 2007b, www.tiem.utk.edu/~sada/humanhealth_risk.shtml, accessed February 14, 2011.)

3.7.1.4 ArcToolbox
Within ArcToolbox, the Statistics toolset is used to calculate summary statistics from
map or geodatabase elements. ArcToolbox also contains a Spatial Statistics toolbox that
performs more complex calculations on spatial data, including geographic distribution,
pattern, and cluster analysis as shown in Figure 3.82. These tools may be used to calculate
the mean center of data (one example of which would be calculating the center of mass of
a groundwater contaminant plume) or to perform hot spot, cluster, and outlier analysis
on hydrogeological data. The Spatial Analyst extension to ArcGIS also contains functions
to calculate statistical parameters of rasters and then assign these values to cells in a new
raster layer (Esri 2010n).

3.7.1.5 ProUCL
More complicated statistical methods are often required for hydrogeological investiga-
tions. These methods are not limited to spatial data alone but are often used in hydrogeol-
ogy for analyses of spatial data stored in a GIS environment. Common advanced statistical
calculations performed in hydrogeological practice include confidence interval estimation,
hypothesis testing, distribution testing, and trend analysis. ProUCL is a statistical mod-
ule available in the public domain, which can be used to perform hypothesis testing (for
example, comparing the mean concentration of contaminant concentrations in soil to a
regulatory standard), distribution testing, and to calculate UCLs of a mean estimate (e.g.,
a 95% UCL; U.S. EPA 2007). UCLs are of particular interest for risk assessments for which
95% UCLs often represent exposure point concentrations.
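As a minimal sketch of just one of the many UCL methods ProUCL offers, the following function computes the Student's-t UCL of the mean, which is appropriate only for approximately normal data; the example concentrations are invented.

```python
import numpy as np
from scipy.stats import t

def t_ucl(data, confidence=0.95):
    """Student's-t UCL of the mean: xbar + t_(conf, n-1) * s / sqrt(n).
    The simplest of the UCL methods ProUCL offers; valid only for
    approximately normal data."""
    x = np.asarray(data, dtype=float)
    n = x.size
    return x.mean() + t.ppf(confidence, df=n - 1) * x.std(ddof=1) / np.sqrt(n)

# Hypothetical soil lead results (mg/kg)
print(round(t_ucl([120, 85, 210, 160, 95, 140, 175, 130]), 1))
```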

FIGURE 3.82
Screen shot of available spatial statistics tools in ArcToolbox. Esri® ArcGIS ArcToolbox graphical user interface.
Copyright © Esri. All rights reserved.

3.7.1.6 Monitoring and Remediation Optimization System


Advanced spatial statistics for hydrogeological data can be calculated with another widely
used public-domain program, the Monitoring and Remediation Optimization System
(MAROS). MAROS helps the hydrogeologist optimize monitoring plans for groundwa-
ter contamination sites (e.g., what wells should be sampled for what parameters at what
frequency). Additionally, MAROS can be used to evaluate statistical and spatial changes
in a groundwater contaminant plume over time. Example tabular and graphical output
from MAROS is presented in Figures 3.83 and 3.84, respectively.

FIGURE 3.83
Example Mann–Kendall trend analysis summary table produced in MAROS for benzene concentrations in
groundwater at a hypothetical hazardous-waste site. Note that MAROS gives the user a qualitative judgment
that can be referenced when assessing trends (e.g., decreasing).

FIGURE 3.84
Example plot of the center of mass (first moment) of a hypothetical groundwater plume at a hazardous-waste
site. Locations of the center of mass are shown for 10 sequential years to demonstrate how the plume has
changed over time.

Statistical evaluation is performed through Mann–Kendall trend analysis, which can be used to help estimate
when cleanup will be achieved. MAROS evaluates spatial changes in groundwater con-
tamination by calculating the zeroth, first, and second spatial moments of groundwater
data. The zeroth moment is simply the total mass of a contaminant in groundwater, the
first moment is the center of mass of the groundwater plume (X, Y position), and the sec-
ond moment is the spread of the plume about the center of mass (Aziz et al. 2006).
Spatial moment analysis can demonstrate successful application of monitored natural
attenuation of groundwater contamination or can be used to justify removing redun-
dant monitoring wells from a sampling network. The results of MAROS analyses can be
exported into a visual processor (e.g., ArcMap) to create powerful graphics demonstrating
changes in the spatial orientation of groundwater plumes. Central to MAROS is the use of
X, Y coordinates, which cements its place as an important GIS module.
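The sketch below shows the arithmetic behind these three moments using simple, equal-weight point data. MAROS applies its own estimation scheme (Aziz et al. 2006), so this is only an illustration of the concept; the coordinates and concentrations are invented.

```python
import numpy as np

def plume_moments(x, y, c):
    """Equal-weight point approximations of the zeroth moment (a total
    'mass' proxy), first moment (center of mass), and second moment
    (spread about the center of mass) of a plume."""
    x, y, c = map(np.asarray, (x, y, c))
    m0 = c.sum()                                       # zeroth moment
    xc, yc = (c * x).sum() / m0, (c * y).sum() / m0    # first moment
    sxx = (c * (x - xc) ** 2).sum() / m0               # second moments
    syy = (c * (y - yc) ** 2).sum() / m0
    return m0, (xc, yc), (sxx, syy)

# Hypothetical well coordinates (m) and benzene concentrations (ug/L)
x = [0.0, 50.0, 100.0, 150.0]
y = [0.0, 10.0, 30.0, 20.0]
c = [5.0, 120.0, 60.0, 8.0]
print(plume_moments(x, y, c))
```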

3.7.1.7 Other Tools


Trend analysis and numerous other advanced statistical analyses are best performed with
robust statistical programs such as Statgraphics, JMP, Minitab, SAS, and R. Trend analyses
are often conducted as part of hydrogeological studies and are used to evaluate changes in contaminant concentrations over time, evaluate changes in well yield or water-supply demand over time, or correlate different data sets.

3.7.2 Geostatistics and Contouring


One of the most important tasks carried out by the hydrogeologist is developing scien-
tifically defensible contours of spatial data. The vast majority of hydrogeological data has
embedded spatial correlation, which means that measurements taken at points located
close together are more similar than those taken at points located far away from each
other. Water-table elevations, contaminant concentrations in groundwater, sediment thick-
nesses, layer and bedrock topography, and surface topography are just a few of the many
geological features exhibiting spatial correlation. Therefore, nearly every project in hydro-
geology will require contour maps of some kind.
Geostatistics is broadly defined as the statistics of spatial, or geographical, data. Hence,
it is a natural fit in the GIS-based computer environment of modern-day hydrogeology.
The science of geostatistics is what empowers the hydrogeologist to generate scientifically
defensible and reproducible contour maps with quantifiable levels of uncertainty. This is
a major advance over the subjective, manual maps of yesteryear, which are difficult to reproduce and typically rely on linear interpolation via the three-point problem. There
are numerous excellent geostatistical programs in the commercial marketplace that can be
used to generate contour maps, most notably the Geostatistical Analyst extension to ArcGIS,
Surfer by Golden Software, and Rockworks by Rockware. Fundamental to these programs
is the use of variogram models and kriging. The public-domain programs VSP and SADA
also have substantial contouring capabilities. Because contouring is very important in the
professional practice of hydrogeology, the subject is covered in detail in Chapter 4.

3.7.3 Boring Logs and Cross Sections


Another common task required of the hydrogeologist is to produce high-quality boring
logs and cross sections. The majority of projects in professional hydrogeology involve
subsurface drilling through unconsolidated and consolidated deposits. Drilling can be used to collect lithology data in the subsurface, to install monitoring or production wells,
and to collect geophysical data. All these data must be presented in a professional, easy-
to-­understand manner. Typically, preliminary data collected in the field are recorded on
paper in field manuals. These data can then be entered into a geodatabase or into spe-
cific programs created to facilitate boring log production. The use of handheld electronic
devices or laptop computers in the field to enter data, such as soil classifications, directly
into a geodatabase (real-time data entry) is described further in Chapter 7.
In the authors’ experience, the most widely used module for boring log production is gINT,
an Access-based relational database program. Within gINT, the hydrogeologist creates tables
of data collected in the field and automatically places the data in a customized boring log
report with user-defined symbology. Example data fields stored and displayed by gINT
include soil type with USCS classification and symbology, cone penetrometer test data, pho-
toionization screening data and graphs, well-construction data, and/or geophysical data.
gINT can be considered a GIS module because it uses a database to store and query data
and because the addition of coordinates enhances its capabilities. When supplied with
coordinates for the individual boring logs, gINT can be used to create fence diagrams and
the shell of a geological cross section.
Another commercial software product commonly used to generate boring logs is
RockWorks by RockWare. RockWorks is a program specifically designed for subsurface
data visualization and has numerous applications in the geotechnical, environmental, and
oil/gas industries. Subsurface data can be imported into RockWorks in many different
formats, and the data can be analyzed and visualized to produce the following maps and
diagrams (RockWare 2011a):

• Boring logs
• Cross sections
• Fence diagrams
• Contour maps (using both geostatistical and deterministic grid interpolation
methods; see Chapter 4)
• 3D models
• Geochemical plots (Piper, Stiff, and Durov diagrams)
• Structural diagrams (rose, lineation, and arrow maps)

RockWorks has both 2D and 3D visual processors and provides a platform for organizing
and gridding subsurface geologic, hydrogeologic, and geochemical data. The user can also
create composite images and animations. An example 3D composite image that illustrates
many of RockWorks’ capabilities is presented as Figure 3.85. Additionally, data, graphics,
and models can be imported from and exported to ArcGIS, AutoCAD, Surfer, and other
software environments. The companion product RockWorks GIS Link allows the user to
create cross sections, fence diagrams, and other graphics within ArcMap (RockWare 2011b).
Geologic cross sections are developed for nearly all hydrogeological applications in pro-
fessional practice. Cross sections are critical in both developing and visualizing the CSM
(Chapter 2) as myriad data can be displayed, including, but not limited to, stratigraphy,
lithology, geotechnical data, water-level data, field-screening data, and laboratory-analytical
data. Full computer automation of cross-section generation is very difficult as extensive pro-
fessional interpretation is needed to develop layers and annotate figures.

FIGURE 3.85
Example 3D composite image produced in RockWorks showing boring logs, a fence diagram, and fate and trans-
port pathways. (From RockWare, RockWorks Features, 2011, www.rockware.com/product/featureCategories.php?id=165, accessed May 19, 2011.) Property of RockWare, published with permission.

While programs such as RockWorks may create an excellent skeleton for a geologic cross section, it is likely that manual edits will be necessary, often performed in independent computer programs such as Canvas, AutoCAD, or ArcMap. Many hydrogeologists prefer to develop cross-section drafts entirely by hand on graph paper and then have these drafts digitized.

3.7.4 Proprietary Environmental Database Systems


In the commercial marketplace there exist a number of proprietary data management and
visualization systems specifically tailored to the environmental consulting/hydrogeology
field. In general, the goal of these programs is to help environmental professionals (not
necessarily database experts) securely manage, transfer, and visualize data with prefabri-
cated templates through a graphical user interface (GUI) platform. One example program
is HydroDaVE, which has been successfully used to help manage and visualize data for
multiple groundwater basins in California. HydroDaVE consists of the following elements:

• A relational SQL database with Web-enabled data import


• A Web-enabled GUI for data processing and visualization
• A Web service that facilitates messaging and transporting of information

An example data visualization in HydroDaVE is presented in Figure 3.86.

FIGURE 3.86
Example data visualization in HydroDaVE with topographic base map and groundwater elevation contours.

An alternative example of a proprietary database management system is GIS/Key, an environmental-specific relational database with user-friendly reporting tools and the ability to use AutoCAD as a visual processor.
The primary drawback of proprietary database systems, as opposed to databases cre-
ated manually by the user through an Access/SQL platform, is the loss of flexibility to
create customized, project-specific solutions for data management problems. However,
proprietary systems are becoming increasingly robust and mindful of project needs. Web-
enabled programs, such as HydroDaVE, are becoming adept at incorporating data beyond
numeric values, including critical CSM data elements, such as Google Maps, text, images/
photographs, animations, and scans of historical documents. Proprietary systems are also
becoming better at integrating into other programs, such as ArcGIS, for data analysis and
visualization. Therefore, it is likely that the role of these systems in professional practice
will expand in the coming years.

3.7.5 General Notes on Computer Modules


Often in professional practice, project managers or clients will ask the hydrogeologist to
“see what the computer says” when it comes to producing contour maps, cross sections,
or other deliverables. This can be a frustrating question to the experienced hydrogeologist
as the above modules are not simply exercises in mindless button pressing. The inherent
danger is that they certainly could be used in that fashion as anyone with basic computer
literacy can run these modules and produce technical deliverables. The hydrogeologist
must always explain to project managers and clients that these modules are to be executed
with care and, above all, with a conceptual understanding of the underlying calculations
and assumptions. The output of a computer module is only as good as the quality of the
input data, the quality of the required assumptions, and the quality of the technical review
of the output. The familiar maxim garbage in = garbage out certainly applies. A nonhydrogeologist can produce a contour map with a computer program that appears reason-
able on paper but has major conceptual errors. An example of button-pressing contouring
gone wrong is provided in Figure 3.87.
In general, GIS modules are tools to help the hydrogeologist perform and display com-
plex analyses efficiently. The hydrogeologist must always rigorously review the input and
output and make revisions as needed to maintain consistency with the CSM. A computer
module is not a substitute hydrogeologist. Rather than asking to see what the computer
says, project managers and clients should ask to see what the hydrogeologist produces
through use of computer modules.
Because of the button-pressing perception of computer modules, many environmental
professionals are wary of their application and prefer to rely solely on conceptual inferences that can be made on raw data alone or on hand calculations made using that raw
data (e.g., hand contouring, analytical modeling). However, this mind-set ignores the
many benefits of computer modules in hydrogeology. The following list explains why GIS
modules are invaluable to the field of hydrogeology and comprise excellent responses to
the question: “Why should we use these computer programs anyway?”
GIS modules are efficient. Any hydrogeologist who has experienced the frustration of per-
forming manual analysis and then having that analysis scanned and traced into a com-
puter visual processing program can appreciate the efficiency gains of GIS modules. The
GIS language of these modules enables seamless integration of output with visual proces-
sors. Furthermore, the computational efficiency of these modules saves hours of time. A
contour map that may take an hour or more to draw manually can be created in minutes.
Even more powerful is that many different versions of the same contour map can be cre-
ated in minutes with different input parameters to provide interpretive options to the hydrogeologist.

FIGURE 3.87
Example of groundwater contour map produced in a computer program that makes no conceptual sense.
Instead of interacting with the river in a realistic manner, water is shown to flow toward an unknown sink for
which there is no explanation (i.e., no pumping wells or other groundwater extraction mechanisms). There is a
problem with either the data quality or with the way the data was contoured in the computer program.

Even when a contour map created by a GIS module has to be adjusted
manually by the hydrogeologist, which is often the case, the time involved is significantly
shorter compared to drawing the entire map by hand, from scratch.
Computer modules are nonsubjective. Any form of hand contouring or hand interpolation
will be entirely subjective by definition. Linear interpolation will often be used without
any justification for doing so. Conversely, computer modules produce output based on
scientific quantitative analysis (statistics, geostatistics). This nonsubjective output can then
be assessed by the hydrogeologist and adjusted based on fundamental concepts in hydro-
geology (e.g., surface water–groundwater interaction). As a result, the analysis process
becomes more transparent and focuses attention on the key conceptual assumptions made
in the analysis. However, it cannot be stressed enough that no GIS module or computer
program can replace the final professional interpretation of a hydrogeologist. There will
be real-world situations where underlying hydrogeology, heterogeneity, and other factors
cannot be interpreted correctly by nonsubjective means.


Output from computer modules is reproducible. As computer module output is based on
numeric input parameters, it can be easily reproduced by third parties. This is critical in
litigation projects or other circumstances where transparency is paramount. The hydroge-
ologist can clearly outline the steps taken to produce any deliverable and provide the input
files and parameters to third parties. Experienced hydrogeologists can also use computer
modules to perform the conceptual tweaking often required as a final step. An example of
this is the use of auxiliary (false) points in groundwater contour mapping to correct con-
ceptual errors. Even manual changes to module output can be quantified in a manner that
is entirely reproducible and transparent.
Computer modules quantify uncertainty. Arguably, the greatest benefit of computer modules
is that they can be used to quantify uncertainty in technical analyses. All hydrogeological
calculations require fundamental assumptions that generate uncertainty. This translates to
uncertainty in the ultimate decision based on the data. The ability to quantify uncertainty
helps the hydrogeologist explain to clients, project managers, and regulators the confi-
dence with which decisions are made. Example questions that can be answered through
uncertainty analysis include What is the likelihood of the groundwater plume reaching
the residential wells? How reliable are the flow-rate projections for this water-supply well?
What is the probability that a particular area of the site has soil contamination?
A classic example of uncertainty analysis is cross-validation of contour maps. Both
Surfer and Geostatistical Analyst can be used to perform cross-validation, which consists
of removing individual points from the data set and determining how the predicted value
at that location compares to the actual value (which was removed). Statistics on these resid-
uals can be used to assess the reliability of the interpolated surface and, hence, quantify
the uncertainty associated with decisions made using contour maps of that surface. Cross-
validation also helps the hydrogeologist calibrate the kriging model to generate a more
statistically accurate surface (see Chapter 4).
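A minimal sketch of leave-one-out cross-validation is given below, using a simple IDW predictor as a stand-in for whatever interpolator is being evaluated (Surfer and Geostatistical Analyst implement cross-validation internally); the water-level data are invented.

```python
import numpy as np

def idw_predict(xy_known, z_known, xy_target, power=2.0):
    """Simple inverse-distance-weighted prediction at one location,
    used here only as an illustrative stand-in interpolator."""
    d = np.linalg.norm(xy_known - xy_target, axis=1)
    if np.any(d == 0):
        return z_known[np.argmin(d)]
    w = 1.0 / d ** power
    return np.sum(w * z_known) / np.sum(w)

def loo_residuals(xy, z, power=2.0):
    """Leave-one-out cross-validation: drop each point in turn, predict
    it from the rest, and return predicted-minus-measured residuals."""
    xy, z = np.asarray(xy, float), np.asarray(z, float)
    res = []
    for i in range(len(z)):
        keep = np.arange(len(z)) != i
        res.append(idw_predict(xy[keep], z[keep], xy[i], power) - z[i])
    return np.array(res)

# Hypothetical water levels: (x, y) in m, z in m above sea level
xy = [(0, 0), (100, 0), (0, 100), (100, 100), (50, 50)]
z = [10.2, 9.8, 10.6, 10.1, 10.3]
res = loo_residuals(xy, z)
print("Cross-validation RMSE:", np.sqrt(np.mean(res ** 2)))
```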
Indicator kriging is another form of contouring that can be used to quantify uncertainty.
When performing indicator kriging, measured values are replaced with either a 1 or a 0
depending on the value’s relationship to a threshold standard (e.g., a soil cleanup level).
This kriged surface is often interpreted as a probability of exceedance of that standard (see
also Chapter 4).
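The indicator transform itself is a one-line operation, sketched below with invented soil data; note that the 0/1 coding convention varies between workflows (some code non-exceedance as 1 instead).

```python
import numpy as np

# Hypothetical soil arsenic results (mg/kg) and a cleanup level
conc = np.array([3.2, 18.5, 7.1, 25.0, 11.4, 2.8])
cleanup_level = 10.0

# Indicator transform: 1 where the standard is exceeded, 0 otherwise
indicators = (conc > cleanup_level).astype(int)
print(indicators)  # [0 1 0 1 1 0]

# Kriging these 0/1 values (instead of the raw concentrations) yields
# a surface interpretable as a probability of exceeding the standard.
```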
Obviously, outputs such as boring logs or cross sections do not have applicable forms
of uncertainty analysis (unless layers in a cross section are kriged). However, all modules
that use calculations will have some means of quantifying error and uncertainty. It can
also be argued that the speed with which technical analysis is performed using computer
modules intrinsically contributes to uncertainty evaluation, as multiple iterations with different parameters can be used to perform sensitivity analysis. For example, boring logs
or cross sections can be rapidly displayed five different ways, giving the hydrogeologist
and other involved parties the opportunity to visualize uncertainty in data interpretation.
Statistical or quantitative uncertainty (sensitivity) analysis leads to informed decisions
that balance error probability with the costs of additional data collection. All decisions are
made with this cost/benefit framework, and computer modules greatly assist the process.

References
Aziz, J., Vanderford, M., Newell, C. J., Ling, M., Rifai, H. S., and Gonzales, J. R. 2006. Monitoring
and Remediation Optimization System (MAROS) Software Version 2.2 User’s Guide. Air Force
Center for Environmental Excellence, GSI Job No. 2236, 309 pp.
Data Management Association (DAMA), 2007. Home–DAMA International. Available at www.dama.org/, accessed February 13, 2011.
Databasedev.co.uk, 2007. Database Solutions for Microsoft Access. Available at www.databasedev.co.uk, accessed February 14, 2011.
de Smith, M. J., Goodchild, M. F., and Longley, P. A., 2006–2011. Geospatial Analysis: A Comprehensive Guide to Principles, Techniques and Software Tools, 3rd Edition. The Winchelsea Press, 560 pp. Web Edition. Available at http://www.spatialanalysisonline.com/output/, accessed February 14, 2011.
Esri, Inc., 2010a. What Is a Geodatabase? ArcGIS 10 Help Library.
Esri, Inc., 2010b. What Is a File Geodatabase? ArcGIS 10 Help Library.
Esri, Inc., 2010c. Table Basics. ArcGIS 10 Help Library.
Esri, Inc., 2010d. Feature Class Basics. ArcGIS 10 Help Library.
Esri, Inc., 2010e. Raster Basics. ArcGIS 10 Help Library.
Esri, Inc., 2010f. What Are Geographic Coordinate Systems? ArcGIS 10 Help Library.
Esri, Inc., 2010g. What Are Projected Coordinate Systems? ArcGIS 10 Help Library.
Esri, Inc., 2010h. About Map Projections. ArcGIS 10 Help Library.
Esri, Inc., 2010i. Robinson. ArcGIS 10 Help Library.
Esri, Inc., 2010j. Vertical Datums. ArcGIS 10 Help Library.
Esri, Inc., 2010k. Displaying Maps in Data View and Layout View. ArcGIS 10 Help Library.
Esri, Inc., 2010l. What Is a Shapefile? ArcGIS 10 Help Library.
Esri, Inc., 2010m. What Is ArcCatalog? ArcGIS 10 Help Library.
Esri, Inc., 2010n. Raster Dataset Statistics. ArcGIS 10 Help Library.
Golden Software, Inc., 2004. Gridding Overview. Surfer Version 8.05 Help.
Massachusetts Department of Environmental Protection (MassDEP), 2007. MCP Representativeness
Evaluations and Data Usability Assessments. Commonwealth of Massachusetts, Executive
Office of Energy & Environmental Affairs, Policy #WSC-07-350.
Matzke, B. D., Nuffer, L. L., Hathaway, J. E., Sego, L. H., Pulsipher, B. A., McKenna, S., Wilson, J. E., Dowson, S. T., Hassig, N. L., Murray, C. J., and Roberts, B., 2010. Visual Sample Plan Version 6.0 User's Guide. United States Department of Energy, PNNL-19915, 255 pp. Available at http://vsp.pnnl.gov/docs/PNNL%2019915.pdf.
Ormsby, T., Napoleon, E., Burke, R., Groessl, C., and Feaster, L., 2004. Getting to Know ArcGIS Desktop, Second Edition. Esri Press, Redlands, CA.
RockWare, 2011a. RockWorks Features. Available at www.rockware.com/product/featureCategories.php?id=165, accessed May 19, 2011.
RockWare, 2011b. RockWare GIS Link 2. Available at www.rockware.com/product/overview.php?id=166, accessed May 19, 2011.
University of Tennessee, 2007a. Spatial Analysis and Decision Assistance Software–Visualization.
Available at www.tiem.utk.edu/~sada/visualization.shtml, accessed February 14, 2011.
University of Tennessee, 2007b. Spatial Analysis and Decision Assistance Software–Risk Assessment.
Available at www.tiem.utk.edu/~sada/humanhealth_risk.shtml, accessed February 14, 2011.
University of Tennessee, 2008. SADA Version 5 User's Guide. Available at http://www.tiem.utk.edu/~sada/documentation.shtml, accessed February 14, 2011.
United States Environmental Protection Agency (U.S. EPA), 2006. Guidance on Systematic Planning
Using the Data Quality Objectives Process. Office of Environmental Information, Washington
DC, 110 pp.
U.S. EPA, 2007. ProUCL Version 4.00.02 User Guide. Office of Research and Development, 239 pp.
U.S. EPA, 2009a. Rapid Assessment Tool (RAT) User Guide Version 3.02.07. SOP NO. C-ERT-O-004
Revision No. 0, 95 pp.
U.S. EPA, 2009b. Introduction to RAT Presentation. “RAT Introduction.ppt.” Available at http://
www.epaosc.org/site/doc_list.aspx?site_id=5208, accessed February 10, 2011.
United States Geological Survey (USGS), 1987. Map Projections—A Working Manual. USGS
Professional Paper 1395, US Department of the Interior, US Geological Survey, 394 pp.
USGS, 2001. The Universal Transverse Mercator (UTM) Grid. USGS Fact Sheet 077-01, US Department
of the Interior, US Geological Survey, 2 pp.
USGS, 2007. Geographic Information Systems. Available at http://egsc.usgs.gov/isb/pubs/gis_
poster/, accessed January 23, 2012.
USGS, 2010. USGS High Plains Aquifer WLMS: Water-Level Data by Water Year (October 1 to September 30). US Department of the Interior, US Geological Survey. Available at http://ne.water.usgs.gov/ogw/hpwlms/data.html, accessed October 2, 2010.
4
Contouring

4.1 Introduction
Contouring is one of the most important tasks performed during all phases of conceptual site
model (CSM) development. It is a two-dimensional visual presentation of a spatially distrib-
uted CSM element. Because most such elements are below land surface and not directly vis-
ible, their spatial distribution, or shape, is estimated from data collected in the subsurface at
discrete locations, the so-called data points. Locations of unique data points are defined with
three spatial coordinates, X, Y, and Z. Lines that connect locations with the same value of a
parameter of interest are called contours. As explained later in more detail, the contours can
be drawn using various methods but are always the result of interpolation between known
discrete values and therefore do not represent a true spatial distribution of the parameter. In
other words, contours are more or less accurate depending on the amount of field data, the distances between data points, and the applied contouring method.
Mathematically, a contour line (or isoline) represents a function of two spatial coordi-
nates along which it has a constant value. The best-known example is a two-dimensional
map view of the topographic (land) surface where the land surface elevation is a function
of X and Y coordinates. A topographic contour line connects all points of equal elevation
above a given level (called datum, usually mean sea level). Successive contour lines on a
topographic map are drawn for equal difference in elevation between them. This differ-
ence is called contour interval. Figure 4.1 illustrates the concept of contour lines applicable
to any surface. In general, contour lines can be thought of as intersections of stacked hori-
zontal planes with a real or hypothetical surface. Common examples of surfaces of hydro-
geological interest include the water table of an unconfined aquifer (real physical surface),
the potentiometric surface of a confined aquifer (imaginary surface), and the top surface of
a confining layer (real physical surface).
In addition to surfaces, where the main element of interest is elevation, contour lines
are routinely used to represent equal values of other spatially varying parameters, such
as contaminant concentration in the vadose and saturated zones, hydraulic conductivity,
percentage of clay in the sediment, and many others. However, because these parameters
are distributed within certain volumes of porous media, contour lines are of limited use
because they, by definition, can represent values of a parameter only in planar views as
shown schematically in Figure 4.2. Quite a few sets of contours in both horizontal (map
view) and vertical (cross-sectional view) planes would have to be constructed to repre-
sent the true three-dimensional nature of the contaminant distribution in this case. More
appropriate would be to represent contaminant distribution using a mathematical function
of three spatial coordinates, X, Y, and Z, or the so-called isosurfaces along which the con-
centration values are constant. One of the common reasons why this visualization method
is utilized far less than two-dimensional contouring is the limited availability of data at different depths (i.e., along the vertical or Z coordinate) and the costs associated with
obtaining such data at a sufficient vertical resolution. Three-dimensional contouring with


FIGURE 4.1
(a) Two views of a terrain with horizontal geologic layers dissected by surface drainage. (b) Contour map of the
topographic (terrain) surface. Individual contour lines are intersections of horizontal planes with the terrain
surface. Contour interval is the vertical distance between the successive horizontal planes (contour lines).

FIGURE 4.2
Vertical cross section through groundwater contaminant plume showing contours of equal contaminant con-
centration. Contours are interpolated from concentrations measured at discrete vertical locations in multiport
monitoring wells.

geostatistical or other models is also exceedingly difficult because three-dimensional data


have strong anisotropy, which means spatial dependence of data differs with distance
and direction (Krivoruchko 2011). More on three-dimensional visualizations, including
rendering of volumes, is provided in Chapter 6.

4.2 Contouring Methods


4.2.1 Manual Contouring
Manual contouring is frequently utilized in groundwater studies, either as the only method
or in conjunction with computer-based methods. A complete reliance on computer pro-
grams could, in some cases, lead to erroneous conclusions because they are not always able
to interpret (recognize) features apparent to a hydrogeologist. This includes, for example,
presence of geologic boundaries, heterogeneous porous media, influence of surface water
bodies, or principles of groundwater flow. Thus, manual contouring or manual adjustment
of computer-generated maps is an integral part of hydrogeologic studies.
Manual contouring is often based on triangular linear interpolation (the three-point problem) as illustrated in Figure 4.3, combined with the hydrogeologic experience of the interpreter.

FIGURE 4.3
Left: Finding the position of the water table in three dimensions using data from three monitoring wells (num-
bers are water levels in meters or feet above sea level). Right: Construction of water table contour lines by trian-
gulation with linear interpolation.

The first draft manual map is not necessarily an exact linear interpolation
between data points. Rather, it is an interpretation of the hydrogeologic conditions with
contours that roughly follow available numeric data. An important but often ignored fact
when manually drawing contour maps is that most, if not all, parameters that are con-
toured do not change linearly from one location to another. In other words, natural and
anthropogenic processes that shape spatial distribution of a CSM element (parameter) can
rarely be described with linear equations. Some notable exceptions include smooth, undis-
turbed contacts between sediment layers deposited during slow, long-lasting, regional
transgressions and the hydraulic gradient in a homogeneous confined aquifer of uniform
thickness from which there is no addition (recharge) or withdrawal (pumping) of ground-
water at the scale of interest.
An obvious limitation of manual contouring is that there is no possible way for the
hydrogeologist to quantify the uncertainty associated with his or her interpolated surface.
Additionally, the inferred spatial relationship between the data is entirely subjective and
is not based on quantitative analysis of spatial dependence with distance or direction.
Therefore, the slope of manually drawn contour lines between data points is arbitrary.

4.2.2 Contouring with Computer Programs


Even though the first computer-generated contour map may be somewhat inaccurate
because relevant hydrogeological features such as rivers, faults, or exposed bedrock may
not be characterized by the data set, it is always desirable to have the final contour map
as a (digital) computer file. This will allow its use in other applications, including for vari-
ous quantitative analysis, modeling, and visualization purposes. For example, having X,
Y, Z files of the water table and potentiometric surface of a confined aquifer and its top
and bottom hydrostratigraphic layers will significantly simplify preparation of a numeric
groundwater model.

If the available computer program cannot produce a satisfactory contour map (for
example, there is a complicated mixture of impermeable and equipotential boundaries of
groundwater flow), and it cannot be forced to do so by the interpreter, the solution is to
digitize a manually drawn map (or draw the contours manually in a computer program
such as ArcMap). This, however, may be a lengthy process, and it is better to acquire an
appropriate software package for contouring. Quite a few computer programs today offer
a wide range of contouring methods, allow the interpreter to adjust the generated con-
tours, and can display contour maps in a variety of formats. Some of the most powerful
and widely used commercial programs include Surfer (Golden Software 2002) and the
Geostatistical Analyst extension (and to a lesser extent the Spatial Analyst extension) to
ArcGIS (Esri 2003). There are also several versatile programs in the public domain, such
as Visual Sample Plan (VSP; Matzke et al. 2010) and SADA (Institute of Environmental
Modeling, University of Tennessee 2008), which include both contouring and GIS capabilities. Graphical User Interface (GUI) software packages developed to support popular
groundwater modeling programs, such as Modflow, can also be used to create contour
maps from field data and export them to other applications. Examples of commercial
software include Groundwater Vistas by Environmental Simulations, Inc. and Processing
Modflow by Simcore Software, Inc. More detail on GUIs for groundwater modeling is
provided in Chapter 5.
Contouring programs require that individual data points be presented with two spatial
coordinates and the value of the parameter to be contoured (more detail on data prepara-
tion and coordinate systems is provided in Chapter 3). Common to all programs is divi-
sion of the two-dimensional space of interest into equally spaced vertical and horizontal
lines, that is, the creation of a contouring grid. In Surfer, the user can either specify the
grid spacing or let the program automatically determine it from the range of distances
between individual data points. The two basic requirements common to all contouring
methods, namely, data organization and creation of the grid, are shown schematically
in Figure 4.4. What separates different contouring methods is the mathematical equa-
tions (i.e., the model) used to calculate the parameter values (e.g., water table elevation or
contaminant concentration) at locations where it was not directly measured. During this
process, called spatial interpolation, the calculated values of the parameter are assigned
to the grid either at intersections of the grid lines or at the centers of the cells (squares)
formed by the grid lines. This means the basis for any contour map that will be eventu-
ally drawn by a program is a numeric matrix of equally spaced rows and columns called
a raster file (see also Chapter 3). Contour lines, for any contour interval specified by the
user, connect identical numeric values in the grid. Some programs, such as Surfer, include
options for smoothing the initial contours to give them a more natural look. Selecting a
finer contouring resolution (i.e., smaller cell size or grid spacing) will generally also result
in smoother contours.
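As a generic illustration of this grid-then-contour workflow (not the algorithm of any particular program), the Python sketch below grids a handful of invented water-level measurements with SciPy and contours the resulting raster with Matplotlib.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import griddata

# Hypothetical water-table measurements: (x, y) in m and head in m
pts = np.array([(0, 0), (400, 50), (150, 300), (350, 350), (60, 180)])
head = np.array([12.4, 11.1, 13.0, 11.8, 12.7])

# Create the regular contouring grid (10 m node spacing here)
xi, yi = np.arange(0, 401, 10), np.arange(0, 361, 10)
XI, YI = np.meshgrid(xi, yi)

# Interpolate the measured values onto the grid nodes; 'linear' fills
# only the convex hull of the data ('nearest' and 'cubic' also exist)
ZI = griddata(pts, head, (XI, YI), method="linear")

# Contour the resulting raster at a 0.25 m contour interval
cs = plt.contour(XI, YI, ZI, levels=np.arange(11.0, 13.25, 0.25))
plt.clabel(cs, fmt="%.2f")
plt.scatter(pts[:, 0], pts[:, 1], c="k")
plt.show()
```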
One of many advantages of computer-based contouring using software programs is that
the interpolated grid, or raster file, can be displayed in a variety of ways as illustrated in
Figures 4.5 and 4.6. Observing the interpolated surface with and without contours super-
imposed, rotating and viewing it from different vertical angles, or using different colors
and resolution enables the user to better interpret various features of the surface and their
significance for the ongoing project. The most important benefit of having raster files (i.e.,
numeric values) is that they can be effortlessly transferred between various programs and
used for quantitative analyses. This includes calculation of volumes between surfaces,
areas between contours, or surface gradients (slopes), for example.

FIGURE 4.4
Portion of a contour map created using a computer program. Top: Eight discrete values measured in the field are
shown with blue circles and black numbers with no decimal digits. The model calculates interpolated values and
assigns them to all grid nodes. Several interpolated values (red numbers with five decimal digits) are shown with
small red circles at intersections of dashed grid lines. Contour lines, with the contour interval of five, connect the
same parameter values in the grid. Bottom: Same map with the contour interval of two; it is advisable to create sev-
eral maps with different contour intervals as they may better reveal local variations in the parameter values.

These calculations are commonly referred to as grid math or grid calculus and have numerous applications in professional hydrogeology (a short numerical sketch appears after the list below):

• Calculating aquifer volume and contaminant mass in soil or dissolved in groundwater
• Calculating changes in surface properties, such as changes in contaminant con-
centration or aquifer elevations and volumes (see Figures 3.1 through 3.3 for use of
grid math to calculate elevation changes in the High Plains Aquifer system)

FIGURE 4.5
Left: Contour map of a water table influenced by three pumping wells near a river. Dashes on contour lines
indicate groundwater flow (gradient) direction. Orange circles are locations of wells with field measurements.
Right: The same map shown as a colored raster. The inset reveals the visual advantage of color raster maps in
emphasizing details (differences).

• Creating layer surfaces for groundwater models


• Creating three-dimensional visualizations

The use of grid math to support documentation of monitored natural attenuation (MNA)
processes at hazardous-waste sites is described in Chapter 8.
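Once surfaces are stored as rasters on a common grid, the arithmetic behind grid math is straightforward, as the following sketch with two small, invented NumPy grids shows.

```python
import numpy as np

# Two hypothetical surface rasters on the same grid: water table and
# aquifer bottom elevations (m), with a uniform 25 m x 25 m cell size
cell_area = 25.0 * 25.0
water_table = np.array([[12.0, 11.5, 11.0],
                        [11.8, 11.2, 10.7],
                        [11.5, 10.9, 10.4]])
aquifer_bottom = np.array([[4.0, 4.2, 4.5],
                           [3.8, 4.0, 4.3],
                           [3.5, 3.9, 4.1]])

# Grid math: subtract the rasters cell by cell (saturated thickness)...
thickness = water_table - aquifer_bottom

# ...then integrate over the cells; multiplying by an effective porosity
# would convert this to a volume of stored groundwater.
saturated_volume = np.sum(thickness * cell_area)  # m^3 of saturated aquifer
print(saturated_volume)
```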
Figure 4.7 shows contours created by different interpolation methods using the same
data set of the top surface of an actual clay aquitard. It is obvious that some of the maps
look very different from others, and some are quite similar. While, in most cases, it would
be rather easy to decide which maps (contouring methods) do not make much sense, it
will be more challenging to select the right one based solely on the first visual impres-
sion even if a degree of professional judgment is involved. Therefore, it is desirable to use
spatial interpolation models that also statistically describe the accuracy of the interpolated
surface. Spatial interpolation models and uncertainty analysis in contouring are described
in detail in Section 4.2.3.

FIGURE 4.6
(a, b) Two 3D views of the water table surface map shown in Figure 4.5.

FIGURE 4.7
Contours of the top of a clay aquitard created by various contouring methods available in Surfer.

4.2.3 Spatial Interpolation Models


As described earlier, spatial interpolation is the process of predicting values for a variable
of interest at unsampled locations based on surrounding measurements. Mathematical
equations comprising a model perform this interpolation and produce a grid of inter-
polated values, which can then be contoured at any prescribed interval. The underlying
assumption common to all spatial interpolation models is that the measured values are
spatially related; that is, they are not spatially independent or random. In other words, the
value measured at one location is related to the value measured at another, and values that
are closer to one another will be more related than values that are farther away from one
another (Webster and Oliver 2001).
Interpolating between independent data points that are equally weighted does not
make sense because the value at one location has nothing to do with the value at another location, and therefore, it is not scientifically appropriate to estimate values in between. Fortunately, the vast majority of data collected in hydrogeological practice are spatially
related. Physical processes create boundary conditions and migration pathways that lead
to spatial correlation at the scale applicable to hydrogeology for the parameters of great-
est interest, including, but not limited to, geologic structure, stratigraphy and lithology,
groundwater and surface-water elevations, and contaminant concentrations. However, it
is not uncommon for measurements to appear independent because of low spatial and/
or temporal sampling density and extremely high variability. Additionally, spatial corre-
lation may not be visible at the scale of the investigation at hand. A well-developed CSM
can assist the hydrogeologist in determining whether spatial correlation is a reasonable
assumption for the data set in question.
Spatial interpolation models can either be exact or inexact interpolators. Exact interpola-
tors predict values exactly equal to the measurement values at sampling locations (i.e., the
values used to generate the contour map). Inexact interpolators account for uncertainty
in the data and are free to predict values at sampling locations that are different from
the exact measurements (note that inexact interpolators are also referred to as smooth-
ing interpolators). The use of exact interpolation models assumes that the data have no
measurement or locational errors, which is rare in hydrogeology. Laboratory analytical
data have considerable uncertainty (measurement error), and heterogeneity in subsurface
geology can cause significant local variation in hydraulic head (locational error). Exact
interpolation can cause discontinuities in predictions and in prediction standard errors
and simply cannot be used when data are imprecise or if different values are measured at
the same sampling location (Krivoruchko et al. 2000). A case study comparing exact versus
inexact interpolation in kriging is presented in Section 4.3.2.
Spatial interpolation models can also account for trend and/or anisotropy in the data set,
which is described further in Section 4.2.3.3. Metrics and methods for evaluating model
uncertainty and error are presented in Section 4.2.3.4.
Spatial interpolation models are divided into two primary classes, deterministic mod-
els and geostatistical models, which are further described in Sections 4.2.3.1 and 4.2.3.2,
respectively.

4.2.3.1 Deterministic Models


Deterministic models interpolate between measured values through prescribed mathe-
matical formulas that vary the smoothness of the interpolated surface. The spatial correla-
tion of the data is not considered in the interpolation algorithm. As a result, deterministic
methods do not measure the uncertainty of interpolated predictions, with the notable
exception of local polynomial interpolation (LPI; Krivoruchko 2011). Despite this obvi-
ous limitation, deterministic models may be useful where spatial correlation cannot be
discerned within a data set. For example, soil or sediment concentrations of hydrophobic
organic contaminants, such as polychlorinated biphenyls, can have extremely high local
variability and dispersed disposal locations that make data appear random and ill-suited
for geostatistical modeling. However, quite frequently inadequate data exploration is con-
ducted before choosing a spatial interpolation model, and the hydrogeologist may not
recognize that data transformations and/or detrending may satisfy statistical assump-
tions and justify the use of geostatistical methods (see Section 4.5).
Deterministic models are commonly used in professional practice, and it is likely they
will persist in contouring software indefinitely because of their speed and simplicity.
Consultants often prefer a model that is easier to understand and explain/document to
stakeholders than one that requires more technical rigor, thereby avoiding a costly (or perceived to be costly) “science project.” Deterministic models commonly applied in hydrogeol-
ogy and earth sciences are briefly discussed below.

Triangulation with Linear Interpolation


Triangulation is the exact linear interpolation between three neighboring points as shown
in Figure 4.3. Because the original data points are used to define the triangles, they are
preserved (honored) very closely on the map. In general, when there are few data points
(e.g., less than 20) and/or data are not evenly distributed, triangulation is not effective
because it creates triangular facets and holes in the map. However, trends indicated by
triangulation may allow easy manual filling of the holes and completion of the contours.
This also means that further adjustments are usually needed to obtain the final X, Y, Z
computer file. Triangulation is fast and accurate with 200 to 1000 data points evenly dis-
tributed over the map area. Note, however, that this amount of original data is rarely avail-
able in groundwater investigations. To alleviate the problem with data availability, one can
estimate additional auxiliary data and add them to the data set for contouring. If enough
data are available, the advantage of triangulation is that it emphasizes breaks in contours
that may be attributable to natural causes such as faults, geologic boundaries, and streams
(Golden Software 2002).
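The three-point problem underlying triangulation reduces to fitting a plane through three measurements and reading values off that plane, as sketched below with invented well data.

```python
import numpy as np

def plane_interpolate(p1, p2, p3, x, y):
    """Solve the three-point problem: fit the plane z = a + b*x + c*y
    through three (x, y, z) measurements and evaluate it at (x, y)."""
    A = np.array([[1.0, p[0], p[1]] for p in (p1, p2, p3)])
    z = np.array([p[2] for p in (p1, p2, p3)], dtype=float)
    a, b, c = np.linalg.solve(A, z)  # fails only if the points are collinear
    return a + b * x + c * y

# Hypothetical wells: (x, y, water-table elevation), all in m
w1, w2, w3 = (0, 0, 100.0), (200, 0, 96.0), (100, 180, 92.0)
print(plane_interpolate(w1, w2, w3, x=100, y=60))  # 96.0
```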

Minimum Curvature
Contouring with minimum curvature is widely used in the earth sciences because this
method generates the smoothest possible surface while still closely honoring the original
data points. Minimum curvature is not an exact interpolator, however, and often does
not show expected breaks in contours that may be the result of natural influences. Some
programs, such as Surfer, alleviate this shortcoming by allowing the user to introduce so-
called breaklines and fault lines into the gridding process (note that these features can also
be used with most other contouring methods in Surfer; Golden Software 2002).

Inverse Distance to Power


This method is usually available in most contouring packages because it is very fast and
can handle large data sets. However, it does not produce good results when used to con-
tour small data sets representing potentiometric surfaces and relatively smooth geologic
contacts. Inverse distance to a power, or inverse distance weighted (IDW) interpolation,
weights data points during interpolation so that the influence of one data point relative
to another decreases with the power of distance. The greater the weighting power is, the
less effect points far from the grid node have during interpolation. As the power increases,
the grid node value approaches the value of the nearest point (i.e., less averaging). For
a smaller power, the weights are more evenly distributed among the neighboring data
points (i.e., more averaging).
Normally, inverse distance to a power behaves as an exact interpolator. When calculat-
ing a grid node, the weights assigned to the data points are fractions, and the sum of all the
weights equals 1.0. When a particular observation is coincident with a grid node, the
distance between that observation and the grid node is 0.0, and that observation is given
a weight of 1.0, while all other observations are given weights of 0.0. Thus, the grid node
is assigned the value of the coincident observation. Although using a smoothing param-
eter somewhat buffers this behavior, the resulting maps often have a bull’s-eye pattern
around the positions of data points as shown in Figure 4.8 and also previously in Figure
4.7. Software smoothing of contours can only slightly reduce this effect but does not elimi-
nate unnatural depressions and mounds (Golden Software 2002).
As there is no rational basis for choosing the weighting power for IDW interpolation, the
contouring parameters should be selected based on minimizing cross-validation error (see
Section 4.2.3.4). When data are dense and spatially correlated, the optimal power will be
close to 3, whereas when data are spatially independent, the optimal power will be close
to 1 (Krivoruchko 2011). If data are truly independent, spatial interpolation cannot be used
to make conclusions as there is no justification for basing new predictions on existing data
(in this case, interpolation should be used for display purposes only). As explained in the
work of Krivoruchko (2011), Geostatistical Analyst does not allow specification of a weighting power less than 1 because this can cause data points far away from prediction locations
to influence predictions more than those that are close. This contradicts the fundamental
assumption of spatial correlation and is nonsensical. Surfer does allow specification of
powers between 0 and 1, and the user is thus cautioned against this practice.
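To make the effect of the power parameter concrete, a minimal IDW sketch in Python follows (all names are illustrative; this is not the implementation of any particular package). Note how the normalized weights sum to 1.0 and how a coincident point receives a weight of 1.0, reproducing the exact-interpolation behavior described above.

```python
import numpy as np

def idw_predict(xy, values, grid_points, power=2.0):
    """Inverse distance to a power (hypothetical helper, for illustration).

    xy          : (n, 2) array of measurement coordinates
    values      : (n,) array of measured values
    grid_points : (m, 2) array of prediction coordinates
    power       : weighting power; higher values -> less averaging
    """
    predictions = np.empty(len(grid_points))
    for k, p in enumerate(grid_points):
        d = np.hypot(*(xy - p).T)              # distances to all measurements
        if np.any(d == 0.0):                   # coincident point: exact interpolation
            predictions[k] = values[d == 0.0][0]
            continue
        w = 1.0 / d**power                     # raw inverse-distance weights
        predictions[k] = np.sum(w * values) / np.sum(w)  # normalized weights sum to 1.0
    return predictions
```

Increasing power toward large values makes each prediction approach the nearest measurement (less averaging), while a power near 1 spreads the weights more evenly (more averaging).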

FIGURE 4.8
Contours of the top of an aquitard created using an IDW interpolation with default settings in Geostatistical
Analyst. Measurement locations indicated by blue circles.

Natural Neighbor Method


The Natural Neighbor method is quite popular in some fields and interpolates using a
weighted average of neighboring measurements related to the borrowed area of Thiessen
polygons. Natural Neighbor does not extrapolate beyond the convex hull of the data loca-
tions (i.e., the limit of the Thiessen polygons). Natural Neighbor often produces an inter-
polated surface that is smoother and more visually appealing than other deterministic
methods, such as IDW (Golden Software 2002).

Modified Shepard’s Method


This method is similar to the inverse distance to a power interpolator but uses an IDW
least squares method, which eliminates or reduces the bull’s-eye appearance of the gen-
erated contours. The modified Shepard’s method can be either an exact or a smoothing
interpolator (Golden Software 2002).

Global and Local Polynomial Interpolation


Global and local polynomial interpolation fit polynomial equation(s) to the data to gen-
erate a smooth (inexact) interpolated surface. Global polynomial interpolation (GPI) is
also known as trend surface interpolation and fits one polynomial to the entire data set.
Conversely, local polynomial interpolation (LPI) fits many polynomial equations to over-
lapping data neighborhoods with attributes specified by the user. More simply, LPI divides
the entire data set into moving windows around prediction locations, each of which has an
associated polynomial equation. LPI also weights data closer to prediction locations more
heavily than data farther away. As a result, LPI can handle data sets with more variation than GPI
(Krivoruchko 2011).
The order of the polynomial for GPI, or for all polynomials for LPI, is specified by the
user and is most commonly either zero order (constant), first order (linear), second order
(quadratic), or third order (cubic). Higher order polynomials of greater complexity have
less physical meaning in the real world than zero- through third-order polynomials. For
example, a second-order polynomial conceptually resembles a surface with a bend in it,
such as a valley (Esri 2010a). Because LPI is a geographically weighted regression, it is
possible to estimate prediction errors, although there may be problems with these esti-
mates. Geostatistical Analyst provides a diagnostic, termed the spatial condition number,
to help assess the validity of the LPI model. A small spatial condition number indicates a
stable solution, and a large spatial condition number indicates an unstable solution. Rule-
of-thumb spatial condition number threshold values above which solutions are unreliable
are 10 for zero-order LPI, 100 for first-order LPI, and 1000 for third-order LPI (Esri 2010a).
Parameters that can be modified by the user while performing LPI are the kernel (weight-
ing) function used to fit the surface (exponential and Gaussian are two common exam-
ples), the size of the kernel (termed bandwidth), the search neighborhood or degree of
smoothing, and anisotropy. LPI plays a critical role in kriging as it can be used to remove
large-scale data variation (i.e., trends) to make predictions more accurate. Trend removal
in kriging with LPI is demonstrated in Section 4.5.
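Where a quick illustration helps, a second-order trend surface can be fit by ordinary least squares in a few lines of NumPy; this is a minimal sketch of the idea behind GPI (LPI adds moving windows and distance weights on top of it). All function names are hypothetical, and the residuals from detrend are what would subsequently be kriged.

```python
import numpy as np

def quadratic_design_matrix(x, y):
    """Design matrix for a second-order (quadratic) trend surface."""
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

def fit_second_order_trend(x, y, z):
    """Least-squares fit of z = b0 + b1*x + b2*y + b3*x*y + b4*x^2 + b5*y^2."""
    coef, *_ = np.linalg.lstsq(quadratic_design_matrix(x, y), z, rcond=None)
    return coef

def detrend(x, y, z, coef):
    """Residuals after removing the fitted trend (these would be kriged)."""
    return z - quadratic_design_matrix(x, y) @ coef
```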

Radial Basis Functions (Splines)


A diverse group of radial basis functions (RBFs), also called splines, creates surfaces that
pass through measurement locations with the minimum amount of curvature, as shown
in Figure 4.7. The kernel functions used to determine interpolation weights are analogous
to variograms in kriging (Section 4.3; Golden Software 2002). Like IDW interpolation, RBFs
are all exact interpolators and closely preserve original data. However, unlike IDW inter-
polation, RBFs can predict values higher and lower than the maximum and minimum
measured values, respectively. RBFs are well suited for gently varying data sets, such as
hydraulic head, but do not produce good results for highly variable data sets (Krivoruchko
2011). In terms of the ability to fit data and produce a smooth surface, the multiquadric
method is considered by many to be the best. A smoothing factor can be introduced to all
the methods in an attempt to produce a smoother surface (Golden Software 2002). More
detail on contouring with RBFs can be found in the work of Carlson and Foley (1991).
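As a concrete starting point, SciPy's legacy Rbf class implements several of these kernels; the sketch below fits a multiquadric RBF to synthetic head data (the data and grid are illustrative assumptions). Setting smooth=0 gives exact interpolation, while larger values introduce the smoothing factor mentioned above.

```python
import numpy as np
from scipy.interpolate import Rbf

rng = np.random.default_rng(42)
x = rng.uniform(0.0, 1000.0, 30)            # easting of synthetic measurement points
y = rng.uniform(0.0, 1000.0, 30)            # northing
z = 295.0 - 0.002 * x + 0.001 * y           # gently varying synthetic "head" field

rbf = Rbf(x, y, z, function='multiquadric', smooth=0.0)  # smooth=0 -> exact interpolator

gx, gy = np.meshgrid(np.linspace(0.0, 1000.0, 50), np.linspace(0.0, 1000.0, 50))
head_grid = rbf(gx, gy)                     # predictions may exceed the measured min/max
```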

4.2.3.2 Geostatistical Models


Geostatistical models are based on knowledge of the spatial correlation of the data set.
Rather than assigning interpolation weights based on arbitrary mathematical formulas
(as is the case in deterministic models), weights are determined from the observed data
through the semivariogram model (see Section 4.3.1). Interpolation weights dictate how
each measured value contributes to the prediction at an unsampled location (Webster and
Oliver 2001). Geostatistical models produce predictions and prediction error estimates at
all unsampled locations.
The primary geostatistical model is kriging, which is one of the most robust
and widely used methods for interpolation and contouring in many scientific fields. Figure
4.9 shows a contour map created by Geostatistical Analyst using default kriging and the
same data set of the top-of-clay aquitard shown in Figure 4.7. Kriging is known as the
optimal interpolation method because it minimizes the mean square error of predictions
and is statistically unbiased—predicted values and measured values coincide on average
(Webster and Oliver 2001). The statistical assumptions behind kriging and the overall geo-
statistical modeling process are described in detail in Section 4.3.

FIGURE 4.9
Contours of the aquitard created using kriging with default settings in Geostatistical Analyst. Note significant
differences versus the IDW surface depicted in Figure 4.8.

4.2.3.3 Trend and Anisotropy


Natural phenomena are created by physical processes. Often these processes lead to sys-
temic changes in physical, biological, or chemical parameters. For example, contaminant con-
centrations in groundwater often exhibit trend resulting from the groundwater flow field.
Systemic change in data over relatively large scales, or large-scale data variation, is termed
trend. Trend leads to nonrandom, deterministic error, and removing trend from geostatis-
tical models (i.e., kriging) generally produces more accurate predictions. Beyond improv-
ing models, detrending may be necessary to satisfy kriging assumptions. For example, data
with trend have a variable mean, which violates stationarity (see additional discussion in
this section and in Section 4.3). However, there must always be a conceptual justification
for trend removal in geostatistical modeling as unnecessary detrending of data can yield
worse results (Kitanidis 1997). Robust contouring programs such as Geostatistical Analyst
and Surfer can detrend data before or after variography and contouring using LPI.
Natural physical processes can also have preferred orientations. For example, coarse ero-
sion material brought from land settles out fastest near the shoreline, and the finer mate-
rial takes longer to settle and travels farther. Thus, the closer one is to the shoreline, the
coarser the sediments (higher hydraulic conductivity) become, and the further from the
shoreline, the finer the sediments (lower hydraulic conductivity). When interpolating at a
point, an observation 100 m away but in a direction parallel to the shoreline is more likely
to be similar to the value at the interpolation point than is an equidistant observation in a
direction perpendicular to the shoreline (Golden Software 2002). The property exhibited by
this example, in which spatial dependence varies in different directions at small to moderate
distances between points, is termed anisotropy. Data sets in which the range of spatial corre-
lation changes with direction exhibit geometric anisotropy, and data sets in which variance
changes with direction exhibit zonal anisotropy. As kriging assumes a constant variance
(see Section 4.3), kriging models cannot account for zonal anisotropy (Krivoruchko 2011).
However, any robust contouring program can account for geometric anisotropy by enabling
the user to change the range (distance) of correlation with direction.
The kriging algorithm accounts for anisotropy by converting the searching neighbor-
hood into an ellipse. The ellipse is specified by the lengths of its two orthogonal axes and
by an orientation angle. The elliptical shape of the searching neighborhood is depicted in
the lower left corner of the images in Figure 4.10. Different contouring programs may use
different approaches in defining and specifying the orientation and shape of the ellipse.
In Surfer, the lengths of the axes are called Radius 1 and Radius 2, and in Geostatistical
Analyst, they are termed the Minor Range and the Major Range. The orientation angle is
defined as the counterclockwise angle between the positive X axis and Radius 1. The rela-
tive interpolation weighting is defined by the anisotropy ratio, which is the maximum axis
of the ellipse divided by the minimum axis of the ellipse (Radius 2 divided by Radius 1 in
Surfer). An anisotropy ratio less than 2 is considered mild, and an anisotropy ratio greater
than 4 is considered severe. Typically, when the anisotropy ratio is greater than 3, its effect
is clearly visible on grid-based maps (Golden Software 2002).
It is important to note that contouring programs also enable incorporation of anisotropy in
deterministic models such as IDW. This may seem counterintuitive as deterministic models
are not based on spatial correlation, and therefore, they should not be able to account for direc-
tional changes in spatial correlation. Geostatistical Analyst and Surfer simulate anisotropy for
deterministic models by warping the coordinate system such that the distance in one direction
changes faster than the distance in another. This transformation creates an ellipse similar to
the correct use of anisotropy in geostatistical interpolation (Krivoruchko 2011).
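A minimal sketch of this coordinate warping is shown below: the separation vector is rotated into the ellipse axes and the minor-axis component is stretched by the anisotropy ratio, so that points aligned with the major axis appear closer and therefore receive more interpolation weight. Function and parameter names are illustrative, and the angle convention (counterclockwise from the positive X axis to the major axis) is an assumption for this sketch.

```python
import numpy as np

def anisotropic_distance(p, q, angle_deg, ratio):
    """Effective separation distance under geometric anisotropy (illustrative).

    angle_deg : assumed orientation of the major axis, counterclockwise from +X
    ratio     : anisotropy ratio = major axis length / minor axis length (>= 1)
    """
    theta = np.radians(angle_deg)
    rot = np.array([[np.cos(theta), np.sin(theta)],
                    [-np.sin(theta), np.cos(theta)]])
    d = rot @ (np.asarray(q, dtype=float) - np.asarray(p, dtype=float))
    d[1] *= ratio          # stretch the minor-axis component of the separation
    return float(np.hypot(d[0], d[1]))

# Two points 100 units apart get different effective distances depending on
# orientation (ratio 3, major axis due east):
# anisotropic_distance((0, 0), (100, 0), 0.0, 3.0) -> 100.0 (along major axis)
# anisotropic_distance((0, 0), (0, 100), 0.0, 3.0) -> 300.0 (along minor axis)
```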

FIGURE 4.10
Example directional semivariograms for water-level data in an aquifer. The top image shows a search angle of
140°, and the bottom image shows a search angle of 50°. Note that the separation distance on the X axis has been
adjusted by a factor of 10⁻². The Y axis shows the calculated semivariance (γ) for each data pair, which changes
significantly based on search direction. Esri® ArcGIS Geostatistical Analyst graphical user interface. Copyright
© Esri. All rights reserved.

In many applications, it is often difficult to distinguish large-scale trend from small- to
medium-scale anisotropy. Both trend and anisotropy can be visualized on a semivario-
gram and semivariogram map (see definitions in Section 4.3.1), as shown in Figure 4.10.
In the semivariogram graph, the semivariance (represented by the red dots on the graph)
decreases significantly when the search angle is changed from 140° on the top to 50° on the
bottom. This directional dependence is similarly visualized on the semivariogram map
in the lower left corner of each graphic. The question therefore arises: is this strong direc-
tional dependence an example of trend, which should be removed from the data set, or
anisotropy, which can be accounted for in the geostatistical model?
In order to answer this question, the hydrogeologist must have a conceptual under-
standing of the data being modeled. The data presented in Figure 4.10 are water-level
data collected in an aquifer. The potentiometric surface of the aquifer is determined by
large-scale hydraulic boundary conditions, such as high-elevation recharge areas and dis-
charge boundaries like surface water features. These boundary conditions create systemic
changes in groundwater elevation as water moves from high head to low head (i.e., water
flows downhill). Regional potentiometric head measurements, in particular, demonstrate
significant trend (Kitanidis 1997). Therefore, it is appropriate to consider the strong direc-
tional dependence of the data an example of trend rather than anisotropy, and the data
should be detrended prior to kriging. If directional dependence is still observed in the
semivariogram after detrending, it is likely that anisotropy also exists. A major cause of
anisotropy in groundwater elevations is variations in hydraulic conductivity that follow
preferred depositional orientations. To summarize, the large-scale variation (i.e., trend)
observed in this example is caused by far-field hydraulic boundary conditions, and the
small-scale variation is caused by preferred depositional patterns (i.e., anisotropy).
A more empirical approach to distinguishing trend from anisotropy on a semivario-
gram is related to the concept of stationarity, which is a required condition for kriging
estimates to be valid. Data are stationary if there is a constant mean and variance across
the data domain (with a small variance compared to the size of the domain), and data cor-
relation depends only on separation distance and direction rather than absolute location.
The omnidirectional semivariogram shown in Figure 4.11 (top) is indicative of nonstation-
arity as the semivariogram increases exponentially and does not approach an asymptote
that represents the variance of the data (termed the sill, see Section 4.3; Kitanidis 1997).
This implies that there is an infinite range of correlation within the data, which is likely
caused by a large-scale, deterministic trend. As trend is causing data to be nonstationary
in this case, it must be removed prior to kriging to establish a separation distance at which
data values are no longer correlated. Figure 4.11 (bottom) depicts the semivariogram of
residuals after removing a second-order trend. A finite range of correlation and a sill are
now observed, indicative of stationary behavior.
The Geostatistical Analyst extension to ArcGIS has a very useful trend analysis tool
that can also assist in the decision-making process. A screen shot of the trend analysis
tool analyzing the water-level data from the semivariograms in Figures 4.10 and 4.11 is
shown in Figure 4.12 and demonstrates the presence of a strong second-order trend.
The benefits of detrending these data to create a stationary variable are clear when com-
paring Figure 4.13, created without detrending, to Figure 4.14, created with second-order
detrending. Both figures are kriged surfaces of the water-level data presented on the semi-
variograms in this section. The kriged surface in Figure 4.14 better captures the heteroge-
neity of the true groundwater contours and produces much better estimates in unsampled
locations. This occurs because detrending eliminates large-scale bias and focuses the krig-
ing model on local variation.

FIGURE 4.11
Omnidirectional semivariograms for the water-level data presented in Figure 4.10. The top semivariogram shows
infinite correlation, indicating the presence of trend, causing a variable mean and nonstationarity. The bottom semi-
variogram is produced after second-order detrending, and the residuals have a definitive sill and are stationary. Esri®
ArcGIS Geostatistical Analyst graphical user interface. Copyright © Esri. All rights reserved.

FIGURE 4.12
Trend analysis tool in Geostatistical Analyst with data points plotted along X, Y, and Z axes and a second-
order regression line successfully fitted to the data. Esri® ArcGIS Geostatistical Analyst graphical user interface.
Copyright © Esri. All rights reserved.

FIGURE 4.13
Kriged potentiometric surface without detrending. Black lines and labels represent the actual water-level eleva-
tion contours (created synthetically using a numeric groundwater model), and the colored filled contours repre-
sent the kriged surface interpolated with point measurements displayed and labeled in purple.

FIGURE 4.14
Kriged potentiometric surface after removing a second-order trend from the data. Detrending significantly
improves the match between simulated (observed) and kriged contours, most notably in areas with low sam-
pling density.

4.2.3.4 Error and Uncertainty Analysis


The most compelling reason to use computer programs to perform spatial interpolation
is the ability to quantify prediction error. Manually drawn contour maps are completely
subjective and rely entirely on the professional judgment of the hydrogeologist. It is there-
fore not possible to determine the error of manual spatial interpolation. This is a problem
because it can be shown mathematically that all interpolated predictions have error, no
matter how accurate the measurements on which the predictions are based. Without thor-
oughly assessing prediction error, the hydrogeologist will make uninformed decisions
devoid of risk analysis. The most popular model diagnostic tool in computer software is
termed cross-validation, which can help evaluate both the quality of the data and the qual-
ity of the spatial interpolation model.
Cross-validation of the interpolated grid is a quantitative and useful tool that can help
with selection of the right contouring method for a particular set of data or the best com-
bination of gridding parameters within the same contouring method. It allows assessment
of the relative quality of the grid by computing and investigating the gridding errors, also
referred to as residuals. The gridding errors are calculated by removing the first observa-
tion from the data set of N values and using the remaining data and the specified algo-
rithm to interpolate a value at the first observation location. Using the known observation
value at this location, the interpolation error is computed as

error (residual) = interpolated value − observed value.

Then, the first observation is returned into the data set, and the second observation is
removed from the data set. Using the remaining data (including the first observation)
and the specified algorithm, a value is interpolated at the second observation location.
Using the known observation value at this location, the interpolation error is computed as
before. The second observation is returned into the data set, and the process is continued
in this fashion for the third, fourth, fifth observations, etc.—all the way up to and includ-
ing observation N. This process generates N interpolation errors (Golden Software 2002).
After computing the cross-validation errors, the mean error (same units as the data,
measuring the prediction bias) and the root-mean-square error (measuring prediction
accuracy) can be calculated for all spatial interpolation models. As deterministic mod-
els (e.g., IDW) cannot estimate prediction uncertainty, these are the only two statistical
metrics that can be calculated. Because geostatistical models also provide the prediction
standard error or prediction standard deviation (same units as data) of each prediction
location, three additional metrics can be calculated: mean standardized error (dimension-
less), average standard error (analogous to root mean square error), and root-mean-square
standardized error (measuring the assessment of prediction variability). The concept of
prediction standard error in kriging is described further in Section 4.3.2.
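The leave-one-out procedure just described is straightforward to script around any interpolation routine; a minimal sketch follows (names are illustrative, and the hypothetical idw_predict helper sketched earlier has the assumed call signature).

```python
import numpy as np

def loo_cross_validation(xy, values, interpolate):
    """Leave-one-out cross-validation (illustrative helper).

    interpolate(xy_train, z_train, xy_test) must return predictions at xy_test;
    for example, the hypothetical idw_predict sketch shown earlier qualifies.
    """
    n = len(values)
    residuals = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i                      # withhold observation i
        pred = interpolate(xy[mask], values[mask], xy[i:i + 1])
        residuals[i] = pred[0] - values[i]            # interpolated - observed
    mean_error = residuals.mean()                     # measures prediction bias
    rmse = np.sqrt(np.mean(residuals**2))             # measures prediction accuracy
    return residuals, mean_error, rmse
```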
In a sense, two separate sets of model diagnostic statistics are available. The first, com-
posed of the mean error and root-mean-square error, reflects the absolute ability of the
model to make accurate predictions. The second set, composed of the standardized sta-
tistics only available in kriging, reflects the accuracy of the model considering the vari-
ability of the data itself. A good model will have similar statistics between these two sets
of metrics, indicating that the model makes good predictions that accurately reflect the
variability of the data. In summation, cross-validation diagnostics provide a quantitative,
objective measure of quality for the interpolation model and its comparison with other
models.
Figure 4.15 is a screen shot comparison of cross-validation results displayed in
Geostatistical Analyst for the top of aquitard grids created with the default IDW and krig-
ing models, displayed in Figures 4.8 and 4.9, respectively. When interpreting the cross-
validation results, one should have in mind the following general rules (Esri 2003):

• Predictions should be as close to the measurement values as possible, that is, the
scatter along the straight line on the graphs in Figure 4.15 should be minimal;
the smaller the root-mean-square prediction error, the better. The default kriging
model performs better than the default IDW model in this case.
• Predictions should be unbiased (centered on the measurement values). If the pre-
diction errors are unbiased, the mean prediction error should be near zero, and
the slope of the regression lines in Figure 4.15 should be 1:1. However, this value
depends on the scale and units of the data, so it is better to look at standardized
prediction errors, which are given as prediction errors divided by their prediction
standard errors (available only in the kriging model). The mean of these should
also be near zero.

FIGURE 4.15
Cross-validation comparison between default IDW and kriging models. Note that statistics involving predic-
tion standard error are only available for the kriging model. Esri® ArcGIS Geostatistical Analyst graphical user
interface. Copyright © Esri. All rights reserved.
• It is important to assess the variability of predictions, that is, the validity of predic-
tion standard errors. If the average standard error is close to the root-mean-square
prediction error, the variability in prediction is correctly assessed, and the root-
mean-square standardized error should be close to one. If the average standard
error is greater than the root-mean-square prediction error or if the root-mean-
square standardized error is less than one, then the variability of predictions is
overestimated. If the average standard error is less than the root-mean-square
prediction error or if the root-mean-square standardized error is greater than one,
then the variability of predictions is underestimated. Based on these guidelines,
the default kriging model in Figure 4.15 is underestimating variability and needs
a higher partial sill and/or nugget.

A good geostatistical model has a standardized mean near zero, a small root-mean-square
prediction error, an average standard error approximately equal to the root-mean-square
prediction error, and a standardized root-mean-square prediction error close to one (Esri
2003).
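These guidelines reduce to a few arithmetic checks on the cross-validation output; one possible translation is sketched below (the 5% tolerance on the root-mean-square standardized error is an arbitrary illustration, not a published threshold, and the function name is hypothetical).

```python
import numpy as np

def assess_kriging_cv(residuals, std_errors, tol=0.05):
    """Summarize kriging cross-validation diagnostics per the rules above.

    residuals  : cross-validation errors (interpolated - observed)
    std_errors : kriging prediction standard errors at the same locations
    """
    residuals = np.asarray(residuals, dtype=float)
    std_errors = np.asarray(std_errors, dtype=float)
    rmse = np.sqrt(np.mean(residuals**2))
    avg_std_err = np.mean(std_errors)
    rms_standardized = np.sqrt(np.mean((residuals / std_errors)**2))
    if abs(rms_standardized - 1.0) <= tol:
        verdict = "variability of predictions correctly assessed"
    elif rms_standardized < 1.0:            # avg_std_err > rmse
        verdict = "variability of predictions overestimated"
    else:                                   # avg_std_err < rmse
        verdict = "variability of predictions underestimated"
    return {"mean_error": float(residuals.mean()), "rmse": float(rmse),
            "avg_std_err": float(avg_std_err),
            "rms_standardized": float(rms_standardized), "verdict": verdict}
```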
Another form of model diagnostic analysis that is less used in professional practice is
validation. Validation goes further than cross-validation in testing the model as an entire
subset of data is removed from the input. Interpolated predictions at these locations can
then be compared to the actual measured values, and statistical analysis of the errors can
be conducted. Validation is similar to verification of a groundwater model as it tests how
well the model can predict values in unknown areas as opposed to individual locations as
is the case with cross-validation. Geostatistical Analyst has a validation toolset to assist in
removing data subsets and analyzing the associated errors.
Both unadjusted cross-validation or validation errors (residuals) and the prediction
standard errors (for kriging only) can be displayed in the form of a map, which enables
analysis of possible reasons for a specific spatial distribution of errors. Contours of cross-
validation errors for the IDW and kriging models of the aquitard surface are presented
in Figures 4.16 and 4.17, respectively. Together with the original contour maps and error
statistics, the error maps are used to compare different gridding methods and aid in
selecting the most appropriate one for the project at hand. In terms of both the visual
appearance of the contours (Figures 4.8 and 4.9) and the statistical performance of the
interpolation (Figures 4.15 through 4.17), kriging is superior to IDW in this particular
case. This is not coincidental because kriging is also superior to all other methods when
applied appropriately (i.e., it is the optimal interpolator). Statistical model comparison is
additionally explained in Section 4.5. However, for various reasons, including perceived
complexity, many hydrogeologists still do not use kriging, which is described in detail in
Section 4.3.
It is also important to note that selection of the appropriate spatial interpolation model
should not solely rely on statistical diagnostics. Cross-validation and validation are highly
sensitive to measurement error and can poorly reflect stable features of the data. Statistics
can also give the impression that a model is more accurate than it truly is. Krivoruchko
(2011) illustrates this phenomenon through a synthetic problem. First, 150 data points are
randomly generated using a synthetic kriging model. Then two kriging models are fit to

the data set—the first with the same parameters as the synthetic model, and the second
with estimated model parameters. Using both cross-validation and validation, the esti-
mated model performs better statistically than the synthetic model (i.e., better than the
model that was used to create the data to begin with).

FIGURE 4.16
Cross-validation error (residual) map for the default IDW model. The greatest error is approximately 40 ft.

FIGURE 4.17
Cross-validation error (residual) map for the default kriging model. The greatest error is approximately 26 ft.
Faced with this uncertainty, the hydrogeologist must take into consideration his or her
knowledge of the processes that contributed to the current spatial distribution of the data
(i.e., the CSM). Additionally, cross-validation statistics may be more important in one part
of the data set than others; therefore, the overall statistics may not be as important as the
errors in specific locations (e.g., around potential receptors in groundwater contamination
studies). Beyond statistics, the qualitative appearance of the interpolated surface is impor-
tant in its own right, especially as it pertains to the CSM. Therefore, it is acceptable to select
a geostatistical model that is less accurate statistically but better reflects the physical and
chemical aspects of the CSM.

4.3 Kriging
As described earlier, kriging is a geostatistical method that takes into consideration spatial
variance, location, and sample distribution. While often erroneously ignored in profes-
sional practice, kriging has several underlying assumptions that should be assessed to
ensure proper application and evaluate model limitations (Krivoruchko 2011):

• The data to be kriged are stationary, which means that there is a constant mean
and variance across the data domain, and data correlation depends only on sepa-
ration distance and direction rather than absolute location.
• The semivariogram or covariance model (see Section 4.3.1) describing spatial cor-
relation is known exactly and is consistent throughout the data domain.
• Sample locations are independent of the data values (i.e., the sample locations are
random).
• While not specifically stated in the classical derivation of kriging, an assumption
generally accepted by statisticians is that kriging is a Gaussian predictor, which
generally accepted by statisticians is that kriging is a Gaussian predictor, which
approximate a normal distribution. Only a normally distributed data field will be
completely described by the mean and covariance.

It is acknowledged that, in practice, real data will never follow all of the above assump-
tions; however, some practical recommendations are listed below:

• Thorough data exploration should be conducted prior to kriging to evaluate how


mean and variance change across the data set. For example, a contamination hot
spot will undoubtedly have a different mean and distribution than background
areas. If these areas are included in the same data set to be kriged, the hydrogeolo-
gist should be aware of potential problems related to nonstationarity. For example,
if the measurement density is greatest in the hot spot, the hot spot will dominate
the semivariogram model, and predictions in background areas will have too
much variability and be inaccurate (Krivoruchko 2011).
• Where applicable, detrending and data transformation should be conducted to
remove systemic error and approximate stationarity and the normal distribution.
This has important implications with respect to the kriging standard error, dis-
cussed in Section 4.3.2 and demonstrated in Section 4.5 with an example.
• Where justifiable, outliers should be removed from the data set as they can inflate
the semivariance (Webster and Oliver 2001). However, if an outlier represents an
accurate measurement that reflects a key area of concern (such as a contaminant
hot spot), it is not advisable to remove the outlier as these data are likely most
important from a regulatory perspective.

There are various interpolation (gridding) models within the kriging method, and the
most applicable one to the existing data set can be chosen after determining a theoreti-
cal semivariogram or statistical function (model) that best describes the underlying field
data. Although all major contouring programs on the market include kriging, few allow
for user-friendly, visual generation of a semivariogram by interactively adjusting all its
key parameters. Therefore, in order to select an appropriate kriging method, it may be
necessary to generate the semivariogram using some external program. Both Surfer and
Geostatistical Analyst include very powerful and simple-to-use options for generating
experimental and theoretical semivariograms and creating contour maps with various
kriging methods.

Figure 4.18 shows the top of a clay aquitard map created with default kriging in Surfer
using the same data set applied with default kriging in Geostatistical Analyst to produce
the map shown in Figure 4.9. As expected, both programs produce similar maps because
the applied geostatistical theory is the same. However, minor discrepancies exist as the
default semivariogram and kriging choices in these two programs are different, and the
user should make every effort to understand kriging methods implemented in either pro-
gram and learn basic principles of creating semivariograms interactively. For example, the
default model in Surfer does not incorporate a nugget effect, which makes the model an
exact interpolator and produces a surface (Figure 4.18) that is less smooth than that shown
in Figure 4.9. This has important conceptual and statistical implications (see Section 4.3.2
for additional discussion regarding the nugget effect).
In Surfer, the default option for kriging is a linear semivariogram model with the pro-
gram automatically fitting all experimental data using least squares. This automated
fitting may or may not result in a nugget effect, but in any case, the user is left in the dark as
to what is being implemented. Consequently, the created map may sometimes make very
little sense, and the user may blame kriging as an inappropriate method without examin-
ing the experimental variogram of the data first. An example of one such automated map
of dissolved contaminant concentration in groundwater created by Geostatistical Analyst

FIGURE 4.18
Contours of the aquitard created using kriging with default settings in Surfer. Note minor differences versus
the Geostatistical Analyst surface depicted in Figure 4.9.
(Figure 4.19 legend: Kriged Concentration (ug/L), color bins ranging from 0.05–1 up to 1,000–5,000)
FIGURE 4.19
Nonsensical contaminant concentration contour map created using default kriging in Geostatistical Analyst.
Black lines and concentration labels represent the actual concentration contours in micrograms per liter (created
synthetically using a numeric groundwater model), and the colored filled contours represent the kriged surface
interpolated with the point measurements displayed and labeled in purple.

is shown in Figure 4.19. Comparison of this colored plume map with the theoretical con-
taminant plume contours shown on the same figure clearly demonstrates why default
choices in any contouring program (i.e., mindless button pressing) are not recommended.
In contrast, when hydrogeologists use the full power of interactive variography and apply
their knowledge of the underlying physical processes, the results are much more favorable
as demonstrated in Section 4.5 for the same data set.

4.3.1 Variography
Variography is the term often used to describe the process of evaluating spatial correlation
between data measured in the field. If such correlation is evident, it includes determin-
ing which geostatistical model is the most appropriate to describe it quantitatively. The
selected model is then used to predict (interpolate) the variable in question at the unsam-
pled locations, which is part of another process called kriging. Variography includes the
following steps:

• Data exploration, including general statistical parameters, probability distribu-


tion, identification of possible trend in data, and need for data transformation.
• Testing spatial correlation between the field measurements of the same variable
(spatial autocorrelation).
• Computing spatial covariance (autocovariance) between the data, including calcu-
lating semivariance for each pair of observations.
• Presenting the calculated semivariances graphically as a variogram cloud and/
or semivariogram. Note that the terms variogram and semivariogram are being
interchangeably used throughout the geostatistical literature, and the only real
difference is the factor of two: semivariogram is one-half of the sample variogram,
just like semivariance is one-half of the sample variance.

The covariance (SXY) between two variables X and Y that are not regionalized variables
(i.e., not spatially correlated) is

$$S_{XY} = \frac{1}{n-1} \sum_{i=1}^{n} (x_i - x_{av}) \cdot (y_i - y_{av}) \qquad (4.1)$$

where n is the number of paired data, xi and yi are individual values of each variable, and
xav and yav are average values of each variable.
Semivariance (γi) for each pair of observation points of the spatially correlated (regional-
ized) variable is (see Figure 4.20)

$$\gamma_i = \frac{1}{2} \left( z_{i,\mathrm{head}} - z_{i,\mathrm{tail}} \right)^2 \qquad (4.2)$$

Semivariance is calculated for all possible data pairs, which is often a fairly large number:

$$\text{total number of pairs} = \frac{n(n-1)}{2} \qquad (4.3)$$

where n is the number of measured data points.

FIGURE 4.20
Semivariance is calculated for each pair of observations, separated by distance h.

This information is then presented in two
ways (see Figure 4.21):

• As a semivariogram cloud, where all semivariances for all pairs are plotted against
their respective individual separation distances. This graph contains a large num-
ber of data and does not reveal much in terms of spatial correlation between the
data.
• As an experimental semivariogram, where all semivariances within one separa-
tion distance or lag (also called bin) are averaged and plotted against the average
separation distance between the data pairs within that lag.

Figure 4.22 illustrates the process of calculating semivariances within one lag and then
moving to the next lag. The average variance for one lag is
$$2\gamma^*(h) = \frac{1}{n} \cdot \sum_{i=1}^{n} \left[ g(x_i) - g(x_i + h) \right]^2 \qquad (4.4)$$

where
γ*(h) is the experimental semivariance for a given distance (lag) h
h is the separation distance between the two data points
g is the value of the sample (data point)
x is the position of one sample in the pair
x + h is the position of the other sample in the pair
n is the number of pairs in calculation

FIGURE 4.21
Semivariogram in Geostatistical Analyst where red circles represent semivariances that have been binned but
not averaged (similar but not exactly equal to the cloud), and blue crosses represent the binned, averaged
values where the spatial correlation is readily apparent. Note that the semivariance (γ) labels on the Y axis are
scaled by a factor of 10⁻¹. Esri® ArcGIS Geostatistical Analyst graphical user interface. Copyright © Esri. All
rights reserved.
FIGURE 4.22
Calculation pattern for omnidirectional (isotropic) variogram, looking at one data point at a time. Left:
Separation distance between the black data and the other data inside the shaded circle is between 0 and 50 ft,
that is, the lag width is 50 ft. This smallest separation distance (chosen by the user arbitrarily) is called Lag 1
(Bin 1). As the calculation window moves throughout the sampled area from one point to another, each new
step produces a number of pairs, which are all added to Lag 1. Right: Separation distance between the black data
and the data inside the shaded area is between 50 and 100 ft, that is, the lag width is also 50 ft. This separation
distance is called Lag 2 (Bin 2). As the calculation window moves throughout the sampled area from one point
to another, each new step produces a number of pairs, which are all added to Lag 2.

The calculated value (divided by two) is the measure of the difference between the two
data points, which are the distance h apart. The semivariogram measure plotted on the
vertical graph axis is in squared units of data (in the case of the hydraulic head, it is
squared distance or ft²). The lag is usually given a tolerance (say, 50 ft ± 10 ft) so that
more spatial information is included; note that most groundwater information in the
field is collected from irregularly spaced sampling points, so the exact lag distance (say,
50 ft) may lead to fewer calculated values for plotting the experimental semivariogram
graph. This tolerance should not be greater than one-half the basic distance. For example,
if the basic distance h is 50 ft, the tolerance should be about 20 ft. This means all pairs
that fall between 30 and 70 ft will be included in the calculation of the semivariogram.
The process is then repeated for as many new lags as possible. Each new distance (lag)
is increased by the basic interval (say, h = 50 ft + 50 ft = 100 ft; h = 150 ft; h = 200 ft, etc.),
and the results are plotted on the same graph. The maximum separation distance should
not be larger than one-half the distance between two points in the field data set that are
farthest apart.
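Putting Equations 4.2 through 4.4 together, a minimal omnidirectional experimental semivariogram can be scripted as below (an illustrative helper; the lag width and the number of lags are the user's choice, as discussed above).

```python
import numpy as np

def experimental_semivariogram(xy, values, lag_width=50.0, n_lags=10):
    """Omnidirectional experimental semivariogram (illustrative helper).

    Implements Equation 4.4: pairs are grouped into lags (bins) of width
    lag_width, and the semivariances (Equation 4.2) are averaged per lag.
    """
    n = len(values)
    i, j = np.triu_indices(n, k=1)               # all n(n-1)/2 pairs (Equation 4.3)
    h = np.hypot(*(xy[i] - xy[j]).T)             # pair separation distances
    semivar = 0.5 * (values[i] - values[j])**2   # semivariance per pair (Equation 4.2)
    bins = (h / lag_width).astype(int)           # assign each pair to a lag
    lag_h, gamma, counts = [], [], []
    for b in range(n_lags):
        sel = bins == b
        if not sel.any():
            continue                             # skip empty lags
        lag_h.append(h[sel].mean())              # average separation within the lag
        gamma.append(semivar[sel].mean())        # binned, averaged semivariance
        counts.append(int(sel.sum()))            # pair count (more pairs -> more reliable)
    return np.array(lag_h), np.array(gamma), np.array(counts)
```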
Figure 4.23 shows one experimental semivariogram with the number of data pairs per
one lag (also called bin) and the variance of all data in the sample. The sample variance
(s²) is given as

$$s^2 = \frac{1}{n-1} \cdot \sum_{i=1}^{n} (g_i - g_{av})^2 \qquad (4.5)$$

where gi are the values of individual data, and gav is the sample mean given as

$$g_{av} = \frac{1}{n} \times \sum_{i=1}^{n} g_i \qquad (4.6)$$

FIGURE 4.23
Example of experimental semivariogram.

Note that the x coordinates of the individual plotted points are not regularly spaced
because each coordinate is the average separation distance for all pairs included in the
individual bin; these average numbers vary from bin to bin by default because the data
are irregularly spaced. Semivariograms can be plotted for various directions (so-called
directional semivariograms), which is recommended, especially if a certain degree of
anisotropy in data is expected. Directions that produce noticeably different (while still
meaningful) semivariograms may indicate actual anisotropy in the data. Note, however,
that there is a substantial degree of subjectivity in interpreting semivariograms. For a
statistically valid semivariogram, there should be at least 30 pairs of data, which is often
not the case in groundwater studies. It is up to the user to know his or her data well and
make an evaluation as to the reasonable lower limit of data per bin. When the data set is
small, the directional calculation (e.g., all pairs falling within a 30° window ± x degrees
of tolerance) may not produce a meaningful semivariogram even if there is an underlying
anisotropy. In such cases, the calculation is, by default, performed for all possible direc-
tions and for all sample pairs that fall within the calculation interval. This will produce a
global, omnidirectional experimental semivariogram.

4.3.1.1 Semivariogram Curve-Fitting


Once the experimental semivariogram shows a certain recognizable structure, the next
step is to fit a theoretical curve that best describes the experimental data. These curves are
based on semivariogram models that dictate the weighting function used in interpolation.
Common models used in contouring programs are depicted in Figure 4.24.

(Figure panels, left to right: Linear, Exponential, Gaussian; Logarithmic, Spherical, Quadratic; Pentaspherical, Rational Quadratic, Power 0&lt;n&lt;1; Power 1&lt;n&lt;2, Cubic, Wave/Hole Effect)

FIGURE 4.24
Some of the most common theoretical semivariograms. (From Golden Software, Inc., Surfer 8 User’s Guide:
Contouring and 3D Surface Mapping for Scientists and Engineers. Golden Software, Inc., Golden, CO, 2002.)

Models with true ranges (data are not spatially correlated beyond the specified range):

• Circular
• Spherical
• Tetraspherical
• Pentaspherical

Powered exponential models:

• Gaussian
• Exponential
• Stable (a hybrid between Gaussian and exponential available in Geostatistical
Analyst)
• K-Bessel and Rational Quadratic (similar models)
• J-Bessel (wave/hole effect)

The shape of the experimental semivariogram should indicate which theoretical model
is appropriate for the given data (with data at small separation distances, i.e., close to
the origin, being most important). Note, however, that multiple models can be used to fit
the same semivariogram. After selection of the appropriate model, the user must deter-
mine what the values are of the three important parameters of the semivariogram (see
Figure 4.25):

a range of influence
C sill
C0 nugget effect

The range of influence (a) is the distance at which data become independent of one another.
After this point, the graph is horizontal, and the corresponding value of γ is called the sill
of the semivariogram (C). In Surfer, the difference between the sill and the nugget is called
the scale, and in Geostatistical Analyst, it is called the partial sill.
The scale (or partial sill), range, and nugget (where applicable) should be selected such
that the sill of the theoretical function is close to the sample variance. Figure 4.25 shows
a semivariogram with the nugget effect, which is the graph’s intercept at the vertical axis.
Its presence may indicate a potential error in data collection or any other component in
the data at a scale smaller than the separation distances between the field data. The
nugget effect is a constant that raises a theoretical semivariogram C0 units along the
vertical:

$$\gamma(h) = C_0 + \gamma'(h) \qquad (4.7)$$

where γ′(h) is one of the common theoretical semivariograms shown in Figure 4.24.
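To make the common model shapes concrete, below are minimal sketches of three of the models from Figure 4.24 written in the form of Equation 4.7 (nugget plus structured component). The factor of 3 in the exponential and Gaussian models reflects one common convention in which a is the practical range where the model reaches approximately 95% of the sill; individual programs may parameterize the range differently.

```python
import numpy as np

def spherical(h, nugget, partial_sill, a):
    """Spherical model: reaches the sill exactly at the range a."""
    h = np.asarray(h, dtype=float)
    gamma = nugget + partial_sill * (1.5 * h / a - 0.5 * (h / a)**3)
    return np.where(h < a, gamma, nugget + partial_sill)

def exponential(h, nugget, partial_sill, a):
    """Exponential model: reaches ~95% of the sill at the practical range a."""
    h = np.asarray(h, dtype=float)
    return nugget + partial_sill * (1.0 - np.exp(-3.0 * h / a))

def gaussian(h, nugget, partial_sill, a):
    """Gaussian model: parabolic near the origin (very smooth phenomena)."""
    h = np.asarray(h, dtype=float)
    return nugget + partial_sill * (1.0 - np.exp(-3.0 * (h / a)**2))
```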
In Surfer, the nugget effect is the sum of the error variance and the micro variance. The
error variance is a measure of the direct repeatability of the data measurements. A good
example is the collection of duplicate samples when analyzing contaminant concentrations
in groundwater. Experience shows that the duplicate sample is often not exactly the same as
the first measurement. The error variance values take these variances in measurement into
account. A nonzero error variance means a particular observed value is not necessarily the

FIGURE 4.25
Example of theoretical function fitted to experimental data, showing three main elements: range, sill, and
nugget.
232 Hydrogeological Conceptual Site Models

exact value of the location. Consequently, kriging tends to smooth the surface and does not
behave as an exact interpolator when using a nugget effect.
The micro variance is a measure of variation that occurs at separation distances of less
than the typical nearest neighbor sample spacing. For example, a parameter of interest
may show two spatial structures that can be described with a nested variogram in which
both models are spherical. The range of one of the structures is 100 m, and the range of the
second structure is 5 m. If the closest sample spacing were 10 m, it would not be possible
to see the second structure (5-m structure). The micro variance allows for specifying the
variance of the small-scale structure. It is for this reason of identifying possible small-scale
variations of the parameter of interest that some field sampling plans include collection of
a number of colocated samples, that is, samples that are very close to each other.
In general, specifying a nugget effect causes kriging to become more of a smoothing
interpolator, implying less confidence in individual data points versus the overall trend of
the data. The higher the nugget effect, the smoother the resulting grid (Golden Software
2002). This means that kriging tends to underpredict large values and overpredict small
values. The nugget effect has important implications with respect to both the appearance of
the interpolated surface (kriging prediction) and the statistical performance of the model
(kriging prediction standard error), a case study of which is presented in Section 4.3.2.
Note that curves can also be fit to a covariance model as opposed to a semivariogram
model. Covariance is the expected product of the deviations of two random variables from
their mean. A covariance model looks like an upside-down semivariogram, and a simple
mathematical relationship can convert covariance functions to semivariogram functions.
In professional practice, the semivariogram model is used much more often because it
does not require a specified mean, and it is better for estimating data correlation at small
distances (Krivoruchko 2011). Covariance models are required for cokriging applications,
described in Sections 4.3.3.2 and 4.5.

4.3.1.2 Search Neighborhood


One other important component of kriging is specifying the search neighborhood over
which the semivariogram parameters are applied to determine the weighting scheme
for spatial interpolation. A refined search neighborhood emphasizes the importance of
nearby measurements over far-field measurements when predicting values at individual
locations. Measurements falling outside the search neighborhood for a given location are
effectively excluded from the interpolation algorithm at that location. The primary ele-
ments of the search neighborhood are the axes of the search ellipse (determined by the
anisotropy ratio as described previously); the number of search sectors, or quadrants,
within each ellipse; and the minimum and maximum number of data values to include
within each sector.
There are important distinctions between Geostatistical Analyst and Surfer in terms
of how the search neighborhood is handled. In Surfer, the default setting is to include
all data in the search neighborhood for each interpolated location—in essence creating
one big neighborhood. Conversely, Geostatistical Analyst establishes a default, selective
search neighborhood for the user. The user can manually change the search neighborhood
in both Surfer and Geostatistical Analyst to see how the resulting surface changes (note
that in Geostatistical Analyst a visualization screen enables real-time assessment of how
the search neighborhood changes the model, whereas in Surfer, grids must be created
and assessed iteratively). Geostatistical Analyst also has a very useful search neighbor-
hood smoothing tool with which the neighborhood is changed by adjusting a smoothing
parameter rather than specifying sectors and the number of included data values. This is a
more user-friendly way to change the search neighborhood and see how the resulting sur-
face changes. Note that this is not the same as contour smoothing in Surfer, which simply
adjusts how contours are displayed; it is also not smoothing in the sense of using a nugget
effect to increase averaging and perform inexact interpolation.

4.3.1.3 Modeling Techniques


The landmark publication of Krivoruchko (2011) provides many practical tips for fitting
accurate semivariogram models, some of which are summarized below:

• Change the lag size and the number of lags so the range of data correlation (i.e.,
the part of the graph with a positive slope before the asymptote) occupies three
quarters of the graph space.


• Use the covariance graph to estimate the model range; use the semivariogram
graph to estimate the nugget and model shape.
• Always check for anisotropy.
• Try different model shapes (spherical, stable, etc.) to find the best one, and see if
using multiple models yields better results. Use cross-validation to assist in find-
ing the best model.

Both Surfer and Geostatistical Analyst have an Optimize button that determines the best
semivariogram model parameters through regression analysis. The user is cautioned
against using the tool because, in the experience of the authors, it tends to weight all semi-
variogram pairs equally and does not give priority to data at small separation distances
near the origin of the graph. Therefore, using this button can result in an erroneously high
nugget and introduces too much variability into a model.

4.3.2 Kriging Prediction Standard Error


An excellent feature of kriging is its ability to estimate error associated with interpolated
predictions. This error is termed prediction standard error or prediction standard devia-
tion. Because the actual error at prediction locations is unknown (we do not know the
actual value), kriging estimates prediction variance by averaging pairs of data locations
grouped by distance and orientation. This method of calculating prediction
standard error obviously speaks to the importance of having data that follows, or is trans-
formed to follow, an approximate normal distribution. A rule of thumb is that if data are
normally distributed, the true value at a prediction location will be within the interval of
the predicted value ± two times the prediction standard error 95% of the time. The avail-
ability of the prediction standard error statistic enables assessment of error considering
the variability of the data, which is not possible using deterministic methods.
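For example, under the normality assumption, a kriged head of 295.0 ft with a prediction standard error of 0.5 ft implies that the true water level falls between 294.0 and 296.0 ft (295.0 ± 2 × 0.5) roughly 95% of the time (illustrative numbers only, not taken from the figures).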
Intuitively, the prediction standard error or uncertainty of our estimates should be a func-
tion of both the density of measurements and the local variability of the data. Prediction
standard error should be lower in areas with a high measurement density and higher in
areas of sparse coverage. For example, Figure 4.26 shows that prediction standard error for
the kriged water-level surface presented in Figure 4.14 is lowest where measurement den-
sity is the greatest, namely, around the adjacent 296.05 and 296.07 measurements. Prediction
standard error increases significantly moving away from measurement locations. It should
also be lower in areas with small changes in data values (low variability) and higher in
areas with large changes in data values, such as a contaminant hot spot. However, when
using conventional ordinary kriging (the most commonly used model in professional
practice), only the error resulting from the configuration of the measurement data (i.e., the
relative position of the measurement locations) is reflected (Krivoruchko 2011). In other
words, the standard error at a prediction location is the same regardless of the actual data
values immediately surrounding that location. All that affects standard error are the mea-
surement locations and the semivariogram model (i.e., the variance of the entire data set).
This is a little-appreciated fact and has far-reaching implications because the majority of
users of contouring programs are likely misinterpreting uncertainty in their models.

FIGURE 4.26
Prediction standard error map for the surface presented in Figure 4.14. Units are the same as the measurement
values (feet).
This phenomenon is illustrated by the prediction standard error maps presented in
Figures 4.27 and 4.28. The two surfaces are created using the exact same ordinary krig-
ing model without any transformation or detrending with the same set of data values
(independent of location) and measurement locations (X, Y coordinates). However, the
relationship between data values and measurement locations is different between the two
maps. In other words, the same values are assigned to different locations in the two maps.
Despite significant differences in the spatial distribution of values, the two prediction stan-
dard error surfaces are exactly the same because the prediction standard error is independent of
the relative locations of the data values. This is especially concerning because Figure 4.28
has values differing by several orders of magnitude located adjacent to each other (see 7500
and 0.7 in the top left corner, for example).

FIGURE 4.27
Prediction standard error map for a surface created using ordinary kriging without data transformation or
detrending. Error is lowest around measurement locations and highest in areas with sparse coverage.

FIGURE 4.28
Prediction standard error map for the same data set as that of Figure 4.27 but with values switched between
locations. The map is exactly the same, proving error is only related to measurement density and not the relative
locations of the data values.
There are two ways to fix this problem and make prediction standard error depend
not only on sampling density but also on the actual data values of nearby measurements.
One solution is to use moving-window kriging, in which spatial dependence is estimated
locally by using only measurements close to prediction locations. This is performed by
using different semivariogram models in each prediction area or window rather than
using a global semivariogram for the entire data set. Moving-window kriging is an option
in Geostatistical Analyst, and for more information, the reader is referred to the work of
Krivoruchko (2011). The other, more practical solution is to perform data transformation
(and, if necessary, detrending) prior to kriging. This works because transformed data fol-
low a theoretical distribution (e.g., normal, lognormal), and the variance of transformed
data depends on local data values (Krivoruchko 2011). A graphical example of this solu-
tion is presented in Section 4.5 with a contaminant concentration data set with high local
variability.

FIGURE 4.28
Prediction standard error map for the same data set as that of Figure 4.27 but with values switched between locations. The map is exactly the same, proving error is only related to measurement density and not the relative locations of the data values.
There are several other cases where a standard deviation (prediction standard error)
grid is incorrect or meaningless. If the variogram model is not truly representative of the
data, the standard deviation grid is not helpful to data analysis. In addition, the krig-
ing standard deviation grid generated when using a variogram model estimated with the
Standardized Variogram estimator or the Autocorrelation estimator in Surfer is not cor-
rect. These two variogram estimators generate dimensionless variograms, so the kriging
standard deviation grids are incorrectly scaled. Similarly, while the default linear vario-
gram model will generate useful contour plots of the data, the associated kriging standard
deviation grid is incorrectly scaled and should not be used. The default linear model slope
is one, and because the kriging standard deviation grid is a function of slope, the resulting
grid is meaningless (Golden Software 2002).

4.3.2.1 Nugget Effect and Prediction Standard Error


One kriging concept that is extremely difficult for the professional community to embrace
is stated by Krivoruchko (2011): “A new prediction at the data location is more accurate
than the original noisy measurement.”
This is counterintuitive and people unfamiliar with geostatistics may ask, How can this
be? The answer is that kriging uses information from spatially correlated nearby measure-
ments to improve knowledge about measured values. Failing to understand this concept
leads to excessive concern in professional practice with honoring data that can have sig-
nificant measurement and microscale error. As a result, many project managers and clients
will reject contour maps that do not exactly interpolate between measurements (i.e., contour
lines do not exactly go through the same measured values) even when duplicate analysis
shows significant error. Additionally, laboratory analytical data of constituents, such as vola-
tile organic compounds, are subject to considerable uncertainty as results may be accepted
without qualification with surrogate recoveries as low as 50%. Even water-level data, which
are seemingly reliable, can exhibit significant variation over small distances because of soil
heterogeneity and/or temporal variations that are not apparent to the hydrogeologist. It is
hard to find any example in professional hydrogeology where data are entirely precise.
When using a nugget effect, kriging becomes an inexact interpolator and is empowered
to predict more accurate values at measurement locations, taking error into consideration.
In a sense, rather than honoring each individual measurement, kriging with a nugget
effect honors the entire data set. The nugget effect also has significant implications with
respect to the prediction standard error. When performing exact interpolation with no
nugget, there will be discontinuities in the prediction standard error surface, where the
error jumps to zero at measurement locations (Kitanidis 1997). This is nonsensical, and
forcing this condition can lead to a worse overall model by increasing prediction standard
error away from measurement locations.
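
The same hand-rolled ordinary kriging sketch can demonstrate the nugget's filtering effect. In the minimal Python example below (illustrative parameters; the nugget is treated as measurement noise to be filtered, which is one common convention and not necessarily the exact algorithm of any particular software), exact kriging reproduces a noisy measurement with zero standard error, while kriging that filters the nugget produces a new, smoothed estimate at the same location:

    import numpy as np

    NUGGET, PSILL, RNG = 0.5, 2.0, 400.0   # illustrative variogram parameters

    def gamma(h):
        return NUGGET + PSILL * (1.0 - np.exp(-3.0 * h / RNG))

    def ok(coords, z, x0, filter_noise):
        n = len(coords)
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
        A = np.ones((n + 1, n + 1))
        A[:n, :n] = gamma(d)
        np.fill_diagonal(A, 0.0)
        h0 = np.linalg.norm(coords - x0, axis=1)
        g0 = gamma(h0)
        if not filter_noise:
            # Exact kriging: gamma(0) = 0 at a coincident data point
            g0 = np.where(h0 == 0.0, 0.0, g0)
        b = np.ones(n + 1)
        b[:n] = g0
        w = np.linalg.solve(A, b)
        return w[:n] @ z, w @ b

    coords = np.array([[0., 0.], [80., 0.], [0., 80.], [80., 80.]])
    z = np.array([10., 12., 11., 30.])        # 30 is a suspect 'noisy' value
    print(ok(coords, z, coords[3], filter_noise=False))  # ~ (30.0, 0.0)
    print(ok(coords, z, coords[3], filter_noise=True))   # pulled toward
                                                         # neighbors, error > 0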
This phenomenon is demonstrated in Figures 4.29 and 4.30, which are prediction stan-
dard error maps for two different kriged surfaces of the same data set. Figure 4.29 is the
prediction standard error map for kriging with default variogram parameters and the
default nugget (inexact interpolation). Figure 4.30 is the prediction standard error map
for default kriging without a nugget effect (exact interpolation). The discontinuities in the
prediction standard error are visible in Figure 4.30 as the error jumps to zero at prediction
locations. Note that some of the jumps at the measurement locations are so small that the
purple color is not visible, hidden behind the white sample location symbol. Conversely,
the prediction standard error surface is much smoother in Figure 4.29, and errors away
from prediction locations are smaller than those of the exact kriging scenario.

FIGURE 4.29
Prediction standard error map of a kriging model created in Geostatistical Analyst using default parameters and the default nugget effect. Small white circles represent data values used in the kriging, and the labeled crosshair symbols are target locations for prediction and prediction standard error analysis. MW-1 through MW-11 have known values that are used in the contouring, and actual values at MW-12 through MW-18 are unknown.

FIGURE 4.30
Prediction standard error map of the same data set using default kriging without a nugget effect. This is an exact interpolation model, and as a result, prediction standard error jumps to zero at measurement locations.
The model predictions and prediction standard errors at the monitoring locations
depicted on Figures 4.29 and 4.30 are presented in Table 4.1 and confirm the above

TABLE 4.1
Summary Table of Predictions and Associated Standard Errors at Target Locations Depicted in Figures 4.29 and 4.30

Location  Measured Value  Predicted Value  Std Error    Predicted Value    Std Error
                          (No Nugget)      (No Nugget)  (Default Nugget)   (Default Nugget)
MW-1      290.00          290.00           0.00         297.88             2.16
MW-2      301.00          301.00           0.00         305.34             2.03
MW-3      309.00          309.00           0.00         314.95             1.95
MW-4      320.00          320.00           0.00         319.25             2.09
MW-5      331.00          331.00           0.00         320.86             2.07
MW-6      339.16          339.16           0.00         339.80             1.49
MW-7      344.86          344.86           0.00         342.55             1.83
MW-8      352.00          352.00           0.00         349.75             1.98
MW-9      360.00          360.00           0.00         362.23             1.75
MW-10     370.00          370.00           0.00         370.10             1.80
MW-11     380.00          380.00           0.00         379.53             2.20
MW-12     --              347.10           3.26         347.30             2.85
MW-13     --              344.45           4.12         345.31             3.50
MW-14     --              354.87           2.21         359.40             2.17
MW-15     --              308.49           2.29         309.39             2.22
MW-16     --              345.46           7.04         339.21             6.53
MW-17     --              374.93           9.96         371.36             8.85
MW-18     --              357.85           4.59         356.99             4.16

observations. The model without the nugget predicted measured values exactly at all loca-
tions with measurements (MW-1 through MW-11) with zero standard error. Conversely,
enabling the nugget allowed the model to create new estimates at MW-1 through MW-11,
taking into consideration the overall data variability. At the pure predicted locations where
no measurements exist (MW-12 through MW-18), the model with the nugget has less stan-
dard error and performs better. Unless there is compelling evidence that zero error exists
(which is unlikely), the hydrogeologist should use a nugget (even if it is a very small
value) to perform kriging.

4.3.3 Types of Kriging


Just as there are different families of curves used to fit the semivariogram (e.g., power,
Gaussian), there are several different types of kriging, each with different underlying
assumptions and capabilities. The types of kriging available in common computer con-
touring programs are briefly described below.

4.3.3.1 Ordinary, Simple, and Universal Kriging


Ordinary, simple, and universal kriging are the most common types of kriging mod-
els and are all linear predictors. This means predictions at target points are based on a
weighted average of neighboring measured values (Webster and Oliver 2001). The primary
difference between these kriging types is how they characterize the mean of the data val-
ues used in the interpolation, which is summarized in the following list:

• Simple kriging assumes a known mean value.


• Ordinary kriging assumes a constant, unknown mean value and estimates the
mean in the prediction neighborhood.
• Universal kriging models local means using polynomial equations.

In general, ordinary kriging should be used for most applications as it is rare that the true
mean of the data in question is precisely known. However, if the mean is definitively known,
then simple kriging will have the lowest mean-square error of the three methods and will
truly be the optimal interpolator (Krivoruchko 2011). Universal kriging is similar to ordinary
or simple kriging when using detrending, although the underlying concept is different. Trend
as described in Section 4.2.3.3 represents an external, long-range deterministic process acting
on the data, whereas the trend modeled in ordinary kriging is more of an internal variation
over short distances. Short-range trend is also termed drift (Webster and Oliver 2001).
Each of the above three kriging methods will generally produce similar prediction
maps; however, the prediction standard error maps may be significantly different. Simple
kriging may produce the most accurate prediction standard error map at times, but it also
tends to underpredict standard error. Universal kriging has the paradoxical problem of
requiring information about the drift to estimate the semivariogram model while also
requiring information about the semivariogram to characterize drift. These uncertainties
highlight the benefits of using the more simplistic ordinary kriging model unless com-
pelling technical justification exists to choose otherwise. In part for this reason but also
because of high data variability and the high cost of obtaining measurements, ordinary
kriging is generally the preferred method in the geosciences. Conversely, simple kriging is
more often used in meteorology (Krivoruchko 2011).
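
For readers who want to experiment outside ArcGIS, ordinary and universal kriging are also available in open-source tools; the sketch below assumes the PyKrige package, and the synthetic coordinates, heads, and variogram choice are illustrative only:

    import numpy as np
    from pykrige.ok import OrdinaryKriging
    from pykrige.uk import UniversalKriging

    rng = np.random.default_rng(0)
    x = rng.uniform(0, 1000, 30)
    y = rng.uniform(0, 1000, 30)
    z = 300.0 + 0.05 * x + rng.normal(0.0, 1.5, 30)   # heads with a weak trend

    gridx = np.arange(0.0, 1000.0, 25.0)
    gridy = np.arange(0.0, 1000.0, 25.0)

    # Ordinary kriging: constant but unknown mean in each search neighborhood
    ok = OrdinaryKriging(x, y, z, variogram_model="spherical")
    z_ok, ss_ok = ok.execute("grid", gridx, gridy)    # ss_* = kriging variance

    # Universal kriging: local mean modeled with a first-order (linear) drift
    uk = UniversalKriging(x, y, z, variogram_model="spherical",
                          drift_terms=["regional_linear"])
    z_uk, ss_uk = uk.execute("grid", gridx, gridy)

PyKrige implements the ordinary and universal variants; simple kriging, which requires the mean as an input, is available in tools such as Geostatistical Analyst.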

4.3.3.2 Cokriging
Oftentimes in hydrogeological applications, more data exist for one variable than another.
For example, 100 monitoring wells may be gauged on a quarterly basis for water-level
elevation, but only 50 of those wells may be sampled on an annual basis for laboratory
chemical analysis. There are significantly more water-level data because they are easier
and less expensive to collect than chemical data. However, chemical data and water-level
data are often correlated because chemical transport is determined by the groundwater
flow field. Therefore, the greater spatial and temporal density of water-level data can be
used to improve chemical concentration predictions at unsampled locations. This can be
accomplished through cokriging.
Cokriging is a form of kriging where predictions for a variable of interest (the pri-
mary variable, equal to the chemical concentration in the above example) are supple-
mented with data from a subsidiary, correlated variable (water-level data in the above
example—although most often, in professional practice, a surrogate chemical would
be used; Webster and Oliver 2001). Multiple subsidiary variables may be used to fur-
ther improve predictions, although each addition further complicates the model. For
the instance where one primary and one secondary variable are used, a semivariogram
or covariance model can be applied and fitted to each variable independently while
a covariance model is used to correlate between the two variables. Each variable can
also be transformed and detrended independently. The results of an example cokriging
exercise are presented in Section 4.5.2.6, with supplemental information included in the
companion DVD.

4.3.3.3 Indicator Kriging


To perform indicator kriging, measured values are first converted to binary variables, indi-
cating whether or not the measurements are above or below a prescribed threshold value.
For example, a measured contaminant concentration in groundwater could become a 1 if
the measurement is above its maximum contaminant level (MCL) or a 0 if the measure-
ment is below its MCL. Indicator kriging is the process of kriging these binary values to
produce a surface that can be interpreted as the probability of exceeding the applicable
threshold value. Continuing with the above example, the kriged surface would represent
the probability that groundwater concentrations exceed the MCL. A probability map of
contaminant concentrations in sediment created using indicator kriging is presented in
Figure 4.31.
To make indicator kriging even more useful, multiple threshold values can be speci-
fied to create multiple indicator variables (Webster and Oliver 2001). Cross-indicator vario-
grams can then be modeled to evaluate relationships between the different variables in a
process similar to cokriging.
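
Mechanically, indicator kriging is ordinary kriging applied to 0/1 transforms of the data, as the sketch below shows (hypothetical concentrations, a hypothetical 5-µg/L threshold, and an illustrative variogram; PyKrige assumed, with the nugget fixed at zero per the limitations discussed below and predictions clipped because kriged indicators can stray slightly outside [0, 1]):

    import numpy as np
    from pykrige.ok import OrdinaryKriging

    threshold = 5.0   # hypothetical action level, ug/L
    x = np.array([0., 50., 100., 150., 40., 90., 140., 20., 120., 70.])
    y = np.array([0., 20., 10., 30., 80., 70., 90., 140., 130., 40.])
    conc = np.array([0.5, 12., 3.4, 80., 0.2, 7.1, 1.0, 25., 0.8, 4.9])

    indicator = (conc > threshold).astype(float)   # 1 = exceedance, 0 = not

    ok = OrdinaryKriging(x, y, indicator, variogram_model="spherical",
                         variogram_parameters={"sill": 0.25, "range": 80.0,
                                               "nugget": 0.0})
    prob, ss = ok.execute("grid", np.arange(0., 160., 5.),
                          np.arange(0., 160., 5.))
    prob = np.clip(prob, 0.0, 1.0)   # interpret as P(concentration > threshold)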
Indicator kriging is attractive to many professionals because exceedance probability
is often a significant element of risk assessment. Environmental consultants, for exam-
ple, may see indicator kriging as an excellent means of demonstrating to regulators that
there is a low risk of exceeding action levels at exposure point locations, such as private
drinking-water wells. However, indicator kriging has numerous limitations that cast
serious doubt on its use in formal decision-making processes (Krivoruchko and Bivand
2009):

• Data detrending and transformation is not possible, which means that indicator
variables can significantly deviate from stationarity, and that prediction uncer-
tainty depends solely on measurement density. As a result, probability estimates
may be inaccurate, and prediction uncertainty will definitely be inaccurate.
• Kriging may not be the optimal interpolator for indicator variables.
• The semivariogram model may be inappropriate for discrete data.
• A nugget effect should not be used in indicator kriging as it contradicts the fun-
damental assumption of indicator kriging—namely, values are known exactly so
that comparisons with thresholds are exact. The absence of the nugget can result
in nonsensical discontinuities in the interpolation.


• Information regarding the extent to which measurement values differ from
threshold values is lost completely, which means that a result several orders of
magnitude above the threshold has the same impact on nearby probability as a
result that is a fraction of a percent above the threshold (Webster and Oliver 2001).

FIGURE 4.31
Example probability map created with indicator kriging (left) compared with map of contaminant concentrations for the same data set (right).

Considering the above uncertainties, it is not justifiable to use indicator kriging as the pre-
dominant decision tool for a hydrogeological application, especially where risk is involved.
Indicator kriging is better suited as a data exploration tool similar to histogram or trend
analysis (Krivoruchko 2011).
A better, albeit more complicated, alternative to indicator kriging is disjunctive kriging.
Disjunctive kriging is the simple kriging of data transformed to a standard normal distri-
bution using Hermite polynomials. The estimated values can then be compared with the
normal distribution to create an accurate probability map that does not lose information
regarding the extent to which data deviate from threshold values (Webster and Oliver
2001). The most common form of disjunctive kriging is Gaussian disjunctive kriging, which
assumes that the data in question (or the detrended, transformed data) follow a bivariate
normal distribution. Data are bivariate normal if linear combinations of paired values are
normally distributed with correlation coefficients that depend solely on separation distance
rather than absolute position. This assumption must be satisfied for disjunctive kriging to
produce reliable predictions. Geostatistical Analyst in ArcGIS supports disjunctive kriging
and has an Examine Bivariate Distribution tool to help determine if disjunctive kriging is
justified (Krivoruchko 2011). When assumptions are satisfied, disjunctive kriging is a power-
ful tool that reliably informs the hydrogeologist about exceedance probability.

4.3.3.4 Point and Block Kriging


Point kriging estimates the values of the points at the grid nodes. Block kriging estimates
the average value of the rectangular blocks centered on the grid nodes. The blocks are
the size and shape of a grid cell. Because block kriging is estimating the average value
of a block, it generates smoother contours (block averaging has a smoothing effect).
Furthermore, because block kriging is not estimating the value at a point, block kriging
is not an exact interpolator. That is, even if an observation falls exactly on a grid node,
the block kriging estimate for that node does not exactly reproduce the observed value
(Golden Software 2002). In Surfer, ordinary (no drift) and universal kriging (linear or qua-
dratic drift) algorithms can be applied to both point and block kriging types.

4.4 Contouring Potentiometric Surfaces


One of the most common tasks in professional hydrogeology is contouring the spa-
tial distribution of hydraulic head in groundwater, termed the potentiometric surface.
Potentiometric surface maps are also widely referred to as groundwater contour maps and
have countless applications, including, but not limited to

• Determining groundwater flow direction, volumetric flow rate, and linear velocity
• Modeling contaminant fate and transport
• Evaluating the impacts of groundwater extraction and discharge for water supply
purposes
• Evaluating regional hydrologic cycles

This section offers specific guidance on contouring potentiometric surfaces, emphasizing
the importance of hydrogeologic concepts and providing practical advice regarding how
contours should look for different applications.

4.4.1 Importance of Conceptual Site Model


The most important prerequisite for successfully contouring potentiometric surfaces is
a thorough knowledge of general groundwater flow principles. For example, a novice is
often caught drawing (or letting a computer program create) various depressions in the
potentiometric surface from which there is no escape of groundwater (see Figure 3.95).
Unless there is a valid hydrogeologic explanation (e.g., presence of a pumping well or
downward flow into an underlying aquifer through a window in the intervening aqui-
tard), these depressions are probably the result of inadequate interpretation or erroneous
data. Similarly, mysterious local mounds in the water table should be carefully examined
as they may represent perched groundwater or something more exotic, such as inflow
of water from leaky sewers or water lines. This all means that almost inevitable local irreg-
ularities in the potentiometric surface should not blur interpretations of the expected over-
all tendency of the groundwater flow in any specific case.
Measuring hydraulic heads and subsequently determining hydraulic gradients and
groundwater flow directions is by no means a straightforward task and requires good
planning by an experienced hydrogeologist. Ultimately, the number of monitoring wells,
their depths, screen locations, and frequency of water-level recordings will be based on
the final goal of the study. One common mistake is to apply the same approaches of
hydraulic head measurements in different types of aquifers. Fractured rock and karst
aquifers, in particular, present a great challenge even to more experienced professionals
when deciding how to collect and interpret the hydraulic head data. Because one portion
of the groundwater flow takes place in fractures or conduits and the other part within
the rock matrix, measurements of the hydraulic heads do not provide unique answers.
The interpretation of hydraulic head data should always be made in the overall hydro-
geologic context. For example, a group of closely spaced wells (say, at a meter to decame-
ter scale) may show a completely random distribution of the measured hydraulic head
as illustrated in Figure 4.32. In addition, one well may be completed in a homogeneous
rock block, without any significant fractures and with low matrix porosity, and may
even exhibit the so-called glass effect (no fluctuation of hydraulic head regardless of the
recharge dynamics). On the other hand, a well about 10 m away may show the hydraulic
head fluctuations of several meters or more. Using only the hydraulic head informa-
tion for the purposes of assessing representative groundwater flow directions (hydraulic
gradients) in this case would obviously not be sufficient. Another example illustrating
the complexity of groundwater flow in fractured rock and karst aquifers is presented in
Figure 4.33. By looking at the hydraulic heads measured in piezometers P4, P3, and P2,
and not knowing the lengths and positions of the screen intervals, one could erroneously
conclude that groundwater flows away from the spring (note that P3 is screened in all
three karst conduits and, therefore, has the same water level as P2, which is screened in
the shallowest conduit; P1 is screened in the deepest conduit, and P4 is screened in the
middle conduit). In conclusion, the interpretation of hydraulic head measurements in
these types of aquifers should be combined with hydrogeologic mapping, dye tracing,
and, certainly, a thorough understanding of groundwater flow through fractures and
conduits (Kresic 2007).

FIGURE 4.32
Relationship of river water level and piezometric levels at the hydrogeological cross section near Kocela, the Trebisnjica River, eastern Herzegovina. (From Milanović, P., Karst istocne Hercegovine i dubrovackog priobalja (Karst of Eastern Herzegovina and Dubrovnik Litoral). ASOS, Belgrade, 2006. With permission.)


FIGURE 4.33
Example of how closely spaced monitoring wells in karst may register very different hydraulic heads depend-
ing on the depth and length of well screens. Based on actual investigations near a large spring in Dinaric karst.
(Modified from Kupusović, T., Nas Krs, XV, 26–27, 21–30, 1989.)

Potentiometric surface contour maps are traditionally used to determine hydraulic gra-
dients and groundwater flow directions. However, one should always remember that a
contour map is a two-dimensional representation of a three-dimensional flow field, and as
such, it has limitations. If the area (aquifer) of interest is known to have significant vertical
gradients, and enough field information is available, it is always recommended to create at
least two contour maps: one for the shallow and one for the deeper aquifer depth. As with
geologic and hydrogeologic maps in general, a contour map should be accompanied with
several cross sections showing locations and vertical points of the hydraulic head mea-
surements with posted data. Probably the most incorrect and misleading case is when data
from monitoring wells screened at different depths are lumped together and contoured as
one average data package. A perfect example would be a karst aquifer with thick residuum
(regolith) deposits and monitoring wells screened in the residuum and at various depths
in the bedrock. If data from all the wells were lumped together and contoured as one data
set, it would be impossible to interpret where the groundwater is actually flowing for the
following reasons:

• The residuum is primarily an intergranular porous medium in unconfined condi-
tions (it has a water table), and horizontal flow directions may be influenced by
local (small) surface drainage features.
• The bedrock has discontinuous flow through fractures and conduits at different
depths, which is often under pressure (confined conditions), and may be influ-
enced by more regional features such as rivers or springs.

The flow in two distinct media (the residuum and the bedrock) may therefore be in two
different general directions at a particular site, including vertical gradients from the resid-
uum toward the underlying bedrock. Creating one average contour map for such a system
would not make any hydrogeologic sense.
A contour map of the hydraulic head is one of the two parts of a flow net, which is a
set of streamlines and equipotential lines, as shown in Figure 4.34 (top). A streamline (or
flow line) is an imaginary line representing the path of a groundwater particle as it flows
through an aquifer. Two streamlines bound a flow segment of the flow field and never
intersect, that is, they are roughly parallel when observed in a relatively small portion of
the aquifer. An equipotential line is the intersection of a horizontal plane and the equi-
potential surface—everywhere at that surface the hydraulic head has constant value, as
shown in Figure 4.34 (bottom). Note that the equipotential surface is curved, and does not
have to be, and usually is not vertical. Two adjacent equipotential lines (surfaces) never
intersect and can also be considered parallel within a small aquifer portion. These char-
acteristics are the main reasons why a flow net in a homogeneous, isotropic aquifer is
sometimes called the net of small (curvilinear) squares. However, as explained in Section
4.4.2, inevitable heterogeneity and anisotropy of porous media at realistic field scales cre-
ate various distortions of this ideal flow net.
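
A flow net also supports a quick flux estimate: for a net of squares, the flow through the mapped aquifer segment is Q = K·b·H·(Nf/Nd), where H is the total head loss across the net, Nf is the number of flow channels between streamlines, and Nd is the number of equipotential drops. A short worked example with assumed numbers:

    # Worked flow-net example (all numbers assumed for illustration)
    K = 5.0          # hydraulic conductivity, m/d
    b = 20.0         # aquifer thickness, m
    H = 12.0         # total head loss across the flow net, m
    Nf, Nd = 4, 10   # flow channels and equipotential drops counted off the net

    Q = K * b * H * Nf / Nd
    print(Q)         # 480.0 m3/d through the mapped segment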
A flow net does not change over time for steady-state flow conditions. In transient condi-
tions, a flow net represents the instantaneous flow field at a particular time, and it changes
for any other time. At least several data sets collected in different hydrologic seasons
should be used to draw groundwater contour maps for the area of interest. In addition to
recordings from piezometers, monitoring wells, and other water wells, every effort should
be made to record elevations of water surfaces in the nearby surface streams, lakes, ponds,
and other surface water bodies. One should also gather information on hydrometeoro-
logic conditions in the area for preceding weeks to months, paying special attention to
storm events (recharge episodes) and extended wet or dry periods. All of this information
is essential for creating correct contour maps and assessing the transient nature of the
flow net.

FIGURE 4.34
Example flow nets. Top: Map view. Bottom: Three-dimensional view. Flow in the segment between two flowlines, ΔQ, remains constant.

4.4.2 Heterogeneity and Anisotropy


Sediments and other rocks can be homogeneous or heterogeneous within some represen-
tative volume of observation. Clean beach sand made of pure quartz grains of similar size
is one example of a homogeneous rock (unconsolidated sediment). If, in addition to quartz
grains, there are other mineral grains, but they are all uniformly mixed without groupings
of any kind, the sediment is still homogeneous. At some limited scales (say, centimeter
to decameter), measurements are hardly ever representative of large volumes of an aqui-
fer or aquitard. For simplification purposes, and when different groupings of minerals or
sediment grains within the same rock behave similarly relative to groundwater flow, one
may consider such volume as homogeneous and representative. In reality, however, all
aquifers and aquitards are more or less heterogeneous, and it is only a matter of conven-
tion, or agreement between various stakeholders, which portion of the subsurface under
investigation may be considered homogeneous. At the same time, assuming homogeneity
of an aquifer volume that seems appropriate for general water-supply purposes may be
completely inadequate for characterizing contaminant fate and transport (Kresic 2007).
In general, anisotropy and heterogeneity are a result of the so-called geologic fabric of rocks
comprising aquifers and aquitards. Geologic fabric refers to spatial and geometric relation-
ships between all elements of which the rock is composed, such as grains of sedimentary
rocks and the component crystals of magmatic and metamorphic rocks. Fabric also refers
to discontinuities in rocks, such as fissures, fractures, faults, fault zones, folds, and bed-
ding planes (layering). Without elaborating further on the geologic portion of hydrogeol-
ogy, it is appropriate to state that groundwater professionals lacking a thorough geologic
knowledge (i.e., nongeologists) would likely have various difficulties in understanding the
many important aspects of heterogeneity and anisotropy.
One such aspect of heterogeneity is that groundwater flow directions change at bound-
aries between rocks (sediments) of notably different hydraulic conductivity, such as the
ones shown in Figure 4.35. An analogy would be refraction of light rays when they enter
a medium with different density, for example, from air to water. The refraction causes the
incoming angle, or angle of incidence, and the outgoing angle, or angle of refraction, to
be different (angle of incidence is the angle between the orthogonal line at the boundary
and the incoming streamline; angle of refraction is the angle between the orthogonal at
the boundary and the outgoing streamline). The only exception is when the streamline is
perpendicular to the boundary—in which case both angles equal zero (measured from the orthogonal) and the streamline crosses the boundary without refraction. The situa-
tion shown in Figure 4.35 applies to both map and cross-sectional views as long as there is
a clearly defined boundary between the two porous media.

[Figure 4.35 schematic: streamline refraction across a K1/K2 boundary, following the tangent law K1/K2 = tan α1/tan α2; α2 > α1 where K2 > K1, and α2 < α1 where K2 < K1.]

FIGURE 4.35
Refraction of groundwater flowlines (streamlines) at a boundary of higher hydraulic conductivity (top) and a
boundary of lower hydraulic conductivity (bottom). Angle of incidence and angle of refraction are denoted with
α1 and α2, respectively. Hydraulic conductivity is denoted with K.
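
The refraction in Figure 4.35 follows the tangent law, tan α1/tan α2 = K1/K2. A quick check with assumed values shows how strongly streamlines bend toward parallelism with the boundary when entering a much more conductive unit:

    import math

    K1, K2 = 1.0, 10.0        # m/d; flow passes into a more conductive layer
    a1 = math.radians(30.0)   # angle of incidence, measured from the orthogonal
    a2 = math.atan(math.tan(a1) * K2 / K1)   # tangent law: tan(a2)/tan(a1) = K2/K1
    print(round(math.degrees(a2), 1))        # ~80.2 degrees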

One key parameter for various calculations of groundwater flow rates is the transmis-
sivity of porous media. For practical purposes, it is defined as the product of the aquifer
thickness (b) and the hydraulic conductivity (K):

T = b × K. (4.8)

It follows that an aquifer is more transmissive (more water can flow through it) when it has
higher hydraulic conductivity and when it is thicker. The knowledge of this relationship helps
in interpretation of hydraulic head data, possible reasons for changes in the hydraulic gradi-
ent (see Figure 4.36), and creation of potentiometric contour maps. An example of a hetero-
geneous flow field simulated by a groundwater flow model and the resulting contours of the
potentiometric surface is shown in Figure 4.37. The refraction of contour lines and changes
of the groundwater flow directions and the hydraulic gradients are caused primarily by the
variations of the hydraulic conductivity. These changes happen over short distances and illus-
trate how relatively small differences in hydraulic conductivity may have a very significant
effect, which would not have been apparent if the aquifer were interpreted as homogeneous.
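
A simple worked example (numbers assumed) shows how Equation 4.8 translates into contour spacing:

    b = 15.0                      # aquifer thickness, m
    K_sand, K_silt = 20.0, 0.5    # hydraulic conductivity, m/d
    T_sand = b * K_sand           # 300 m2/d
    T_silt = b * K_silt           # 7.5 m2/d
    # For the same flow rate Q, the gradient i = Q / (T * width) scales
    # inversely with T, so contours bunch together in the low-T zone
    # (compare areas a and b with area c in Figure 4.37).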
In hydrogeology, anisotropy specifically refers to groundwater velocity vectors. If the
groundwater velocity is the same in all spatial directions, the porous medium is isotropic. If
the velocity varies in different directions, the porous medium is anisotropic. If the ground-
water velocity is anisotropic, the hydraulic conductivity is anisotropic as well. In fact,
when talking about anisotropy in hydrogeology, one usually refers to anisotropy of the
hydraulic conductivity rather than the groundwater velocity. Figure 4.38 illustrates, in two
dimensions, just some of the many possible causes of anisotropy. It is important to under-
stand that some degree of anisotropy can (and usually does) exist in all spatial directions.
It is for reasons of simplification and/or computational feasibility that hydrogeologists
consider only the three main perpendicular directions of anisotropy: two in the horizon-
tal plane and one in the vertical plane. In the Cartesian coordinate system, these three
directions are represented with the X, Y, and Z axes. Unfortunately, when the analyzed
groundwater system is quite complex, anisotropic, and influenced by various hydraulic
boundaries, it may be impossible to draw a correct contour map and determine likely

groundwater flow directions based on a limited data set of hydraulic head measurements.
Ultimately, constructing a numeric groundwater model and testing assumptions about
various factors influencing the groundwater flow may be the only reasonable approach in
such a case. Figure 4.39 shows output from a portion of a model used to test the influence of
anisotropy on tracks of particles released at certain locations in the aquifer.

FIGURE 4.36
Maps (top) and cross sections (bottom) showing how changes in aquifer transmissivity affect potentiometric contours. In general, lower transmissivities are associated with steeper hydraulic gradients (more closely spaced contours), and higher transmissivities are associated with more widely spaced contours.

FIGURE 4.37
Heterogeneous groundwater flow field created in a numeric groundwater model through extensive variation of hydraulic conductivity (aquifer thickness is uniform). Lower hydraulic conductivity results in steeper hydraulic gradients (example areas a and b); higher hydraulic conductivity results in more widely spaced contours, such as in area c.

FIGURE 4.38
Some possible reasons for anisotropy of hydraulic conductivity. (a) Sedimentary layers of varying permeability; (b) orientation of gravel grains in alluvial deposit; (c) two sets of fractures in massive bedrock.

FIGURE 4.39
Modeling output demonstrates how anisotropy affects particle transport in an aquifer (panels compare an isotropic case, KX = KY, with an anisotropic case, KX = 4KY). (From Kresic, N., Groundwater Resources. Sustainability, Management, and Restoration, McGraw Hill, New York, 2009. With permission.)

4.4.3 Influence of Hydraulic Boundaries


It has become standard practice in hydrogeology and groundwater modeling to describe
the inflow and outflow of water from an aquifer with three general boundary conditions:
(1) known flux, (2) head-dependent flux, and (3) known head, where flux refers to the
groundwater flow rate and head refers to the hydraulic head. More detail on the meaning
and use of various boundary conditions as they relate to groundwater modeling concepts
is provided in Chapter 5. For the purposes of creating contour maps of the potentiometric
surface, either manually or with computer programs, it is important to remember the fol-
lowing simple rules regarding the influence of hydraulic boundaries:

• Contour lines must meet impermeable (no-flow) boundaries at right angles.


• Contour lines must parallel equipotential boundaries.

One of the most important aspects of creating contour maps in alluvial aquifers is to
determine the relationship between groundwater and surface water features. In hydraulic
terms, the contact between an aquifer and a surface water body is an equipotential bound-
ary. In case of lakes and wetlands, this contact can be approximated with the same hydrau-
lic head. In case of flowing streams, the hydraulic head along the contact decreases in the
downgradient direction (both the surface water and groundwater flow downgradient). If
enough measurements of a stream stage are available, it is relatively easy to draw the water
table contours near the river and to finish them along the river–aquifer contact. However,
often little or no precise data is available on a river stage, and at the expense of precision, it
has to be estimated from a topographic map or from the monitoring well data by extrapo-
lating the hydraulic gradients until they intersect the river. Figure 4.40 shows some of the
examples of surface water–groundwater interaction represented with the hydraulic head
contour lines.

FIGURE 4.40
Basic hydraulic relationships between groundwater and surface water shown in cross-sectional views (top) and map views using hydraulic head contour lines. (a) Perennial gaining stream; (b) perennial losing stream; (c) perennial stream gaining water on one side and losing water on the other side; (d) losing stream disconnected from the underlying water table, also called ephemeral stream; (e) contour lines following recent rise (case #2) and then drop (case #3) in a river stage. (From Kresic, N., Hydrogeology and Groundwater Modeling, Second Edition, CRC Press/Taylor & Francis, Boca Raton, FL, 2007. With permission.)
An illustration of how various boundary conditions can affect distribution of the
hydraulic head and groundwater flow directions is shown in Figure 4.41 with an example
of a basin-fill basin. Such basins, common in the semiarid western United States, may have
permanent (perennial) or intermittent surface streams and may be recharged by surface
water runoff and underflow from the surrounding mountain fronts. They can also be con-
nected with adjacent basins, thus forming rather complex groundwater systems with vari-
ous local and regional water inputs and water outputs. Availability of the hydraulic head
data at various locations within the basin, and at various times, will determine the accu-
racy of the hydraulic head contours, which therefore may or may not show the existence
or influence of various boundary conditions. Figure 4.41 (top) indicates general inflow of
groundwater from the east and outflow to the west with no other water inputs (i.e., all
other basin boundaries are assumed to be impermeable). Figure 4.41 (bottom) shows the
influence of two streams (A and B) entering the basin and losing water to the aquifer a
short distance from the boundary. It also shows a hydraulic connection between the aqui-
fer and the river flowing through the basin, including river reaches that lose water to or
gain water from the aquifer.

FIGURE 4.41
Top: Hydraulic head contour lines in a basin-fill basin, assuming no influence of the surface stream flowing through it. Arrows indicate general directions of groundwater flow. Bottom: Influence of two surface streams (A and B) flowing into the basin from the surrounding bedrock areas and losing all water to the underlying aquifer a short distance from the contact. Note wider contour lines in the central portion of the basin where it is thicker and more transmissive. The main stream is hydraulically connected with the underlying aquifer; the blue line indicates the gaining section of the stream; the dashed red line indicates the losing section of the stream. (From Kresic, N., Groundwater Resources. Sustainability, Management, and Restoration, McGraw Hill, New York, 2009. With permission.)
The most common mistake when creating potentiometric maps of unconfined aqui-
fers is to let a computer program ignore the presence of surface water features and
accept the results as is. Figure 4.42 shows a comparison between the theoretical (true)
potentiometric contours generated by a numeric model and the contours created from
a limited data set in which the model solution at certain model cells represents the
hydraulic head recorded in the field at monitoring wells. Figure 4.42 was created in
Surfer using default linear kriging and ignoring the river. In comparison, the map in
Figure 4.43 takes the river stage into account by utilizing a breakline option in Surfer. It
is apparent that considering the river creates a much better map, even though all field
data points are relatively far from it, highlighting the importance of installing staff
gauges at field sites.

FIGURE 4.42
Comparison of the theoretical contour map of the water table (blue lines; elevation in feet above mean sea level) and interpolated contours (brown lines) using default linear kriging in Surfer. The small brown circles simulate monitoring wells and water-table elevations recorded in the field. The shaded area is, by default, defined by the extent of X and Y coordinates of the data.
A breakline is a three-dimensional boundary file that defines a line with X, Y, and Z
values at each vertex. When the gridding algorithm sees a breakline, it calculates the Z
value of the nearest point along the breakline and uses that value in combination with
nearby data points to calculate the grid node value. Surfer uses linear interpolation to
determine the values between breakline vertices when gridding. Breaklines are not barri-
ers to information flow, and the gridding algorithm can cross the breakline to use a point
on the other side of the breakline. If a point lies on the breakline, the value of the break-
line takes precedence over the point. Breakline applications include defining streamlines,
ridges, and other breaks in the slope. Breaklines can be created in any text editor or
directly within Surfer using the Digitizer tool. The format of the breakline file is shown
in Figure 4.43.

FIGURE 4.43
Contour map created in Surfer when using the breakline function to incorporate the river, yielding much better results. The theoretical contour map of the water table is shown with blue contours; the interpolated contours using default linear kriging in Surfer are shown with orange-brown lines. The format of the breakline is shown in the box. The brown dots simulate field data.
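
The breakline itself is an ordinary text file. A hypothetical example in Surfer's .bln style is shown below, assuming a header line giving the vertex count (and a flag) followed by one X, Y, Z row per vertex; the coordinates here are invented for illustration and trace a river whose stage decreases downstream:

    5,1
    1000.0, 2000.0, 295.4
    1050.0, 1950.0, 295.1
    1100.0, 1900.0, 294.8
    1150.0, 1850.0, 294.5
    1200.0, 1800.0, 294.2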
Another example of using breaklines is illustrated in Figure 4.44, which shows contours
of a potentiometric surface created by three pumping wells located in the floodplain of a
slow-moving perennial river. The wells are pumping from the unconfined alluvial aqui-
fer. The left map shows contours where the program does not include the river stage (i.e.,
the interpreter ignores the hydraulic connection between the river and the aquifer), and
the right map shows contours created by Surfer where a breakline simulating the river
is included. Figure 4.45 shows the final map where senseless contours in the areas with-
out data points are not displayed (Surfer does this automatically using blanking polygons
defined by the user).

FIGURE 4.44
Left: Contour map of a water table influenced by three pumping wells near a river when the hydraulic connec-
tion between the aquifer and the river is not accounted for. Dashes on contour lines indicate groundwater flow
(gradient) direction. Orange circles are locations of wells with field measurements. Right: The same map when
the river elevation is accounted for by using breakline in Surfer. Note nonsensical contours created by the pro-
gram in the areas without data points, including on the other side of the river.

FIGURE 4.45
Final contour map of alluvial aquifer after postprocessing of the results using grid blanking in Surfer (nonsensi-
cal contours in the area without data points are blanked, i.e., not displayed).

4.5 Contouring Contaminant Concentrations


Another important application of spatial interpolation in professional hydrogeology is contour-
ing chemical concentrations in groundwater, soil, and sediment. The most common example,
widely used in site investigation and remediation, is contouring contaminant concentrations
in groundwater. Groundwater contaminant contour maps are also referred to as plume maps
and, in homogeneous isotropic aquifers, typically resemble the classical jet engine exhaust
plume. Contaminant contour maps are often critical elements of the CSM and are used to

• Determine the overall extent of the contaminant plume


• Delineate areas with concentrations above applicable regulatory criteria (e.g., MCLs)
• Evaluate risk to receptors, such as water-supply wells, and receiving surface water
features
• Estimate the mass of contaminants in the subsurface
• Evaluate groundwater remediation effectiveness, including monitored natural
attenuation processes, by assessing changes in plume architecture, composition,
and mass over time

Unfortunately, it is much more difficult to generate accurate and useful contaminant con-
tour maps than potentiometric surface maps, primarily because of significantly higher
data variability and cost of data collection. Faced with this difficulty, many hydrogeolo-
gists create plume maps that look completely unrealistic or are so coarse in resolution that
they more accurately resemble an area of contaminant detections rather than a spatially
interpolated surface. Hydrogeologists may simply say there is too much variability to per-
form kriging on contaminant concentrations and resort to a manual map devoid of statisti-
cal interpolation and uncertainty assessment. This section illustrates how to successfully
create geostatistical groundwater plume maps through a synthetic example, highlighting
important concepts and demonstrating the kriging techniques described throughout this
chapter.

4.5.1 Importance of Conceptual Site Model


Creating defensible contaminant contour maps depends upon a solid CSM for contami-
nant fate and transport. First, it is critical to understand the nature of the contaminant
release to the environment. The questions below should be posed and answered by the
hydrogeologist:

• What chemicals were released to the environment?


• In what form were the chemicals during release [i.e., nonaqueous phase liquids
(NAPLs), dissolved in water, or as separate solids or gasses]?
• Did the release occur as a point source, such as a dumping location like a trench,
drain, pond, catch basin, or otherwise? Did the release occur as a diffuse source
spread discontinuously in time and space over a larger area, such as releases
occurring during flood events or land applications?
• Over what duration and in what quantity did the release occur?

Second, it is important to understand the chemical properties of the contaminant(s)


in question as they determine partitioning in the environment. In general, chemicals
released to the environment may dissolve into water; adsorb onto solid matrices, such as
soil; volatilize into air; or exist as separate NAPLs. Water solubility and the distribution
coefficients between soil and water (Kd) and water and air (Henry’s law constant) are the
most useful quantitative measures of contaminant partitioning. Conceptually, these coef-
ficients inform the hydrogeologist as to the preferential state of the chemical under site-
specific conditions. For example, based on chemical property analysis, the hydrogeologist
can expect perchlorate and 1,4-dioxane to readily dissolve in groundwater and migrate
long distances at the site in question. Conversely, a more hydrophobic contaminant such
as naphthalene can be expected to be present at lower concentrations in groundwater and
have its groundwater transport velocity retarded by partitioning onto the organic carbon
content of soil. It is important for the hydrogeologist to possess reliable reference materials
for the chemical properties of common environmental contaminants.
Finally, after characterizing the release and the involved chemicals, the hydrogeolo-
gist must characterize the resulting groundwater plume based on existing data. This
involves evaluating the vertical and lateral extent of migration in the context of site-spe-
cific hydrogeologic data, such as stratigraphy, lithology, bedrock features, hydraulic con-
ductivity, hydraulic gradients, and the locations of groundwater recharge and discharge,
such as surface water features and wells. Heterogeneity plays an important role in this
assessment as it can substantially influence contaminant migration both vertically and
laterally. Figure 4.46 illustrates a modeled example where a low-permeability clay layer
in the saturated zone causes vertical plume bifurcation. Hydrogeologists should be cog-
nizant of the potential for these effects in real field conditions and design drilling and
sampling programs to evaluate heterogeneity. Whenever sufficient information exists,
the hydrogeologist should also classify the groundwater plume as being increasing, sta-
ble, or decreasing as demonstrated in Figure 4.47. This informs the hydrogeologist as to
the strength of the remaining source and the extent of contaminant transformation and
decay (i.e., biodegradation) in the subsurface, which are the predominant determinants
of plume longevity. Figure 4.48 shows a commonly applied rule of thumb for assessing
the possible presence of NAPL phase in the subsurface, which greatly complicates the
overall characterization and prediction of contaminant fate and transport (from Kresic
2009).

FIGURE 4.46
Top: Simulated vertical plume bifurcation in the saturated zone as a result of heterogeneity (presence of a clay lens). Effects such as this may cause contamination of water-supply aquifers at different depths. Bottom: Development of the same plume assuming a homogeneous aquifer. Both views are cross-sectional.

FIGURE 4.47
Influence of various fate and transport processes on plume development (panels trace expanding, stable, shrinking, and detached plume stages from baseline to current conditions). While most fate and transport processes may be present in any given case, the bullets list only those with the greatest possible net effect. (Modified from United States Environmental Protection Agency, The Report to Congress: Waste Disposal Practices and Their Effects on Ground-Water, EPA 570977001, 1977.)

[Figure 4.48 contour labels: 0.005, 0.1, 10, and >100 mg/L.]

FIGURE 4.48
Delineation of potential aquifer zones with DNAPL trichloroethene (TCE) based on the 1% to 10% solubility
rule of thumb. TCE has aqueous solubility approximately between 1100 and 1400 mg/L. The aquifer area that
may contain residual DNAPL is assumed to be within the 100 mg/L concentration contour or approximately
8% of the pure phase solubility. Note that in the case of a DNAPL mixture, the effective solubility of TCE would
be less than the pure phase solubility. (From Kresic, N., Groundwater Resources. Sustainability, Management, and
Restoration. McGraw Hill, New York, 2009. With permission.)

In summary, the hydrogeologist should be able to tell a story, describing the origins
of the contamination and explaining how current conditions came to be and how condi-
tions will potentially change in the future. Many complicated stories occur in real-world
conditions, involving comingled plumes, chain decay of contaminants, diffusion into low
permeability clay or bedrock (see the technical impracticability discussion in Chapter 8),
or plumes that move in different directions at different vertical intervals as illustrated in
Figure 4.49. Clear, information-rich data visualizations are essential to support difficult
contaminant fate and transport concepts, the most common and useful form of which are
contaminant concentration contour maps.
[Figure 4.49: potentiometric contours (labeled 46–60) with perchlorate plume legend: 10, 100, and 1,000 µg/L.]
FIGURE 4.49
Example graphic demonstrating change in planar transport direction with depth. Solid black contours repre-
sent the potentiometric surface of shallow groundwater, and dashed red contours represent the potentiometric
surface of deep groundwater. Solid colored, filled contours represent the shallow perchlorate plume, and the
hatched filled contours represent the deep perchlorate plume. In this case, the direction of perchlorate transport
rotates approximately 60° at depth. Introduction of perchlorate to deep groundwater is caused by downward
vertical gradients. A conceptual justification must exist for this behavior, such as the presence of deep pumping
wells and/or thin or absent low-permeable layer between the shallow and deep water-bearing zone.

4.5.2 Example Application


This example application is a synthetic problem created with a three-dimensional, numeric
groundwater model. A slug of contaminant was introduced into the model, and a plume
was allowed to develop. A highly heterogeneous distribution of hydraulic conductivity
was used to create a nonlinear flow field. The distribution of hydraulic conductivity and
the groundwater flow field created by the model are depicted in Figure 4.50. After simulat-
ing the plume transport in the model domain, contaminant concentrations were extracted
from locations representing monitoring wells for use in the contouring exercise. Modeled
contaminant concentration contours and measurements at the time of extraction (or moni-
toring) are presented in Figure 4.51.
The objective of this exercise is to use the extracted concentrations to recreate the contours
depicted in Figure 4.51. However, we are, in essence, pretending that we do not know how
these contours should look; therefore, statistical evaluation of the spatial interpolation model
(kriging) will be important. A secondary objective is to use the spatial interpolation model to
estimate concentrations at five unsampled locations and to see how the estimates compare to
the real values calculated by the model. This is a form of model verification. Prediction loca-
tions are presented in Figure 4.52 in addition to contours of contaminant concentration created
by the groundwater model. Geostatistical analysis and contaminant contouring presented in
this section are performed using the Geostatistical Analyst extension for ArcGIS.

FIGURE 4.50
Distribution of hydraulic conductivity and the resulting simulated potentiometric surface used to generate the synthetic contaminant concentration map. Note that transport vectors will be nonlinear.

FIGURE 4.51
Simulated concentration contours and point values extracted from the model for the contouring example. Point values represent measurements from monitoring wells in real-life applications.

FIGURE 4.52
Polyline contours of contaminant concentration produced by the groundwater model, point measurements, and unsampled locations (TW-1 through TW-5) where concentration estimates are required.
For all of the presented kriging models, a result equal to one-half the reporting limit was
substituted for nondetect observations. In other words, all near-zero values simulated by
the model are replaced with a value of 0.05 μg/L for the purposes of contouring. Censored
environmental data pose many challenges to the hydrogeologist, and proper methods for
accounting for nondetections do exist in descriptive (e.g., calculating means and standard
deviations) and inferential statistics (e.g., performing hypothesis testing). As ProUCL (see
Chapter 3) and other commercial statistical software packages incorporate these methods,
there is no justification for using data substitution methods (which can introduce significant
bias) for descriptive and inferential statistics. The reader is referred to the work of Helsel
(2005) for further information regarding statistics for censored data. However, there are currently
no computer programs available to the general public that incorporate censored data
for geostatistical applications (Krivoruchko 2011). Therefore, substitution is the only available
option, and the hydrogeologist must assess the possible impacts of these substitutions on the
resulting interpolated surface and uncertainty estimates.

FIGURE 4.50
Distribution of hydraulic conductivity and the resulting simulated potentiometric surface used to generate the
synthetic contaminant concentration map. Note that transport vectors will be nonlinear.

FIGURE 4.51
Simulated concentration contours and point values extracted from the model for the contouring example. Point
values represent measurements from monitoring wells in real-life applications.

FIGURE 4.52
Polyline contours of contaminant concentration produced by the groundwater model, point measurements, and
unsampled locations (TW-1 through TW-5) where concentration estimates are required.
Ordinary kriging was used for each spatial interpolation scenario presented in this sec-
tion as the true mean concentration is unknown.
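For readers who wish to experiment outside ArcGIS, the steps described above can be sketched with open-source tools. The Python snippet below is a minimal illustration, not a reproduction of the Geostatistical Analyst workflow: it substitutes one-half of an assumed reporting limit for nondetects, log-transforms the data, and performs ordinary kriging with the pykrige package. The well coordinates, concentrations, and reporting limit are hypothetical.

```python
import numpy as np
from pykrige.ok import OrdinaryKriging

# Hypothetical monitoring-well data; np.nan marks nondetect results
x = np.array([0.0, 120.0, 250.0, 400.0, 560.0])
y = np.array([0.0, 80.0, 160.0, 90.0, 200.0])
conc = np.array([8300.0, 570.0, 34.0, np.nan, 0.3])

reporting_limit = 0.1                                  # ug/L, assumed for illustration
conc = np.where(np.isnan(conc), reporting_limit / 2.0, conc)
log_conc = np.log10(conc)                              # lognormal kriging works in log space

# Ordinary kriging of the log-transformed data with a Gaussian variogram model;
# pykrige also accepts anisotropy_scaling and anisotropy_angle arguments
ok = OrdinaryKriging(x, y, log_conc, variogram_model="gaussian", nlags=6)
gridx = np.linspace(x.min(), x.max(), 50)
gridy = np.linspace(y.min(), y.max(), 50)
zlog, ss = ok.execute("grid", gridx, gridy)            # predictions and kriging variance

# Naive back-transform; simple exponentiation is biased, and a proper
# lognormal back-transform correction should be considered in practice
conc_grid = 10.0 ** zlog
```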

4.5.2.1 Default Parameters


To illustrate a common practice in environmental consulting, an attempt was made to develop
concentration contours using Geostatistical Analyst’s default ordinary kriging model. No
data exploration or variography was completed—instead, an exercise in button pressing was
performed to see what the computer could do. In consulting practice, default contouring is
often performed by GIS specialists (using Geostatistical Analyst or, worse, Spatial Analyst;
see related discussion in Section 4.6) or AutoCAD professionals. Project managers and project
hydrogeologists who allow or encourage this practice are ignoring the benefits of geostatistical
interpolation that have been outlined in this chapter. Even if the default contours look accept-
able, the hydrogeologist would be unable to mount a scientific defense if the contours ever
came under intense regulatory or legal scrutiny. Creating default contours and then tweaking
them manually is no different from manual contouring and is not a geostatistical approach.
The unadjusted, default semivariogram and semivariogram model (stable) for the con-
taminant concentration data are presented in Figure 4.53. As shown by the graph, there

FIGURE 4.53
Default semivariogram for the contaminant concentration data. Spatial correlation is unrecognizable, and the
default lag coverage (product of lag size and number of lags) encompasses the entire data set. Note that the
semivariance (γ) labels on the Y axis are scaled by a factor of 10⁻⁶. Esri® ArcGIS Geostatistical Analyst graphical
user interface. Copyright © Esri. All rights reserved.

is an extreme amount of variability in the data set, as the highest calculated semivariance
is close to 10 × 10⁶ (ten million) (μg/L)². The semivariogram map, or surface, presented
in the lower left corner of Figure 4.53, plots the semivariogram values in polar coordinates
with a common center using the distance and angle geometries of the data pairs. An
expanded version of the semivariogram map is presented in Figure 4.54 and shows that
areas of extreme variability are aligned along the NW–SE axis (the direction of groundwater
flow—note that north is up on all figures in Section 4.5) because of the precipitous
decline in concentration moving downgradient of the source area. The default semivariogram
and search neighborhood properties are accepted, and the resulting default kriging
contour map is shown in Figure 4.55. The default surface looks absurd because excessive averaging
was performed such that the contaminant source area (represented by the 5000 μg/L
contour line) is grossly underpredicted, and nondetect areas are grossly overpredicted.
The associated default prediction standard error map depicted in Figure 4.56 reflects this
overaveraging, as the standard error exhibits relatively small variation across the interpolated
extent. Also, note that the prediction error is clearly independent of the data values,
as prediction standard error decreases in areas of greater sampling density independent
of concentration.
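To make the semivariance calculation concrete, the following minimal NumPy sketch computes an isotropic empirical semivariogram of the kind plotted in Figure 4.53. It is an illustration only, not the Geostatistical Analyst implementation; x, y, and z are assumed to be hypothetical arrays of well coordinates and measured values.

```python
import numpy as np

def empirical_semivariogram(x, y, z, lag_size, n_lags):
    """Isotropic empirical semivariogram: for each lag bin, average
    0.5*(z_i - z_j)**2 over all pairs whose separation falls in the bin."""
    coords = np.column_stack((x, y))
    dist = np.sqrt(((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1))
    semivar = 0.5 * (z[:, None] - z[None, :]) ** 2
    iu = np.triu_indices(len(z), k=1)          # count each pair once
    dist, semivar = dist[iu], semivar[iu]
    edges = np.arange(n_lags + 1) * lag_size
    centers, gamma = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (dist >= lo) & (dist < hi)
        if in_bin.any():
            centers.append(0.5 * (lo + hi))
            gamma.append(semivar[in_bin].mean())
    return np.array(centers), np.array(gamma)
```

Plotting gamma against centers reproduces the averaged semivariance points shown on the semivariogram graphs in this section.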
The default kriging model was blindly accepted in this case, representing a situation
where the computer user is completely ignorant of the kriging process. However, if the
computer user is a novice hydrogeologist or someone with a little knowledge about kriging,
it is entirely possible that the model could be rejected—although for the wrong reason.
The novice hydrogeologist may look at the noisy semivariogram in Figure 4.53 and conclude
that spatial correlation does not exist and that a deterministic method such as IDW
must be used instead of kriging. This interpretation is wrong on many levels. For one, spatial
correlation clearly exists in the case of contaminant fate and transport in groundwater,
with concentrations at closely located points being much more similar than concentrations
at points located farther away from each other. Second, even the use of a deterministic
model requires spatial correlation in the data, a fact not understood by the novice.
Third, no data exploration was conducted to determine whether detrending and/or transformations
were required to eliminate the extreme variability that is masking the spatial correlation
in the semivariogram. The first step in any successful geostatistical analysis is data
exploration, which is illustrated in the next section.

[Figure 4.54 graphic: semivariogram map, color scale from 0 (low) to 9.71785e+006 (high)]

FIGURE 4.54
Default semivariogram map for the contaminant concentration data. Note that the greatest variance is located
along the axis of groundwater flow at large separation distances, where the contaminant source area (or hot
spot) is paired with nondetections.

[Figure 4.55 graphic: plan-view map with posted well concentrations (ND to 8300 µg/L) and a filled kriged-concentration legend from 0.05–1 to 1,000–5,000 µg/L]

FIGURE 4.55
Concentration contour map created using default kriging options. Filled, colored contours represent the kriging
output, and black polyline contours represent the real contours created by the groundwater model. The interpolated
surface is obviously unusable for any application.
[Figure 4.56 graphic: plan-view map with posted well concentrations and a prediction standard error legend from 400–430 to 670–700 µg/L]
FIGURE 4.56
Prediction standard error map for the default kriging model. The low variability in prediction standard error
reflects the overaveraging of the data set and the fact that local error is solely a function of sampling density.

4.5.2.2 Data Exploration


The three principal steps of geostatistical data exploration are

• Semivariogram analysis
• Distributional assessment
• Trend analysis

Semivariogram analysis consists of examining the shape and directional dependence of
the semivariogram and determining which pairs of data exhibit the largest variability. The
latter step is useful in identifying outliers. In Geostatistical Analyst, the exploratory semi-
variogram is interactive, which means that points selected in the graph will be highlighted
in the map display. In this manner, one can easily identify the data pair that comprises a
point on the semivariogram. For the contaminant concentration example, the exploratory
semivariogram is presented in Figure 4.57 and shows that a relatively small number of
points contribute to the majority of the variability in the data set. These are likely extreme

FIGURE 4.57
Exploratory semivariogram for the contaminant concentration data. Six pairs with extremely high variability
at relatively low separation distances have been selected (highlighted in aqua) for identification. Esri® ArcGIS
Geostatistical Analyst graphical user interface. Copyright © Esri. All rights reserved.

values or outliers, which, in certain cases, may be removed from the model. However, when a
few of these pairs are selected on the semivariogram (highlighted in aqua color in Figure 4.57),
one can see that they are all related to the contamination hot spot or source area as shown in
Figure 4.58. Each of the selected pairs includes either the 8300 μg/L result or the 7500 μg/L
result. As these two data points are critical to the CSM, they cannot be removed from the data
set. It is likely that prediction error will be higher in the hot spot because of the outlier behavior
of these data points. Anisotropy can also be assessed with the semivariogram, as directional
searches can be used to see whether the range of correlation changes with direction. At this stage, however, it may
be difficult to discern large-scale trend from short-scale anisotropy.
As stated in Section 4.3, kriging performs best when input data follow a normal distri-
bution. Distributional assessment consists of analyzing histograms and quantile–quantile
plots to determine if the data are normally distributed and, if not, to determine if the data
can be transformed so that the transformed data follow a normal distribution. A histo-
gram is a bar graph where the bar width shows the range of values within a group, and the
bar height shows the frequency with which values fall in that range. A quantile–quantile
plot charts the quantiles of the data against those of the standard normal distribution. The
most common data transformations are logarithmic (log), Box–Cox, arcsine, and normal
score. Normal score is the only method that will always produce normally distributed
transformed data. However, normal score transformation can only be used with simple
kriging because it assumes a known mean, and it performs poorly when the data have
many repeated values (Krivoruchko 2011). Additionally, normal score transformation is
the only method that occurs after data detrending, which can limit the efficacy of detrending
when data have extreme values, as is the case with the contaminant concentration data.

FIGURE 4.58
Graphic showing the individual measurements involved in the high-variability pairs.

The default histogram for the contaminant concentration data is presented in Figure
4.59 and shows that the data strongly deviate from the normal distribution. Instead of
following the classical bell-curve shape, the data are predominately grouped at low values,
with the exception of a few high-value outliers, creating a tail shape. The mean and median
deviate significantly, which is another indicator of nonnormality. Figure 4.60 presents the
histogram of the data after they have been log-transformed. With the exception of an
abnormally high frequency of data values within the first, least-value bin (which contains
nondetect values), the chart bears more resemblance to the normal distribution, with a
mean and median that are much closer than they previously were. While not ideal (the
data are still skewed), we can expect that kriging with log-transformed data, or lognormal
kriging, will perform significantly better than the default model. Analysis of the quantile–quantile
plot for log-transformed data, presented in Figure 4.61, confirms the histogram
result. If the data follow a lognormal distribution, the data points will fall exactly on a 45°
straight line. With the exception of the nondetect data, which are all assigned an arbitrary
value of 0.05 μg/L, the data now match a 1:1 line much better. It is also likely that the abnormally high
frequency of values within the lower bin of the histogram is a result of the nondetections.
From this data exploration exercise, one can see how censored data can affect distributional
assumptions and potentially influence kriging outcomes.

FIGURE 4.59
Histogram for unadjusted concentration data, which are skewed because of a high frequency of nondetect
results and a low frequency of samples with high concentrations (representing the source area/hot spot). Esri®
ArcGIS Geostatistical Analyst graphical user interface. Copyright © Esri. All rights reserved.

FIGURE 4.60
Histogram for log-transformed data, which more closely resembles the normal distribution but is still skewed
because of numerous nondetect results. Esri® ArcGIS Geostatistical Analyst graphical user interface. Copyright
© Esri. All rights reserved.

FIGURE 4.61
Quantile–quantile plot for the log-transformed data, which can be used to provide an additional line of evidence
in distribution assessment beyond histogram analysis. Alternatively, external programs such as ProUCL
can be used to perform statistical hypothesis tests to classify data distributions. Esri® ArcGIS Geostatistical
Analyst graphical user interface. Copyright © Esri. All rights reserved.
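The same distributional checks can be scripted with SciPy. The sketch below assumes a hypothetical concentration array in which nondetects have already been replaced by 0.05 µg/L; it compares summary statistics and a Shapiro–Wilk normality test before and after log transformation and generates a quantile–quantile plot.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

# Hypothetical data set; nondetects already substituted at 0.05 ug/L
conc = np.array([0.05, 0.05, 0.2, 0.3, 1.5, 3.8, 34.0, 570.0, 7500.0, 8300.0])
log_conc = np.log10(conc)

for label, data in (("raw", conc), ("log10", log_conc)):
    w, p = stats.shapiro(data)        # H0: data are normally distributed
    print(f"{label:>5}: mean={data.mean():.2f}  median={np.median(data):.2f}  "
          f"skew={stats.skew(data):.2f}  Shapiro-Wilk p={p:.3f}")

# Points falling on the 45-degree reference line indicate (log)normality
stats.probplot(log_conc, dist="norm", plot=plt.gca())
plt.show()
```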
The final step in the data exploration process is determining whether deterministic
trends exist in the data. Figure 4.62 (top and bottom) presents visualizations of Geostatistical
Analyst's Trend Analysis tool, showing the contaminant concentration data (Z plane) in
the X and Y directions. First-, second-, or third-order polynomials can be fit to the data if
trend is observed. For the contaminant concentration data, there is a trend in both the X
and Y planes. The top graphic is a plot of the unadjusted concentration data, and while the
trend is visually apparent, the polynomial equation cannot accurately fit the data because
of the outlying behavior of the source area concentrations. The bottom graphic is a plot of
the log-transformed concentration data and demonstrates that log transformation before
detrending facilitates polynomial regression and enables a much better curve fit. This
will substantially improve detrending accuracy and reinforces the primary drawback of
simple kriging with normal score transformation—transformation can only be performed
after detrending.
Conceptually, a second-order trend is the best fit for the data based on the typical dis-
tribution of groundwater plumes. Moving from upgradient and side-gradient locations
through the contaminant source to the other side, concentrations will start low, increase,
and then decrease again, creating a hill shape akin to a second-order polynomial. The
deterministic process of groundwater flow creates this second-order trend that should be
removed from the data prior to contouring.
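A global second-order detrending step is easy to express in code. The sketch below is a simplified global fit, not the local polynomial interpolation (LPI) used by Geostatistical Analyst; it fits a quadratic surface to log-transformed concentrations by least squares and returns the residuals that would then be kriged. The arrays x, y, and logz are hypothetical.

```python
import numpy as np

def fit_second_order_trend(x, y, logz):
    """Least-squares fit of a second-order polynomial surface
    f(x, y) = b0 + b1*x + b2*y + b3*x**2 + b4*x*y + b5*y**2
    to log-transformed concentrations."""
    A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    beta, *_ = np.linalg.lstsq(A, logz, rcond=None)
    residuals = logz - A @ beta     # krige these, then add the trend back
    return beta, residuals
```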
The following conclusions are reached based on this data exploration process:

• The high variance in the data set is caused by the contaminant hot spot, where
model uncertainty will potentially be high.
• A log transformation should be used prior to kriging so the transformed data
more closely follow a normal distribution.
• Nondetections are predominately responsible for deviation from the lognormal
distribution.
• A second-order trend should be removed from the data prior to kriging (but after
data transformation) so emphasis is placed on local variation.

FIGURE 4.62
Visualizations of Geostatistical Analyst’s trend analysis tool using the contaminant concentration data. The top
graphic represents unadjusted data, and the bottom graphic represents log-transformed data. Log transforma-
tion improves the polynomial regression performance in areas of high contaminant concentration. In ordinary
kriging, log transformation can be conducted before detrending so the bottom graphic represents the poly-
nomial fit. Esri® ArcGIS Geostatistical Analyst graphical user interface. Copyright © Esri. All rights reserved.

4.5.2.3 Lognormal Kriging with Anisotropy


To demonstrate incremental improvement to the kriging model and to represent the best pos-
sible outcome using computer programs that do not support detrending for ordinary kriging,
a contour map is first produced using log transformation without detrending. The resulting
semivariogram and semivariogram map are displayed in Figures 4.63 and 4.64, respectively.

FIGURE 4.63
Semivariogram with manually fit Gaussian model for the log-transformed data. Log transformation results
in a finite range of correlation and a partial sill that is clearly visible using the blue averaged semivariance
crosses. Anisotropy is used to account for the trend. Esri® ArcGIS Geostatistical Analyst graphical user inter-
face. Copyright © Esri. All rights reserved.

[Figure 4.64 graphic: semivariogram map, color scale from 0 (low) to 31.5702 (high)]

FIGURE 4.64
Semivariogram map for the log-transformed data. While the scale of variance has been reduced significantly
versus Figure 4.54, the relative distribution of variance is similar because of the presence of trend.

As shown by the figures, the log transformation significantly reduces the variance of the
data. The highest calculated semivariance is now approximately 40 (μg/L)² for the lag coverage
displayed on the semivariogram (Figure 4.63), which is orders of magnitude below
the unadjusted maximum semivariance of approximately 10 × 10⁶ (ten million) (μg/L)².
For the semivariogram surface (Figure 4.64), the lag coverage was increased to include
the maximum distance between points such that the surface can be directly compared to
Figure 4.54. This is why the maximum semivariance shown in the surface does not exactly
match that of the semivariogram graph. Note that the relative distribution of variance
between Figures 4.64 and 4.54 is very similar, as variability is still predominately related to
the contaminant hot spot or source area.
As a result of the log transformation, the spatial correlation of the data is now clearly
visible, and a Gaussian variogram model with a nugget effect can be fit to the data. Both
the semivariogram graph and surface show anisotropy because of the prevailing NW–SE
direction of groundwater flow (once again, this is more likely an example of trend than
anisotropy). Therefore, anisotropy is added to the kriging model, and each of the blue
curves on the semivariogram corresponds to a particular direction. The search neighborhood
is smoothed, and the resulting interpolated surface is presented in Figure 4.65.
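The Gaussian semivariogram model fit manually in Figure 4.63 can also be fit numerically. The sketch below, a simplified illustration, applies SciPy's curve_fit to the (centers, gamma) output of the earlier empirical semivariogram function, using one common parameterization of the Gaussian model, γ(h) = nugget + partial sill · (1 − exp(−(h/a)²)).

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_model(h, nugget, psill, a):
    """Gaussian semivariogram model with range parameter a."""
    return nugget + psill * (1.0 - np.exp(-(h / a) ** 2))

def fit_gaussian_model(centers, gamma):
    # Rough starting guess: zero nugget, sill at the largest averaged
    # semivariance, range at one-third of the largest lag
    p0 = [0.0, gamma.max(), centers.max() / 3.0]
    params, _ = curve_fit(gaussian_model, centers, gamma, p0=p0,
                          bounds=([0.0, 0.0, 1e-6], np.inf))
    return dict(zip(("nugget", "partial_sill", "range"), params))
```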

The lognormal kriging with anisotropy results are clearly better than the default model. The
overall shape of the plume is reasonably represented, and the contaminant hot spot is
not excessively averaged over a large area. However, areas with measured concentrations
between 1 and 100 μg/L are still overpredicted (in part because of the nugget effect), and
there are significant instabilities where the contours are unconstrained (see the disconnected
50–100 μg/L area adjacent to the simulated 5.0 μg/L contour line label, for example).
Therefore, this surface is still inadequate for most professional applications.

[Figure 4.65 graphic: plan-view map with posted well concentrations and a filled kriged-concentration legend from 0.05–1 to 5,000–8,300 µg/L]

FIGURE 4.65
Concentration contour map for lognormal kriging without detrending (colored, filled contours). The surface
is substantially improved over the default model, although concentrations are uniformly overpredicted and
instabilities exist in unsampled areas.

4.5.2.4 Lognormal Kriging with Trend Removal


To utilize all information gleaned from data exploration and to maximize the capabilities
of the Geostatistical Analyst extension, a kriging model with both log transformation and
detrending is created. In ordinary kriging, the transformation occurs before the detrend-
ing, which greatly assists the LPI in this example. The default second-order trend surface
is depicted in Figure 4.66 and represents the classical hill shape. Note that the default trend
surface is produced by global polynomial interpolation, where all data are used together
in the same polynomial expression.
The trend surface can be improved and converted to a local model through regression
analysis by utilizing the optimization tool. Remember that blindly hitting the Optimize
button is not guaranteed to produce good results; however, it generally works better for
the deterministic LPI model (which is used for the detrending) than for kriging. The opti-
mization process varies the polynomial interpolation bandwidth, spatial condition num-
ber, and search neighborhood to minimize cross-validation error. The resulting optimized
trend surface is presented in Figure 4.67 and resembles the classical groundwater plume
shape. This smooth, large-scale trend will be removed from the data prior to kriging so
that emphasis will be placed on local variability.

FIGURE 4.66
Screen shot of concentration data fit with a second-order global polynomial equation. The use of all data (hence
“global”) in the polynomial equation results in an overly averaged surface that does not accurately reflect the
plume shape. Esri® ArcGIS Geostatistical Analyst graphical user interface. Copyright © Esri. All rights reserved.

FIGURE 4.67
Screen shot of optimized LPI model for use in detrending. The surface resembles a classical jet engine plume
shape and is overly smooth. Kriging the residuals of the detrended data will add local variability back into the
interpolation model. Esri® ArcGIS Geostatistical Analyst graphical user interface. Copyright © Esri. All rights
reserved.
The log-transformed, detrended semivariogram and semivariogram map are displayed
in Figures 4.68 and 4.69, respectively. The detrending further reduces data variance to a
maximum value on the semivariogram graph (Figure 4.68) of less than 10 (μg/L)². When
increasing the lag coverage to create the semivariogram surface (Figure 4.69), the maximum
semivariance drops to approximately 1.5 (μg/L)². More importantly, the detrending
changes the orientation of the semivariogram surface such that the maximum variance is
now found at small separation distances (i.e., close to the center of the map). This demonstrates
that the kriging model will now be based on local variability and will not be overly
influenced by the contaminant hot spot. Note also that the degree of anisotropy decreases
significantly, as indicated by the narrower spread of curves in Figure 4.68. A Gaussian
model with a small nugget effect still provides the best fit to the semivariogram graph.
The resulting interpolated surface, after search neighborhood smoothing, is presented
in Figure 4.70. This model is clearly superior to the previous two iterations as interpolated
contours match simulated contours at both high concentrations and low concentrations.
The greatest discrepancy is now found around the 10 μg/L contour line, which overlies an
area of hydraulic conductivity and potentiometric surface transition (see Figure 4.50). The
spatial interpolation model is also unable to replicate the protrusion in the 1 and 2 μg/L
simulated contours in the right center of the map between the 0.3 and 1.5 μg/L values. This
demonstrates that high localized heterogeneity is very difficult to represent with even the best
kriging model. Thorough conceptual analysis of the groundwater flow field is necessary to
determine where additional sampling may be needed to capture the effects of heterogeneity.

FIGURE 4.68
Semivariogram with manually fit Gaussian model for the log-transformed, detrended data. The partial sill,
nugget, and degree of anisotropy have been significantly reduced compared to the default and log-transformed
(without detrending) models. Esri® ArcGIS Geostatistical Analyst graphical user interface. Copyright © Esri.
All rights reserved.

[Figure 4.69 graphic: semivariogram map, color scale from 2.02839e-005 (low) to 1.4956 (high)]

FIGURE 4.69
Semivariogram map for the log-transformed, detrended data. Variability is now concentrated at small separation
distances.

[Figure 4.70 graphic: plan-view map with posted well concentrations and a filled kriged-concentration legend from 0.05–1 to 5,000–8,300 µg/L]

FIGURE 4.70
Concentration contour map for lognormal kriging with detrending (colored, filled contours). The surface is
usable for predicting concentrations at unsampled locations, generating graphics for inclusion in reports or
presentations, and calculating contaminant mass.

4.5.2.5 Model Comparison


While it is clear that lognormal kriging with trend removal is the optimal model with
respect to the visual appearance of the interpolated surface, it is also important to compare
the statistical performance of the models and their relative ability to predict concentrations
at unsampled locations. Remember that, in professional applications, the real contaminant
concentration contours are unknown, making reliance on appearance alone unjustified. At
this stage of the exercise, we must ignore the synthetic contours and focus on the fact that
the only data we have are the discrete monitoring-well results.
A cross-validation comparison of the detrended, log-transformed model to the default
kriging model is depicted in Figure 4.71. The root-mean-square error and prediction stan-
dard error are both significantly lower for the advanced model, and the predicted versus
measured graph is much closer to a 1:1 line than that of the default model. This indicates
that the advanced model makes more accurate predictions. The nearly horizontal line of the
default graph reflects the fact that excessive averaging is used. Because of this averaging,
there is less underpredicting bias for the default model, and the root-mean-square standardized
error is closer to 1. Therefore, if one were solely looking at the cross-validation
statistical results (i.e., ignoring the predicted versus observed graph and the surface itself),
it is possible that the default model could be selected over the advanced model with the
justification that it more accurately reflects the variability of the data. In this manner, the
cross-validation statistics are misleading, as the two hot-spot monitoring locations are outliers
that strongly influence the cross-validation statistics of the advanced model. The default
model merely averages these values away, correcting the bias by introducing more error.

FIGURE 4.71
Cross-validation comparison between the default kriging model and the log-transformed, detrended Gaussian
kriging model. Esri® ArcGIS Geostatistical Analyst graphical user interface. Copyright © Esri. All rights
reserved.
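Cross-validation statistics of this kind are straightforward to script. The sketch below, which reuses the hypothetical pykrige setup from the earlier example, computes leave-one-out root-mean-square error and root-mean-square standardized error; a standardized value near 1 indicates that the kriging variances are consistent with the actual prediction errors. Note that, for simplicity, the variogram is re-fit in every fold, which differs from holding one fitted model constant.

```python
import numpy as np
from pykrige.ok import OrdinaryKriging

def loo_cross_validation(x, y, z, **krige_kwargs):
    """Leave-one-out cross-validation for ordinary kriging."""
    n = len(z)
    preds, std_errs = np.empty(n), np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i                  # withhold one observation
        ok = OrdinaryKriging(x[mask], y[mask], z[mask], **krige_kwargs)
        zhat, var = ok.execute("points",
                               np.atleast_1d(x[i]), np.atleast_1d(y[i]))
        preds[i] = float(zhat[0])
        std_errs[i] = (preds[i] - z[i]) / np.sqrt(float(var[0]))
    rmse = np.sqrt(np.mean((preds - z) ** 2))
    rms_standardized = np.sqrt(np.mean(std_errs ** 2))   # ~1 when well calibrated
    return rmse, rms_standardized
```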
While the default model is clearly erroneous in this case, the decision regarding which
model is best becomes more difficult when two reasonable models are created, where one
is superior statistically and the other is superior visually. An example of this situation is
presented on the companion DVD related to cokriging of the contaminant concentration
data. In general, the decision as to which model is better depends on the overall purpose
of the spatial interpolation exercise. If the contouring is being used to make predictions
that must be rigorously defended to regulators and/or litigators, it is beneficial to have a
strong statistical performance with reliable estimates of uncertainty. Alternatively, if the
contouring is being used predominately for graphical display purposes or for input into a
groundwater model (situations where smooth interpolation surfaces are desirable), visual
appearance may be more important. The CSM must also be considered in the evaluation
as the model with worse statistical performance may more accurately depict conceptual
processes and therefore better represent reality.
In addition to cross-validation statistics, the prediction standard error of the kriging
models should be compared. The prediction standard error surface for the detrended, log-
transformed model is presented in Figure 4.72. This map is dramatically different from
the nonsensical default prediction standard error map presented in Figure 4.56. After
detrending and transforming the data, prediction standard error now depends on both the
sampling density and the locations of the data values, which is why error is significantly
higher in the contaminant hot spot than in the downgradient plume area. Pockets of relatively
high prediction standard error between 5 and 18 μg/L are caused by a sampling
density that insufficiently captures the observed heterogeneity of the data.

[Figure 4.72 graphic: plan-view map with posted well concentrations and a prediction standard error legend from 0–0.0217 to 838–3,010 µg/L]

FIGURE 4.72
Prediction standard error map for the detrended, log-transformed model. Note that while the kriged prediction
surface (Figure 4.70) cannot replicate the contour bend between the 0.3 and 1.5 µg/L values on the right-center
of the map, the prediction standard error map does capture variability in this area.
The data dependency of the prediction standard error estimates is also illustrated
through the histograms presented in Figure 4.73. The histogram at the top is for prediction
standard error estimates of the detrended, log-transformed data and shows that the error
estimates reasonably resemble a lognormal distribution, just like the data themselves.
Conversely, the default prediction standard error histogram at the bottom does not follow
any discernible distribution. In general, prediction standard error estimates are data
dependent when they follow the same distribution as the input data. While one should
be very happy that error is now data dependent, the novice user may once again be misled
by the fact that the advanced model has a significantly higher maximum prediction
standard error than the default model. It is hoped that hydrogeologists performing spatial
interpolation will have enough knowledge to detect meaningless prediction standard
error surfaces and disregard their conclusions.

FIGURE 4.73
Histograms of log-transformed prediction standard error estimates at sampling locations for the detrended,
log-transformed kriging model (top) and the default kriging model (bottom). Esri® ArcGIS Geostatistical
Analyst graphical user interface. Copyright © Esri. All rights reserved.
As a final step, the models are compared with respect to their predictive abilities at
unsampled locations. Table 4.2 presents the predicted concentrations at TW-1 through
TW-5 for each of the three models presented in this section and reinforces many of the
observations described above. The default model employs excessive averaging such that
the peak concentration is grossly underpredicted, and lower concentrations are overpredicted.
The log-transformed model without detrending uniformly overpredicts, with the
exception of TW-2, as the hot spot exerts too much influence on the rest of the interpolation
surface. The detrended, log-transformed model performs best across the full range of
concentrations but still has difficulty capturing the peak at TW-2. It is unlikely that any
kriging model will accurately predict the TW-2 concentration because it is significantly
higher than the greatest measured value used in the contouring (8300 μg/L). For regulatory
purposes, accurately predicting the extent of low concentrations in the single digits or tens
of micrograms per liter is often most important, as drinking water standards are generally
found in that concentration range. The advanced model clearly performs best in this range,
as demonstrated by the results at TW-3.

TABLE 4.2
Summary Table of Predictions at Unsampled Locations for Three Kriging Models

                                    Predicted Concentration (μg/L)
             Model
             Concentration    Detrended,          Log
Test Well    (μg/L)           Log Transformed     Transformed    Default
TW-1         1,436            2,981               4,034          1,074
TW-2         12,677           6,594               8,538          1,134
TW-3         8.10             11.7                25.0           513
TW-4         2.90             1.10                3.16           3.43
TW-5         0.52             0.94                3.25           1.80

4.5.2.6 Advanced Detrending and Cokriging


While the log-transformed, detrended kriging model produces an excellent surface com-
pared to the default model and many contaminant concentration contour maps commonly
seen in professional practice, it may be of interest to certain readers to explore how the
model can be further improved through advanced detrending and/or cokriging methods.
Advanced detrending consists of manually modifying or smoothing the search neigh-
borhood and optimizing the bandwidth of the LPI model. For this example, cokriging
consists of using additional water-level data in areas where contaminant concentrations are
unknown to try to improve the kriging estimates. An example covariance graph between
the contaminant concentration data and the water-level data is presented in Figure 4.74,
demonstrating correlation between the two variables.
When using advanced detrending and cokriging, the superior surface displayed in
Figure 4.75 can be created. Note that the 10 μg/L concentration contour line, in particular,
is better estimated in Figure 4.75 than in Figure 4.70. Even with the full gamut of kriging
options, however, the nondetect area upgradient of the contaminant hot spot (where three
water-level data points are added) cannot be replicated nor can the heterogeneous bend
in the simulated contours between the 0.3 and 1.5 μg/L data values (see Figure 4.70). This
demonstrates that even with the most complex kriging model, the accuracy of the spatial
interpolation is dependent upon a monitoring network that captures the heterogeneity of
the data.
This advanced detrending and cokriging example is further described on the compan-
ion DVD, illustrating the following concepts:
• Nondetect results significantly influence cross-validation statistics.
• Detrending underestimates prediction standard error.
• Models optimized for statistical performance can have conceptual problems visually.

FIGURE 4.74
Covariance plot between the contaminant concentration data (Variable 1) and the water-level data (Variable 2),
demonstrating correlation between the two variables. Esri® ArcGIS Geostatistical Analyst graphical user interface.
Copyright © Esri. All rights reserved.

[Figure 4.75 graphic: plan-view map with labeled contours (1.0–5,000 µg/L) and a filled kriged-concentration legend from 0.05–1 to 5,000–8,300 µg/L]

FIGURE 4.75
Concentration contour map for a cokriging model using additional water-level monitoring points (white circles)
to supplement the existing contaminant concentration data set (purple circles).

4.5.2.7 Advanced Uncertainty Analysis


This chapter has addressed means of assessing the uncertainty of spatial interpolation
models through cross-validation and kriging prediction standard error diagnostics.
Conceptually, analysis of these metrics is a form of post-processing in which the user’s
prescribed model parameters are evaluated in their ability to generate accurate predic-
tions. An alternative form of uncertainty analysis is to perturb these input parameters
and generate multiple possible realizations of the interpolated surface. Semivariogram
sensitivity analysis and Gaussian geostatistical simulation are two methods of advanced
uncertainty analysis that involve the generation of multiple alternative realizations of the
same input data.
Semivariogram sensitivity analysis creates many different surfaces by changing the
semivariogram model parameters (nugget, range, partial sill/scale) within a percentage
of the user-specified values. A model is unreliable if small changes in model parameters
lead to large changes in predictions or prediction standard errors (Esri 2010b). Sensitivity
analysis is a required step in groundwater modeling (see Chapter 5), and with the ease of
performing the analysis in programs such as Geostatistical Analyst, it may soon become
standard practice for contouring applications as well.
Gaussian geostatistical simulation, a form of Monte Carlo simulation, generates random
stochastic realizations using the statistical features (mean, variance, and semivariogram)
of the input data. The set of predicted values at each location is normally distributed with
a mean value of the original kriging estimate and a spread equal to the kriging variance
(Esri 2010c). Geostatistical simulation attempts to recapture local, or microscale, variation
that is lost in the kriging process because of averaging. As a result, geostatistical simula-
tion is an excellent tool to assess the potential range of local data fluctuation (Krivoruchko
2011). An example application of Gaussian geostatistical simulation is presented on the
companion DVD using the contaminant concentration data contoured in this section.
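As a rough illustration of the machinery behind Gaussian geostatistical simulation, the sketch below draws unconditional realizations from the multivariate normal distribution implied by a Gaussian covariance model, using a Cholesky factorization of the covariance matrix. This is a simplification: Geostatistical Analyst performs conditional simulation, which additionally honors the measured values. All parameters and coordinates here are hypothetical.

```python
import numpy as np

def gaussian_simulation(coords, mean, nugget, psill, a, n_real=100, seed=0):
    """Unconditional Gaussian simulation on a set of (x, y) locations.
    Covariance model: C(h) = psill * exp(-(h / a)**2), nugget on the diagonal."""
    rng = np.random.default_rng(seed)
    h = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    C = psill * np.exp(-(h / a) ** 2)
    C[np.diag_indices_from(C)] += nugget + 1e-8   # nugget plus jitter for stability
    L = np.linalg.cholesky(C)
    # Each row of the result is one equally probable realization of the field
    return mean + (L @ rng.standard_normal((len(coords), n_real))).T
```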

4.5.3 Summary
The above contouring exercise demonstrates the extensive geostatistical capabilities that
are now readily available to the professional hydrogeologist. GUIs in programs such as
Surfer and Geostatistical Analyst greatly simplify advanced quantitative analysis, and
the use of advanced geostatistical methods is no longer limited to academia. Obviously,
there is danger inherent in introducing these methods to nonexperts because of the
potential for software misuse (see the discussion in this chapter regarding the use and
interpretation of prediction standard error and indicator kriging, for example). However,
in the opinion of the authors, the far bigger problem is the widespread use of default
models in professional hydrogeology. This practice is not scientific and completely
ignores the advancements that have been made in commercial and public-domain soft-
ware in recent years.
One potential reason for the persistence of default models is the failure to appreciate the
benefits of geostatistical spatial interpolation or, more precisely, the failure to appreciate

the consequences of not using geostatistical spatial interpolation. Without defensible, sci-
ence-based interpolation, one is left with an endless cycle of data collection. If uncertainty
cannot be quantified then no argument can be made that enough data have been collected
to support the CSM and the data quality objectives of the sampling program. For site reme-
diation applications, this translates to the familiar process of delineating to nondetect.
Excessive data collection during site investigation and remediation often leads to the gen-
eration of absurd figures and data tables such as those in Figures 4.76 and 4.77. While the
extremely high density of monitoring wells in Figure 4.76 may be necessary for research
applications, it makes little sense in professional practice.
It goes without saying that the cycle of endless data collection results in unacceptable
costs using financial resources that could be better allocated elsewhere. Unfortunately,
environmental consultants will often acquiesce to regulatory demands for additional
sampling without even proposing a geostatistical alternative that expands knowledge
about the data that have already been collected. Redundancy in data collection can be
both spatial and temporal, and geostatistical methods can document this redundancy and
prove that additional data collection has no value. For example, VSP, a statistical analysis
program available in the public domain, has specific modules designed to help the user
identify spatial and temporal redundancy in sampling networks (Matzke et al. 2010). As
advanced capabilities, such as single well/temporal variogram analysis, are now available
in the public domain in programs such as VSP, there is no excuse for the professional com-
munity to ignore these methods any longer.
To avoid wasteful data collection and to advance informed decision-making processes
that quantify uncertainty, it is necessary for both the consulting and regulatory commu-
nities to become more educated in geostatistics and more willing to apply and embrace
geostatistical concepts.

FIGURE 4.76
Photograph of a well field at the Massachusetts Military Reservation. Appropriate application of kriging may
help avoid sampling at this density in professional (nonacademic) settings. (From United States Geological
Survey, Carbon and Nitrogen Cycling in Groundwater: Cape Cod Study Site. Biogeochemistry of Carbon and
Nitrogen in Aquatic Environments, 2010. Available at www.brr.cr.usgs.gov/projects/EC_biogeochemistry/
Cape.htm.)

FIGURE 4.77
Data table demonstrating the consequences of excessive data collection to satisfy regulatory demands.
Geostatistical methods tell us more about our data than ill-advised extraneous sampling and help ensure that
each additional sample has a substantive conceptual justification. In this case, the cost of each single piece of
data (i.e., each cell) on the table is estimated at $14.

4.6 Grid and Contour Conversion Tools


As the vast majority of contouring applications in professional hydrogeology involve the
use of computer programs such as ArcGIS, Surfer, and AutoCAD, it is important to know
basic file conversion techniques for both grids (rasters) and contours (polylines). Useful
computer tools to perform these operations, predominately in ArcGIS, are described in
Sections 4.6.1–4.6.3.

4.6.1 Converting Contour Maps to Grid Files


Historic contours of a parameter of interest are often obtained from external sources when
working on a project. For example, hard copy potentiometric surface or surficial geology
contours are often obtained from USGS publications, and topographic contours may be
obtained from state GIS sources. In addition, contours may be available from documents
produced by previous consultants at the given site. Contours can easily be georeferenced
and/or digitized electronically as polylines in feature class (ArcGIS) or dxf (AutoCAD)
format. However, it may be very useful to convert these contours into a continuous raster
or grid file. This grid file can then be used for input into groundwater models, to perform
grid math/grid calculus, to recontour the data at any desired interval, or to create three-
dimensional visualizations.
The Topo to Raster tool in the Spatial Analyst extension to ArcGIS rapidly converts
polyline contours to an Esri grid raster file. The dialogue box for the tool is presented in
Figure 4.78, displaying various options for user specification, including raster cell size
or output extent. Example input and output data for execution of the Topo to Raster tool
are presented in Figures 4.79 and 4.80, respectively. The contour lines in Figure 4.79 are
input into the tool, and the resulting raster surface is shown below the input contours in
Figure 4.80.
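For readers working outside ArcGIS, a rough open-source analogue of this conversion can be assembled with SciPy. Note that Topo to Raster is based on the specialized ANUDEM algorithm, whereas the sketch below simply treats every digitized contour vertex as a data point and interpolates linearly; cx, cy, and cz are hypothetical arrays of contour vertex coordinates and contour values.

```python
import numpy as np
from scipy.interpolate import griddata

def contours_to_grid(cx, cy, cz, cell_size):
    """Rasterize digitized contour polylines by interpolating their
    vertices (x, y, value) onto a regular grid."""
    gx = np.arange(cx.min(), cx.max(), cell_size)
    gy = np.arange(cy.min(), cy.max(), cell_size)
    GX, GY = np.meshgrid(gx, gy)
    Z = griddata((cx, cy), cz, (GX, GY), method="linear")  # NaN outside data hull
    return GX, GY, Z
```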

FIGURE 4.78
Dialogue box for the Topo to Raster tool in ArcGIS. Note the allowable input formats include hydrology-specific
data fields. Esri® ArcGIS Spatial Analyst graphical user interface. Copyright © Esri. All rights reserved.

FIGURE 4.79
Contour lines used as input to the Topo to Raster tool.

FIGURE 4.80
Execution of the tool produces the colored raster shown below the contour lines.

4.6.2 Converting Grid File Types


As described above, a simple tool can be used to create an Esri grid file from a polyline
contour map. However, it may be necessary to have the raster in a Surfer grid file format
(.grd) for other applications. For example, the commercial groundwater modeling program
Groundwater Vistas uses the Surfer grid file format. The best way to convert grid file for-
mats is to first convert the raster in question into a point file, where each raster cell is
represented by a point in the center of that cell with the value of the attribute in question
(e.g., elevation). This is performed using the Raster to Point conversion tool in ArcGIS.
The dialogue box for the tool is depicted in Figure 4.81, and execution of the tool using

FIGURE 4.81
Dialogue box for the Raster to Point conversion tool in ArcGIS. Esri® ArcGIS ArcToolbox graphical user inter-
face. Copyright © Esri. All rights reserved.
the raster created for Figure 4.80 results in the point file presented in Figure 4.82. We have
now converted the originally obtained contour lines into a point file with the attributes of
an interpolated surface.

FIGURE 4.82
Execution of the tool results in the creation of a point file with a feature placed at the center of each grid cell with
the value of the raster cell added to the point file’s attribute table. This point file can now be used to recreate the
raster in any format using exact interpolation.
In order to convert these points to a Surfer grid file, one only needs to bring the point
file into Surfer and perform any exact interpolation method (such as default linear kriging)
specifying the same grid size as the original Esri grid file. The same process can be per-
formed in reverse using a point file from a Surfer grid and then interpolating the points
into a raster using Spatial Analyst or Geostatistical Analyst in ArcGIS.
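If scripting is preferred over a round trip through Surfer, a regular grid can also be written directly in the Surfer ASCII grid (DSAA) format documented by Golden Software. The sketch below assumes a grid such as the one returned by the contour-rasterization example above, with row 0 corresponding to the minimum y coordinate; handling of blanked cells (Surfer's blanking value is 1.70141e+38) is omitted for brevity.

```python
import numpy as np

def write_surfer_ascii_grd(path, GX, GY, Z):
    """Write a Surfer ASCII (DSAA) grid: a five-line header of dimensions
    and extents followed by rows of z values, ymin row first."""
    ny, nx = Z.shape
    with open(path, "w") as f:
        f.write("DSAA\n")
        f.write(f"{nx} {ny}\n")
        f.write(f"{GX.min()} {GX.max()}\n")
        f.write(f"{GY.min()} {GY.max()}\n")
        f.write(f"{np.nanmin(Z)} {np.nanmax(Z)}\n")
        for row in Z:
            f.write(" ".join(f"{v:.6g}" for v in row) + "\n")
```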

4.6.3 Extracting Grid Values to Points


Spatial Analyst has a function similar to the Raster to Point tool that enables extraction
of raster values to any set of points overlying that raster. Raster values are automatically
appended to the attribute table of the point file in question. The Spatial Analyst tool is
termed Extract Values to Points and is very useful in obtaining interpolated predictions
at unsampled locations. This is preferable to manually estimating the interpolated values
between contour lines. Figure 4.83 shows how the Extract Values to Point tool can be used
to determine what the raster values are for the points shown on the map. The extracted
value can either represent the exact value within the center of each raster cell or an interpo-
lated value within the cell considering the values of adjacent cells. Note that Geostatistical
Analyst has a similar extraction tool from the Geostatistical Analyst layer (GA layer) grid
file type.
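An open-source counterpart to this extraction, for readers without ArcGIS, is the sample method of the rasterio package, which returns the raster value at each supplied (x, y) coordinate; the file path and point list below are hypothetical.

```python
import rasterio

def extract_values_to_points(raster_path, points):
    """Sample a raster at (x, y) locations, analogous to the
    Extract Values to Points tool; returns one band-1 value per point."""
    with rasterio.open(raster_path) as src:
        return [float(vals[0]) for vals in src.sample(points)]

# Example usage with hypothetical coordinates
# values = extract_values_to_points("surface.tif", [(355200.0, 4102300.0)])
```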

FIGURE 4.83
Dialogue box for the Extract Values to Points tool. Raster values at the corresponding X, Y locations will be
added to the attribute table of the black points depicted in the figure. Esri® ArcGIS ArcToolbox graphical user
interface. Copyright © Esri. All rights reserved.

4.6.4 Appropriate Use of Spatial Analyst


The tools described in this section involve the use of the Spatial Analyst extension to
ArcGIS. Spatial Analyst is well suited to grid conversion and is also useful for numerous
other surface analysis applications, including, but not limited to, grid math, grid manipula-
tion, slope analysis, cut/fill calculations, and statistical calculations. However, the authors
do not recommend using Spatial Analyst to perform spatial interpolation for applications
other than grid conversion. In other words, Spatial Analyst should not be used to develop
potentiometric surface or contaminant concentration contours based on discrete monitor-
ing data. While Spatial Analyst has a kriging tool with which the most basic variogram
parameters can be specified, shown in Figure 4.84, variography cannot be performed to
determine model parameters (i.e., the semivariogram is not created and displayed to the
user). Additionally, Spatial Analyst does not incorporate data transformations, anisotropy,
or cross-validation.
Default kriging using Spatial Analyst is a common practice in environmental consulting
that results in the generation of low-quality contour maps. To illustrate this, the aquitard
data contoured in Figures 4.7–4.9 are contoured using default kriging in Spatial Analyst.
The resulting surface is presented in Figure 4.85 and shows significant instability that ren-
ders the contours unusable for most applications. If investment in Geostatistical Analyst
or Surfer is not feasible, it is strongly recommended that the reader perform contouring
with public-domain programs that allow variography, such as VSP or SADA, rather than
with Spatial Analyst.

FIGURE 4.84
Dialogue box used to specify parameters for kriging with Spatial Analyst. Variography cannot be performed in
Spatial Analyst, and if Spatial Analyst were meant to be used for kriging, Geostatistical Analyst would not exist.
Esri® ArcGIS ArcToolbox and Spatial Analyst graphical user interface. Copyright © Esri. All rights reserved.

FIGURE 4.85
Contours of the aquitard created using kriging with default settings in Spatial Analyst. Note areas of instability
with jagged transitions between contour lines.

References
Carlson, R. E., and Foley, T. A., 1991. Radial Basis Interpolation on Track Data. Lawrence Livermore
National Laboratory, UCRL-JC-1074238.
Clark, I., 1979. Practical Geostatistics. Applied Science Publishers, London, 129 pp.
Cressie, N., 1991. Statistics for Spatial Data. John Wiley & Sons, Inc., New York, 900 pp.
Esri, Inc., 2003. ArcGIS® 9: Using ArcGIS® Geostatistical Analyst. Redlands, CA, 300 pp.
Esri, Inc., 2010a. How Local Polynomial Interpolation Works. ArcGIS 10 Help Library.
Esri, Inc., 2010b. How Semivariogram Sensitivity Works. ArcGIS 10 Help Library.
Esri, Inc., 2010c. Key Concepts of Geostatistical Simulation. ArcGIS 10 Help Library.
Golden Software, Inc., 2002. Surfer 8 User’s Guide: Contouring and 3D Surface Mapping for Scientists and
Engineers. Golden Software, Inc., Golden, CO, 640 pp.
Helsel, D. R., 2005. Nondetects and Data Analysis. John Wiley & Sons, Hoboken, NJ, 250 pp.
Institute of Environmental Modeling, University of Tennessee, 2008. Spatial Analysis and Decision
Assistance (SADA) Software Home Page. University of Tennessee Research Corporation.
Available at www.tiem.utk.edu/~sada/index.shtml.
Isaaks, E., and M. Srivastava, 1989. An Introduction to Applied Geostatistics. Oxford University Press,
New York, 561 pp.
Kitanidis, P. K., 1997. Introduction to Geostatistics. Applications in Hydrogeology. Cambridge University
Press, Cambridge, UK, 249 pp.
Kresic, N., 2007. Hydrogeology and Groundwater Modeling, Second Edition. CRC Press/Taylor & Francis,
Boca Raton, FL, 807 pp.
Kresic, N., 2009. Groundwater Resources. Sustainability, Management, and Restoration. McGraw Hill,
New York, 852 pp.
Krivoruchko, K., 2011. Spatial Statistical Data Analysis for GIS Users. Esri Press, Redlands, CA, 928 pp.
Krivoruchko, K., and Bivand, R., 2009. GIS, Users, Developers, and Spatial Statistics: On Monarchs
and Their Clothing. In Interfacing Geostatistics and GIS, Springer, pp. 209–228. Paper presented
at the StatGIS 2003, International Workshop on Interfacing Geostatistics, GIS and Spatial
Databases, Pörtschach, Austria, September 29–October 1, 2003.
Krivoruchko, K., Gribov, A., and Ver Hoef, J., 2000. A New Method for Handling the Nugget Effect
in Kriging. Available from Esri online at http://training.esri.com/bibliography/index.cfm?event=general.recorddetail&id=30570. Accessed March 23, 2011.
Kupusović, T., 1989. Measurements of piezometric pressures along deep boreholes in karst area and
their assessment. Nas Krs XV(26–27), 21–30.
Matzke, B. D., Nuffer, L. L., Hathaway, J. E., Sego, L. H., Pulsipher, B. A., McKenna, S., Wilson, J. E.,
Dowson, S. T., Hassig, N. L., Murray, C. J., and Roberts, B., 2010. Visual Sample Plan Version 6.0
User’s Guide. United States Department of Energy, PNNL-19915, 255 pp. Available at http://
vsp.pnnl.gov/docs/PNNL%2019915.pdf.
Milanović, P., 2006. Karst istocne Hercegovine i dubrovackog priobalja (Karst of Eastern Herzegovina and
Dubrovnik Litoral). ASOS, Belgrade, 362 pp.
United States Environmental Protection Agency, 1977. The Report to Congress: Waste Disposal
Practices and Their Effects on Ground-Water. EPA 570977001, 531 pp.
United States Geological Survey, 2010. Carbon and Nitrogen Cycling in Groundwater: Cape Cod
Study Site. Biogeochemistry of Carbon and Nitrogen in Aquatic Environments. Available at
http://www.brr.cr.usgs.gov/projects/EC_biogeochemistry/Cape.htm.
Webster, R., and Oliver, M. A., 2001. Geostatistics for Environmental Scientists. John Wiley & Sons, Ltd.,
Chichester, England, 271 pp.
5
Groundwater Modeling

5.1 Introduction
Groundwater models are utilized during all phases of conceptual site model (CSM) develop-
ment. They vary in complexity from simple analytical models used as screening tools to three-
dimensional (3D) numerical models that serve as a CSM’s focal point. Advanced numerical
models with a graphical user interface (GUI) have many advantages of geographic informa-
tion system (GIS) and embody various quantitative relationships between the CSM elements.
Models enable hydrogeologists to make educated predictions about the state of the system and
its response to changes. These changes may be an increased demand of groundwater use in a
watershed, more frequent occurrence of droughts, or impacts of a groundwater remediation
technology on contaminant concentrations some distance downgradient of the site. Whatever
the case may be, a predictive model—analytical or numerical—is often used to help guide
development of the CSM, test scenarios, and support or refute conclusions.
Unfortunately, there exists a distrust of groundwater models in the environmental indus-
try. Some professionals and regulators believe groundwater model results to be entirely
nonunique, which means that results can easily be tweaked to fulfill a preordained pur-
pose while still satisfying a calibration standard. To a certain extent, the critics are correct.
The hydrogeologist has tremendous influence over model results and can sway things one
way or another by modifying model input parameters that are difficult to trace. For this
and other reasons, some professionals hired by parties with high-stakes agendas, such as
winning lawsuits, may bend the limits of professional ethics and develop groundwater
models that cannot be comprehended even by some practicing hydrogeologists.
The questions that inevitably arise regarding modeling for any purpose are “Why do we
need to do groundwater modeling in the first place?” and “Can groundwater modeling be
performed in a transparent manner that is easy to review and understand?” To answer the
first question, hydrogeologists need groundwater models for two primary reasons: One is to
predict future conditions, and the other is to understand how current conditions came to be.
Nearly all projects in hydrogeology require future projections, and the hydrogeologist is relied
upon for expert opinion in this matter. Most commonly, the hydrogeologist must evaluate the
efficacy of selected interventions in water-supply development or hazardous-waste remedia-
tion (particularly groundwater remediation). Examples include the installation and operation
of new water-supply wells or extraction wells associated with pump-and-treat remediation, the
injection of chemical reagents for groundwater remediation (in situ chemical oxidation), or the
stimulation of contaminant biodegradation. Groundwater models are the best available means
of simulating these interventions and, most simply, determining if they will work. Modeling
can also fill in the gaps between data that have been or will be collected at discrete time inter-
vals, thus helping the hydrogeologist better understand and simulate transient processes.
The answer by the authors and many professional hydrogeologists to the second ques-
tion (Can groundwater modeling be performed in a transparent manner that is easy to
review and understand?) is a resolute yes, as demonstrated further in this chapter.


One conceptual reason to conduct groundwater modeling is that 3D numerical models
help the hydrogeologist truly understand physical and chemical processes occurring at
a site. The hydrogeologist can use a groundwater model to create a transient representa-
tion of the CSM and effectively put it to the test. The conceptual interpretation of geology,
surface water–groundwater interaction, and flow-boundary conditions becomes the input
to the model. The ability of the model to match measurements of the hydraulic head and
contaminant concentration at monitoring wells, stream flow, and other parameters is an
indicator of the strength of the underlying CSM. Ideally, the hydrogeologist’s model will
become a valuable instructional tool in identifying data gaps or components of the CSM
that are more critical than previously imagined. Therefore, the CSM is often refined after
numerical modeling has been conducted.
In a sense, something of a chicken or the egg dilemma exists. The hydrogeologist needs
a CSM to create a groundwater model to formulate model layers, boundary conditions,

and hydraulic and chemical parameters. Yet the hydrogeologist will inevitably discover
something new about the site through the execution of the groundwater model and will
likely alter the CSM accordingly. The explanation for this iterative behavior is that the
initial CSM is usually developed with coarse resolution and focuses on macro concepts
such as general soil (rock) type, stratigraphic relationships, general groundwater chem-
istry, and aquifer production potential. When these broad categorizations are translated
into a transient, site-specific numerical model, the many details involved in groundwater
flow and contaminant transport become significant at a more refined scale. Changes in
model layering or key parameters such as groundwater recharge or hydraulic conductiv-
ity become necessary, and often a conceptual justification for these changes exists that
initially escaped the coarse resolution of the CSM. The groundwater model and the CSM
work hand-in-hand to create a 3D simulation of the real world, which the hydrogeologist
can use to predict future conditions and explain how current conditions came to be.

5.2 Misuse of Groundwater Models


Unfortunately, as mentioned earlier, complex 3D numerical models are sometimes used
to justify erroneous concepts with underlying agendas. This is particularly evident in liti-
gation cases involving groundwater contamination when experts for the opposing sides
almost always come up with very different modeling results based on supposedly the
same raw hydrogeologic data collected in the field. In the judicial system of the United
States, this amounts to gambling on the opinions of nonexperts and nonhydrogeologists,
such as judges and jury members, about the persuasiveness of a particular model. This
gambling relies, to a great extent, on the notion that models are somehow more objective
than humans, even though the opposing side may be using the very same argument. In
any case, because of many readily available, affordable, user-friendly, and powerful com-
puter programs, it appears that everyone is practicing groundwater modeling today. This
is illustrated in Figure 5.1, which shows interpretations of the subsurface geology in a
groundwater model developed for a multimillion-dollar lawsuit.
A self-declared nongeologist created this model together with an assistant. At the time,
the model was applauded by the nongeologist’s attorney as one of the most complex ever
created because it had well over 200 layers and a very fine spatial resolution. The model
was used to demonstrate how some water-supply wells will be impacted sometime in the

[Figure 5.1 graphic: two depth-discrete maps of simulated sediment types; scale bar 4040 ft; legend shows hydraulic conductivity (ft/d) values of 4.3 x 10-5, 0.01, 4.32, and 25 for the four sediment types, with yellow circles marking available boring log data.]

FIGURE 5.1
Spatial distributions of four sediment types at two different depths below ground surface, 30 ft apart. These
probabilistic realizations were created by an expert witness in a lawsuit using a technique developed by the
expert witness. Available boring logs for the two depths are shown with yellow circles. Each sediment type
was assigned arbitrary values of hydraulic conductivity. Regional lacustrine clay deposits are depicted in red.

future by leaks of gasoline at certain gas stations. Incidentally, the sites in question are
situated in an area with thick regional lacustrine clay deposits, which act as a regional
aquitard. This aquitard separates surficial sediments from the underlying aquifer where
the production wells are installed to extract groundwater for water supply. Apparently
because the nongeologist (an expert witness for the plaintiff) opined that “every aquitard
leaks,” a computer program was used to create an objective interpretation of the subsurface
based on well drillers’ logs and a statistical algorithm to fill in geologic data gaps based on
probability. The expert witness and the assistant applied this concept three times, creat-
ing three different 3D realizations of the probable subsurface geology. Two depth-discrete
horizontal maps (slices) of one of the 3D realizations, together with the locations of field
information (from borings/wells) available for these depths, are shown in Figure 5.1. By
utilizing a technique developed by the expert witness and the assistant, they proved every
aquitard should leak because the probabilistic model created many holes (small, large, and
everything in between) in the thick lacustrine clay deposits, including in wide areas with-
out any field information. The three probabilistic realizations of the 3D subsurface geology
created by the expert witness and the assistant looked similar but did have fundamental
differences because of the randomness of the model.
After creating the three probabilistic interpretations of the subsurface geology, the
expert witness and the assistant then simulated the fate and transport of the constituent of
concern (COC) in groundwater. The final deliverable of their groundwater modeling was
three different applicable (to the issue in question) predictions of the contaminant concen-
trations in the subsurface. At the same field locations (same wells) and at the same times
of prediction, concentrations ranged between nondetect and hundreds of micrograms per
liter (parts per billion) between the three scenarios. When asked by the judge which of the
predicted concentrations at certain locations was more probable, the expert witness could

not decide. As a result, the “most sophisticated groundwater model ever created” was
dismissed by the judge.
As described further in Section 5.5, statistically based algorithms to assess model uncer-
tainty generally require hundreds or even thousands of model runs so that the probability
of a certain outcome can be assessed statistically. The expert witness may have been able to
quantify the likelihood of a certain contaminant distribution if a more robust analysis were
conducted, and doing so may have prevented the model from being discarded. However,
the possibility does exist that the expert witness performed additional analyses yet only
presented the three realizations that showed a breach in the aquitard solely to illustrate
that such an occurrence was possible. This practice of cherry-picking only the random
realizations that suit one’s conceptual objectives defeats the entire purpose of probabilis-
tic modeling. Anything is theoretically possible, and the real challenge is characterizing
uncertainty qualitatively through professional judgment and the CSM, and quantitatively

through appropriate statistical techniques.


This example illustrates that CSMs and groundwater models should be based on thor-
ough understanding of geologic and hydrogeologic principles and other key elements
described in Chapter 2. As the judge's ruling in this case demonstrates, if a groundwater model does
not make common sense and cannot help answer the important questions for which it was
developed, it does not matter who or what created it. As to the expert witness’s and the
assistant’s application of probability, it is important to note that probabilistic models are
used in hydrogeology and may play an important role in developing CSMs provided, of
course, that they are applied with care and described in a transparent manner understand-
able to all interested parties.
Another common problem with the application of complex numerical models by practi-
tioners not well versed in either groundwater modeling or principles of groundwater flow
is not realizing that the numerical model may be producing results that contradict the
CSM and/or basic physical principles. In other words, the numerical model will always do
exactly what it was asked to do by its developer, including producing impossible results.
This is true even when the model developer firmly believes his or her CSM is correct and
is properly translated into the numerical environment (computer program). The model
developer may also firmly believe that the computer program is producing correct results
because the conceptual model is correct. Unfortunately, if the whole process is not checked
by independent, knowledgeable hydrogeologists and groundwater modelers, it can some-
times be impossible to find real reasons for questionable results created by a model if one
were to rely only on a published modeling report and some colorful figures. A disconnect
between the CSM, the numerical model developed from it, the groundwater modeling pro-
cess, and the presentation of modeling results is illustrated with the following example.
A new numerical groundwater flow model that incorporates important components of
the latest information and conceptualization of the karstic Edwards aquifer was developed
by the United States Geological Survey (USGS) in cooperation with the U.S. Department
of Defense and Edwards Aquifer Authority (Lindgren et al. 2004). This one-layer model
includes both the San Antonio and Barton Springs segments of the Edwards aquifer in
the San Antonio region of Texas and was calibrated for steady-state and transient condi-
tions. Transient simulations were conducted using monthly recharge and groundwater
pumpage (withdrawal) data. This equivalent porous-medium model was developed using
MODFLOW and incorporates virtual conduits simulated as one-cell-wide zones with very
large hydraulic conductivity values of as much as 300,000 ft per day. The flow in the con-
duit cells is simulated with Darcy’s equation, just as in all other cells in the model. As
stated by the model authors, the locations of the virtual conduits that were 1320 ft (400 m)

wide and tens of miles long were based on a number of factors, including major potentio-
metric surface troughs in the aquifer, the presence of sinking streams, geochemical infor-
mation, and geologic structures (Lindgren et al. 2004).
Figure 5.2a shows a portion of the model domain with the simulated steady-state poten-
tiometric surface of the Edwards Aquifer. The model’s no-flow boundary is emphasized by
annotation added to the original figure from Lindgren et al. (2004). This no-flow boundary

FIGURE 5.2
Portion of the model domain showing results of a numerical groundwater flow model developed for karstic
Edwards Aquifer in Texas. (a) Calibrated steady-state model with the model-predicted potentiometric surface
contours; annotation added for emphasis. (b) Calibrated transient model for drought conditions of August
1956; blue arrows and question marks added for emphasis. (From Lindgren, R. J. et al., Conceptualization and
Simulation of the Edwards Aquifer, San Antonio Region, Texas. U.S. Geological Survey Scientific Investigations
Report 2004-5277, 143 pp., 2004.)

behaves as expected based on general principles of groundwater flow described in most
hydrogeology textbooks, including Chapter 4 of this book. In other words, the simulated
potentiometric surface lines are perpendicular to the no-flow boundary. As a reminder, a
no-flow boundary in hydrogeology and groundwater modeling is equivalent to an imper-
meable boundary—water cannot enter or leave the active model domain across this type
of boundary in either steady-state or transient conditions.
Figure 5.2b shows the same portion of the model domain with the simulated potentio-
metric surface for transient, drought conditions in August 1956. The situation shown is
completely different than in Figure 5.2a. The potentiometric lines are parallel to the no-
flow boundary. If one were to draw groundwater flow lines perpendicular to the potenti-
ometric surface lines, it would appear that the no-flow boundary is now providing water
to this one-layer model. This is emphasized by the blue arrows added to the original
figure from Lindgren et al. (2004).

Figures 5.3a and 5.3b show the same steady-state and transient conditions depicted in
Figures 5.2a and 5.2b, respectively, when the virtual conduits are in the model (the con-
duits are represented by the jagged, thick red lines). Notably, the simulated potentiometric
contour lines for the conduit conditions are omitted from the two figures. Instead, the
authors of the model have chosen to schematically show groundwater directions using
hand-drawn black arrows (note that all arrows in Figure 5.3 are original to the USGS pub-
lication). The USGS arrows showing groundwater flow directions are perpendicular to the
no-flow boundary as emphasized with added question marks in Figure 5.3b, indicating
that flow from the no-flow boundary is occurring.
The authors of this book are not aware of any plausible explanation for the strange,
hydraulically impossible behavior of the no-flow boundary in this one-layer numerical
model of the Edwards Aquifer. In addition to the USGS, multiple public and private profes-
sionals are listed as authors of the model or members of the Ground Water Model Advisory
Panel that endorsed it. Consequently, it may appear that both the numerical groundwater
model and the conceptual hydrogeological model it is supposed to simulate must be cor-
rect because they were endorsed by numerous institutions and consulting companies.
The above example illustrates that one should never accept previously published, peer-
reviewed information at face value without independent, critical analysis. While the USGS
produces many useful and accurate reports, the institution is not infallible. It is often
desirable for a hydrogeologist to directly cite the USGS or United States Environmental
Protection Agency (U.S. EPA) as there will generally be less resistance from regulators in
accepting the related concepts and conclusions. However, if the assumptions and results of
the study in question are wrong, it can lead to the rapid propagation of conceptual errors
that can become entrenched in professional practice.
It is acknowledged that the circumstances of the model review are not known to the
authors, including the visualizations of model output made available to the reviewers.
However, the widespread endorsement of a model with apparent conceptual and other
problems could be construed as an example of groupthink, when group pressures lead to
a breakdown in independent thought and result in flawed decision making (Janis 1972).
Groupthink favors anecdotal assumptions over scientific evidence, avoids criticism and
controversy to achieve “consensus,” and rationalizes bad decisions made in the past rather
than exploring new solutions. To avoid groupthink, group members should remain as
impartial as possible and consult independent, evidence-based opinion from third parties
removed from the impacts and political pressures of the decision to be made.
Selecting and designing a remedial process may be the central decision a team of envi-
ronmental professionals is making for a site. As suggested by the U.S. EPA, it is difficult

FIGURE 5.3
Portion of the model domain showing results of a numerical groundwater flow model developed for karstic
Edwards Aquifer in Texas when virtual conduits (jagged red lines) are included in the model. Hand-drawn
black arrows showing groundwater flow directions are original to this USGS report. Note the absence of model-
predicted potentiometric surface contour lines. (a) Calibrated steady-state model. (b) Calibrated transient model
for drought conditions of August 1956; question marks are added for emphasis. (From Lindgren, R. J. et al.,
Conceptualization and Simulation of the Edwards Aquifer, San Antonio Region, Texas. U.S. Geological Survey
Scientific Investigations Report 2004-5277, 143 pp., 2004.)

for any software to incorporate the myriad input data and parameters that are required to
select and customize a remedial process to a site’s unique characteristics. The agency also
points out that new insight and experience are constantly reshaping design and applica-
tion of a remedial process, even for the most successful and widely used technologies.
For these reasons, many past attempts at developing decision support tools that include
groundwater models to support remedial process selection have been abandoned or are no
longer supported by the U.S. EPA (2005). Instead, hydrogeologists are relying on individ-
ual, site-specific groundwater models and their combinations to provide conceptual and

quantitative answers regarding a specific remedial technology or a combination of tech-
nologies. However, both the conceptual model and the quantitative input from hydroge-
ologists based on numerical modeling may or may not be considered by decision makers
when selecting the final remedy for the site. A perfect example is the technical impractica-
bility of groundwater remediation as explained in detail in Section 8.4.2. A model may be
developed following all applicable industry standards, may pass reviews and rereviews,
and may be uniformly qualified by independent experts as producing reasonable quan-
titative answers, including associated uncertainty. Yet if the model does not produce the
answers acceptable to some stakeholders, including a regulatory agency, it may be ignored
altogether. Although this practice is far from being widespread, it occasionally causes
some hydrogeologists to abandon groundwater modeling altogether and some to never
consider it as an important tool to begin with. In the opinion of the authors, however, there
is no reason why every practicing hydrogeologist should not also be a ground-
water modeler; the two professions are inseparable today, and groundwater modeling has
become an industry standard practice in all aspects of hydrogeology.

5.3 Types of Groundwater Models


In general, a groundwater model simulates the spatial and temporal properties of a ground-
water system, or one of its parts, in either a physical (real) or mathematical (abstract) way.
An example of a physical model in hydrogeology would be a tank filled with sand and
saturated with water—the so-called sandbox model, an equivalent to a miniature aquifer
of limited extent. This aquifer can be subject to miniature stresses, such as pumping from
a perforated tube placed in the sand, thus representing a water-supply well. An obvious
question when considering similar models is how feasible it is to build a multilayer aqui-
fer exposed to various stresses, such as precipitation, surface stream flow, or leakage from
deep underlying strata, and then change some of its geometric and hydrogeologic proper-
ties as needed. Consequently, the application of real physical models in hydrogeology has
been limited to educational and demonstration purposes. Models that use mathematical
equations to describe elements of groundwater flow are called mathematical. Depending
upon the nature of equations involved, these models can be

• Empirical (experimental)
• Probabilistic
• Deterministic

Empirical models are derived from experimental data that are fitted to some mathematical
function. Although empirical models are limited in scope and are usually site- or problem-
specific, they can be an important part of a more complex numerical modeling effort. For
example, the behavior of a certain contaminant in porous media can be studied in the
laboratory or in controlled field experiments, and the derived experimental parameters
can then be used for developing numerical models of groundwater transport.
Probabilistic models are based on laws of probability and statistics. They can have vari-
ous forms and complexity starting with a simple probability distribution of a hydrogeo-
logical property of interest and ending with complicated, time-series stochastic models.

The main limitations for wider use of probabilistic (stochastic) models in hydrogeology
are that they require large data sets needed for parameter identification and they cannot
be used to answer (predict) many of the most common questions from hydrogeologic prac-
tice, such as effects of future pumping on an aquifer (Kresic 2007).
Deterministic models describe the state or future reactions of the groundwater system
using the physical laws governing groundwater flow. An example is the flow of ground-
water toward a fully penetrating well in a confined aquifer as described with the Theis
equation. Most problems in traditional hydrogeology are solved using deterministic mod-
els, which can be as simple as the Theis equation or as complicated as a numerical model
of multiphase flow through a multilayered, heterogeneous, anisotropic aquifer system.
Regardless of the type, for any groundwater model to be interpreted and used properly,
its limitations should be clearly understood and described. In addition to strictly technical
limitations, such as accuracy of computations (hardware/software), the following is true

for any model:

• It is based on various assumptions regarding the real natural system being modeled.
• Hydrogeologic and hydrologic parameters used by the model are always just an
approximation of their actual field distribution, which can never be determined
with 100% accuracy.
• Theoretical differential equations describing groundwater flow and fate and
transport of contaminants are replaced with algebraic equations (e.g., finite differ-
ence solutions) that are more or less accurate depending on spatial and temporal
discretization.

It is therefore obvious that a model will have a varying degree of reliability, and it cannot
be misused as long as all its limitations are clearly stated, the modeling process follows
industry-established procedures and standards (see Section 5.6), and the modeling docu-
mentation and reports are transparent, also following the industry standards.
The two most widely applied groups of deterministic models are analytical and numeri-
cal as described in the following sections.

5.3.1 Analytical Models


Simple analytical models are often used as screening tools to obtain initial quantitative
feedback for solving a complex problem. For example, one equation—the one-dimensional
Domenico fate-and-transport equation—has been incorporated into several popular
Microsoft Excel–based analytical models, such as BIOSCREEN and BIOCHLOR. These
models are built on the basic assumption that the porous media are homogeneous and
isotropic and thus require only a single value for each input parameter of interest. An
experienced practitioner recognizes the limitations of this simple analysis but exploits
the efficiencies it offers. The results of an analytical model are often used to demonstrate
whether additional field investigation and/or more complex numerical modeling will be
required. Analytical models also allow for an efficient initial sensitivity analysis of various flow
and fate-and-transport parameters. In some cases, an analytical model may be all that is
needed to support final conclusions.
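To make the screening idea concrete, the short Python sketch below evaluates the steady-state centerline form of a Domenico-type solution, accounting for advection, longitudinal dispersion, and first-order decay only. This is our minimal illustration, not the actual BIOSCREEN or BIOCHLOR code; those models also handle transverse and vertical dispersion, retardation, and transient source terms, and the function name and input values here are hypothetical.

```python
import math

def centerline_steady(c0, x, v, alpha_x, lam):
    """Steady-state centerline concentration for one-dimensional
    advection-dispersion with first-order decay (screening form).
    c0: source concentration (ug/L); x: distance downgradient (ft);
    v: contaminant velocity (ft/yr); alpha_x: longitudinal
    dispersivity (ft); lam: first-order decay rate (1/yr)."""
    return c0 * math.exp(
        (x / (2.0 * alpha_x))
        * (1.0 - math.sqrt(1.0 + 4.0 * lam * alpha_x / v))
    )

# Hypothetical screening inputs, for illustration only
for x_ft in (50.0, 150.0, 350.0, 950.0):
    c = centerline_steady(c0=1000.0, x=x_ft, v=16.2, alpha_x=45.0, lam=0.2)
    print(f"x = {x_ft:5.0f} ft -> C = {c:10.4f} ug/L")
```

A sketch of this kind is often sufficient to judge, in minutes, whether a plume could plausibly reach a receptor before attenuating below a standard, and therefore whether a more elaborate numerical model is warranted.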
BIOSCREEN and BIOCHLOR were developed for the U.S. EPA and the Air Force Center for
Environmental Excellence (AFCEE) Technology Transfer Division at Brooks Air Force Base
by Groundwater Services, Inc., Houston, TX. BIOSCREEN (Newell et al. 1996) simulates

remediation through natural attenuation of dissolved hydrocarbons at petroleum fuel–
release sites. The program has the ability to simulate advection, dispersion, adsorption,
and aerobic decay as well as anaerobic reactions that have been shown to be the domi-
nant biodegradation processes at many petroleum sites. Karanovic et al. (2007) present an
enhanced version of BIOSCREEN that supplements the Domenico (1987) solution with an
exact analytical solution for the contaminant concentration. The exact solution is derived
for the same conceptual model as Domenico (1987) but without invoking approximations
in its evaluation that introduce errors of unknown magnitude into the analysis. The exact
analytical solution is integrated seamlessly within a modified interface BIOSCREEN-AT,
available free of charge at http://www.sspa.com/Software/bioscreen.shtml.
BIOCHLOR (Aziz et al. 2000, 2002) simulates remediation by natural attenuation of dis-
solved solvents at chlorinated solvent release sites. The program has the ability to simu-
late 1D advection, 3D dispersion, linear adsorption, and biotransformation via reductive

dechlorination (the dominant biotransformation process at most chlorinated solvent


sites). Reductive dechlorination is assumed to occur under anaerobic conditions, and dis-
solved solvent degradation is assumed to follow a sequential first-order decay process.
BIOCHLOR includes four different model types:

1. Solute transport with or without first-order decay for any dissolved constituent.
2. Solute transport with biotransformation modeled as a sequential first-order decay
process primarily for simulating the sequential reductive dechlorination of chlo-
rinated ethanes (TCA) and ethenes (PCE, TCE).
3. Solute transport with biotransformation modeled as a sequential first-order decay
process with two different reaction zones (i.e., each zone has a different set of deg-
radation rate coefficient values).
4. Decaying source. To model a decaying source in BIOCHLOR, the Domenico (1987)
semianalytical solution for reactive transport with first-order biological decay
was modified to incorporate a decaying source (boundary) condition. The revised
model assumes that the source decays exponentially via a first-order expression:
C0 exp(–ks t). The source decay constant ks must be determined by the user prior to
using BIOCHLOR. The model includes an option for a continuous (nondecaying)
source term as well.

Because BIOCHLOR and BIOSCREEN have been developed primarily as quick screening
tools, they should not be applied where pumping systems create a complicated flow field.
In addition, the models should not be applied where vertical flow gradients affect contami-
nant transport or where recharge from the land surface or laterally from adjacent aquifers
plays a significant role. However, either model can be used for simulating fate and trans-
port of any dissolved constituent, organic or inorganic, including those that do not sorb
onto porous media or are not degrading. An example screen shot of the BIOSCREEN-AT
input menu is shown in Figure 5.4. In this case, the modeled constituent is a metal with a
retardation factor, R, of 132. The metal does not degrade as specified by a decay rate of zero.
Note that the fields for all degradation coefficients or half-lives of BTEX constituents are
displayed by default but are not active in this case because the modeled constituent is not
BTEX. The model output screen showing the constituent’s concentration along the plume’s
centerline versus distance from the source after 100 years of travel is provided in Figure
5.5. The mechanisms that decrease the contaminant’s concentration as it moves away from
the source are the assumed mechanical dispersion and the retardation resulting from

FIGURE 5.4
BIOSCREEN-AT input menu. The modeled constituent is a metal with the retardation factor (retardation coeffi-
cient), R, of 132 and the source zone concentration of 1.23 mg/L. The simulation time is 100 years. Biodegradation
option is not active.

FIGURE 5.5
BIOSCREEN-AT results screen for the input parameters shown in Figure 5.4.

sorption (again, note that this simple analytical model does not simulate dilution from
recharge or lateral influx of clean groundwater).

5.3.1.1 BIOCHLOR Case Study


BIOCHLOR is used to simulate the fate and transport of chlorinated solvents (COCs) at
the industrial site and to predict the maximum extent of dissolved-phase plume migra-
tion, which may then be compared to the distance to potential points of exposure (e.g.,
drinking-water wells, groundwater-discharge areas, or the property boundary). The
field data from monitoring wells show clear evidence of biodegradation manifested by a
decrease in contaminant concentrations downgradient of the source area, and the pres-
ence of the degradation (daughter) products of TCE, cis-1,2-DCE, and VC. As stated by the
U.S. EPA, BIOCHLOR allows groundwater remediation managers to identify sites where

natural attenuation is most likely to be protective of human health and the environment. It
also allows regulators to carry out an independent assessment of treatability studies and
remedial investigations that propose the use of natural attenuation (Aziz et al. 2000).
Advective transport. The representative seepage velocity (Vs) of groundwater flow through
the interstitial space of the porous media (aquifer matrix) is calculated by multiplying
hydraulic conductivity (K) by hydraulic gradient (i) and dividing by effective porosity (n).
As emphasized by the BIOCHLOR manual, it is strongly recommended that actual site
data be used for hydraulic conductivity and hydraulic gradient data parameters, whereas
effective porosity can be estimated based on predominant soil type.
The site-specific hydraulic conductivity is 0.00175 cm/s, the geometric mean of slug test
values from the four monitoring wells, MW-1, MW-2, MW-3, and MW-4: 0.01067, 0.00039,
0.00106, and 0.00214 cm/s, respectively. The geometric mean, rather than the arithmetic
mean, is selected as the most probable value, knowing that the hydraulic conductivity of
an aquifer typically follows a log-normal probability distribution (Kresic 2007).
The hydraulic gradient is 0.00179, calculated as the average change in the hydraulic head
between MW-1 and MW-4 (recorded between September 2003 and June 2011) divided by
the distance between the two wells: (246.01 – 244.31)/950 = 0.00179. Because of the minor
presence of silt and clay, the effective porosity is estimated at 20% or slightly less than the
typically used value of 25% for sands. The schematic of the industrial site showing the
contaminant source zone and the four monitoring wells selected for the modeling is pre-
sented in Figure 5.6. The BIOCHLOR input screen showing input fields for the advective
transport and other model parameters discussed further is given in Figure 5.7.
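These advective-transport inputs are easy to verify outside of BIOCHLOR. The short Python sketch below (the variable names are ours) recomputes the geometric mean conductivity, the hydraulic gradient, and the seepage velocity from the values reported above; the cm/s to ft/yr conversion is standard arithmetic.

```python
import math

# Slug-test hydraulic conductivities at MW-1 through MW-4 (cm/s)
k_values = [0.01067, 0.00039, 0.00106, 0.00214]

# Geometric mean, appropriate for log-normally distributed K
k_gm = math.exp(sum(math.log(k) for k in k_values) / len(k_values))

# Hydraulic gradient: average head difference over 950 ft
gradient = (246.01 - 244.31) / 950.0

n_eff = 0.20  # effective porosity estimated from soil type

# Seepage velocity Vs = K * i / n, converted from cm/s to ft/yr
CM_S_TO_FT_YR = 0.0328084 * 365.25 * 86400.0
v_s = k_gm * gradient / n_eff * CM_S_TO_FT_YR

print(f"K (geometric mean): {k_gm:.5f} cm/s")  # ~0.00175
print(f"Hydraulic gradient: {gradient:.5f}")   # ~0.00179
print(f"Seepage velocity:   {v_s:.1f} ft/yr")  # ~16.2
```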
It is important to note that there is ongoing confusion in professional practice between
the total and effective porosity of groundwater media. Total porosity is a measure of the
total volume of voids in the soil matrix, whereas effective porosity is a measure of the
interconnected volume of voids in the soil matrix. Therefore, the effective porosity is always
lower than the total porosity. It is not uncommon for the authors to hear people say that
clay has a low hydraulic conductivity because of its low porosity, which is completely false.
In reality, clay has a total porosity much higher than that of sand, gravel, or silt. However,
the interconnected fraction of porosity is very low in clay, minimizing transmission of
groundwater.
Dispersion. The process by which a dissolved solvent will be spatially distributed lon-
gitudinally (along the direction of groundwater flow), transversely (perpendicular to
groundwater flow), and vertically (downward) because of mechanical mixing and chemi-
cal diffusion in the aquifer is called dispersion. Selection of dispersivity values is a diffi-
cult process, given the impracticability of measuring dispersion in the field. Nevertheless,

[Figure 5.6 graphic: plan-view schematic of the plume centerline in the direction of groundwater flow, with the source zone (200 ft wide, 25 ft thick) at monitoring well MW-1 and wells MW-2, MW-3, and MW-4 located approximately 150, 350, and 950 ft downgradient, respectively. Annotated values: average hydraulic gradient between MW-1 and MW-4 = 0.00179; hydraulic conductivity = 1.8 x 10-3 cm/s; effective porosity = 0.20; seepage velocity = 16.2 ft/yr.]
FIGURE 5.6
Schematic of monitoring wells and source zone used for BIOCHLOR modeling at the industrial site.

FIGURE 5.7
BIOCHLOR input screen.

the U.S. EPA lists simple estimation techniques based on the length of the plume (Aziz
et al. 2000). Additional discussion regarding the mathematical significance of dispersiv-
ity and various related uncertainties is provided in Section 5.4.2. From the 2011 field data,
the length of the longest COC plume (cis-1,2-DCE) at the industrial site is estimated to
be approximately 450 ft downgradient of the source (note that the farthest monitoring
well with any COC detection is MW-3). The longitudinal dispersivity (Alpha x) of 45 ft is
selected based on one of the three default options in BIOCHLOR. Option 1 assumes that
Alpha x is 10% of the estimated plume length. By the model’s default, the transverse dis-
persivity, Alpha y, is estimated to be one-tenth of Alpha x. To yield a conservative estimate
of vertical dispersion, Alpha z, the default value used in BIOCHLOR is set to a very low
number (1E-99) as suggested by the U.S. EPA. This means that the contaminant concentra-
tion will not decrease as a result of vertical dispersion.
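Expressed as code, these default dispersivity estimates reduce to simple arithmetic, as in the following trivial sketch (variable names are ours):

```python
plume_length = 450.0          # estimated cis-1,2-DCE plume length (ft)

alpha_x = 0.1 * plume_length  # longitudinal dispersivity, Option 1
alpha_y = alpha_x / 10.0      # transverse dispersivity, model default
alpha_z = 1e-99               # vertical dispersivity, conservative default

print(alpha_x, alpha_y, alpha_z)  # 45.0 4.5 1e-99
```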
Adsorption. Adsorption to the soil matrix slows down the migration of contaminants
in groundwater, thereby reducing the extent of dissolved-phase plumes downgradient of
contaminant source areas. In BIOCHLOR, this process is described with the retardation
factor (R) calculated as the ratio of the groundwater seepage velocity to the rate that COCs
migrate in the groundwater. The degree of retardation depends on both the aquifer and
the COC properties. The model calculates R from the values of distribution (partition)
coefficient for the solute (Kd), soil bulk density (ρb), porosity (n), organic carbon partition
coefficients (Koc), and soil fraction organic carbon (foc), using the following equation:

R = 1 + (Kd ρb) / n

where Kd = Koc × foc.


Organic carbon partition coefficients (Koc) for TCE, cis-1,2-DCE, and VC at 20°C are 130,
125, and 29.6 L/kg, respectively (values from the BIOCHLOR manual), aquifer matrix (soil)
bulk density is estimated to be 1.7 kg/L (default value in BIOCHLOR), and the soil fraction
organic carbon is estimated to be 0.002 (U.S. EPA 1996). Based on these values, the median
R for the three COCs calculated by the model is 3.21.
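The calculation is easily checked with a short script. The sketch below applies the retardation equation given above to the three COCs and reproduces the individual factors referenced in this section (approximately 3.21 for TCE, 3.13 for cis-1,2-DCE, and 1.50 for VC):

```python
rho_b = 1.7   # soil bulk density (kg/L), BIOCHLOR default
n = 0.20      # porosity
f_oc = 0.002  # soil fraction organic carbon

koc_values = {"TCE": 130.0, "cis-1,2-DCE": 125.0, "VC": 29.6}  # L/kg

for coc, koc in koc_values.items():
    kd = koc * f_oc           # distribution coefficient Kd (L/kg)
    r = 1.0 + kd * rho_b / n  # retardation factor
    print(f"{coc:12s} Kd = {kd:.4f} L/kg  R = {r:.3f}")
# TCE 3.210, cis-1,2-DCE 3.125, VC 1.503
```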
It should be noted that BIOCHLOR uses this median value for all calculations, rather
than individual retardation factors for each constituent. Alternatively, as in this case, the
user may calibrate another retardation value that results in a better representation of all
modeled constituent concentrations. At the industrial site, a calibrated common value
for R (2.10) results in a better overall model match for all constituents than the median
value. Note that the individual retardation coefficient for VC calculated by the model is
1.50, which is closer to the common calibrated value of 2.10 than the median value of 3.21.
For this reason, the final selected/calibrated value is more representative of VC, the most
mobile and toxic COC. Sensitivity analysis was also conducted to evaluate the effect of the
choice of the common retardation factor on the model results.
Biotransformation. The best approach for determining biotransformation rate constants
is to calibrate BIOCHLOR to field data for a given sampling event (Aziz et al. 2000, 2002).
Rate constants are estimated by changing the rate constant for TCE degradation until the
TCE predicted concentrations match the TCE field data. Then, the cis-1,2-DCE rate constant
is adjusted until the cis-1,2-DCE predicted concentrations match the field data; the same
is repeated for VC. In this way, site-specific rate constants are estimated, and the model is
then considered calibrated for the given set of model input parameters, including hydrau-
lic conductivity, hydraulic gradient, sorption, and dispersion. Using the site-specific rate

constants, predictive simulations can be conducted by increasing the simulation time to
estimate future plume behavior.
To expedite the calibration process, BIOCHLOR Version 2.2 incorporates the Buschek
and Alcantar (1995) rate constant estimation method, which automatically provides an
approximate calibration of the model to entered site-specific field data. The Buschek and
Alcantar approach uses equations that assume 1D dispersion, steady-state conditions, and
biotransformation only in the aqueous phase. Although this method has the potential to
overestimate biotransformation rate constants by lumping the effects of lateral and verti-
cal dispersion with biotransformation, it quickly yields a reasonable first approximation
of the rate constants. These rate constant estimates can be manually refined subsequently.
A minimum of four data points is required for rate-constant determination (e.g., the con-
centrations of COCs at the four monitoring wells located along the plume centerline as
shown in Figure 5.6 and Table 5.1). Where a constituent has been reported as being less

than the detection limit, one-half of the constituent’s detection limit is used in the model
as required by most regulatory agencies. Note, however, that this practice can lead to erro-
neous conclusions and serious conceptual errors as described in detail in Chapter 8. Note
also that because of the inclusion of dispersion, the rate constant calculated in the Buschek
and Alcantar approach should be considered a bulk attenuation constant rather than a
pure biodegradation constant. These two terms are often confused.
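The essence of the Buschek and Alcantar estimate can be reproduced with an ordinary least-squares fit of the natural logarithm of concentration against distance. The sketch below is our reconstruction based on the steady-state centerline solution, in which k = (Vs/4αx)[(1 + 2αx·m)^2 − 1] and m is the magnitude of the regression slope of ln C versus x; applied to the Table 5.1 concentrations with the site seepage velocity of 16.2 ft/yr and αx = 45 ft, it reproduces the Buschek-Alcantar column of Table 5.2 to within rounding.

```python
import numpy as np

v_s = 16.2      # seepage velocity (ft/yr)
alpha_x = 45.0  # longitudinal dispersivity (ft)

x = np.array([2.0, 150.0, 350.0, 950.0])  # distance from source (ft)
conc = {                                   # Table 5.1; half-DL for NDs
    "TCE": [3200.0, 2100.0, 67.0, 0.5],
    "cis-1,2-DCE": [280.0, 450.0, 75.0, 0.5],
    "VC": [2.3, 87.0, 0.5, 0.5],
}

for coc, c in conc.items():
    m = -np.polyfit(x, np.log(np.array(c)), 1)[0]  # slope of ln C vs x
    k = v_s / (4.0 * alpha_x) * ((1.0 + 2.0 * alpha_x * m) ** 2 - 1.0)
    print(f"{coc:12s} k = {k:.3f} per year")
# TCE ~0.223, cis-1,2-DCE ~0.157, VC ~0.059 (compare Table 5.2)
```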
Generally, the more highly chlorinated the compound is, the more rapidly it is reduced
by reductive dechlorination (Vogel and McCarty 1985, 1987). Therefore, it is possible for
daughter products to increase in concentration downgradient of the source zone before
they decrease (Aziz et al. 2000). This is illustrated with the site-specific examples for cis-
1,2-DCE and VC. Table 5.2 shows the biotransformation rate constants for the three COCs

TABLE 5.1
Monitoring Wells and Site-Specific Information Used for Model Calibration

Well ID   Distance from Source (ft)   Concentration (μg/L)
                                      TCE        DCE        VC
MW-1      2                           3200       280        2.3
MW-2      150                         2100       450        87
MW-3      350                         67         75         <1 (0.5)
MW-4      950                         <1 (0.5)   <1 (0.5)   <1 (0.5)

Note: Concentration of one-half detection limit assumed for nondetect values is given in parentheses.

TABLE 5.2
Biotransformation Rate Constants (per Year) Calculated by BIOCHLOR from the Field Data versus Final Calibrated Values and Their Equivalent Constituent Half-Lives (in Years)

Constituent   Buschek-Alcantar Method   Final Model-Calibrated   Equivalent Half-Life
TCE           0.223                     0.224                    3.09
DCE           0.157                     0.157                    4.41
VC            0.059                     0.277                    2.50

calculated by the model from the field observations together with the final calibrated val-
ues. During the model calibration, the initial rate constants for TCE and DCE were not
changed, whereas the rate constant for VC was adjusted to better match the nondetect field
concentration observed further downgradient from the source (i.e., MW-3 is nondetect for
VC). Note that the biotransformation rate constant, ks, and the equivalent constituent half-
life, t1/2, are related as follows: ks = ln 2/t1/2.
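This conversion is trivial to script; the quick check below reproduces the equivalent half-lives in Table 5.2 from the final calibrated rate constants:

```python
import math

# Final calibrated rate constants from Table 5.2 (1/yr)
for coc, ks in (("TCE", 0.224), ("DCE", 0.157), ("VC", 0.277)):
    print(f"{coc}: half-life = {math.log(2.0) / ks:.2f} years")
# TCE 3.09, DCE 4.41, VC 2.50 -- matching Table 5.2
```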
The industrial site aquifer porous media consist primarily of fine to medium sands
with spatially varying content of silt and clay. The prevalent geochemical conditions that
drive natural biotransformation processes and, therefore, the estimated rate constants
may change spatially as the COCs migrate with the groundwater. The same is true with
sorption of COCs onto porous media solids. This variability is especially evident in case
of VC, as seen in Table 5.1 (see also Figure 5.7—BIOCHLOR input screen, Field Data for
Comparison). It appears that VC is still degrading faster and/or retarding more than what
is simulated between the monitoring wells MW-2 and MW-3.


Source zone. The source of COCs dissolved in groundwater of the industrial site aquifer
is assumed to be in close proximity to monitoring well MW-1, which historically has the
highest dissolved concentration of TCE. These concentrations in excess of 1% to 10% solu-
bility of TCE (which is approximately 1000 mg/L) are indicative of the possible presence
of free-phase or residual-phase dense nonaqueous phase liquids (DNAPLs; Pankow and
Cherry 1996). The source area in the model is assumed to be a plane 200 ft wide and 25 ft
deep, which is equal to the saturated thickness of the industrial site aquifer at MW-1 (see
Figure 5.6).
Free-phase or residual-phase DNAPLs, such as TCE, can act as continuing sources of
groundwater contamination. The rate at which constituents in the DNAPL or source dis-
solve into the groundwater ultimately determines the concentration of dissolved contam-
inants in the plume and the lifetime of a dissolved plume. In BIOCHLOR Version 2.2,
the user has the option of modeling a source with constant or decaying concentration
over time. Source decay is modeled as a first-order process. This approach captures all
processes that can lead to depressed aqueous-phase concentrations in the source zone,
including a decreased dissolution rate from the DNAPL, biotransformation, or any other
degradation processes that follow first-order or pseudo first-order rates.
The value of the source decay constant is calculated by plotting temporal aqueous con-
centrations in a source area expressed with their log values and determining the slope of
the linear trend as shown in Figure 5.8 for monitoring well MW-1 (this can be easily done
in Microsoft Excel, for example). As can be seen from the field data over the last eight
years, there is a significant decreasing trend in concentrations of TCE, indicating a decay-
ing source. The source decay constant (ks) is different from the biotransformation constant
(λ). Once again, the overall source decay constant describes how the concentration in a
source-area well decreases as the DNAPL is depleted of the COC through dissolution,
volatilization, decay, and other mechanisms, whereas λ is the biotransformation rate con-
stant for a constituent in the dissolved phase plume downgradient of the source.
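The same slope determination can be scripted rather than performed in a spreadsheet. In the sketch below the concentration record is hypothetical, since the actual MW-1 data appear only graphically in Figure 5.8, but the least-squares procedure is identical:

```python
import numpy as np

# Hypothetical source-area TCE record (years since first sample, ug/L);
# the actual MW-1 data are shown only graphically in Figure 5.8
t = np.array([0.0, 1.5, 3.0, 4.5, 6.0, 7.5])
c = np.array([6500.0, 6100.0, 5800.0, 5500.0, 5200.0, 4850.0])

# Slope of the ln(C)-versus-time regression gives -ks
slope, _ = np.polyfit(t, np.log(c), 1)
ks = -slope
print(f"Source decay constant ks = {ks:.3f} per year")
```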
The source decay constant (ks) is first determined by the user using site-specific concen-
tration values for a monitoring well assumed to be representative of (closest to) the source
zone, such as MW-1 in this case. However, the user is restricted to ks values that are less
than (1/R)(λ + Vs/4α) to prevent unstable complex solutions. A default safety factor of 20% is
also incorporated. Based on the field data, the source decay constant for TCE is calculated
to be between 0.036 and 0.049 (Figure 5.8). As can be seen, the higher calculated value is
obtained when an anomalously low observed concentration at MW-1 in October 2004 is
removed from the data set. The trend line for this case has a significantly higher regression

FIGURE 5.8
Determination of the source decay constant. Explanation in text.

coefficient (better fit). However, because of the model restrictions and the 20% safety factor,
neither value can be used for the given set of input parameters (the message displayed by
the model states that ks must be less than 0.036). Therefore, the value of 0.035 is selected for
the model simulations. This results in an apparent overprediction of the simulated year-
2011 concentration of TCE at MW-1 (see Figure 5.9).
Initial source concentrations. In order to calibrate the model to the field-observed concen-
trations of COCs in 2011, the concentrations of three COCs for the source area represented
by MW-1 had to be assumed for some time prior to 2011 because of the transient (time-
dependent) nature of both the source strength and the fate and transport of dissolved
constituents. The initial source concentrations and the model run time were estimated
based on the calculated groundwater velocity of approximately 16 ft/year, the attenuating
effects, and the extrapolated trend lines of the TCE concentration at MW-1 (Figure 5.8). The
model run time of 15 years prior to 2011 and the initial source concentrations for TCE, cis-
1,2-DCE, and VC of 14,000, 790, and 5 µg/L (or ppb), respectively, were ultimately selected
during model calibration. Interestingly, the initial TCE concentration is approximately the
average of the two values shown in Figure 5.8 for the two different slopes. Possibly because
of the high historic detection limits for VC (10 ppb), none of the historic samples at MW-1

FIGURE 5.9
Model-calculated concentrations of COCs versus the field-observed concentrations in 2011 (calibration run–
thick blue lines) and the predicted future concentrations of COCs at the industrial site for years 2046, 2096, and
2196. Note log-scale and that all constituents are nondetect 950 ft from the source, but are plotted at one-half
the detection limit.

had detectable concentrations of VC. For estimation purposes, the VC concentration
was represented with 5 µg/L, which is one-half of the historically reported detection limit.
Model results. Figure 5.9 shows the model-calculated concentrations of COCs versus the
field-observed concentrations in 2011 (calibration run–thick blue lines) and the predicted
future concentrations of COCs at the industrial site for years 2046, 2096, and 2196. Note,
however, that the U.S. EPA states that, “The longer the time frame simulated, the greater
the uncertainty associated with the modeling result. While the time to reach remedial

objectives at all points in the Joint Site groundwater will likely be on the order of 100 years,
simulations greater than the order of 50 years into the future are generally not reliable or
useful” (U.S. EPA 1999). Discussion on reasonable timeframes for groundwater remedia-
tion and supported modeling is provided in Chapter 8.
In general, because of the source decay constant restrictions (including the added safety
factor of 20%), in most cases, BIOCHLOR overpredicts by default the dissolved concentra-
tions at monitoring wells closest to the source zones. At the industrial site, this is evident
when comparing the field data from 2011 and the model-calculated concentrations of TCE
at MW-1: 3200 µg/L versus approximately 8000 µg/L (see Figure 5.9). This also means that
the model predictions for the future fate and transport of COCs at the industrial site are
conservative because the source decay rate is simulated to be lower than that currently
observed.
As mentioned earlier, the prevalent physical and geochemical conditions that drive
natural fate and transport processes may change spatially as the COCs migrate with the
groundwater. Currently, BIOCHLOR does not provide for simulation of spatially variable
fate-and-transport parameters. Biotransformation rate constants can vary in two zones but
cannot be simulated together (at the same time) with a decaying source. Nevertheless, the
calibrated model for the industrial site shows a reasonable degree of accuracy in simulat-
ing the overall field-observed distribution of all three COCs.
Sensitivity analysis. The BIOCHLOR model has several built-in restrictions: most notably
the use of one common retardation factor for all COCs and a lower-than-observed source
decay rate. Combined, these restrictions contribute to overall conservative model predic-
tions of constituent concentrations when compared to the field-observed data. This is also
one of the reasons why certain parameters have different impacts on individual COC con-
centrations. For example, the common baseline value of the calibrated retardation factor,
R = 2.10, is used for all three COCs, but it is significantly lower than the more reasonable
3.21 initially calculated by the model for TCE (see Figure 5.7). This means that using a
higher R would likely result in a better match for TCE, but the match for DCE and VC may
be less favorable.
Because of its simplicity, BIOCHLOR can be used very efficiently to test the sensitiv-
ity of all input parameters. This is achieved by changing the values of one parameter at
a time while keeping the calibrated values of all other parameters constant. Figure 5.10

[Figure 5.10 graphic: model-predicted VC concentration (μg/L, log scale) versus distance from source (ft) for year 2046, plotted with the field data from the site for year 2011; separate curves are shown for longitudinal dispersivities αx = 4.5, 45, and 90 ft, and the MCL of 2 μg/L is marked.]
FIGURE 5.10
Model sensitivity analysis for dispersivity.

illustrates this process for the dispersivity and the model-predicted concentrations of VC
in year 2046. The initially assumed longitudinal dispersivity (Alpha x) of 45 ft, which was
also accepted as the calibrated (baseline) value, was changed to 4.5 and 90 ft. The predicted
length of the 2 ppb [maximum contaminant level (MCL) for VC] plume edge is quite dif-
ferent for the three values of Alpha x, indicating that the dispersivity is a sensitive param-
eter (a detailed discussion on dispersion is provided in Section 5.4.2). Interestingly, the
predicted (high) concentrations for the first 400 ft are almost the same in all three cases,
illustrating the general importance and sensitivity of predicting low concentrations typi-
cal of MCLs at greater distances downgradient of the contaminant source.
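The same one-parameter-at-a-time logic can be scripted around any analytical solution. The sketch below wraps the steady-state centerline function introduced in Section 5.3.1 in a loop over αx, using illustrative (not site-calibrated) values, to show how strongly the predicted far-field concentration responds to the choice of dispersivity:

```python
import math

def centerline_steady(c0, x, v, alpha_x, lam):
    # Steady-state centerline screening solution (see Section 5.3.1)
    return c0 * math.exp((x / (2.0 * alpha_x)) *
                         (1.0 - math.sqrt(1.0 + 4.0 * lam * alpha_x / v)))

# Vary only the longitudinal dispersivity; hold all else constant
# (illustrative values, not the calibrated BIOCHLOR inputs)
for alpha_x in (4.5, 45.0, 90.0):
    c_near = centerline_steady(1000.0, 100.0, 16.2, alpha_x, 0.2)
    c_far = centerline_steady(1000.0, 950.0, 16.2, alpha_x, 0.2)
    print(f"alpha_x = {alpha_x:5.1f} ft: "
          f"C(100 ft) = {c_near:8.2f}, C(950 ft) = {c_far:8.4f} ug/L")
```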
For the benefit of the reader, the BIOCHLOR model file used in this example is pro-
vided on the companion DVD. For the file to run, the user must have a working version of
Microsoft Excel and enable macros. Note that the program has an online manual describ-
ing all input parameters and offering common ranges of their literature values.

5.3.2 Numerical Models


As opposed to analytical models, numerical models describe the entire flow field of inter-
est at the same time, providing solutions for as many spatial data points as specified by the
user. The porous media volume (model domain) is subdivided into many small volumes
(referred to as cells or elements; see Figure 5.11), and the basic differential groundwater
flow (and fate and transport) equation is defined for each cell. This system of differen-
tial equations is replaced (approximated) by the systems of algebraic equations, so that
the flow field is represented by x equations with x unknowns where x is the number of
cells. The system of algebraic equations is solved numerically through an iterative process,
leading to the numerical model designation. Finite difference and finite-element methods
are two common numerical methods of solving the groundwater flow equations. Which
method will be used depends largely on the type of problem and the knowledge of the
modeler. Finite elements more easily describe irregular model boundaries and internal
boundaries, such as faults. They are also more appropriate in handling point sources and
sinks and large variations in the water table. However, finite difference models are easier

FIGURE 5.11
Three-dimensional model domain in MODFLOW divided into layers and cells.

to program, require less data, are friendlier for data input, and generally exhibit better
mass conservation than finite-element alternatives.
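To make the idea of replacing the differential flow equation with iteratively solved algebraic equations concrete, the following sketch solves the steady-state, two-dimensional Laplace equation for hydraulic head in a homogeneous, isotropic aquifer on a uniform grid, with fixed heads on two opposite sides and no-flow boundaries on the other two. This is a toy illustration of the finite difference method, not MODFLOW's formulation or solver:

```python
import numpy as np

# Steady-state 2D Laplace equation for head in a homogeneous,
# isotropic aquifer: fixed heads left/right, no-flow top/bottom
nrow, ncol = 20, 40
h = np.full((nrow, ncol), 95.0)
h[:, 0], h[:, -1] = 100.0, 90.0  # fixed-head (Dirichlet) boundaries

for _ in range(10000):
    h_old = h.copy()
    # Five-point finite difference stencil: each interior head is
    # the average of its four neighbors (Jacobi iteration)
    h[1:-1, 1:-1] = 0.25 * (h_old[:-2, 1:-1] + h_old[2:, 1:-1] +
                            h_old[1:-1, :-2] + h_old[1:-1, 2:])
    # No-flow (zero-gradient) boundaries along the top and bottom
    h[0, 1:-1] = h[1, 1:-1]
    h[-1, 1:-1] = h[-2, 1:-1]
    if np.abs(h - h_old).max() < 1e-8:  # convergence check
        break

print(np.round(h[nrow // 2, ::8], 2))  # near-linear 100 -> 90 profile
```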
Numerical models can be one-, two-, and three-dimensional; can model the vadose zone, the
saturated zone, or both (variably-saturated models); and can model one-phase or multiple-
phase flow (e.g., groundwater, air/vapor, and DNAPLs). They are divided into two main
groups: (1) models of groundwater flow and (2) models of contaminant fate and transport. The
latter ones cannot be developed without first solving the groundwater flow field, that is, they
use the solution of the groundwater flow model as the base for fate-and-transport calculations.
Some of the more common questions that fully developed and calibrated groundwater flow
and fate-and-transport models may help answer are (Kresic 2009) as follows:

• What is the safe (sustainable) yield of the aquifer portion targeted for groundwater
development?

• How many wells, and at what locations, are needed to provide a desired flow rate?
• What is the impact of current or planned groundwater extraction on the environ-
ment (e.g., on surface stream flows, wetlands)?
• Is there a potential for saltwater intrusion from an increased groundwater
pumpage?
• Where is the contaminant flowing to, and/or where is it coming from?
• How long will it take the contaminant to reach the water table or potential
receptors?
• What will the contaminant concentration be once it reaches a receptor?
• How would a remedial intervention affect contaminant concentrations in the
source area and in the downgradient plume?

Once these questions are addressed by the model(s), many new ones may pop up, which is
exactly what the purpose of a well-documented and calibrated groundwater model should
be; it should answer all kinds of possible questions related to groundwater flow and fate
and transport of contaminants. Here are just two of the common important questions
often involving a multimillion-dollar price tag and the possibility of a protracted lawsuit:
Who is responsible for the groundwater contamination? And what is the most feasible
groundwater remediation option?
MODFLOW (a modular three-dimensional finite-difference ground-water flow model)
developed at the USGS by McDonald and Harbaugh (1988), Harbaugh and McDonald (1996),
Harbaugh et al. (2000), and Harbaugh (2005) is considered by many to be the most reliable,
verified, and utilized groundwater flow computer program today. There are several inte-
grated, user-friendly preprocessing and postprocessing graphical user interface (GUI) packages
for MODFLOW that greatly facilitate data input and visualization of modeling output (results).
Groundwater Vistas (Rumbaugh and Rumbaugh 2007; http://www.groundwatermodels.com/),
Groundwater Modeling System (GMS; http://www.ems-i.com/), Visual
MODFLOW (http://www.waterloohydrogeologic.com/), and Processing MODFLOW
(Chiang and Kinzelbach 2001; http://www.pmwin.net/pmwin5.htm) are the four com-
mercial (proprietary) modeling processors most widely used in modern hydrogeology.
They are all very similar in capabilities but have intricacies in operation that distinguish
them from one another. Additionally, different processors support different code alterna-
tives or add-ons to standard MODFLOW. In general, Processing MODFLOW, Groundwater
Vistas, and Visual MODFLOW stick with MODFLOW and its finite-difference companion

modules developed by the USGS and for government agencies. This includes MT3D and
RT3D for three-dimensional saturated-zone fate-and-transport modeling and particle
tracking codes. Conversely, GMS offers alternative software, including two- and three-
dimensional finite-element models (SEEP2D and FEMWATER, respectively, the latter of
which supports variably saturated applications). All four GUIs also offer versatile 3D visu-
alization of model input and output and animation of results.
A new GUI program for MODFLOW, called ModelMuse, has been recently released by
the USGS. This free public-domain program, available for download at
http://water.usgs.gov/software/lists/groundwater, will likely dramatically change the commercial ground-
water modeling software market. In addition to MODFLOW, this version of ModelMuse
also supports PHAST–A Program for Simulating Groundwater Flow, Solute Transport, and
Multicomponent Geochemical Reactions. The USGS also provides a number of ModelMuse
training videos on its website.

The primary differences between various GUI programs are visual in nature, and the
question of which one is best is subjective and more a matter of personal preference. Some
hydrogeologists may prefer Processing MODFLOW and Groundwater Vistas because
of their transparency, flexibility, and affordable pricing, while others may prefer GMS
because of its finite-element capabilities or Groundwater Vistas because it has the best
interface with GIS. There is no correct answer to the question of which program is supe-
rior. The authors recommend that the hydrogeologist explores each of them to determine
which suits her or him best and meets the needs of the project. This includes the availabil-
ity and cost of technical support provided by the vendor as all these programs will inevita-
bly have bugs and various options not readily explained in their manuals and user guides.

5.4 Numerical Modeling Concepts


The success of translating a CSM into a numerical model will depend on both the experience
of the user and the capabilities and limitations of the selected computer program. The main
advantage of classic MODFLOW and the companion fate-and-transport models such as MT3D
and RT3D is their broad user base and the continuous updates, including frequent introduc-
tion of new modules and numerical solvers. Most modeling concepts today across various
computer programs are therefore indirectly or directly based on those first implemented in
MODFLOW such that an experienced MODFLOW user usually has little trouble when transi-
tioning to another program. What also generally makes groundwater modeling more efficient
than ever before is that modern-day computers and computer operating systems do not pose
model size limitations for most practical applications. Models can have literally hundreds of
layers and millions of cells and still be solved relatively quickly with desktop PCs.
However, despite its numerous advantages, even classic MODFLOW has some serious
conceptual limitations that many practitioners have to deal with on a routine basis. Four
that immediately come to mind are: (1) the requirement that all model layers must be
continuous throughout the model domain, (2) model instability when trying to simulate
large vertical displacements caused by faults or artificial structures, (3) the requirement that all cells in the
model must be rectangular with rows and columns extending from one edge of the model
to the other, and (4) model instability when simulating contacts between porous media
with highly contrasting hydraulic conductivities. While the first listed limitation (shown
in Figure 5.12 with a real-world example) is relatively easily dealt with as illustrated in


FIGURE 5.12
All layers in classic MODFLOW must be continuous throughout the model domain.

Figure 5.13, the other three are more serious when trying to accurately simulate complex
3D geologic relationships and discontinuities and water fluxes between cells.
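
A hedged sketch of the first workaround (illustrated in Figure 5.13) in Python, with hypothetical arrays and values: cells where a layer is physically absent keep a nominal thickness in the model but inherit the hydraulic conductivity of the layer to which they more properly belong.

    import numpy as np

    k_layer1 = np.full((4, 4), 25.0)   # K of layer 1 where present (ft/day)
    k_layer2 = np.full((4, 4), 0.5)    # K of the underlying layer 2 (ft/day)

    absent = np.zeros((4, 4), dtype=bool)
    absent[1:3, 1:3] = True            # cells where layer 1 is actually missing

    # Missing layer-1 cells are still modeled with a nominal thickness, but
    # are assigned the K of the underlying layer so that they behave, for all
    # practical purposes, as part of layer 2.
    k_layer1_model = np.where(absent, k_layer2, k_layer1)
    print(k_layer1_model)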
Fortunately, at the time of this writing, a true breakthrough in groundwater modeling
has taken place in the form of a fundamentally new program called MODFLOW-USG
(see Section 5.7 for a detailed description). MODFLOW-USG was developed by Dr. Sorab
Panday of AMEC and retains full compatibility with previous versions of MODFLOW

Area where Layer 2


is missing

Layer 1

Layer 2

Layer 3

K1
K=K1
K2 K=K3
K3 K=(K1+K3)/2

FIGURE 5.13
Principle of modeling discontinuous layers. The area where the actual layer is missing is still modeled as hav-
ing a certain thickness, usually similar to the adjacent cells where this layer is present. However, the hydraulic
conductivity (K) of the missing cells is adjusted to create the effect of discontinuity. For example, the hydraulic
conductivity of discontinuous cells in layer 1 may be assigned the same value as that of the underlying cells in
layer 2 to which the cells more properly belong. Note that, following the rule of thumb, the thickness of succes-
sive cells should not increase by more than 1.5 times in order to avoid possible model instability. (Modified from
Kresic, N., Hydrogeology and Groundwater Modeling, Second Edition. CRC/Taylor & Francis Group, Boca Raton, FL,
807 pp., 2007.)

while taking advantage of unstructured grids (USG) and finite-volume numerical solu-
tions. The program is released in the public domain and is supported exclusively by the
latest version of Groundwater Vistas. It enables hydrogeologists to accurately translate
even the most complex CSMs into a numerical environment, thus eliminating the need
for various surrogate modeling solutions. This includes flow in fractured rock and karst
aquifers.
For a detailed technical description of groundwater modeling principles, the reader is
referred to the work of Kresic (2007). The following discussion addresses some of the most
important concepts in groundwater modeling, emphasizing translation of the CSM and
topics of interest to a wide range of stakeholders.

5.4.1 Initial Conditions, Boundary Conditions, and Water Fluxes



In the world of groundwater modeling, the term initial conditions refers to the three-
dimensional distribution of observed hydraulic heads within the groundwater system,
which is the starting point for transient (time-dependent) modeling simulations. These
hydraulic heads (the water table of unconfined aquifers and potentiometric surface of
confined aquifers) are the result of various boundary conditions acting upon the system
during a certain time period. The initial distribution of the hydraulic heads for transient
modeling can also be the calibrated solution of a steady-state model. The steady-state solu-
tion is defined as the closest match to the field-observed hydraulic heads when assuming
constant boundary conditions and no change in storage. In general, any set of field-
measured or calibrated hydraulic heads can serve as the starting point for further analy-
sis, including for transient groundwater modeling. Ideally, the initial conditions should
be as close as possible to the state of a long-term equilibrium between all natural water
inputs and outputs from the system or with as little anthropogenic (artificial) influence as
possible, the so-called predevelopment conditions (Figure 5.14). However, in many cases,
there are insufficient hydraulic head data for assessment of such natural conditions, which
causes various difficulties with data interpolation and extrapolation, including uncertain-
ties associated with any assumed predevelopment boundary conditions (Kresic 2009).
Whatever the case may be regarding the selection of initial conditions, contouring of the
hydraulic head data is the first important step (see Chapter 4).
It has become standard practice in hydrogeology and groundwater modeling to describe
the inflow and outflow of water from the model domain with three general boundary con-
ditions: (1) known flux, (2) head-dependent flux, and (3) known head, where head refers to
the hydraulic head. These conditions are assigned to both external and internal boundaries
or all locations and surfaces where water is entering or leaving the model. One example of
an external boundary, sometimes overlooked as such, is the water table of an unconfined
aquifer that receives recharge coming from the vadose (unsaturated) zone. This estimated
or measured vertical flux of water into the model is applied as a recharge rate over certain
surface areas. It is expressed in a model-consistent unit of length (e.g., feet or meters) per
unit of time (e.g., days), which, when multiplied by the area, gives the flux of water as
volume per time. A large spring draining an aquifer is another example of an external
boundary with a known flux. An example of an internal boundary with a known flux,
where water is also leaving the model, is a water well with the pumping rate expressed in
model-consistent units (e.g., cubic feet per day or cubic meters per day).
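
The unit bookkeeping for such known-flux boundaries is simple but a frequent source of error; a short illustration in Python with hypothetical values:

    # Recharge: a rate in length/time applied over an area yields volume/time.
    recharge_rate = 12.0 / 12.0 / 365.0     # 12 in/yr expressed as ft/day
    cell_area = 250.0 * 250.0               # hypothetical 250 ft x 250 ft cell
    q_recharge = recharge_rate * cell_area  # ft^3/day entering the water table
    print(f"Recharge flux per cell: {q_recharge:.1f} ft^3/day")

    # A pumping well is a known (negative) flux in the same volume/time units.
    q_well = -50000.0                       # ft^3/day leaving the model
    print(f"Well flux: {q_well:.0f} ft^3/day")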
It is obvious that water can enter or leave the model in a variety of natural and artificial
ways, depending upon hydrogeologic, hydrologic, climatic, and anthropogenic conditions
specific to the system of interest. In many cases, these water fluxes cannot be measured

FIGURE 5.14
Potentiometric surface of the Upper Floridan aquifer in the area of Savannah, Georgia, and Hilton Head Island,
South Carolina. Top: predevelopment conditions. Bottom: recorded in May 1998, showing the influence of major
groundwater withdrawal for water supply in Savannah. Contour interval is 10 ft, contour lines dashed where
approximate; arrows show general directions of groundwater flow. (Modified from Provost et al. 2006.)

directly and have to be estimated or calculated externally to the model. The simplest
boundary condition is one that can be assigned to a contact between an aquifer and a
low-permeability or impermeable porous medium, such as an aquiclude. Assuming there
is no groundwater flow across this contact, it is called a zero-flux or no-flow boundary.
Although this no-flow boundary condition may exist in reality, it is very important not
to assign it indiscriminately just because it is convenient. For example, contact between
unconsolidated alluvial sediments and surrounding bedrock is often modeled as a zero-
flux boundary, even though there may be some flow across this boundary in either direc-
tion. Without site-specific information on the underlying hydrogeologic conditions, a
zero-flux assumption may lead to erroneous interpretations of groundwater flow or fate
and transport of contaminants.
Recording hydraulic heads at external or internal boundaries and using them to deter-
mine water fluxes indirectly, rather than assigning them directly, is a very common mod-
eling practice. The hydraulic heads provide for determination of the hydraulic gradients,
which, together with the hydraulic conductivity and the cross-sectional area of the bound-
ary, give the groundwater flow entering or leaving the model across that boundary. This
boundary condition, expressed by the hydraulic heads on either side of the boundary and
the hydraulic conductance of the boundary, is called head-dependent flux. One example of
a head-dependent flux boundary would be a river with riverbed sediments of a hydraulic
conductivity different than that of the underlying aquifer (see Figure 2.24).
When not much is known about the real physical characteristics of a boundary, or for
reasons of simplification, the boundary may be represented only by its hydraulic head:
the so-called known-head, fixed-head, or equipotential boundary. River or lake stages,
without considering riverbed (lakebed) conductance, are examples of such a boundary.
The flux of water across the boundary (Q) is calculated using Darcy’s equation: Q = AKi,
in which A is the cross-sectional area of the boundary, i is the hydraulic gradient between
the boundary (river or lake) and the aquifer, and K is the hydraulic conductivity of the
aquifer porous media. The main conceptual problem with these boundaries is that, as the
hydraulic head in the aquifer decreases, the flow entering the system from the boundary
erroneously increases as a result of the increased hydraulic gradient caused by the fixed-
head condition. This is of particular concern when performing transient modeling, which
takes into account time-dependent changes that can affect the system. It is important to
note that this problem can also occur with a head-dependent flux boundary, such as the
MODFLOW river boundary. If one prefers to model a certain boundary with a fixed head
condition, the boundary hydraulic head should be adjusted in the model for different time
periods based on available field information. This option is available in MODFLOW by
using the variable-head boundary module.
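
The pitfall described above can be illustrated with a few lines of Python applying Darcy's equation to a fixed-stage boundary (all numbers are hypothetical):

    K = 50.0            # aquifer hydraulic conductivity (ft/day), assumed
    A = 1000.0 * 20.0   # boundary cross-sectional area (ft^2), assumed
    stage = 100.0       # fixed river stage (ft), never updated by the model
    L = 500.0           # distance over which the gradient develops (ft)

    # As pumping lowers the aquifer head, the gradient toward the fixed stage
    # grows, and the computed boundary inflow Q = A*K*i grows without limit.
    for aquifer_head in (99.0, 95.0, 90.0):
        i = (stage - aquifer_head) / L
        Q = A * K * i
        print(f"aquifer head {aquifer_head} ft -> inflow {Q:,.0f} ft^3/day")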
When deciding on boundary conditions, it is essential to work with as many hydraulic
head observations as possible, in both space and time, because fluctuations in the shape
and elevations of the hydraulic head contour lines directly reflect various water inputs
and outputs along the boundaries (see Figure 4.41). Additionally, it is desirable to have
hydraulic head observations from periods of groundwater stress caused by pumping or
infiltration as these can better inform the influence of hydraulic boundaries such as sur-
face water features.
Accurate representation of surface water–groundwater interactions is often the most
critical when selecting boundary conditions in alluvial basins and floodplains of surface
streams. A river may be intermittent or perennial, it may lose water to the underlying aqui-
fer in some reaches and gain water in others, and the same reaches may behave differently
depending on the season (e.g., see Figures 2.19, 2.23, and 4.40). The hydraulic connection

between a river and its associated aquifer may be complete without any interfering influ-
ence of riverbed sediments. In some cases, however, a well pumping close to a river may
receive little water from it because of a thick layer of fine silt along the river channel or
simply because there is a low-permeability sediment layer separating the aquifer and
the river. In these situations, it would be completely erroneous to represent the river as a
constant-head (equipotential) boundary directly connected to the aquifer. Such a bound-
ary in a numerical model would essentially act as an inexhaustible source of water to the
aquifer (or a water well) regardless of the actual conditions as long as the hydraulic head
in the aquifer is lower than the fixed river stage.
The ultimate reason for selecting any of the three general boundary types is the deter-
mination of the overall water budget of the modeled groundwater system (see Figure 2.33).
The sum of all water fluxes entering and leaving the model through its boundaries has to
be equal to the change in groundwater storage. When utilizing groundwater models for
aquifer evaluation or management, the user has to determine (measure, calculate) flux
to be assigned to the known-flux boundaries. In case of the other two boundary types
(head-dependent flux and fixed-head), the model calculates the flux across the boundaries
using other assigned parameters—hydraulic heads at the boundary and inside the model,
boundary conductance, and hydraulic conductivity of the porous media. As discussed
later, the match of the water budget is the ultimate measure of the groundwater model’s
calibration and success.
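
A toy illustration of this budget closure in Python (all fluxes are assumed values); the percent discrepancy computed this way is a standard diagnostic reported by most modeling codes:

    inflows = {"recharge": 171.0, "river_leakage_in": 540.0}   # ft^3/day, assumed
    outflows = {"wells": 480.0, "river_leakage_out": 210.0}    # ft^3/day, assumed
    storage_change = 20.0                                      # ft^3/day, assumed

    imbalance = sum(inflows.values()) - sum(outflows.values()) - storage_change
    percent_error = 100.0 * imbalance / sum(inflows.values())
    print(f"Budget discrepancy: {imbalance:.1f} ft^3/day ({percent_error:.2f}%)")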

5.4.2 Dispersion and Diffusion


Dispersion in the aquifer is a process of mixing between the advancing fluid, such as a con-
taminant dissolved in groundwater and being carried by the flow of the groundwater, and
the fluid being displaced (e.g., clean groundwater). Dispersion always takes place, and the
main result of this mixing is a decrease in concentration of the dissolved contaminant and the
appearance of the chemical at a point downgradient earlier than would be expected based on
seepage velocity alone. In contrast to surface water systems, where dispersion is caused by tur-
bulent flow, dispersion and the associated mixing in groundwater are attributed to the winding, or
tortuous, path of fluid flow through porous media (Hemond and Fechner-Levy 2000).
Dispersion is porous medium–specific and is considered independent of the flowing
fluid. In a more narrow sense, scientists and engineers define dispersion as the sum of two
processes: mechanical dispersion and diffusion. Defined in this way, dispersion is often
called hydrodynamic dispersion. Mixing resulting from mechanical dispersion occurs
along the main direction of groundwater flow (this is called longitudinal dispersion) and
perpendicular to the main flow direction (transverse dispersion). There is also a third
main direction of mixing called vertical dispersion. Dispersion causes some solute par-
ticles to advance faster than the bulk of contamination, thus creating a halo (cloud) of low
concentrations around the main portion of the plume.
Most researchers, practitioners, and government agencies, such as the U.S. EPA (e.g., see
Wiedemeier et al. 1998; Aziz et al. 2000), have concluded that dispersion is dependent on
the length of solute flow being observed: the greater the plume length, the larger the value
of longitudinal dispersivity. This dependency is termed field-scale dispersion or macro-
dispersion and is primarily related to the compounding effects of heterogeneity at larger
scales (Hemond and Fechner-Levy 2000). There are still discussions regarding the nature
of this relationship (e.g., is it linear or nonlinear, is there a practical limit after which the
dispersion stabilizes and does not increase with the increasing plume length?), but there is
a general agreement that dispersion depends on both the scale and the time of flow.

Various graphs based on various experiments at different scales conducted in the labo-
ratory and in the field, including various model calibrations, show values of longitudinal
dispersivity ranging from as small as 0.01 m to as large as 5500 m or more. One such often
cited graph is shown in Figure 5.15. The widely used rule of thumb, suggested by the U.S.
EPA, is that the longitudinal dispersivity in most cases can be initially estimated as
one-tenth of the plume length (Wiedemeier et al. 1998; Aziz et al. 2000). Based
on this guidance, for example, if the plume length is 300 ft, the initial estimate of the
longitudinal dispersivity would be about 30 ft. There are also other suggested empirical
relationships relating the plume length and the longitudinal dispersivity. Recognizing the
limitations of the available and reliable field-scale data on dispersivity, the U.S. EPA also
suggests that the final values of dispersivities used in fate-and-transport models should
be based on calibration to the site-specific (field) concentration data. The main reason why
very few (if any) projects for practical groundwater remediation purposes consider field
determinations of dispersivity is that it would require a large number of monitoring wells
and application of large-scale tracer tests. Such studies are expensive by default and are
usually not feasible because of the generally slow movement of tracers in intergranular

[Figure 5.15 graphic: log-log plot of longitudinal dispersivity (m) versus scale (m); dashed line = 10%-of-scale relation (Pickens and Grisak 1981); solid line = Xu and Eckstein (1995) regression, longitudinal dispersivity = 0.83[log10(scale)]^2.414; symbol size indicates reliability (data from Gelhar et al. 1992).]

FIGURE 5.15
Longitudinal dispersivity versus scale data reported by Gelhar et al. (1992). Data include Gelhar’s reanalysis of
several dispersivity studies. Size of the circle represents general reliability of dispersivity estimates. Location
of 10% of the scale linear relation plotted as a dashed line (Pickens and Grisak 10% rule of thumb). Xu and
Eckstein’s regression shown as a solid line. (From Aziz, C. E. et al., BIOCHLOR: Natural Attenuation Decision
Support System v. 1.0: User’s Manual. EPA/600/R-00/008. U.S. Environmental Protection Agency, Cincinnati,
OH, 2000.)

porous media over long distances (i.e., tracer tests may need to last several years to provide
any meaningful information).
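
For screening-level estimates only, the two relations plotted in Figure 5.15 can be coded in a few lines of Python; as emphasized above, final dispersivity values should come from calibration to site-specific concentration data:

    import math

    def alpha_l_rule_of_thumb(plume_length_m):
        # U.S. EPA rule of thumb: longitudinal dispersivity ~ 10% of plume length
        return 0.1 * plume_length_m

    def alpha_l_xu_eckstein(scale_m):
        # Xu and Eckstein (1995) regression from Figure 5.15
        return 0.83 * math.log10(scale_m) ** 2.414

    for L in (10.0, 100.0, 1000.0):   # plume length / scale in meters
        print(f"{L:7.0f} m: 10% rule = {alpha_l_rule_of_thumb(L):6.1f} m, "
              f"Xu-Eckstein = {alpha_l_xu_eckstein(L):5.2f} m")

Note how quickly the two estimators diverge with scale, which is one more reason to treat either value as a starting point rather than a site parameter.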
Unfortunately, although the concept of dispersion has a logical physical explanation
(mixing resulting from tortuous flow), the related quantitative parameter of dispersivity is
just a surrogate without any real scientific justification. As explained in detail by Franke
et al. (1990), if it were possible to generate a model that could account for the actual per-
meability and effective porosity distributions of an aquifer, dispersive transport would
not have to be considered (except for molecular diffusion). Dispersivity cannot be directly
measured or calculated using proven deterministic laws of groundwater flow. For this
simple reason, although routinely used by hydrogeologists, it is the least understood quan-
titative parameter required by numerical fate-and-transport models. It is also the most
subjective parameter because its values are routinely selected from literature without any
additional quantitative analysis.

The graph in Figure 5.15 shows how problematic it is to simply (and blindly) use litera-
ture or rule-of-thumb dispersivity values without considering site-specific conditions or
logic. Unfortunately, this is sometimes done because some stakeholders desire a particular
outcome of the calculations. The following narrative describes a situation from an actual
lawsuit. An expert for the plaintiff produced a final, definitive, and calibrated fate-
and-transport model, which predicted a certain future impact on a water-supply well from
a chemical added to gasoline spilled at a gas station. The distance between the gas station
and the well is thousands of feet, and the extent of the current contaminant plume is lim-
ited to the immediate area of the gas station. With the model, the expert predicted that the
contaminant would start arriving at the water-supply well, with a concentration of 0.2 ppb,
17 years from the present date. At two depositions, the expert stated that the accuracy of
his model was very high and he had a very high level of confidence that the concentration
at the supply well would be detected in 17 years. In addition, because the detection limit
of the chemical at the time of the depositions was exactly 0.2 ppb, the expert was very
confident in his model. Because of all this, the plaintiff was asking millions of dollars
for future damages to the water-supply well. However, because of some legal and other
issues, the judge in the case ruled, in the meantime (not knowing what any expert in the
lawsuit would say in his courtroom in front of the jury), that the future damages could
be considered only if they were certain to occur by a certain date, which was set by the
judge to be several months into the ongoing lawsuit. Having learned this, the plaintiff’s
expert changed the single most important parameter in his already calibrated, definitive,
and very accurate model. He increased the longitudinal dispersivity by 1100%, which then
enabled the contaminant to reach the water-supply well at very low but still detectable
concentrations (0.2 ppb) in just enough time to satisfy the new requirement for future
damages of millions of dollars. When he was asked, at his last-minute deposition one day
before the continuation of the trial and his appearance in front of the jury, why he did it,
the expert witness referred to the graph in Figure 5.15 and said that he made the decision
based on his very considerable professional experience of more than 25 years.
There are at least several points to be made from this case, but the authors will discuss
only those related to the technical rules of thumb and the use of graphs such as the one
in Figure 5.15. Most, if not all, fate-and-transport parameters commonly used in hydro-
geologic calculations act to decrease, or attenuate, a contaminant concentration as it trav-
els dissolved in groundwater. Dispersion is especially important if the contaminant does
not degrade and/or adsorb to aquifer solids. Scientists and agencies that set standards
and rules of thumb almost always do so with caveats as is the case with dispersivity.
Explanations that come with these rules of thumb should be read carefully by practicing

hydrogeologists. For example, in our dispersivity case, the 10% rule of thumb refers to the
existing plumes and their lengths, not some future plumes that are yet to be developed.
Consider this scenario, which defies logic but may be handy for winning millions of dol-
lars, while referring to the graph published in quite a few books (including this one):

1. There is a public supply well 40,000 m away (that is correct, 40 km away) from this
particular gas station. Note that the graph in Figure 5.15 has a horizontal scale of
1000 km.
2. There was a spill of gasoline at this station that happened approximately one
month ago.
3. From the graph in Figure 5.15, it may be concluded that longitudinal dispersiv-
ity for a plume that is 40 km long could be as much as 1 km. (There are two data
points on the graph suggesting so, and it does not matter why illogical data like
these are even included in a scientific graph.)
4. Although there is currently not a 40-km-long plume, I am expecting it to develop
for sure because my water-supply well is that far away.
5. I will create a fate-and-transport numerical model for the saturated zone (because
I am not familiar with the unsaturated zone processes and models, I will assume
that all of the spilled gasoline will immediately reach the water table and be dis-
solved at very, very high concentrations).
6. I will run the model and show that there will be a measurable impact 40 km away
in, for example, six months. I will not show the associated (predicted by the model)
plume map; rather, I will show a graph of concentration versus time at the 40-km
distant well because that is my focus in this legal case.
7. If someone (say, the technical expert for the other side) advises the defendant’s
attorney to ask me if the model was calibrated to some field data between the gaso-
line station and the 40-km distant water-supply well, I will answer that the model
accurately represents fate and transport of the contaminant as expected, based on
my experience.
8. I will keep to myself the somewhat unnerving fact that the model-generated
plume (which I am not going to share with anyone) shows contamination in
some water-supply and other wells, which are several hundred feet away from
the gas station and are currently nondetect for the contaminant, in just a few
days after the spill (after all, the dispersivity in my model is 1 km, so I am not
that surprised). Incidentally, the concealed plume map also shows that the con-
taminant traveled almost 900 ft upgradient of the gas station (not surprisingly).
If the attorney for the other side starts asking me all these questions, I may say
something about numerical dispersion, the salmon effect, or something simi-
larly incomprehensible. I will, however, stick to the graph shown in Figure 5.15
and keep mentioning my professional judgment. (Authors’ note to the reader:
Salmon is an anadromous fish species that swims upstream in rivers in order to
spawn and can jump over high waterfalls as illustrated in Figure 5.16; in other
words, unlike free-flowing surface water or groundwater, salmon can, in fact,
move up gradient or uphill. The salmon effect therefore describes the greatly
exaggerated uphill migration of solutes caused by the inclusion of dispersivity in
numerical models.)

FIGURE 5.16
Artwork by Robert Hines showing Atlantic salmon swimming upstream. (Courtesy of U.S. Fish and Wildlife
Service, available at http://digitalmedia.fws.gov/.)

Unfortunately, the salmon effect is, to varying degrees, present in all numerical fate-and-
transport models that incorporate the dispersivity concept, including MT3D and RT3D.
This is because the governing finite difference equation is symmetrical, allowing for the
numerical upgradient migration of dissolved contaminants. The effect can be minimized
by fine model discretization (use of small cell size) but cannot be eliminated, especially if
the selected dispersivity value is unreasonable as suggested by the model results. This is
illustrated in Figure 5.17 where a model with a very small cell size (2.5 × 2.5 ft) was used
to test two dispersivity values, one of which satisfies common rules of thumb, including
the Péclet number (Pe). One form of this dimensionless number relates the cell size in the
direction of groundwater flow (Δx) and the longitudinal dispersivity (αx) as follows: Pe =
Δx/αx. It is usually recommended that Pe be less than 2–10 to minimize numerical disper-
sion in the model. However, as seen in Figure 5.17, the quite low longitudinal dispersivity
of 5 ft produces impossible results and an excessive salmon effect regardless of the Pe
number of 0.5. Note that a dispersivity of 5 ft would be considered by many as reason-
able and even too low at its face value for the scale of this model. In addition, the cell size
of 2.5 × 2.5 ft would also be considered as too fine by many practicing hydrogeologists
concerned about file size and model run-time. The use of the very low dispersivity value

FIGURE 5.17
Comparison of one-layer numerical fate-and-transport model results using two different values of longitudinal
dispersivity: 5 ft on the left and 0.1 ft on the right. Transverse dispersivity is 1/10 of the longitudinal dispersivity
in either model. The cell size is 2.5 × 2.5 ft. Groundwater flow is from northwest to southeast. The contaminant
is not retarded, and it does not degrade (a conservative tracer).

(0.1 ft) reduces this uphill migration despite the seemingly unreasonable Péclet number
of 25; however, some degree of the salmon effect is still visible. Despite the unavoidable
salmon effect and inconsistency regarding the Péclet number, there is no official literature
or practical technical guidance by the USGS, U.S. EPA, or other federal and state regula-
tory agencies on proper use of the highly questionable dispersivity concept in fate-and-
transport numerical modeling.
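
The grid Péclet check itself is a one-line calculation; the sketch below (Python, using the two dispersivities tested in the Figure 5.17 comparison) shows why the rule of thumb alone is an unreliable guard:

    def grid_peclet(dx, alpha_x):
        # Pe = cell size along flow / longitudinal dispersivity
        return dx / alpha_x

    dx = 2.5   # cell size in the flow direction (ft), as in Figure 5.17
    for alpha_x in (5.0, 0.1):
        pe = grid_peclet(dx, alpha_x)
        flag = "meets" if pe <= 10 else "violates"
        print(f"alpha_x = {alpha_x:4.1f} ft -> Pe = {pe:4.1f} ({flag} the 2-10 rule)")

As discussed above, the run that satisfies the criterion (Pe = 0.5) produces the worse salmon effect, while the run that formally violates it (Pe = 25) gives the more physically plausible plume.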

For the benefit of the reader, input and output files of the numerical groundwater flow
and fate-and-transport model presented above are provided on the companion DVD.
Note that this same model was used to illustrate contouring concepts in Chapter 4 (see
Figure 4.37). The model was created with Processing MODFLOW GUI and can be run
using the freeware software version 5.3.2 available for download at http://www.pmwin.net/software.htm.
Molecular diffusion. Diffusion does not play a significant role in the advective–disper-
sive transport of contaminants dissolved in groundwater as they move freely through
the effective porosity of the aquifer. It is when the groundwater velocity becomes very
low as a result of small pore sizes and very convoluted pore-scale pathways that dif-
fusion may become an important fate-and-transport process. Porosity that does not
readily allow advective groundwater flow (flow under the influence of gravity) but does
allow movement of the contaminant resulting from diffusion is sometimes called dif-
fusive porosity. A dual-porosity medium has one type of porosity that allows preferable
advective transport through it and another type of porosity that does not allow free
gravity flow or supports flow quantity that is significantly smaller than the flow taking
place through the higher effective (advective) porosity. Examples of dual-porosity media
include fractured rock, where advective flow preferentially takes place through fractures,
while the advective flow rate through the rest of the rock mass (the rock matrix) is much
lower or, for all practical purposes, nonexistent. This gradation
depends on the nature of matrix porosity; in some rocks, such as sandstones and young
limestones, matrix porosity may be fairly high, and it may allow a very significant rate of
advective flow, often as high or higher than through the fractures. In most consolidated,
hard rocks, matrix porosity is usually low, less than 5% to 10%, and it does not pro-
vide for significant advective flow. Other examples of dual-porosity media, of particular
interest for the migration of DNAPLs, include fractured clay and residuum sediments. In
some cases, various discontinuities and fractures in such media may serve as pathways
for some advective contaminant transport while the bulk of the sediments may have a
very low effective porosity, which does not allow advective transport. Flow of solutes
with high concentration through the fractures may result in the solute diffusion into the
surrounding matrix.
The preceding discussion explains one of the two factors that have to be present for any
significant diffusion to take place: (1) diffusive or matrix porosity in which the contami-
nant can move and (2) the existence of a high-enough concentration gradient for enough
time that would cause the contaminant to start diffusing into the rock matrix (diffusive
porosity).
A good example is a persistent body of NAPL, resting on or suspended in low-
permeability clay sediment and creating high concentration gradients for long periods of
time. Another example would be a DNAPL body sitting in fractures in limestone for a
long time, thus driving high contaminant concentrations into the rock matrix. If, for any
reason, the concentration gradient reverses itself, such as because of the final dissolu-
tion of the free-phase (residual) DNAPL in fractures and flushing of the fractures with
the incoming uncontaminated groundwater, the contaminant that was diffused into the
matrix will start diffusing back into the fractures. In some cases, this back-diffusion may
act as a secondary source of groundwater contamination, and it may be important to
predict its effects using numerical models (Kresic 2007). Figure 5.18 shows an example
of model-predicted contaminant concentrations with and without taking diffusion into
account. Additional examples, including computer animation of contaminant migration,
are provided on the companion DVD.

FIGURE 5.18
Comparison of modeling results when accounting for diffusion into a low-permeability clay lens (top) and when
diffusion is not modeled (bottom). These two cross-sectional views of a migrating plume were created with the
variably saturated numerical model VS2DTI.

5.5 Model Calibration, Sensitivity Analysis, and Error


Although often explained separately in modeling reports, calibration and sensitivity anal-
ysis are inseparable and are part of the same process. While performing calibration, which
is composed of numerous single and multiple changes of model parameters, every user
determines quickly which parameters are more sensitive to changes with regard to the
final model result. By carefully recording all the changes made during calibration and
commenting on their results, the modeler is already engaged in the sensitivity analysis
and can effortlessly finalize this part of the modeling effort later.
Calibration is the process of finding a set of boundary conditions, water fluxes (stresses
on the model), and hydrogeologic parameters that produce the result that most closely
matches field measurements of hydraulic heads, water fluxes, and, where applicable, contaminant
concentrations. Calibration of every model should have the target of an acceptable error set
beforehand. Its range will depend mainly on the model purpose. For example, a ground-
water flow model for evaluation of a regional aquifer system can sometimes tolerate a
difference between calculated and measured heads of up to several feet. This, however,
would be an unacceptable error in the case of a model for the design of containment and
cleanup of a contaminant plume spread over, for example, 50 acres.
Model calibration can be performed for steady-state conditions, transient conditions, or
both. Although steady-state calibration has prevailed in modeling practice, every attempt
should be made to have a transient calibration as well for the following reasons:

• Groundwater flow is transient by nature and is often subject to artificial (man-
made) transient changes.
• The usual purpose of the model is prediction, which is, by definition, time-related.

• Steady-state calibration does not involve aquifer storage properties, which are crit-
ical for a viable (transient) prediction.

A limited field data set often predetermines a steady-state calibration. In such a case, an appro-
priate approach would be to define boundary conditions and stresses that are representa-
tive for the period in which the field data are collected.
When a transient field data set of considerable length is available, some meaningful aver-
age measure should be derived from it for a steady-state calibration. For example, this can
be the mean annual water-table elevation or the mean water table for the dry season, the
average annual groundwater withdrawal, the mean annual precipitation (recharge), the
average baseflow in a surface stream, and so on. Transient calibration typically involves
water levels recorded in wells during pumping tests or long-term aquifer exploitation.
There are two methods of calibration: (1) trial-and-error (manual) and (2) automated
calibration. Trial-and-error calibration, or brute force, was the first technique applied in
groundwater modeling and is still preferred by most users. Although it is heavily influ-
enced by the user’s experience, it is always recommended to perform this type of cali-
bration, at least in part. By changing parameter values and analyzing the corresponding
effects, the modeler develops a better feeling for the model and the assumptions on which
its design is based. During manual calibration, boundary conditions, parameter values, and
stresses are adjusted for each consecutive model run until calculated heads match the
preset calibration targets. The first phase of calibration typically ends when there is a good
visual match between calculated and measured hydraulic heads at observation wells.
The next step involves quantification of the model error with various statistical param-
eters, such as standard deviation and distribution of model residuals, that is, differences
between calculated and measured values. Once this error is minimized (through a lengthy
process of calibration) and satisfies a preset criterion, the model is ready for predictive use.
It will sometimes be necessary to change input values and run the model tens of times
before reaching the target. The worst case scenario involves a complete change of the CSM
and redesign of the model with new geometry, boundaries, and boundary conditions.
During calibration, the user should focus on parameters that are determined with less
accuracy or assumed and change only slightly those parameters that are more certain. For
example, hydraulic conductivity determined by several pumping tests should be the last
parameter to change freely because it is usually the most sensitive. Most other parameters
are less sensitive and can be changed only within a certain realistic range; it is obviously
not possible to increase the precipitation infiltration rate 10 times from 10% to 100%. In
general, hydraulic conductivity and recharge are two parameters with equivalent effects;
an increase in hydraulic conductivity creates the same effect as a decrease in recharge.
Because different combinations of parameters can yield similar, or even the same, results,
trial-and-error calibration is not unique. During calibration, it is recommended to plot
residuals (calculated values minus measured values) on the model map using different
symbols (or colors) for negative and positive values. Together with mandatory graphs of
model predictions and residuals (see Figures 5.19a, 5.19b, 5.20a, and 5.20b), this allows for a
more accurate determination of the parameter value that produces the best overall visual
fit throughout the model domain.
Quantitative techniques for determining model error compare model results (simu-
lations) to site-specific information and include calculations of residuals, assessing cor-
relation among the residuals, and plotting residuals on maps and graphs (ASTM 1999).
Individual residuals are calculated by subtracting the targets (values recorded in the field,

[Figure 5.19 graphic: (a) simulated versus observed head values (ft amsl) for Layers 1–3, with a 1:1 line and linear fit; (b) residuals (ft) versus observed values, with a no-bias line and linear fits by layer.]

FIGURE 5.19
(a) Results of an initial model calibration when the influence of residential bedrock wells was not fully accounted
for. All points should be as close to the 1:1 line as possible, which means that simulated values exactly match
observed values. (b) Plot of model residuals for the calibration run presented in Figure 5.19a, showing that the
residuals are not random; this correlation of residuals is particularly evident for Layer 3, indicating possible
errors in the conceptual model. All points should be equally distributed above and below the no-bias line across
the range of observed heads.

not extrapolated or otherwise assumed) from the model-calculated values. They are
calculated in the same way for hydraulic heads, drawdowns, contaminant concentrations,
or flows; for example, the hydraulic head residuals are differences between the computed,
or simulated, heads and the heads actually measured in the field:

ri = hi – Hi

[Figure 5.20 graphic: (a) simulated versus observed head values (ft amsl) for Layers 1–3, with a 1:1 line and linear fit; (b) residuals (ft) versus observed values, with a no-bias line and linear fits by layer.]

FIGURE 5.20
(a) Improved model calibration after obtaining site-specific data on residential well locations and pumping
rates. (b) Plot of model residuals for the improved calibration presented in Figure 5.20a. Although there still may
be a slight trend (correlation) visible, the residual values are all less than 1 ft and more evenly scattered along
the no-bias (zero) line.

where ri is the residual, Hi is the measured hydraulic head at point i, and hi is the computed
hydraulic head at the approximate location where Hi was measured. If the residual is posi-
tive, the computed value was too high; if it is negative, the computed value was too low
(ASTM 1999).
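
A minimal sketch of these residual statistics in Python, with hypothetical heads in ft amsl; the absolute residual mean computed here is the error metric plotted later in Figure 5.21:

    import numpy as np

    computed = np.array([108.2, 106.9, 104.5, 110.3, 103.8])   # simulated heads
    measured = np.array([107.6, 107.4, 104.9, 109.1, 104.0])   # observed targets

    residuals = computed - measured          # r_i = h_i - H_i
    print("residuals         :", residuals)
    print("mean residual     :", residuals.mean())          # signed bias
    print("abs residual mean :", np.abs(residuals).mean())
    print("RMS error         :", np.sqrt(np.mean(residuals**2)))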
Spatial or temporal correlation among residuals can indicate systematic trends or bias
in the model (see Figure 5.19). Of two simulations, the one with less correlation among

residuals has a better degree of correspondence (ASTM 1999). Apparent trends or spatial
correlations in the residuals may indicate a need to refine aquifer parameters or boundary
conditions or even to reevaluate the CSM. For example, if all of the residuals in the vicin-
ity of a no-flow boundary are positive, then the recharge may need to be reduced or the
hydraulic conductivity increased (ASTM 1999). For transient simulations, a plot of residu-
als at a single point versus time may identify temporal trends. Temporal correlations in
residuals can indicate the need to refine input aquifer storage properties or initial condi-
tions (ASTM 1999).
As noted earlier, graphs of model predictions and residuals across the range of observed
values are mandatory deliverables in model reports. They are also highly useful in iden-
tifying areas of the model that need improvement. For example, Figure 5.19 provides
calibration graphs for a preliminary groundwater flow model that was constructed and
calibrated based on available data for a site. Both graphs show significant overprediction
of model heads (positive residuals up to approximately 4 ft) in the bottom layer (layer 3),
which represents a fractured bedrock aquifer. One potential reason for this discrepancy
identified by the hydrogeologist was the lack of site-specific information regarding the
number, locations, and pumping rates of residential bedrock wells in the vicinity of the
site. After conducting a field study to address this data gap, the hydrogeologist modified
the number and locations of the wells and increased the flow rates based on real data from
the residences. The changes greatly improved the model calibration in layer 3 by lowering
hydraulic heads as shown by the revised graphs presented in Figure 5.20a and b.
Another recommended procedure during model calibration and sensitivity analysis is
to plot a graph of model error change versus parameter change as illustrated in Figure 5.21.
This describes parameter sensitivity—more sensitive parameters have steeper slopes than
less sensitive parameters.
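
In tabular form, such a plot reduces to recording one error value per trial parameter multiplier and comparing slopes; a sketch with hypothetical error values:

    # Hypothetical calibration errors (absolute residual mean, ft) recorded
    # for trial multipliers applied to two parameters; a steeper slope of
    # error versus multiplier indicates a more sensitive parameter.
    errors = {
        "river_stage": {0.99: 0.262, 1.00: 0.226, 1.01: 0.243},
        "recharge":    {0.99: 0.228, 1.00: 0.226, 1.01: 0.227},
    }
    for param, runs in errors.items():
        slope = (runs[1.01] - runs[0.99]) / (1.01 - 0.99)
        print(f"{param:12s}: ~{slope:6.2f} ft of error per unit multiplier")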

[Figure 5.21 graphic: absolute residual mean (ft) versus river-stage multiplier for eight river reaches.]

FIGURE 5.21
Graph of the absolute residual mean model error for hydraulic head based on changes in river stage at various
river reaches in a groundwater model. This is typically a very sensitive parameter because of the importance of
surface water–groundwater interactions on the potentiometric surface. In this case, a 1% decrease in river stage
can lead to an increase in model error of more than 15%, highlighting the importance of obtaining accurate river
stage data in the field.

It is more difficult to calibrate a contaminant fate-and-transport model because aquifer
heterogeneities and biochemical reactions usually have a much greater effect on contami-
nant flow pathways and concentrations compared to the bulk flow rate of groundwater.
However, the process of calibration of fate-and-transport models is exactly the same as
for groundwater flow models: Various fate-and-transport parameters are being changed,
within reasonable bounds, until a satisfactory match between the field-measured and
model-predicted contaminant concentrations is achieved.
Because of the increasing complexity of numerical flow and fate-and-transport models,
automated, computationally intensive methods of model calibration are becoming more
popular. Historically, computing power was insufficient to run thousands of model simu-
lations in a reasonable period of time (e.g., 1 day). As computing power has caught up with
available statistical methods, all modelers may now employ such advanced methods as
PEST and Monte Carlo uncertainty analysis. PEST (Doherty 2005) is an automated calibra-
tion tool that is an example of an inverse model, which means that parameters are esti-
mated from observations rather than specified directly. This is different from conventional modeling,
in which the user specifies parameter values and produces model outputs that are then
compared to observations. Example parameters that can be estimated by PEST include
hydraulic conductivity and recharge.
PEST works by minimizing an objective function based on the observed values used
as calibration targets. Many modelers or model reviewers still believe that the main ben-
efit of automated calibration is that it takes less time than the brute-force, trial-and-error
process. However, a major benefit of PEST in its current state is that the modeler is no
longer limited to the zone-based approach in which polygons represent areas of uni-
form parameter values (e.g., the hydraulic conductivity is the same everywhere within a
hydraulic conductivity zone polygon). Instead, the modeler can use point values, termed
pilot points, to estimate parameters at individual X, Y, Z locations and then generate a
continuous parameter surface with kriging. This enables assessment and quantification
of heterogeneity within the model (or within zones of a model) and the creation of a vari-
able conductivity field that is more representative of reality. Furthermore, the use of PEST
with pilot points greatly facilitates uncertainty analysis through the creation of sensitiv-
ity parameters. PEST is also easily linked to advanced uncertainty analysis tools such as
Monte Carlo simulation.
Monte Carlo analysis with PEST is a statistical tool that involves the running of tens,
hundreds, or even thousands of simulations. As a first step, a random field of the param-
eter of interest, such as hydraulic conductivity, is generated based on a statistical distri-
bution. This field is then calibrated using PEST. This process is completed over and over
again using different realizations of the initial field. The outcome of Monte Carlo analysis
is a distribution of values for the parameter of interest that all meet calibration criteria
specified by the user. For example, one can obtain innumerable different hydraulic con-
ductivity fields that all represent viable calibrations based on statistical criteria. The mini-
mum, average, and maximum conductivities for each pilot point can then be calculated
and compared to assess the uniqueness of the model.
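
The following conceptual sketch in Python (not PEST itself, and with an assumed log-normal conductivity distribution) mimics the pilot-point summary in Figure 5.23 by drawing many realizations and reporting the spread at each point:

    import numpy as np

    rng = np.random.default_rng(0)
    n_realizations, n_points = 1000, 5
    log_k = rng.normal(loc=1.0, scale=0.8, size=(n_realizations, n_points))
    k = 10.0 ** log_k     # hydraulic conductivity (ft/day)

    # In a real workflow each realization would also have to pass calibration
    # criteria before being retained; here all draws are kept for illustration.
    for p in range(n_points):
        print(f"pilot point {p + 1}: min {k[:, p].min():7.2f}  "
              f"mean {k[:, p].mean():7.2f}  max {k[:, p].max():7.2f} ft/day")

Even this simple exercise reproduces the key observation of Figure 5.23: the conductivity at an individual point can span several orders of magnitude across realizations.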
An example figure of two potential hydraulic conductivity fields generated using PEST
with Monte Carlo analysis in Groundwater Vistas is presented in Figure 5.22. Both of these
fields lead to comparable calibration statistics, and some modelers may therefore accept
either outcome as a potential solution. These fields are remarkably different, which is also
demonstrated in Figure 5.23 by the range of conductivities calculated for five different
pilot points. The conductivity at individual points can span several orders of magnitude.
It is further noted that this analysis was conducted with only 10 realizations, whereas a

FIGURE 5.22
Two Monte Carlo realizations of the hydraulic conductivity field generated by the automated calibration pro-
gram PEST.

[Figure 5.23 graphic: ranges of log hydraulic conductivity (0.1–1,000 ft/d) at five pilot points.]

FIGURE 5.23
Variation in hydraulic conductivity at five pilot points from 10 Monte Carlo realizations.

defensible Monte Carlo analysis should include hundreds or thousands of realizations.
Therefore, the nonuniqueness of this calibration may be even more extensive than indi-
cated by Figure 5.23. Monte Carlo analysis is predicated on the running of a great number
of simulations, which is the whole point of using this technique. Consequently, using only
a handful of realizations and picking one as the best calibration does not make sense.
The question then becomes: how does the modeler interpret and address this
nonuniqueness to defend a particular solution as better than any other potential realiza-
tion? The most important strategy is to rely on the CSM based on real hydrogeological
data obtained in the field. The results of any statistically generated calibration should be
assessed in light of the CSM to determine if the degree of heterogeneity predicted by the
model is realistic, considering the hydrogeologic conditions observed and characterized
at the site. Secondly, the real value of Monte Carlo analysis is quantifying the probability
of achieving a certain outcome given the uncertainty of the model parameters. Figure 5.24
presents an example use of Monte Carlo and PEST simulation in Groundwater Vistas to
quantify the capture probability for a well field in an alluvial aquifer system. This visu-
alization can be used to successfully refute an argument that because the model is non-
unique it cannot be used to accurately assess the well field’s capture zone.
Because the conceptual objective of Monte Carlo analysis is quantifying uncertainty
arising from field-scale heterogeneity, the process may also eventually replace the use
of dispersivity in fate-and-transport modeling. If heterogeneity can be captured by the
calibration process, a mathematical construct to represent field-scale heterogeneity within
a geologic zone is no longer necessary. This remains an area where further research is
desirable.

FIGURE 5.24
Capture probability for a well field in an alluvial aquifer system. Colors are blue = 1% and red = 100% capture
probability. (Courtesy of Jim Rumbaugh.)

5.6 Modeling Documentation and Standards


Preparing model documentation and a report is the final phase of the modeling effort
and arguably the most important from the client's standpoint. A poorly documented
and confusing report can ruin days of work and an otherwise excellent model. Every
effort should be made to produce an attractive and user-friendly document that will
clearly convey all previous phases of the model design. Special attention should be paid
to clearly stating the model's limitations and the uncertainties associated with
calibrated parameters. Electronic modeling files (model input and output) and GIS files,
if applicable, will have to be made available to the client in most cases. All modeling
documentation should strictly follow widely accepted industry practices, guidelines, and
standards for groundwater modeling.

The following industry standards, created by leading industry experts for the groundwater
modeling community under the auspices of ASTM International, cover all major aspects of
groundwater modeling and should be followed when attempting to create a defensible
groundwater model that can be used for predictive purposes:

• Guide for application of groundwater flow model to a site-specific problem (D 5447-93)
• Guide for comparing groundwater flow model simulations to site-specific information
  (D 5490-93)
• Guide for defining boundary conditions in groundwater flow modeling (D 5609-94)
• Guide for defining initial conditions in groundwater flow modeling (D 5610-94)
• Guide for conducting a sensitivity analysis for a groundwater flow model application
  (D 5611-94)
• Guide for documenting a groundwater flow model application (D 5718-95)
• Guide for subsurface flow and transport modeling (D 5880-95)
• Guide for calibrating a groundwater flow model application (D 5981-96)
• Practice for evaluating mathematical models for the environmental fate of chemicals
  (E 978-92)
• Guide for developing CSMs for contaminated sites (E 1689-95)

The following language accompanies the U.S. EPA OSWER Directive #9029.00 entitled
“Assessment Framework for Ground-Water Model Applications” (U.S. EPA 1994): The pur-
pose of this guidance is to promote the appropriate use of groundwater models in EPA’s
waste management programs. More specifically, the objectives of the framework are to

• Support the use of groundwater models as tools for aiding decision making under
conditions of uncertainty
• Guide current or future modeling
• Assess modeling activities and thought processes
• Identify model application documentation needs

Following is the introduction to "Guidelines for Evaluating Ground-Water Flow Models,"
published by the USGS (Reilly and Harbaugh 2004):

"Ground-water flow modeling is an important tool frequently used in studies of
ground-water systems. Reviewers and users of these studies have a need to evaluate the
accuracy or reasonableness of the ground-water flow model. This report provides some
guidelines and discussion on how to evaluate complex ground-water flow models used in
the investigation of ground-water systems. A consistent thread throughout these
guidelines is that the objectives of the study must be specified to allow the adequacy
of the model to be evaluated."

The authors recommend that interested readers obtain copies of some or all of the above
standards for further reading and professional reference.

5.7 MODFLOW-USG
The description of this new groundwater modeling program is provided courtesy of Dr.
Sorab Panday, the groundwater modeling practice leader at AMEC.
As described earlier, the USGS's finite-difference model MODFLOW (Harbaugh 2005) is
the most popular groundwater code worldwide. MODFLOW solves the groundwater flow
equation on a rectangular finite-difference grid. Although there are options available
for blocking out portions of a rectangular grid that are outside the simulation domain
and for local grid refinement, a rectangular grid structure is generally less effective
and efficient at fitting irregular domain geometries or at refining the grid within
areas of high activity or interest. The MODFLOW-USG code extends the MODFLOW simulation
capabilities to irregular domains using unstructured grids.
Unstructured grids provide a high level of discretization flexibility by implementing
differently shaped grid-block geometries and by use of nested grid structures. Figure
5.25 shows some unstructured grid geometries that may be used by MODFLOW-USG. The grid
may be created as a combination of nested grids and polygons of different geometries.
Sublayering and vertical displacements along faults may be directly accommodated without
excessive discretization. The grid is, however, limited to a prismatic shape in the
vertical direction.

FIGURE 5.25
Examples of unstructured grid geometries supported by MODFLOW-USG. (Courtesy of Dr. Sorab
Panday, AMEC.)

5.7.1 Description of Method


A finite-volume formulation is implemented to discretize the groundwater flow equations for
an irregular grid structure. Advantages of the finite volume scheme include the following:

• Finite-volume discretizations provide gridding flexibility and are easy to understand
  and implement.
• Finite-volume discretizations provide mass-conserved solutions, which are a topologic
  extension of the finite-difference formulation (Peaceman 1977; Moridis and Pruess 1992).
• Finite-volume schemes are robust and efficient, with the flexibility of finite elements
  but without the computationally intensive numerical integration, elemental assembly
  schemes, and expanded matrix connectivity.
• The finite-volume formulation provides flexibility with various grid-block shapes and
  connectivities without the need for the extensive elemental catalogues required by
  finite-element schemes.

The integrated finite-difference formulation discussed by Pruess et al. (1999) is applied
to the groundwater flow equation. The discretized formulation is identical to the
finite-difference approximation but with generalized computations for volumes, flow
areas, flow lengths, and connections. The cell-by-cell conductance term may be computed
using any of the averaging options provided by the block-centered flow and layer-property
flow packages of MODFLOW-2005, including harmonic, arithmetic, and logarithmic averaging.
The upstream weighting formulation of Niswonger et al. (2011) is also included, along
with Newton–Raphson linearization, to provide added robustness for difficult nonlinear
problems. The conductance term may further be modified for the presence of a horizontal
flow barrier conceptualized using the horizontal-flow barrier package of MODFLOW-2005.
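
As a point of reference, the sketch below illustrates the generalized harmonic-averaging
conductance computation described above for a single cell-to-cell connection. It is a
generic finite-volume illustration, not code extracted from MODFLOW-USG.

def connection_conductance(area, dist_n, dist_m, k_n, k_m):
    """Harmonic-mean conductance [L^2/T] for the connection between cells n and m.

    area   : flow area of the shared face [L^2]
    dist_n : distance from the center of cell n to the face [L]
    dist_m : distance from the center of cell m to the face [L]
    k_n    : hydraulic conductivity of cell n [L/T]
    k_m    : hydraulic conductivity of cell m [L/T]
    """
    # Series combination of the two half-cell resistances (harmonic averaging).
    return area / (dist_n / k_n + dist_m / k_m)

# Flow between the two cells then follows Darcy's law in discrete form:
# Q_nm = C_nm * (h_m - h_n).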
Use of an unstructured formulation with arbitrary flow connections between nodes
allows for a natural extension of the basic groundwater flow solution to include various
other connected flow processes in a fully implicit manner. Therefore, there exists a
framework for incorporating additional flow domains and connections to the subsurface
porous medium system. A flow process important in several situations is flow through
long-screened wells, conduits, and fracture networks. A conduit domain flow (CDF) package
has been included with the current version of MODFLOW-USG to simulate flow through
conduit networks and between the porous medium and conduits. Conduits may be horizontal,
vertical, or angled in any direction. Figure 5.26 shows typical conduit geometries and
connections that may be used by the model. Exchange of water between the conduit and the
porous medium is expressed via a linear leakance term or via the Thiem equation to
simulate the net head loss between the conduit node and the porous medium grid block.
Therefore, the CDF package provides most of the combined functionalities of the conduit
flow process (Shoemaker et al. 2007) and the multinode well (Halford and Hanson 2002;
Konikow et al. 2009) packages of MODFLOW-2005 in a fully implicit manner. Figure 5.27
shows a comparison of the groundwater flow solution for a karst conduit embedded in an
aquifer matrix using MODFLOW-USG and FEFLOW.

FIGURE 5.26
Examples of conduit domain geometries supported by MODFLOW-USG: conduit nodes, single-
and multidimensional conduit segments, and conduit networks. (Courtesy of Dr. Sorab
Panday, AMEC.)

FIGURE 5.27
Solution of groundwater flow toward a single conduit embedded in the aquifer matrix. Left: MODFLOW-USG
solution. Right: FEFLOW solution. (Courtesy of Dr. Sorab Panday, AMEC.)

For finite-volume grids, the formulation is a second-order approximation only when the
perpendicular from the cell centroid to a face between two nodes coincides with the
midpoint of the face (Dehotin et al. 2011), as is the case for isosceles triangles,
rectangles, and regular higher-order polygons. There is an error in the flux computation,
however, for irregular polygon shapes or nested grids in which the cell center does not
lie along the perpendicular bisector of the face. A ghost node correction (GNC) module
based on the work of Edwards (1996) and Dickinson et al. (2007) has also been introduced
with MODFLOW-USG to maintain higher-order accuracy for irregular grids. The concept of
the GNC is that the nodal value of the head at an irregular grid block is not
representative for computing flux across the interface between grid blocks. Instead, a
more representative value is obtained by interpolating the head value to a ghost-node
location that does lie along the perpendicular bisector of the face. As illustrated in
Figure 5.28, the ghost-node interpolation can be performed in one or multiple dimensions.
The ghost-node correction is also applicable to conduits that are located off-center
from the porous matrix grid block to provide subgrid-scale location adjustments without
grid refinement.
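
A minimal one-dimensional sketch of the ghost-node idea follows, assuming simple linear
interpolation between two nodal heads; the actual GNC package generalizes this to
multiple contributing nodes and dimensions.

def ghost_node_head(h_n, h_j, alpha):
    """Interpolate a head value at a ghost-node location lying between node n and a
    contributing neighbor j. `alpha` (0 <= alpha <= 1) is the fractional distance
    from node n toward node j at which the ghost node sits, chosen so that the
    ghost node falls on the perpendicular bisector of the shared face."""
    return (1.0 - alpha) * h_n + alpha * h_j

# The flux across face n-m is then computed with the ghost-node head in place of
# h_n, i.e., Q_nm = C_nm * (h_m - h_ghost), restoring higher-order accuracy.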
MODFLOW-2005 boundary packages that have currently been converted for use with
unstructured grids include the recharge package (RCH), the evapotranspiration package
(EVT), the transient flow and head boundary package (FHB), the well package (WEL),
the drain package (DRN), the river package (RIV), the head-dependent flux (general head
boundary) package (GHB), the stream package (STR7), the streamflow routing package
with unsaturated flow beneath streams (SFR2), the stream-gage monitoring package
(GAGE), and the lake package (LAK3). Boundary nodes are identified within these packages
by indexing the global node number instead of the structured layer, row, and column
classification used by MODFLOW-2005.
FIGURE 5.28
Ghost-node conceptualization in MODFLOW-USG, with the ghost node placed on the
perpendicular bisector of the shared face. (Courtesy of Dr. Sorab Panday, AMEC.)

Unstructured grids cannot be accommodated by the structured, symmetric solvers present
in MODFLOW-2005. Therefore, the MODFLOW-USG code incorporates its own suite of
unstructured solvers for symmetric and nonsymmetric matrices. The χMD solver (Ibaraki
2011) is an unstructured solver that has options for symmetric and nonsymmetric
acceleration methods. It uses a level-based ILU decomposition scheme with drop tolerance
followed by conjugate gradient (for symmetric systems), Orthomin, and Bi-CGSTAB (for
nonsymmetric systems) acceleration schemes. Nonsymmetric matrices may be generated by
the GNC package or by use of Newton–Raphson linearization with the upstream-weighted
formulation of Niswonger et al. (2011). The generalized minimal residual (GMRES) solver
of Kipp et al. (2008) and the ORTHOFEM solver of Mendoza et al. (1994) are alternative
solvers for nonsymmetric systems. The PCGP solver of Hughes (2011) provides an
alternative solver for symmetric systems.

5.7.2 Input and Output



The input/output (I/O) formats for MODFLOW-USG follow the MODFLOW-2005 conven-
tions. A name file is read by MODFLOW during problem initialization, which provides
all other file names that are opened by MODFLOW-USG. Input for a structured or an
unstructured grid is accommodated. If the structured grid I/O option is selected, the code
uses the finite-difference input data files of MODFLOW-2005 with minimal changes, such
as switching to one of the unstructured solver routines. Output from a structured grid is
consistent with MODFLOW-2005 ASCII and binary formats for easy postprocessing by
tools available for MODFLOW.
For unstructured grids, the code provides various levels of generalization to ease the
I/O burden. Furthermore, I/O is based on a global node number convention rather than
using the layer-row-column format. Nodal outputs are provided in layer lists to accommo-
date easier processing of results from each model layer. Flow outputs are provided for all
connections to a node. Only the upper triangular portion of the cell-by-cell flow matrix is
saved to avoid redundant output (because Qnm = –Qmn).
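
To illustrate the global node number convention for the structured-grid case, the
following sketch shows the standard (layer, row, column) to node-number arithmetic; the
1-based indexing is an assumption consistent with MODFLOW-style input files.

def global_node_number(layer, row, col, nrow, ncol):
    """Convert 1-based (layer, row, column) indices of a structured grid to a
    1-based global node number, counting across columns, then rows, then layers."""
    return (layer - 1) * nrow * ncol + (row - 1) * ncol + col

# Example: in a grid with 100 rows and 150 columns, the cell at layer 2,
# row 1, column 1 is node 100 * 150 + 1 = 15001.
print(global_node_number(2, 1, 1, nrow=100, ncol=150))  # -> 15001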

5.7.3 Benchmarking and Testing


Several test problems have been simulated to verify and benchmark the code. First, the
suite of test problems available from the USGS for MODFLOW-2005 was run to ensure
that the new code and solvers provide the same results for rectangular finite-difference
problems as the MODFLOW-2005 code. These same problems were also simulated with
unstructured grid inputs to again verify that the code behaves appropriately for
unstructured grid inputs. Further testing was done using field-scale applications on
finite-difference grids to compare performance of the new code and solvers against
MODFLOW-2005. Simulated water levels, mass balances, and execution speeds are similar,
depending on the solver options selected. Additional testing was then performed using
various nested grid structures to evaluate code performance for unstructured,
rectangular grids. These tests are discussed by Langevin et al. (2011). Finally, the CDF
package and GNC module were also tested using simple and complex examples and comparing
the results with more comprehensive codes or refined solutions. The code has also been
tested on multiple field examples. Figure 5.29 shows the water level results of an
application in the Biscayne Aquifer where nested grids were used to provide resolution
around wells and canal features. MODFLOW-USG is supported by the latest version of
Groundwater Vistas.

FIGURE 5.29
Biscayne Aquifer model example showing unstructured grids used to accommodate geometry of canals and
groundwater pumping centers. (Courtesy of Dr. Christian D. Langevin, USGS.)

5.8 Variably Saturated Models


Variably saturated groundwater models, or those that incorporate flow solutions for both
the vadose zone and the saturated zone, are highly useful for the following applications:

• Groundwater recharge studies
• Contaminant fate-and-transport modeling where the vadose zone is involved
• Modeling leachate migration from landfills
• Evaluating the effects of remedial interventions, such as capping or shallow soil
  excavation, on contaminant concentrations in the underlying aquifer

The use of variably saturated flow solutions in MODFLOW, such as MODFLOW-SURFACT or
MODFLOW-2005, can also more properly simulate the desaturation and resaturation of model
cells. Previous versions of MODFLOW were susceptible to unwanted influences from dry
cells, which were permanently set as no-flow cells upon desaturation. The incorporation
of variably saturated solutions in fate-and-transport models prevents inaccurate
assumptions regarding the dilution and mixing of contaminants dissolved in groundwater
recharge and leachate. For the reader's benefit, two common, standalone (independent of
MODFLOW) variably saturated models available in the public domain are briefly described
below.
VS2DTI. This graphical software package is produced by the USGS for two-dimensional,
finite-difference, variably saturated fate-and-transport modeling. It is available in the
public domain (http://wwwbrr.cr.usgs.gov/projects/GW_Unsat/vs2di1.2/) and is particularly
useful in groundwater recharge studies. VS2DTI is also used for contaminant
fate-and-transport modeling where waste disposal occurs at ground surface or in the
vadose zone (see Figure 5.30). Hydrogeologists can use VS2DTI to develop site-specific,
leaching-based soil cleanup criteria by evaluating the relationship between contaminant
concentrations in unsaturated soil versus those at the water table and within the
underlying aquifer. VS2DTI has a user-friendly graphical interface but does not readily
link to external programs. The program is quite unstable when attempting to model more
complex scenarios and often terminates without providing any explanation or hints as to
possible reasons. The user guide is very brief, and there is no detailed explanation of
quite a few numerical parameters appearing in several input windows. Because of the
general lack of related support literature, updates, and well-documented modeling
examples, this USGS program is not as widely used as would be expected based on its
capabilities. Unfortunately, it appears that VS2DTI is being quietly abandoned by the
USGS in favor of the new versions of MODFLOW that now include variably saturated flow.
Additional examples of VS2DTI model results and animations of results are provided on
the companion DVD.

FIGURE 5.30
Example of simulated contaminant migration through the vadose zone using VS2DTI.

HYDRUS. Similar to VS2DTI, HYDRUS is a variably saturated groundwater fate-and-transport
modeling program. HYDRUS-1D is a one-dimensional model available in the public domain
that is widely used in the industry. Two- and three-dimensional modeling capabilities are
added in the commercially sold version of HYDRUS. All versions of HYDRUS use
finite-element numerical methods to solve the equations of unsaturated and saturated flow
in porous media. HYDRUS-2D and 3D have many advanced modules, including those allowing
simulation of constructed wetlands and of bacteria, virus, colloid, and flowing particle
transport. In general, the advanced functionality and complexity of HYDRUS-2D and 3D are
better suited to an academic setting.
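
Codes of this class solve the variably saturated flow equation and therefore require
closed-form soil hydraulic functions. For reference, the sketch below implements the
widely used van Genuchten–Mualem relations; this is a standard textbook formulation,
not code from VS2DTI or HYDRUS, and the example parameter values are merely illustrative.

def van_genuchten(psi, theta_r, theta_s, alpha, n, k_s):
    """Water content and unsaturated hydraulic conductivity at pressure head psi
    (negative in the unsaturated zone), per the van Genuchten-Mualem model.

    theta_r, theta_s : residual and saturated volumetric water contents [-]
    alpha [1/L], n [-] : curve-fitting parameters; k_s : saturated conductivity [L/T]
    """
    m = 1.0 - 1.0 / n
    if psi >= 0.0:                                   # saturated conditions
        return theta_s, k_s
    se = (1.0 + (alpha * abs(psi)) ** n) ** (-m)     # effective saturation
    theta = theta_r + (theta_s - theta_r) * se
    k = k_s * se ** 0.5 * (1.0 - (1.0 - se ** (1.0 / m)) ** m) ** 2
    return theta, k

# Example with illustrative loam-like parameters (psi in meters):
print(van_genuchten(psi=-1.0, theta_r=0.078, theta_s=0.43, alpha=3.6, n=1.56, k_s=0.25))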

5.9 GIS and Numerical Modeling Software


Similar to the cooperative relationship between the CSM and the numerical groundwater
model, GIS systems and numerical modeling software have a synergistic relationship that
the hydrogeologist can use to his or her advantage. The visual conceptual model created
and presented with the geodatabase and GIS system forms the backbone of the numerical
groundwater model. Model layer elevations, parameter zones (areas of identical param-
eterization), boundary conditions, and the finite-difference (or element) grid can all be
defined in GIS. Modules such as Surfer or Geostatistical Analyst can be used to create the
geologic layers through kriging or other interpolative means. The model structure can
then be imported and refined in groundwater modeling programs, such as Groundwater
Vistas and Processing MODFLOW. Model time periods and other specifications are set
within the modeling software, and the model is run and calibrated to the hydrogeologist’s
liking. All revised model input parameters and, most importantly, the model output can
then be exported back into the GIS-geodatabase environment.
The full capabilities of the GIS visual processor are then made available to the modeling
output, simplifying the creation of high-quality figures illustrating calibration residuals,
hydraulic head or drawdown maps, or plume orientations.
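
One concrete way to move gridded model output back into GIS is to write it in the Esri
ASCII grid format, which ArcGIS and most other GIS packages import directly. The sketch
below is a minimal example; the head array and georeferencing values are hypothetical
placeholders.

import numpy as np

def write_ascii_grid(path, heads, xll, yll, cellsize, nodata=-9999.0):
    """Write a 2D array (row 0 = northernmost row) to an Esri ASCII grid file."""
    nrows, ncols = heads.shape
    with open(path, "w") as f:
        f.write(f"ncols {ncols}\n")
        f.write(f"nrows {nrows}\n")
        f.write(f"xllcorner {xll}\n")
        f.write(f"yllcorner {yll}\n")
        f.write(f"cellsize {cellsize}\n")
        f.write(f"NODATA_value {nodata}\n")
        for row in heads:
            f.write(" ".join(f"{v:.3f}" for v in row) + "\n")

# Hypothetical example: a 100-row by 150-column head array on a 10-m grid.
heads = np.full((100, 150), 25.0)
write_ascii_grid("simulated_heads.asc", heads, xll=500000.0, yll=4100000.0, cellsize=10.0)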
The ability to present model input and output in GIS demystifies the groundwater mod-
eling process, creating transparency and accountability. Reviewers without access to the
modeling software can still easily review and understand the model input and output.
Most importantly, the hydrogeologist can clearly illustrate the dependence of the ground-
water model on the underlying CSM.
As with all technological advancements, the rapid spread of groundwater modeling
processors and GIS has led to heightened expectations for hydrogeological deliverables.
Historically, the hydrogeologist may have spent the vast majority of time formatting data
tables for MODFLOW input and making sense of the output. In current practice, the hydro-
geologist is expected to produce figure after figure of model input and output with asso-
ciated sensitivity analysis and clear articulation of the modeling results. In other words,
time saved on MODFLOW preprocessing and postprocessing is now devoted to extensive
documentation of results. This trade-off can be frustrating to the hydrogeologist, but it is
a necessary demand that helps promote the overall utility of groundwater models. It goes
without saying that expertise in GIS makes these demands less onerous.
Example uses of numerical models are presented throughout the remaining chapters as
well as on the accompanying DVD, including animations.

References
ASTM International, 1999. ASTM Standards on Determining Subsurface Hydraulic Properties and
Ground Water Modeling, Second Edition, 320 pp.
Aziz, C. E., Newell, C. J., Gonzales, J. R., Haas, P., Clement, T. P., and Sun, Y., 2000. BIOCHLOR:
Natural Attenuation Decision Support System v. 1.0: User’s Manual. EPA/600/R-00/008. US
Environmental Protection Agency, Cincinnati, OH.
Aziz, C. E., Newell, C. J., and Gonzales, J. R., 2002. BIOCHLOR Natural Attenuation Decision Support
System, Version 2.2, March 2002, User’s Manual Addendum.
Buschek, T. E., and Alcantar, C. M., 1995. Regression Techniques and Analytical Solutions to
Demonstrate Intrinsic Bioremediation. In Intrinsic Bioremediation, edited by R. E. Hinchee, J. T.
Wilson, and D. C. Downey. Battelle Press, Columbus, OH.
Chiang, W.-H., and Kinzelbach, W., 2001. 3D-Groundwater Modeling with PMWIN: A Simulation
System for Modeling Groundwater Flow and Pollution. Springer, Berlin, 346 pp.
Dehotin, J., Vazquez, R. F., Braud, I., Debionne, S., and Viallet, P. 2011. Modeling of hydrological
processes using unstructured and irregular grids: 2D groundwater application. J. Hydrol. Eng.
16(2), 108–125.
Dickinson, J. E., James, S. C., Mehl, S., Hill, M. C., Leake, S. A., Zyvoloski, G. A., Faunt, C. C., and
Eddebbarh, A., 2007. A new ghost-node method for linking different models and initial investi-
gations of heterogeneity and nonmatching grids. Adv. Water Res. 30, 1722–1736.
Doherty, J., 2005. PEST: Model-Independent Parameter Estimation, User Manual: 5th Edition.
Watermark Numerical Computing.
Domenico, P. A., 1987. An analytical model for multidimensional transport of a decaying contami-
nant species. J. Hydrol. 91(1–2), 49–58.
Edwards, M. G., 1996. Elimination of adaptive grid interface errors in the discrete cell centered pres-
sure equation. J. Comput. Phys. 126, 356–372.
Franke, O. L., Reilly, T. E., Haefner, R. J., and Simmons, D. L., 1990. Study Guide for a Beginning
Course in Ground-Water Hydrology: Part 1-Course Participants. U.S. Geological Survey Open
File Report 90-183, Reston, VA, 184 pp.
Gelhar, L. W., Welty, C., and Rehfeldt, K. R., 1992. A critical review of data on field-scale dispersion
in aquifers. Water Resour. Res. 28(7), 1955–1974.
Halford, K. J., and Hanson, R. T., 2002. User Guide for the Drawdown-Limited, Multi-Node Well
(MNW) Package for the U.S. Geological Survey’s Modular Three-Dimensional Finite-Difference
Ground-Water Flow Model, Versions MODFLOW-96 and MODFLOW-2000. U.S. Geological
Survey Open-File Report 02-293.
Harbaugh, A. W., 2005. MODFLOW-2005, the U.S. Geological Survey Modular Ground-Water Model:
The Ground-Water Flow Process. U.S. Geological Survey Techniques and Methods 6-A16.
Harbaugh, A. W., and McDonald, M. G., 1996. User’s Documentation for MODFLOW-96, an Update
to the U.S. Geological Survey Modular Finite-Difference Ground-Water Flow Model. U.S.
Geological Survey Open-File Report 96-485, Reston, VA, 56 pp.
Harbaugh, A. W., Banta, E. R., Hill, M. C., and McDonald, M. G., 2000. MODFLOW-2000, the U.S.
Geological Survey Modular Ground-Water Model—User Guide to Modularization Concepts
and the Ground-Water Flow Process. U.S. Geological Survey Open-File Report 00-92, Reston,
VA, 121 pp.
Hemond, F. H., and Fechner-Levy, E. J., 2000. Chemical Fate and Transport in the Environment, 2nd
Edition. Academic Press, San Diego, CA, 433 pp.
Hughes, J. D., 2011. PCGP, An Unstructured, Symmetric Solver for MODFLOW. U.S. Geological
Survey Techniques and Methods, in print.
Ibaraki, M., 2011. χMD User’s Guide—An Efficient Sparse Matrix Solver Library, Version 1.30. School
of Earth Sciences, Ohio State University, Columbus, OH.
Janis, I. L., 1972. Victims of Groupthink. Houghton Mifflin, Boston, 277 pp.
Karanović, M., Neville, C. J., and Andrews, C. B., 2007. BIOSCREEN-AT: BIOSCREEN with an exact
analytical solution. Ground Water 45(2), 242–245.
Kipp, K. L., Jr., Hsieh, P. A., and Charlton, S. R., 2008. Guide to the Revised Groundwater Flow and
Heat Transport Simulator: HYDROTHERM—Version 3. U.S. Geological Survey Techniques
and Methods 6-A25.
Konikow, L. F., Hornberger, G. Z., Halford, K. J., and Hanson, R. T., 2009. Revised Multi-Node
Well (MNW2) Package for MODFLOW Ground-Water Flow Model. U.S. Geological Survey
Techniques and Methods 6-A30, 67 pp.
Kresic, N., 2007. Hydrogeology and Groundwater Modeling, Second Edition. CRC/Taylor & Francis
Group, Boca Raton, FL, 807 pp.
Kresic, N., 2009. Groundwater Resources: Sustainability, Management, and Restoration. McGraw
Hill, New York, 852 pp.
Langevin, C. D., Panday, S., Niswonger, R. G., and Hughes, J. D., 2011. Evaluation of Mesh
Alternatives for an Unstructured Grid Version of MODFLOW. MODFLOW and More.
Lindgren, R. J., Dutton, A. R., Hovorka, S. D., Worthington, S. R. H., and Painter, S., 2004.
Conceptualization and Simulation of the Edwards Aquifer, San Antonio Region, Texas. U.S.
Geological Survey Scientific Investigations Report 2004-5277, 143 pp.
McDonald, M. G., and Harbaugh, A. W., 1988. A Modular Three-Dimensional Finite-Difference
Ground-Water Flow Model. U.S. Geological Survey Techniques of Water-Resources
Investigations, Book 6, Chap. A1, 586 pp.
Mendoza, C., Sudicky, E. A., and Therrien, R., 1994. ORTHOFEM Users Guide, Version 1.04.
Moridis, G., and Pruess, P., 1992. TOUGH Simulations of Updegraff’s Set of Fluid and Heat Flow
Problems. Lawrence Berkeley Laboratory Report LBL-32611, Berkeley, CA.
Newell, C. J., McLeod, R. K., and Gonzales, J. R., 1996. BIOSCREEN: Natural Attenuation Decision
Support System User’s Manual. EPA/600/R-96/087, Robert S. Kerr Environmental Research
Center, Ada, OK.
Niswonger, R. G., Panday, S., and Ibaraki, M., 2011. MODFLOW-NWT, A Newton Formulation for
MODFLOW-2005. US Geological Survey Techniques and Methods 6-A37, 44 pp.
Pankow, J. F., and Cherry, J. A., 1996. Dense Chlorinated Solvents and Other DNAPLs in
Groundwater. Waterloo Press, Guelph, Ontario, 522 pp.
Peaceman, D. W., 1977. Fundamentals of Numerical Reservoir Simulation. Elsevier, Amsterdam.
Pruess, K., Oldenburg, C., and Moridis, G., 1999. TOUGH2 user’s guide, Version 2.0, Lawrence
Berkeley Laboratory Report LBL-43134, Berkeley, California, various paging.
Reilly, T. E., and Harbaugh, A. W., 2004. Guidelines for Evaluating Ground-Water Flow Models. U.S.
Geological Survey Scientific Investigations Report 2004-5038, 30 pp.
Rumbaugh, J. O., and Rumbaugh, D. B., 2007. Guide to Using Groundwater Vistas, Version 5.
Environmental Simulations, Inc., Reinholds, PA, 372 pp.
Shoemaker, W. B., Kuniansky, E. L., Birk, S., Bauer, S., and Swain, E. D., 2007. Documentation of
a Conduit Flow Process (CFP) for MODFLOW-2005. U.S. Geological Survey Techniques and
Methods, Book 6, Chapter A24.
U.S. EPA, 1994. Assessment Framework for Ground-Water Model Applications. OSWER Directive
9029.00, US Environmental Protection Agency, Office of Solid Waste and Emergency Response,
Washington, DC.
U.S. EPA, 1996. Soil Screening Guidance: User’s Guide; Equation 10: Soil Screening Level Partitioning
Equation for Migration to Groundwater. U.S. Environmental Protection Agency, Office of Solid
Waste and Emergency Response, EPA/9355.4-23.
U.S. EPA, 1999. EPA Superfund Record of Decision: Montrose Chemical Corp. and Del Amo. EPA
ID: CAD008242711 and CAD029544731 OU(s) 03 & 03, Los Angeles, CA, 03/30/1999. Dual Site
Groundwater Operable Unit. II: Decision Summary. EPA/ROD/R09-99/035.
U.S. EPA, 2005. Decision Support Tools—Development of a Screening Matrix for 20 Specific Software
Tools. U.S. Environmental Protection Agency, Office of Superfund Remediation and Technology
Innovation, Brownfields and Land Revitalization Technology Support Center, Washington, DC,
18 pp.
Vogel, T. M., and McCarty, P. L., 1985. Biotransformation of tetrachloroethylene to
trichloroethylene, dichloroethylene, vinyl chloride, and carbon dioxide under
methanogenic conditions. Appl. Environ. Microbiol. 49(5), 1080–1083.
Vogel, T. M., and McCarty, P. L., 1987. Abiotic and biotic transformations of
1,1,1-trichloroethane under methanogenic conditions. Environ. Sci. Technol. 21(12),
1208–1213.
Wiedemeier, T. H., Swanson, M. A., Moutoux, D. E., Gordon, E. K., Wilson, J. T., Wilson, B. H., Kampbell,
D. H., Haas, P. E., Miller, R. N., Hansen, J. E., and Chapelle, F. H. 1998. Technical Protocol for
Evaluating Natural Attenuation of Chlorinated Solvents in Ground Water. EPA/600/R-98/128,
US Environmental Protection Agency, Office of Research and Development, Washington, DC.
6
Three-Dimensional Visualizations

Courtesy of Gavin Hudgeons, AMEC-Geomatrix, Austin, TX.

6.1 Introduction
The ability to demonstrate comprehension of complex data is a fundamental element
of successful hydrogeological consulting and engineering. Seeing multiple data sets in
their true geospatial context is a powerful way to understand and communicate the
intricacies of a site. Indeed, if a conceptual site model (CSM) is the synthesis of assimi-
lated data into a graphical representation, a 3D or 4D visualization can itself become
the conceptual model. A 3D visualization is a representation of site data in a realistic or
relative context to its actual geospatial location. A 4D visualization includes the fourth
dimension of time and is therefore a transient depiction of a 3D model or, more simply,
a 3D animation.
3D visualization is not new. Geologists have illustrated their data with 3D drawings
and models for centuries. What have changed are the tools. What was once pen and ink is
now a suite of geospatial visualization technologies that allow us to rapidly create
hundreds of multidimensional views of a site, animated over multiple parameters and
digitally distributable in an interactive format.
technologies specifically designed to capture and assimilate vast amounts of geospatial
site data and render those data into interactive 3D or 4D visualizations. An expanding
assortment of visualization software is available to address the needs of the working
hydrogeologist. CTech’s MVS/EVS Pro, Esri ArcGIS 3D Analyst, RockWare RockWorks,
and EarthVision by Dynamic Graphics, Inc., are examples of some commonly used com-
puter programs.
Desktop workstations have the ability to process large data sets, and with the devel-
opment of publicly accessible supercomputers and distributed systems, data density
limitations are being overcome. Open-source distributable software, such as VisIT and
ParaView, is gaining popularity as scientists recognize how their high-resolution data
can be customized and visualized in new ways using supercomputers. High-end 3D
visualization labs are emerging at universities across the world. Visualization software
originally designed for other industries, such as Autodesk 3ds Max, Maya, and NewTek
LightWave, is finding hydrogeologic applications. Geologists are finding themselves
learning about software renderers and video editors as they develop their CSMs.
Because all geospatial data have the same basic structure (X, Y, Z, n1, n2, n3,…), visualiza-
tion techniques are highly adaptable from one application to another. For example, output
from multiple geostatistical modeling packages can be exported into a single common
visualization engine for rendering, further analysis, or delivery. The same technologies


used by doctors to visualize brains can be used by geologists to visualize aquifers. The
ability to move between packages taps the open-source nature of geospatial data, expand-
ing the toolbox and simplifying collaboration.
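
To make the shared (X, Y, Z, n1, n2, ...) structure concrete, the snippet below stores
two hypothetical sample points, each with coordinates plus attributes, and writes them
as a delimited text table that another package could ingest; all names and values are
illustrative.

import numpy as np

# Generic geospatial record: coordinates plus any number of attributes.
# Here, two hypothetical attributes: a benzene concentration and a lithology code.
dtype = [("x", float), ("y", float), ("z", float),
         ("benzene_ugL", float), ("litho_code", int)]

points = np.array([
    (500010.0, 4100020.0, 12.5, 350.0, 2),
    (500042.0, 4100055.0, 10.1,  75.0, 1),
], dtype=dtype)

# Because the structure is so simple, moving between visualization packages is
# often just a matter of writing a delimited text file the next tool can read.
np.savetxt("samples.txt",
           np.column_stack([points[name] for name in points.dtype.names]),
           header="x y z benzene_ugL litho_code", comments="")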
3D data availability has exploded. With online government geospatial databases, Web-
map services, and geospatial depots, more data are readily available to be visualized than
ever before. The Web has brought much of the world’s hydrogeologic data into the digi-
tal age. Groundwater-well and oil-well data sets are ever-expanding. Remote sensing and
geophysical data collection techniques are increasing in both coverage and resolution.
Land use, boundary and infrastructure vectors, and flood maps are readily available in
useable geospatial formats. Governments have invested significant resources in mapping,
and we are the beneficiaries.
Visualization displays are also rapidly increasing in both resolution and features.
Desktop computer monitors are available at resolutions approaching 6000 × 6000 pixels.
Monitors are being tiled into visualization walls, and large-scale visualization rooms
and caves are in use. Innovative delivery systems are being tested, including virtual
reality, augmented reality, 3D printing, holograms, and active and passive 3D. Tablets
and handhelds are beginning to bring rapid and real-time visualization to the field, and
common file formats such as .pdf are being expanded to support interactive 3D.

6.2 3D Conceptual Site Model Visualizations


3D visualization is not limited to static 3D views. Instead, multidimensional,
high-resolution, multiframe, interactive, comprehensive, portable, and distributable
visualization packages are being developed that assimilate massive amounts of site data
into intuitive visual systems. They are used to demonstrate both concrete and abstract
ideas and represent legacy, real-time, and forward modeling data. 3D conceptual site
model visualizations (CSMVs) are used for the following purposes:

• Communication
• 2D and 3D paper graphics production
• Training and educating
• Project management
• Performance tracking and quality control/quality assurance (QA/QC)
• Data assimilation
• Visualization of project geospatial databases

CSMVs are built with flexible interfaces and adaptable workflows designed to allow
for rapid and effective deployment of 3D or 4D to meet hydrogeologic programmatic needs.
Every hydrogeologic CSM is unique. No two sites demand exactly the same sequence of
visualizations to be built, interpolations to be performed, or animations to be rendered.
However, because of the common structure of geospatial data, many commonalities exist
between all forms of 3D visualization. What follows is an approach to the development of
3D CSMVs.

A CSMV can be thought to exist within a digital 3D geospatial framework. The framework
establishes the space within which assimilated data can be visualized in a uniform
format. Optimally, the framework consists of a single site-wide verified data set
visualized using software that relates the spatial data. For this reason, site-wide
digital ground elevation data sets make good geospatial frameworks. Usually, the
framework data set is the first data set to be visualized. The concept of a framework
data set is analogous to a basemap in 2D mapping applications. In general, the framework
data set is designed to

• Define the coordinate extents of the model
• Utilize both site-specific and large geographic areas as needed
• Prepare the space for the assimilation of all the appropriate site data into a
  comprehensive and uniform visual format
• Provide a reliable data set for comparative visual QA/QC of forthcoming data
• Easily assimilate new data as they become available

A simple visualization framework might consist of a 3D topographic surface defining the
X and Y extents of a model. Coordinate limits of the visualization may extend beyond the
visualization framework in any direction as needed to accommodate future data sets or
conceptualization.

The topographic surface is usually the highest-resolution layer of a 3D hydrogeologic
CSM. For large-scale regional models with high-resolution data, the framework may be
designed to automatically adjust resolution with zoom levels to accommodate common
hardware limitations. Figure 6.1 shows an example of a tiled mosaic of National Elevation
Dataset (NED10) topography data as the basis for a regional visualization framework.
Figure 6.1 also shows four NED10 tiles stitched together and visualized in 3D at full
resolution. For each tile, a consistent color scheme has been applied that accounts for
the minimum and maximum extents of elevation for the combined visualized tiles. Larger
areas can be shown, but at lower resolutions on most workstations. Higher-resolution
Light Detection and Ranging (LiDAR) topographic data, visualized in Figure 6.2, may also
be incorporated into a visualization framework. Large seamless data sets covering
regional areas processed on supercomputers can eliminate the need for the kind of
subdividing shown in Figure 6.1. This is the future of visualization.

FIGURE 6.1
Explanation in text.
Once a 3D visualization framework is designed, it can be populated with project data.
A relational visualization database (RVD) is designed for CSMV development. Data
incorporated into the RVD include all pertinent geospatial site data, both current and
legacy, within the coordinate extents of the visualization framework. The RVD may include
digital and tabular data as well as scanned paper maps and images. Common data types
include well logs, geologic descriptions, geophysical data, soil data, 4D surface and
groundwater chemistry data, 4D potentiometric surface data, forward modeling output,
preexisting maps, preexisting GIS and geospatial databases, cultural data, surface and
infrastructure data, and conceptual data. Except in cases where on-the-fly coordinate
projections will be used, all data in the RVD must be defined within a consistent
coordinate system. As discussed in Chapter 3, it is advisable to always use one uniform
coordinate system, as many programs for spatial data analysis cannot project on the fly.

Legacy paper and .pdf maps can also be rasterized and georeferenced to be draped onto any
related 3D surface and included in the database, a concept discussed further in Chapter 7
with example applications. Figure 6.3 shows how georeferenced paper maps can be draped
from surface to surface. Figure 6.4 shows an example of five paper maps, an aerial
photograph, and four tiles of NED10 data in 3D geospatial synergy. Frame-by-frame 4D
animations chronologically scrolling through legacy paper documents draped onto a 3D
topographic surface can be a quick and effective way to display historic site conditions
and interpretations.

FIGURE 6.2
Explanation in text.

FIGURE 6.3
Explanation in text.
Once the data have been assimilated in the RVD, a set of 3D visualizations can be
developed to visually apply QA/QC to the data for any incorrect or misplaced elements.
Figure 6.5 shows boring coordinate errors detected by visualization QA/QC. Once the data
are ready, the visualizations are optimized and rendered for delivery. The interactive
animations (IAs) can be organized in a file structure or interface, linking to source
data if desired, along with a visualization viewer (e.g., ArcScene by Esri), for end use.

Figure 6.6 shows a sample CSMV interface for a remediation site. Individual IAs are
launched into a distributable player that allows end-users to rotate, pan, zoom, capture,
manipulate, and analyze frames of the animations. The following discussions and figures
of fictionalized sites (built using CTech MVS and edited with Adobe Photoshop)
demonstrate some of the ways CSM data can be visualized in 3D.

FIGURE 6.4
Explanation in text.

FIGURE 6.5
Explanation in text.

FIGURE 6.6
Explanation in text.

6.2.1 3D Views of Geologic Model


Many different techniques are used to visualize geologic models in 3D. All elements of
geologic data can be visualized, from points to lines to surfaces to volumes. Traditional
geologic mapping techniques, such as cross sections, fence diagrams, and isopleth maps,
translate naturally into 3D. Geostatistical algorithm output from software geology
builders can be visualized across platforms. 2D and 3D grids used by modeling packages
can be imported and seen. Stratigraphic layers can be stretched apart or turned on and
off like lights. Faults can be visualized as complex 3D surfaces, and fault blocks can be
displaced, uplifted, and eroded through 3D animations. Slice planes can be run through
models in any direction. Multiple slice planes can be positioned anywhere, and 3D
well-to-well fence diagrams can be cut from a model. One can apply custom color scales,
display contours, or make geologic layers pinch out and disappear below specified
thicknesses. Vertical exaggeration can be applied to discern subtle topographic features
or for projects that have a large horizontal span relative to depth. Transparency values,
lighting effects, and visual effects, such as volume rendering, can be customized. Any
parameter can be animated, from benzene concentrations over time to stratigraphic
thickness isovolume levels. Figures 6.7 and 6.8 show several views of two geologic
models. Figure 6.7 is a kriged geologic model based on correlated boring data, and
Figure 6.8 is the result of indicator kriging on cone penetrometer test (CPT) data.
Software packages such as MVS contain several tools to conceptually manipulate geo-
logic models within the context of the 3D visualization workspace. For example, Figure
6.9 depicts a soil plume to be remediated by excavation. Using the MVS overburden
module, the excavation surface is automatically calculated and visualized based on user
inputs. Excavation volumetrics are also calculated. The third frame of Figure 6.9 shows
the excavation turned inside-out, revealing the lithologic units to be excavated. If desired,
volumetrics of each lithologic unit within the excavation can be quickly calculated from
the model, informing how much clay, sand, gravel, or debris fill material will be removed.
Such volumetrics can be very useful when evaluating disposal options for contaminated
soils.
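
A simplified version of such lithologic volumetrics can be computed directly from gridded
model arrays, as in the sketch below; it assumes a uniform cell volume and integer
lithology codes, and it is not the MVS overburden algorithm itself.

import numpy as np

def excavation_volumes(litho, excavate_mask, cell_volume):
    """Volume of each lithologic unit inside an excavation.

    litho         : 3D integer array of lithology codes (e.g., 1 = clay, 2 = sand)
    excavate_mask : 3D boolean array, True where material will be removed
    cell_volume   : volume of one grid cell (uniform grid assumed)
    """
    codes, counts = np.unique(litho[excavate_mask], return_counts=True)
    return {int(c): n * cell_volume for c, n in zip(codes, counts)}

# Hypothetical 3D model: 20 layers of 50 x 50 cells, each 1 m x 1 m x 0.5 m.
litho = np.random.randint(1, 4, size=(20, 50, 50))
mask = np.zeros_like(litho, dtype=bool)
mask[:5, 10:30, 10:30] = True            # excavation footprint, top five layers
print(excavation_volumes(litho, mask, cell_volume=0.5))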
Virtually any element of a CSMV can be animated in 4D; multiple elements can be
animated simultaneously, and spatial analyses can be performed as needed. Figure 6.10
shows an example of a virtual core extracted from a geologic model at a proposed well
location. A copy of the virtual core accompanies the logging geologist to the field to act as
a guide for identifying marker beds during drilling. Figure 6.11 displays two frames from
a stratigraphic animation in which only the borings defining the active stratigraphic units
are displayed. Figure 6.12 shows surface volumetrics of a containment dike calculated by
simulating flooding. Figure 6.13 shows the effects of vertical exaggeration on a geologic
model.

FIGURE 6.7
Explanation in text.

FIGURE 6.8
Explanation in text.

FIGURE 6.9
Explanation in text.

FIGURE 6.10
Explanation in text.

FIGURE 6.11
Explanation in text.

FIGURE 6.12
Explanation in text.

FIGURE 6.13
Explanation in text.

6.2.2 4D Views of Groundwater Chemistry


4D visualization is a powerful tool for revealing trends in groundwater conditions. Figure
6.14 depicts frames captured from a visualization of a groundwater remediation system.
Incorporating thousands of groundwater samples collected over a decade, concentrations
were visualized as a 4D IA tracking plume remediation conditions relative to groundwater
elevations. Well-construction details, sampling locations, property boundaries, and aerial
photographs are also incorporated. A flight beneath the site reveals the effect of the pump-
and-treat system on groundwater elevations and chemical concentrations through time
while simultaneously displaying daily sampling plans (purple balls) with dynamic titles.
The visualization can be viewed and manipulated on a PC with no additional software to
purchase by the end-user. Such a cohesive demonstration of legacy data from a complex
site would be difficult, if not impossible, to create if relying only on traditional 2D maps,
charts, and tables. 4D animations are able to assimilate multiple components, demonstrat-
ing in minutes what may otherwise take hours, days, or longer.
Grids used by geologic, chemical, and numerical modeling packages to assign calculated
values across a site can also be visualized in 3D or 4D. Figure 6.15 displays
visualizations of a rectilinear grid, a convex-hull grid, and a rotated finite-difference
grid. Grids may vary greatly in both resolution and complexity; however, complex and
unstructured grids can similarly be visualized in 3D. Output from the modeling packages
can also be visualized and animated. Figure 6.16 shows particles ready to be animated
along 10 layers of fate-and-transport modeling output. Figure 6.17 displays surface
advectors animated along gravity-defined pathways atop a potentiometric surface.
Figure 6.18 shows pumping-test drawdown data animated over time. Note that each color
separation along the boring tubes equals only 0.1 ft of drawdown. Thus, this 4D animation
depicts well drawdown values from a common Z elevation but not the actual geospatial loca-
tion of the potentiometric surface. Such visualization is very useful in understanding hydrau-
lic communication between wells but must not be confused with a visualization of actual
geospatial conditions. The hydrogeologist must clearly articulate the nature of the data to the
external viewer or user of the 4D model to minimize confusion and prevent interpretive error.

FIGURE 6.14
Explanation in text.

FIGURE 6.15
Explanation in text.

FIGURE 6.16
Explanation in text.

FIGURE 6.17
Explanation in text.

FIGURE 6.18
Explanation in text.

FIGURE 6.19
Explanation in text.


6.2.3 Views of 3D Plumes and Soil Plumes


3D visualization of subsurface soil and groundwater data is useful for many purposes,
including calculating the volume of contaminated groundwater and soil and the overall
contaminant mass in the subsurface; performing spatial moment calculations, such as the
center of mass and the plume spread; visualizing the spatial relationship of the plume
to receptors; identifying preferential pathways; and verifying the horizontal and
vertical nature and extent of contamination. 4D animations of the above parameters can
help visualize changes over time and are very useful in monitored natural attenuation
studies, described further in Chapter 8. Figure 6.19 shows a visualization of a 3D soil
plume kriged from chemical samples collected during a boring investigation. Frames
captured from two types of animations are shown: a concentration-based animation, whereby
each frame depicts plume concentrations at increasing isolevels, and a contaminant-based
animation, whereby each frame represents a different chemical of concern (COC) at the
appropriate regulatory levels. Higher-resolution geophysical log data, CPT data, or rapid
optical screen tool data can also be visualized (Figure 6.20). Because remote sensing and
geophysical data are often collected in very fine vertical intervals (centimeter scale or
less), vertical exaggeration must often be used to visualize subtle vertical changes, and
grids must be of sufficient resolution to capture these variations, which are often
critical to the CSM.
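
The spatial moment calculations mentioned above reduce to simple sums over a gridded
concentration field. The following is a minimal sketch assuming a uniform grid, a
dissolved-phase plume, and a single effective porosity value; all inputs are placeholders.

import numpy as np

def plume_moments(conc, x, y, z, cell_volume, porosity):
    """Zeroth and first spatial moments of a gridded dissolved plume.

    conc        : 3D array of concentrations [mass per volume of water]
    x, y, z     : 3D arrays of cell-center coordinates (same shape as conc)
    cell_volume : bulk volume of one cell; porosity : effective porosity [-]
    Returns (total dissolved mass, (x, y, z) coordinates of the center of mass).
    """
    cell_mass = conc * porosity * cell_volume        # dissolved mass in each cell
    total = cell_mass.sum()
    center = np.array([(x * cell_mass).sum(),
                       (y * cell_mass).sum(),
                       (z * cell_mass).sum()]) / total
    return total, center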

6.2.4 Specialty 3D Visualizations


Thanks to the common structure of geospatial data and the flexibility of 3D digital visual-
ization tools, a wide assortment of specialty visualizations and analyses can be performed.
For example, on-the-fly mathematical operations can be performed on 3D fields. Figure
6.21 shows three frames, the first two of which depict the site topography and aerial photo-
graphs for a site in 1964 and 2004, respectively. The third frame shows the volume of mate-
rial accumulated between the surfaces over that time span, calculated by subtracting the
1964 surface from that of 2004 (an example of grid math). The volume of the intersection
of commingling plumes or the percentage of an element beneath a surface vector boundary
could similarly be calculated, as could any number of other mathematical operations,
simple or complex, on related fields in a visualization.
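
Grid math of this kind is nearly a one-liner once both surfaces are sampled on the same
grid. The sketch below assumes two co-registered elevation arrays with square cells; the
arrays shown are hypothetical.

import numpy as np

def accumulated_volume(z_old, z_new, cellsize):
    """Net volume of material gained between two co-registered elevation grids.

    z_old, z_new : 2D arrays of ground elevation on the same grid
    cellsize     : grid spacing (square cells assumed)
    """
    dz = z_new - z_old                      # thickness change in each cell
    return dz.sum() * cellsize ** 2         # positive = net accumulation

# Hypothetical 1964 and 2004 surfaces on a 5-m grid:
z1964 = np.zeros((200, 200))
z2004 = z1964 + 0.3                         # uniform 0.3 m of fill
print(accumulated_volume(z1964, z2004, cellsize=5.0))  # approximately 300,000 m^3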
Figure 6.22 shows an extreme use of vertical exaggeration. 2D road maps, shown in red,
were draped onto LiDAR data over a large regional area. The LiDAR data are turned off,
leaving the 2D roads with their newly assigned third dimension. A duplicate of the roads
created 2 ft down, shown in black, is the compaction zone. The spatial relationship
between the compaction zone and the regional potentiometric surface, in blue, is shown.
The vertical exaggeration is 200 times, which allows us to visually perform this 2-ft
vertical analysis beneath a 1.3-mi² region in a single view. While this represents an
appropriate use of vertical exaggeration, for many applications this extent of
exaggeration can be misleading and result in erroneous data interpretations. In some
cases, excessive exaggeration may be used for manipulative purposes, similar to the
intentional modification of the axes of graphs to mislead the reader. In general, the
hydrogeologist must be mindful of vertical scale when reviewing 3D visualizations and
maintain ethical standards in his or her own practice.

FIGURE 6.20
Explanation in text.

FIGURE 6.21
Explanation in text.

FIGURE 6.22
Explanation in text.
Many additional applications of 3D and 4D visualization are possible. Custom
conceptualizations of land development projects can be visualized (Figure 6.23), as can
3D buildings and infrastructure built interactively or imported as 3D CAD models
(Figure 6.24). Sediment transport modeling, flood modeling (Figure 6.25), air modeling,
and meteorologic data can all be incorporated. In fact, any type of digital geospatial
data can be visualized, including the high-resolution X-ray computed tomography data
shown in Figure 6.26. With a good grasp of the structure of geospatial data and an
understanding of modern 3D and 4D visualization platforms, hydrogeologic CSMs can be
developed that display site data and modeling projections more comprehensively and
intuitively than ever before.

FIGURE 6.23
Explanation in text.

FIGURE 6.24
Explanation in text.

FIGURE 6.25
Explanation in text.

FIGURE 6.26
Explanation in text.

Citations
Figure 6.3 contains draped maps obtained from D. A. Wierman, A. A. S. Broun, and B. B.
Hunt, “Hydrogeologic Atlas of the Hill Country Trinity Aquifer, Blanco, Hays, and Travis
Counties, Central Texas,” July 2010; and from R. J. Lindgren, A. R. Dutton, S. D. Hovorka,
S. R. H. Worthington, and S. Painter, “Conceptualization and Simulation of the Edwards
Aquifer, San Antonio, Texas,” SIR 2004-5277, 2004.


Figure 6.26 contains data provided courtesy of The High-Resolution X-ray Computed
Tomography Facility at The University of Texas at Austin.
7
Site Investigation

Site investigation is generally the first phase of any project in professional
hydrogeology. It is the stage at which available information is compiled and new data
are collected, with the primary goal being the development of a scientific, defensible
conceptual site model (CSM). The term site investigation is used in different technical
and regulatory
contexts, and the exact scope depends on the application in question. The term is most
widely applied in the context of hazardous-waste site investigation. For example, under
the Comprehensive Environmental Response, Compensation, and Liability Act, com-
monly known as Superfund, site-investigation activities are conducted during both the
preliminary assessment/site inspection and remedial investigation/feasibility study
phases. Under the Massachusetts Contingency Plan, site investigation activities occur dur-
ing phase 1 (site investigation) and phase 2 (comprehensive site assessment) of the cleanup
process. For water-resources applications, site investigation is a more general term that
encompasses initial exploration and data analysis activities.
Regardless of the exact context, site investigation is the process of collecting and
analyzing hydrogeologic data to generate a CSM that can be used to make informed
technical decisions. At hazardous-waste sites, a well-conceived site investigation is
critical in order to

• Determine the nature and extent of contamination
• Evaluate risk to human and ecological receptors
• Develop analytical and numeric models to perform forensic analysis and/or to predict
  future conditions
• Select and design an appropriate remedial technology (discussed in detail in Chapter 8)

For water-resource applications, the site investigation phase can be used to answer the
following questions:

• Is this site suitable for water-supply development?
• What is the potential yield of this water supply?
• What will be the short- and long-term effects of water-supply development on related
  hydrologic and ecological resources?
• What is the long-term sustainability of this water supply?

Data analysis and visualization are compulsory steps in the site investigation process.
The aim of this chapter is to outline data types and sources typically used in conducting
hydrogeological investigations and to provide examples of how these data can be analyzed
and visualized.


7.1 Data and Products in Public Domain


With the advent and expansion of the Internet, a wealth of hydrogeological information
has been made available in the public domain. Hydrogeologists who are not experienced in
geographic information systems (GIS) are often amazed at the amount of data that are
freely available for download. At the beginning of a project, it is no longer appropriate
to claim that no data exist. On the contrary, it is now possible to develop a preliminary
CSM and create site basemaps without any site-specific field investigation.
The collection and analysis of public-domain data should be a required step in the site
investigation process, as it provides a regional conceptual context for the site-specific
data to be collected. In other words, the scale of available public-domain data is
typically broader than that of the site investigation, which is useful in understanding
the macro processes that shape the site, such as regional geologic structure, watershed
hydrology, regional groundwater flow, regional groundwater supply development,
geomorphology, and regional depositional patterns. Similar to how large-scale trend
should be removed from data prior to evaluating local data variation with kriging, this
regional conceptual model is necessary to develop and justify a local conceptual model.
For example, for sites with karst geology, it will often be impossible to understand
local groundwater flow directions without a broader regional understanding of recharge
and discharge locations (i.e., springs).
For the reader’s benefit, a list of useful public-domain data sources is provided as Table
7.1. These data are often in a file format readily compatible with ArcGIS, such as shapefile,
feature class, or raster. This, again, highlights the importance of using GIS at each stage of
project implementation. After downloading or digitizing spatial data, they can be added to
a project geodatabase. Two sources of particular interest to hydrogeological investigations are the United States Geological Survey (USGS) and the GIS office for the state in which the site resides. These two data sources are explained further in the following sections using Massachusetts as an example state.

TABLE 7.1
List of Useful Web Pages for Downloading Data Available in the Public Domain

EDR—Environmental Data Resources, Inc.: http://www.edrnet.com/index.php?option=com_content&view=article&id=112&Itemid=213
ESRI Online Resources: http://resources.esri.com/
MapMart—Aerial Imagery: http://www.mapmart.com/Products/AerialPhotography.aspx#HistoricalImagery
FEMA National Flood Hazard Layer: https://hazards.fema.gov/femaportal/wps/portal/NFHLWMS
GeoData.gov: http://gos2.geodata.gov/wps/portal/gos
National Geospatial Program: http://nationalmap.gov/viewers.html
U.S. National Atlas: http://www.nationalatlas.gov/
USGS Coastal and Marine Geology Program: http://coastalmap.marine.usgs.gov/
USGS Emergency Operations Portal: http://eoportal.cr.usgs.gov/EO/gis.php
USGS Topo Quad Download: http://store.usgs.gov/b2c_usgs/usgs/maplocator/(xcm=r3standardpitrex_prd&layout=6_1_61_75&uiarea=2&ctype=areaDetails&carea=%24ROOT)/.do
Source: Courtesy of Woodard & Curran, Inc.

7.1.1 USGS Data and Publications


The official mission statement of the USGS is as follows:

“To provide geologic, topographic, and hydrologic information that contributes to the wise management of the Nation’s natural resources and that promotes the health, safety, and well-being of the people. This information consists of maps, data bases, and descriptions and analyses of the water, energy, and mineral resources, land surface, underlying geologic structure, and dynamic processes of the earth” (USGS 2005).

In keeping with this mission statement, the USGS publishes hundreds of excellent technical
reports, maps, and data sets each year to assist the professional, academic, and regulatory
communities. Unfortunately, many hydrogeologists fail to appreciate the wealth of infor-
mation produced by the USGS. This is especially concerning as USGS data are intended to
help professional hydrogeologists in their work, and USGS data are easily accessed online.
A standard step in every hydrogeological investigation should be a review of available
USGS information for the region containing the site of interest. In some instances, stream
flow, water level, precipitation, or aquifer testing data may be available at or immediately
adjacent to the hydrogeologist’s site. At the minimum, it is likely that data from the hydro-
logic atlas or water resources assessment reports will contain information relevant to the
site investigation.
Figure 7.1 presents a screen shot of the main data access page at the USGS website.
Clicking the “Find a USGS Publication” link will lead to the USGS Publications Warehouse
(http://pubs.er.usgs.gov/), which is an excellent search utility for finding current reports
and historic reports that have been scanned to electronic format. An advanced search of the Publications Warehouse may be performed by keyword, author, title, or USGS publication number. Figure 7.2 presents an example results screen when searching for Edwards
Aquifer via keyword. Note that 115 publications were found. USGS publications often have
links to data in electronic format, so that the hydrogeologist may incorporate the data in
figures, geodatabases, groundwater models, and contour maps. An example exercise using
georeferencing to incorporate groundwater contours from a USGS report into a site map is
presented in Section 7.3.

FIGURE 7.1
Data available for download or order at the USGS Web page, http://www.usgs.gov/pubprod/. (From United
States Geological Survey (USGS), USGS Publications Warehouse, 2011. Available at http://pubs.er.usgs.gov/,
accessed June 5, 2011.)

FIGURE 7.2
Search results screen for “Edwards Aquifer.” Note that the online version of the highlighted report can be
downloaded by clicking the View Index Page link.

7.1.2 State GIS Data


State online GIS repositories are excellent resources for physical, civil (e.g., roads, town
boundaries), and imagery data. The Web site for the Commonwealth of Massachusetts
(http://www.mass.gov/mgis/) is a typical example of what states make available to the
general public. Figure 7.3 shows example spatial data layers (e.g., shapefile, raster) available
for download from the MassGIS Web site. After selecting the desired data type, individ-
ual files can be downloaded by watershed, town, groundwater basin, or other grouping.
Information related to water-supply well or surface-water intake locations may no longer
be directly available for download because of security concerns; however, oftentimes if a
special request is filed to the state GIS office, these data can be obtained.

FIGURE 7.3
Data available for download from MassGIS at http://www.mass.gov/mgis/laylist.htm. (From MassGIS, Office of
Geographic Information, 2011. Available at http://www.mass.gov/mgis/, accessed May 20, 2011.)

7.2 Database Coordination


In professional practice, hydrogeologists often inherit legacy sites, indicating that some
investigation work has already been completed. For these sites, historical data must be
assessed and incorporated into the hydrogeologist’s current project database. If this step
is not taken, there is a high likelihood of making errors as a result of operation of parallel
geodatabases. For example, data from one database may be omitted from a query, leading
to erroneous data interpretation and presentation. At the minimum, spatial data analysis
and mapping will be much more inefficient as two separate databases need to be queried
before obtaining a consolidated data set.
While it may seem as if incorporating legacy data into an existing database is straightfor-
ward, in reality, the logistics of making this happen can be quite complex. Databases from
prior consultants may have completely different relationships and field names. Additionally,
prior consultants may have used a proprietary (enterprise) database that the hydrogeologist
is unable to access. Legacy data may be in a different coordinate system that is not labeled
clearly in the database. Spatial data may be stored in AutoCAD drawings only without real-
world coordinates. The list of potential issues can be daunting. While some degree of manual
manipulation of spreadsheets and/or spatial data will invariably be required, there are several
steps the hydrogeologist can take to incorporate historic data accurately and efficiently:

• Maintain a constant database structure for all projects for both tabular and spatial data
• Use modules/macros to modify historic tables and make them compatible with the current project database (see the example script below)
• Perform quality control/quality assurance (QA/QC) checks on both the data in the
historical database (against raw laboratory reports, for example) and on the upload
of historical data into the current database (to make sure all data were transferred,
for example)

The investment in consolidation of historical data will be greatly beneficial to future spa-
tial data analysis exercises for the project in question.
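The table-harmonization step recommended above can be scripted. The following is a minimal Python sketch, using the pandas library, of a legacy-table migration with a simple QA/QC record count check. The file names and the field-name crosswalk are hypothetical, and a real migration would also reconcile units, coordinate systems, and data qualifiers.

import pandas as pd

# Hypothetical crosswalk from a prior consultant's field names to the
# current project database schema.
FIELD_MAP = {
    "WellID": "location_id",
    "SampDate": "sample_date",
    "Param": "analyte",
    "Result_mgL": "result",
}

legacy = pd.read_csv("legacy_results.csv")    # export from the prior consultant's database
current = pd.read_csv("current_results.csv")  # export from the current project database

# Rename legacy fields and keep only the columns defined in the current schema.
migrated = legacy.rename(columns=FIELD_MAP)[list(FIELD_MAP.values())]
consolidated = pd.concat([current, migrated], ignore_index=True)

# QA/QC check: confirm that no legacy records were lost during the transfer.
assert len(consolidated) == len(current) + len(legacy), "record count mismatch"

consolidated.to_csv("consolidated_results.csv", index=False)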

7.3 Georeferencing
When inheriting a project, the hydrogeologist may obtain AutoCAD data that are in relative
rather than real-world coordinates. In other words, coordinates of the AutoCAD drawing are
based solely on the relative position of the items in the drawings. In this manner, the drawing
is to scale but does not have coordinates pertaining to any geographic or projected coordinate
system. The hydrogeologist may also receive very useful figures in hard-copy format or may
download geologic reports from the USGS as Portable Document Format (.pdf) files. USGS
reports may contain critical data, such as surficial geology or groundwater contours, that are
directly applicable to the site in question. The question thus arises: “How does the hydrogeolo-
gist link these AutoCAD and hard-copy/.pdf files to spatially referenced data in the real world
so that comprehensive data querying, analysis, and mapping can be performed?”
The process of defining the real-world location of unreferenced AutoCAD or raster data
is termed georeferencing. Georeferencing is easily performed in ArcGIS as described in
the following two sections.

7.3.1 Georeferencing AutoCAD Data


As AutoCAD data are typically to scale (but potentially with fictitious, relative coordi-
nates), georeferencing is relatively simple. All that is needed from the entire CAD data
set are two point locations with real-world coordinates. The relative and real-world coor-
dinates of these two points are used to create a world file (.wld) that performs the spatial
transformation in ArcGIS. An example application illustrating use of the world file in the
georeferencing of AutoCAD data is presented below.
A hydrogeologist has inherited a legacy site from a previous consultant. All the spatial
data from the prior consultant are located in an AutoCAD drawing with relative coordi-
nates. Figure 7.4 presents the AutoCAD data, which consist of site buildings, roads, and 15
monitoring wells. The relative coordinates of two wells (MW-3 and MW-10) are also shown
in Figure 7.4. In order to create a spatial reference for the entire CAD data set, all that are
needed are surveyed, real-world coordinates for these two wells. After obtaining these
real-world coordinates, a world file can be created as depicted in the following:

4.037,3.674 491413.8,177708.8
7.750,7.681 491737.0,178058.5

The top row contains the relative and real-world coordinates (X, Y) for MW-3, on the left and right, respectively, separated by a space. The bottom row contains the relative and real-world coordinates for MW-10 in the same format.

FIGURE 7.4
AutoCAD data in relative coordinates as viewed in ArcGIS. Esri® ArcGIS ArcMap graphical user interface. Copyright © Esri. All rights reserved.
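The arithmetic behind this two-row world file can be sketched in a few lines of Python. The snippet below derives the uniform scale, rotation, and translation implied by the two control points (a similarity transform) and maps any relative CAD coordinate into the projected system. It is illustrative only, reusing the MW-3 and MW-10 coordinates shown above.

import math

# Control points from the world file: (relative x, y) -> (real-world x, y).
src = [(4.037, 3.674), (7.750, 7.681)]              # MW-3, MW-10 in CAD coordinates
dst = [(491413.8, 177708.8), (491737.0, 178058.5)]  # surveyed coordinates

# Scale and rotation implied by the vector between the two control points.
dx_s, dy_s = src[1][0] - src[0][0], src[1][1] - src[0][1]
dx_d, dy_d = dst[1][0] - dst[0][0], dst[1][1] - dst[0][1]
scale = math.hypot(dx_d, dy_d) / math.hypot(dx_s, dy_s)
rot = math.atan2(dy_d, dx_d) - math.atan2(dy_s, dx_s)

def to_real_world(x, y):
    """Map a relative CAD coordinate into the projected coordinate system."""
    xs, ys = x - src[0][0], y - src[0][1]  # offset from the first control point
    return (dst[0][0] + scale * (xs * math.cos(rot) - ys * math.sin(rot)),
            dst[0][1] + scale * (xs * math.sin(rot) + ys * math.cos(rot)))

print(to_real_world(7.750, 7.681))  # recovers MW-10's surveyed location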
The first step of the georeferencing process in ArcMap is to import the AutoCAD data set,
named “Basemap_Drawing Scale.dxf” for this example. All relevant data in this drawing are
stored in the polyline layer. After opening the georeferencing toolbar, the user can select this
polyline layer as the georeferencing target. The relative and real-world coordinate transfor-
mation information is specified in the data-link table, which is where the existing world file
is loaded. Figure 7.5 is a screen shot depicting the georeferencing toolbar, the data-link table,
and the load option where the world file for this example is selected (“Basemap_worldfile​
.wld”). Figure 7.6 presents the updated link table with the AutoCAD drawing in the back-
ground. MW-3 and MW-10 are the control points for this example, which are symbolized in
the background in Figure 7.6 with a green cross symbol. The blue lines in Figure 7.6 connect
the control points to their respective real-world location. Note also that the control point
relative coordinates and real-world X, Y coordinates can be entered manually or by select-
ing locations on the map (which will be demonstrated in Section 7.5.2). After accepting the
updated link table, the CAD data layer is automatically adjusted to the real-world coordinate
system based on the survey data of the two points. The georeferenced data set is presented
in Figure 7.7 with labeled coordinates demonstrating a successful transformation.

FIGURE 7.5
After selecting the layer to be georeferenced (“Basemap_Drawing Scale.dxf”), the world file can be imported to
the link table. Esri® ArcGIS ArcMap graphical user interface. Copyright © Esri. All rights reserved.

FIGURE 7.6
Screen shot of loaded world file coordinates and the resulting link points (shown in the table). Note that the
blue lines connecting the initial map locations with the real-world locations cannot be seen in their entirety
because of the excessive distance in between. Esri® ArcGIS ArcMap graphical user interface. Copyright © Esri.
All rights reserved.

FIGURE 7.7
Screen shot of georeferenced data in real-world coordinates. Note that the coordinates are in unknown units
because the data have not yet been assigned the correct projection. This is a simple step in ArcGIS using the
Define Projection tool. Esri® ArcGIS ArcMap graphical user interface. Copyright © Esri. All rights reserved.

7.3.2 Georeferencing Raster Data


Scanned images of hard-copy reports and .pdf files downloaded from the USGS can be
easily converted to raster format [Joint Photographic Experts Group (.jpg) or Tagged Image
File Format (.tif), for example]. Once these files are in raster format, they can be imported
into ArcMap and georeferenced to the real world. This is a very useful tool because it
enables the hydrogeologist to digitize (i.e., trace) pertinent information contained on the
georeferenced files. An example application illustrating use of raster georeferencing to
digitize data obtained in .pdf format from the USGS is presented below.
A hydrogeologist is working on a hypothetical project in Southern California assessing
the influence of geologic faults on groundwater flow. Preliminary research is performed
at the online USGS Publications Warehouse, and a useful report (Woolfenden and Koczot
2001) is identified, which contains fault delineations and contour maps of the aquifer poten-
tiometric surface in and around the fault zone of interest. One map, presented as Figure 7.8,
is extracted from the .pdf report, saved as raster format (.jpg), and imported into ArcMap.
There are two requirements for georeferencing this figure:

• The ArcMap document must contain spatially referenced data in the study location in real-world coordinates.
• Both the spatially referenced data and the scanned raster data (.jpg) must have common features that can serve as control points in the georeferencing.

FIGURE 7.8
Figure of potentiometric contours and fault lines from Woolfenden and Koczot (2001). This example was selected because of its classic demonstration of faults as groundwater barriers. The tectonic valley groundwater system in this area has been highly studied, and an example case study is also presented in the work of Fetter (2001).

In order to satisfy the first requirement, the World Street Map layer from ArcGIS Online
is added to the ArcMap document in the study area using the Add Data command as
shown in Figure 7.9. The next step is identifying viable features to serve as control points.
Common features used in raster georeferencing are road or stream intersections, building
or street corners, the mouth of a stream, and physical land features such as rock outcrops
or land jetties (Esri 2010b). To reiterate a common theme of this book, the appropriateness
of features for georeferencing depends on the scale of the investigation in question. The
mouth of a stream may be well suited for a basin-scale data analysis, but for finer resolu-
tion applications, features such as building corners or road intersections will prove more
accurate. Figure 7.8 clearly shows the intersection points of major roads, which can be
linked to the real world through the street map image. Therefore, roadway intersections
are selected as control points for this application.
For most applications, a minimum of three control points is typically required for the
data transformation. In general, it is recommended to distribute these control points across
the raster to achieve the most accurate shift (Esri 2010b). For this example application, four
control points are specified by first selecting a road intersection point on the raster and
then selecting the corresponding road intersection on the street map. The control points
are shown in Figure 7.10 (top and bottom) for the raster and street map, respectively. Note
that the resolution of the raster is worsened when displayed in ArcMap. The image quality
is further impaired by the georeferencing process if significant warping occurs. The link
table for these manually specified points is presented in Figure 7.11, which also shows the
residual transformation error caused by each point. This error can conceptually be inter-
preted as the difference between where the point was exactly specified to where it ends
up based on the overall spatial transformation (Esri 2010b). Ideally, the raster and the real
world line up perfectly, and this error is zero. Conversely, the raster data may have an inac-
curate scale or misrepresent spatial features, leading to a highly warped transformation.
In this case, the first three control points line up perfectly, while the fourth (at the top of the
map where the road intersection is less clearly defined on the raster) is slightly distorted.
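The residual errors reported in the link table can be reproduced outside of ArcGIS. The following Python sketch fits a first-order (affine) transformation to four control-point links by least squares and reports the per-link residuals and the overall root mean square error. The pixel and map coordinates shown are hypothetical.

import numpy as np

# Hypothetical control-point links: raster (column, row) -> real-world (x, y).
src = np.array([(120.0, 860.0), (940.0, 830.0), (510.0, 95.0), (70.0, 140.0)])
dst = np.array([(489200.0, 176300.0), (492600.0, 176450.0),
                (490850.0, 179500.0), (489050.0, 179280.0)])

# Fit the six-parameter affine transform [x', y'] = [x, y, 1] @ A by least squares.
# With more than three links the system is overdetermined, so residuals appear.
G = np.hstack([src, np.ones((len(src), 1))])
A, *_ = np.linalg.lstsq(G, dst, rcond=None)

fitted = G @ A
residuals = np.linalg.norm(fitted - dst, axis=1)  # per-link residual, map units
rmse = np.sqrt(np.mean(residuals ** 2))           # overall RMS error
print(residuals.round(2), round(rmse, 2))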
Once satisfied with the temporary transformation, the georeferencing of the raster can be updated (temporarily accepted with an external display file) or permanently transformed into the new coordinate system.

FIGURE 7.9
A World Street Map layer from ArcGIS Online is added as a basemap to the ArcMap document for use in raster georeferencing. Esri® ArcGIS Online graphical user interface. Copyright © Esri. All rights reserved.

FIGURE 7.10
Georeferencing control points are symbolized by the four red crosshair symbols that link the USGS map (top) to the street map (bottom) in real-world coordinates. World Street Map sources: Esri, DeLorme, NAVTEQ, TomTom, USGS, Intermap, iPC, NRCAN, Esri Japan, METI, Esri China (Hong Kong), Esri (Thailand).

FIGURE 7.11
Link table for the four control points depicted in Figure 7.10. Note that these points can be exported as a world file, which can be used to automatically georeference rasters with the same extents as the USGS map. Esri® ArcGIS ArcMap graphical user interface. Copyright © Esri. All rights reserved.
After performing the georeferencing, the hydrogeologist may digitize relevant data for
analysis and incorporation into a geodatabase. For this example, the groundwater eleva-
tion contours and fault lines are traced using the Draw toolbar in ArcMap and then con-
verted to spatial features in shapefile or feature-class format. The digitized groundwater
contours and fault lines are displayed on top of an aerial photograph (instantly obtained
through the Add Basemap tool) in Figure 7.12.
It is important to remember that the only data obtained outside of ArcMap used to gen-
erate Figure 7.12 was the USGS .pdf file. The above operations can be completed in a mat-
ter of minutes to create high-quality visualizations. More advanced analysis can also be
performed using this data with the many tools available in the ArcGIS environment. For
example, the Topo to Raster tool can be used to create a raster of the digitized poten-
tiometric surface, and then this raster can be visualized in a three-dimensional setting
in ArcGlobe. Figure 7.13 presents this data visualization scenario, which can be used to
clearly show the influence of the fault system on the potentiometric surface to nontechnical
audiences that have difficulties interpreting contour maps.
It should be noted that default topo to raster interpolation was not used to generate this
surface. Because of the presence of faults, which act as groundwater barriers (the concept
demonstrated in Figure 7.13), groundwater contours must be interpolated to separate ras-
ters on either side of the fault and then joined together using the Raster Mosaic command.
When default topo to raster interpolation is used across the fault, a conceptually incorrect
surface results as shown by Figure 7.14.
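This fault-aware workflow can also be scripted. The following is a hedged arcpy sketch, assuming a Spatial Analyst license; the workspace, shapefile, and field names are hypothetical, and the digitized contours are presumed to have been split into separate feature classes on either side of the fault.

import arcpy
from arcpy.sa import TopoToRaster, TopoContour

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\project\potsurf"  # hypothetical workspace

# Interpolate each fault block separately so the barrier is honored.
west = TopoToRaster([TopoContour([["contours_west.shp", "ELEV"]])])
east = TopoToRaster([TopoContour([["contours_east.shp", "ELEV"]])])
west.save("west_ras")
east.save("east_ras")

# Join the two blocks into a single potentiometric surface (Raster Mosaic).
arcpy.management.MosaicToNewRaster(
    ["west_ras", "east_ras"], arcpy.env.workspace, "potsurf_combined",
    number_of_bands=1)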

FIGURE 7.12
Digitized potentiometric surface contours (red lines) and faults (white lines) from the USGS map. Note that
the potentiometric surface can be more than 150 ft higher on the right side of the dividing fault than on the left
side. World Imagery Source: Esri®, i-cubed, USDA, USGS, AEX, GeoEye, Getmapping, Aerogrid, IGN, IGP, and
the GIS User Community.

FIGURE 7.13
Colored, filled contours clearly show the influence of the fault as an aquifer boundary. The higher-elevation
basin to the right of the fault receives recharge from the mountain wash that cannot be transmitted across
the fault. This has important implications with respect to water-supply wells in the isolated basin. ArcGlobe
Imagery Source: Esri® and i-cubed.

FIGURE 7.14
When interpolating across the fault, the potentiometric surface indicates that water flows uphill and has non-
sensical irregularities. In this case, the conceptual model for fault influence is critical in developing accurate
elevation contours. ArcGlobe Imagery Source: Esri® and i-cubed.

The above exercise demonstrates the power and efficiency of georeferencing tools in
ArcGIS. Hard-copy maps can be easily digitized and reformulated for quantitative spatial
analysis, greatly enhancing CSM development where limited site-specific data may exist.

7.4 Developing a Site Basemap


After consolidating available data from both public-domain sources and historical, site-
specific reporting, it is possible to generate a site basemap for use in data visualizations. The
term basemap is widely used in professional practice and is defined by Esri (2010c) as follows:

“A basemap is used for locational reference and provides a framework on which users
overlay or mash up their operational layers, perform tasks, and visualize geographic
information. The basemap serves as a foundation for all subsequent operations and
mapping. Basemaps provide the context and a framework for working with information
geographically.”

More simply, the basemap is the underlying set of data layers that appears on most map-
ping figures produced by the hydrogeologist. Layers often included in a basemap are
imagery, topographic contours, street networks, property parcels, buildings, and hydro-
logic features such as rivers, lakes, and streams. As described in Chapter 3, preconstructed
basemaps are immediately available for display within ArcGIS from ArcGIS Online. A well-
conceived basemap will clearly establish site-specific and regional features of importance
and enhance the overall visual appeal of the map. It is important to remain flexible when
creating a basemap as different clients and project stakeholders have different preferences
in terms of how maps are presented. For example, some clients may think that aerial pho-
tographs enhance the utility of maps, and others prefer simple CAD-style diagrams with
lines and polygons representing basic site features. Example basemap layers are presented
in the figures included throughout the subsequent sections of this chapter.

7.5 Developing and Implementing Sampling Plans


To this point, we have discussed data that are already available to the hydrogeologist before
initiation of site-specific field activities. While some projects may be pure desktop evalua-
tions, the vast majority involve the collection and analysis of site-specific geologic data (i.e.,
environmental sampling). These data are used to refine the preliminary CSM developed with
historical data and information in the public domain. More specifically, the field investigation
component of a project seeks to fill data gaps identified in the CSM and ultimately lead to a
defensible study conclusion and recommended course of action. Concepts related to the devel-
opment and implementation of field sampling plans are presented in the following sections.

7.5.1 Developing Sampling Plans


7.5.1.1 Systematic Planning to Balance Cost and Risk
As described in Chapter 3, one of the most critical steps in any site investigation is deter-
mining the type and quantity of data to be collected, which are typically specified through
sampling plans subject to regulatory review and approval. Primary decision factors influ-
encing the development of sampling plans are

• The conceptual justification for the sampling
• The desired level of statistical confidence in decisions made with the sampling results
• Regulatory requirements
• The available project budget for data collection and analysis

It is often quite difficult to balance the above decision factors and devise a sampling plan
that the hydrogeologist (consultant), the client (project financier), and the regulator are all
100% satisfied with, which means that compromises are necessary. Almost every project in
hydrogeology will involve a trade-off between cost and certainty (i.e., risk) with the client
pushing for lower costs and the regulator pushing for greater certainty. It is the hydroge-
ologist’s job to balance these demands and use technical expertise (e.g., quantitative data
analysis) to minimize both cost and risk to the extent practicable.
Faced with the above competing objectives, the question of how many samples are
needed becomes very challenging to answer. The most widely used method of developing
appropriate, defensible sampling plans that balance both cost and risk is the seven-step
U.S. EPA Data Quality Objective (DQO) process. The DQO process is an example of sys-
tematic planning, which is defined by the U.S. EPA (2006) as follows:

“Systematic planning is a process based on the widely-accepted ‘scientific method’ and includes concepts such as objectivity of approach and acceptability of results. The pro-
cess uses a common-sense approach to ensure that the level of documentation and rigor
of effort in planning is commensurate with the intended use of the information and
the available resources. The systematic planning approach includes well-established
management and scientific elements that result in a project’s logical development, effi-
cient use of scarce resources, transparency of intent and direction, soundness of project
conclusions, and proper documentation to allow determination of appropriate level of
peer review.”

Execution of the DQO process involves completion of the following seven steps, the first
six of which are entirely conceptual in nature. A summary of the DQO steps, adapted from
the U.S. EPA (2006) follows:

1. State the problem
Document the problem and identify the requisite project staff, budget, and schedule (e.g., contamination was discovered at a site).
2. Identify the goal of the study
Determine what the study decision will be, evaluate alternative outcomes, and
describe how the data will be used to reach a decision (e.g., use site-specific data and
risk assessment to determine if the site requires no further action or remediation).
3. Identify information inputs
Evaluate the types of data and analysis needed to reach the decision.
4. Define the boundaries of the study
Specify the spatial and temporal bounds of the study and identify decision factors
such as potential receptors and regulatory criteria.
5. Develop the analytic approach
Identify parameters of interest and develop a decision rule used to reach the rel-
evant conclusion (e.g., define a statistical hypothesis test comparing site data to
regulatory action levels).
6. Specify performance or acceptance criteria
Define data usability criteria and determine acceptable limits on decision error
(e.g., type I and type II error for the hypothesis test).
7. Develop the sampling plan
Optimize the design for obtaining data based on decision error and cost thresh-
olds. Move on to detailed work plan formulation.

Visual Sample Plan (VSP), introduced as a GIS module in Chapter 3, is a public-domain computer program designed to assist the hydrogeologist in completing the DQO process
(Matzke et al. 2010). Specifically, criteria from steps 1 through 6 are entered into VSP, which
calculates the optimal sampling design comprising step 7. VSP can help answer the chal-
lenging question: How many samples do I need?
The hypothesis testing approach employed by VSP involves defining an acceptable level of
uncertainty and computing the required number of samples based on that uncertainty and
on the expected variability of contaminants at the site. Data variability is typically estimated
from historical site-specific data, data from comparable sites, or simply professional judgment
(Matzke et al. 2010). The most common hypothesis test used in VSP for a contaminant of
interest is comparing the mean concentration of site data to the fixed, regulatory action level,
or threshold. It is recommended to set the null hypothesis to the mean concentration being
greater than the threshold value (i.e., the site is contaminated). In this manner, the burden of
proof is placed on rejecting the null hypothesis and concluding that the site is not contaminated.
The three primary inputs to VSP for a contaminant of interest are the expected standard
deviation of the data, the regulatory action level, and decision error thresholds. Decision
error metrics that require specification in VSP are described below, assuming the null
hypothesis of the site being dirty:

• Alpha (α): Alpha is the type I error for the hypothesis tests and is the probability
of falsely rejecting the null hypothesis (concluding the mean to be lower than the
threshold value when it is, in fact, greater). In the authors’ experience, the range
of acceptable type I error is typically 1%–10% (equivalent to a confidence level of
90%–99%). However, with substantive technical justification, it is possible to dem-
onstrate that data quality objectives are met at higher error levels within reason.
For example, everyone would agree that a false rejection error of 50% is unaccept-
able because type I error represents a risk to public health.
• Beta (β): Beta is the type II error for the hypothesis tests and is the probability of
falsely accepting the null hypothesis (concluding the mean to be greater than the
threshold value when it is, in fact, lower). In general, Beta is higher than Alpha
as there are no public health implications for a false positive. Costs for unneces-
sary assessment/cleanup are the primary consequences of a high type II error.
Potential consequences of decision error are summarized in Table 7.2.
• Width of the gray region: The width of the gray region is the range of true mean
concentrations below the threshold value within which it is considered acceptable
to falsely conclude the site to be dirty and conduct unnecessary cleanup. In other
words, it is generally accepted that a high probability of type II error exists when
the true mean concentration is 9.9 and the threshold value is 10.0. The prescribed
type II error is achieved exactly at the lower boundary of the gray region. Type II
error decreases for true mean concentrations below the gray region and increases
significantly for true mean concentrations within the gray region, where error
probabilities are typically 20%–95% (Matzke et al. 2010). The sample size is very sensitive to the width of the gray region, indicating that a very large number of samples will be required to determine that a true mean of 9.9 is less than 10.0 with meaningful statistical significance.

TABLE 7.2
Potential Consequences of Type I and Type II Decision Error

False negative (type I error, α): mistakenly reject the null hypothesis (i.e., erroneously conclude that site contamination does not require remedial action).
Impact: the site would not be remediated when it should be remediated.
Potential consequences: contamination continues to present a risk to human health and the environment. Relative severity: high.

False positive (type II error, β): mistakenly fail to reject the null hypothesis (i.e., erroneously conclude that site contamination requires remedial action).
Impact: the site would be remediated unnecessarily.
Potential consequences: unnecessary costs for further characterization and remediation are incurred. Relative severity: low.

VSP offers both parametric (i.e., distributional) and nonparametric hypothesis testing
options. Together with the need to assess data variability, this indicates that, as with krig-
ing, data exploration should be a mandatory precursor to running VSP. An example appli-
cation of VSP to determine the required number of soil samples at a hazardous-waste site
is presented in the following section. This represents the most common use of VSP in
professional practice.

7.5.1.2 Example Application of Visual Sample Plan



This example application involves a hypothetical legacy site with polychlorinated biphenyl
(PCB) contamination in surface soils (0–2 ft below ground surface). Based on available his-
torical data, a comprehensive remedial investigation (RI) must be submitted to the U.S. EPA with a statistically based
sampling plan to fully delineate the nature and extent of contamination. To determine the
number of samples needed to complete the investigation, the site boundary, encompassing
40 acres, and the 16 historical sampling locations are imported into VSP. The site boundary
(imported as a shapefile) and historical sampling locations (imported as a text file with the
analytical results) are presented in Figure 7.15 as seen in VSP’s visual processor.
A regulatory standard of 10 mg/kg applies to the site, above which remediation is
required. Therefore, the appropriate statistical hypothesis test in VSP is to compare the
average PCB concentration to the fixed threshold value of 10 mg/kg. However, before
executing the hypothesis test, historical data exploration must be conducted to select
the appropriate form of the hypothesis test. Figure 7.16 presents a screen shot of the data analysis tab in VSP, which summarizes the statistics of the historical data. The minimum result is zero (which was substituted for nondetect results because of a low reporting limit), and the maximum result is 17 mg/kg, which exceeds the regulatory criteria. The standard deviation, which is the critical parameter for the hypothesis test, is 4.84 mg/kg.

FIGURE 7.15
Yellow polygon representing the site boundary with blue crosshair symbols representing the historic sampling locations.

FIGURE 7.16
Summary statistics for the historic sampling data. Note that because of very low detection limits, a zero value is substituted for nondetect results.
Figure 7.17 presents a screen shot of statistical tests performed by VSP to evaluate
whether or not the historical data follow a normal distribution and to calculate the 95%
upper confidence limit (UCL) of the mean. As shown by Figure 7.17, the data do not follow
a normal distribution; therefore, a nonparametric approach should be employed to deter-
mine the sample size. It is very common for data at hazardous-waste sites to have skewed
distributions that are better represented by a lognormal distribution, rather than a pure
normal distribution. VSP does not have a means of transforming and back-transforming
the data to perform hypothesis tests for lognormal distributions. The use of data trans-
formations in hypothesis testing is an area in need of further exploration by the regula-
tory and research communities. Fundamentally, VSP treats skewed data as being from
two separate populations—one is representative of background conditions, and the other
is representative of contaminated soil. This is a different concept than treating the data
as belonging to one comprehensive lognormal distribution, which is an assumption that
greatly improves the accuracy of kriging and other analytical applications. Because of this
inability to transform the data, nonparametric methods will be used in most cases, and the
sample size estimate may be overly conservative. In VSP, nonparametric hypothesis tests will always require more samples than those that assume a normal distribution.

FIGURE 7.17
VSP uses the Shapiro–Wilk test to evaluate if the historic sampling data follow a normal distribution. Note that the nonparametric 95% UCL is significantly higher than that of a normally distributed data set, indicating that there is greater uncertainty in the data characterization.
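This kind of data exploration is easy to script outside of VSP. The snippet below applies the Shapiro–Wilk test to a hypothetical set of PCB results (with zeros substituted for nondetects, as in the text) using scipy; the 0.05 significance cutoff is a common convention, not VSP's internal rule.

import numpy as np
from scipy.stats import shapiro

# Hypothetical surface-soil PCB results in mg/kg (zeros substituted for nondetects).
pcb = np.array([0.0, 0.0, 0.8, 1.5, 2.1, 3.3, 4.0, 4.8, 5.5, 6.2,
                7.4, 8.9, 11.0, 13.5, 15.2, 17.0])

stat, p_value = shapiro(pcb)
if p_value < 0.05:
    print("Reject normality; use a nonparametric design such as the MARSSIM Sign test.")
else:
    print("Normality not rejected; a parametric (t-test-based) design may be defensible.")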
As stated above, a nonparametric test is required for the PCB data. Therefore, the one-
sample, nonparametric Multiagency Radiation Surveys and Site Investigation Manual
(MARSSIM) Sign test was selected as the appropriate statistical method. The MARSSIM
Sign test will develop conservative estimates of the number of samples required as the
assumption is made that analytical data are asymmetric and not normally distributed.
This is an accurate assumption for the PCB data, which have high variability, a skewed dis-
tribution, and a mean considerably different from the median (which indicates asymme-
try). Note that the MARSSIM Sign test is a true test for the median and an approximate test
for the mean when the asymmetric assumption is used (Matzke et al. 2010). After selection
of the appropriate statistical test, decision error thresholds must be specified. Preliminary
discussion with the government regulator of the site indicates that type I errors of up to
10% are acceptable (a 90% confidence level). The regulator was not concerned about type II
error and did not specify criteria.
Figure 7.18 presents a screen shot of the VSP sample size determination for the above
example, using a type I error of 10%, a type II error of 20%, and a width of the gray region
equal to 2.5 mg/kg. In summary, VSP recommends the collection of 35 samples to charac-
terize the site with the desired levels of certainty. Note that the total number of samples
presented at the bottom of the figure includes a 20% overage that is recommended by
MARSSIM to account for additional uncertainty—this applies another level of conserva-
tism to the design beyond the nonparametric, nonsymmetric assumption. Analytical vari-
ability can also be added to the hypothesis test, which would increase the sample size
further. In this case, analytical variability is not yet quantified; however, consideration should be given to the collection of duplicate samples for QA/QC purposes.

FIGURE 7.18
Sample size recommendation when comparing the true average concentration to a fixed threshold using the nonparametric, nonsymmetric MARSSIM hypothesis test with an expected standard deviation calculated from historic data.
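The 35-sample recommendation can be reproduced with the standard MARSSIM Sign test formula, N = (z_(1-alpha) + z_(1-beta))^2 / [4(SignP - 0.5)^2], where SignP = Phi(delta/sigma) and delta is the width of the gray region. The Python sketch below assumes this textbook formulation plus the 20% MARSSIM overage; with sigma = 4.84 mg/kg, delta = 2.5 mg/kg, alpha = 10%, and beta = 20%, it returns 35.

import math
from scipy.stats import norm

def marssim_sign_test_n(sigma, gray_width, alpha, beta, overage=0.20):
    """Approximate MARSSIM Sign test sample size (one-sample design)."""
    sign_p = norm.cdf(gray_width / sigma)        # SignP = Phi(delta / sigma)
    z_sum = norm.ppf(1 - alpha) + norm.ppf(1 - beta)
    n = math.ceil(z_sum ** 2 / (4 * (sign_p - 0.5) ** 2))
    return math.ceil(n * (1 + overage))          # add the recommended 20% overage

# PCB example: sigma = 4.84 mg/kg, gray region = 2.5 mg/kg, alpha = 10%, beta = 20%.
print(marssim_sign_test_n(4.84, 2.5, 0.10, 0.20))  # -> 35 samples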
When factoring in the 16 historical samples collected to date, this means that 19 addi-
tional (new) samples are required. VSP can automatically locate these additional samples
based on random placement, adaptive fill placement (filling in the largest unsampled areas
sequentially), or grid placement (Matzke et al. 2010). For this example, random sample
placement was used, resulting in the sample distribution presented in Figure 7.19. One
concept that is somewhat counterintuitive is that this sample size recommendation is com-
pletely independent of the size of the site. While this example site is 40 acres, the same set
of historical data and the same hypothesis test parameters will yield a recommendation of
35 samples even if the site were 1 acre or 400 acres in size. In theory, the overall variabil-
ity of site data should be a function of the homogeneity of the site. For example, using an
estimated standard deviation, VSP calculates that 40 samples are required to characterize
a 100-acre homogeneous site. While it may seem counterintuitive, VSP will determine that
the same number of samples would be required for a small subset of the same site if ana-
lyzed independently. This is because the exact same assumed standard deviation is used
for both analyses. However, one would expect that a larger site would inherently be less
homogeneous than a small site, therefore having a larger standard deviation and requiring more samples for characterization. As illustrated in an example problem later in this section, the user must rely on professional judgment to ensure that the assumed standard deviation is representative of the entire area of interest.

FIGURE 7.19
The 19 new sample locations, represented by red diamonds, are randomly placed across the site by VSP.
According to statistical theory, all samples should be randomly placed. However, this is
often fundamentally at odds with the study objectives, which may require focused (biased)
sampling to delineate contamination in soil and groundwater. One hybrid approach used
frequently in professional practice is stratified random sampling, which locates samples
randomly within grid cells that have been deterministically placed according to the objec-
tives of the sampling.
A recommended component of a statistical sampling design in VSP is a sensitivity anal-
ysis that determines how changes in error thresholds affect the sample size recommenda-
tion. Figure 7.20 presents the VSP hypothesis test calculation for the PCB example when
the confidence is increased to 95% and the width of the gray region is decreased to 1 mg/
kg. As shown at the bottom of the figure, these relatively small changes in error param-
eters lead to an increase in sample size by more than one order of magnitude. The result-
ing sample placement map presented in Figure 7.21 demonstrates the absurdity of this
design relative to the scope of the historical sampling and that of Figure 7.19. To reiterate,
the width of the gray region is a very sensitive parameter, and the user is cautioned against
setting expectations with regulators of achieving a narrow gray region. In general, VSP
users must document sample-size sensitivity and establish the correct balance between
investigation cost and data collection in order to efficiently meet project objectives.
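Reusing the marssim_sign_test_n sketch from earlier, this sensitivity can be checked in one line per scenario. Note that the 384-sample result of Figure 7.20 is reproduced only if the type II error is also taken as 10% (an assumption; the text does not state the beta used in the figure).

# Base case versus the more stringent scenario of Figure 7.20.
print(marssim_sign_test_n(4.84, 2.5, 0.10, 0.20))  # -> 35
print(marssim_sign_test_n(4.84, 1.0, 0.05, 0.10))  # -> 384, assuming beta = 10%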
FIGURE 7.20
VSP calculates that 384 total samples are required to meet the more stringent error thresholds.

In addition to the use of overly stringent error thresholds, one other situation that often results in exorbitant sampling recommendations is the presence of hot spots in the historical data set. Including hot spot samples with background samples in a VSP analysis can result in standard deviations that significantly exceed the regulatory threshold limit, leading to an absurdly high sampling requirement (e.g., >10,000 samples). This problem
occurs so frequently in professional practice that many consultants completely abandon
VSP and go to great lengths to avoid its use. For example, two samples for the above PCB
problem are converted to hot spot samples by increasing the measured concentrations
to 900 mg/kg as shown in Figure 7.22. The summary statistics evaluation, depicted in
Figure 7.23, shows that the resulting standard deviation is 244 mg/kg, which means that
it will be almost impossible to state with confidence that the data are below 10 mg/kg. As
a result, the MARSSIM analysis presented in Figure 7.24 recommends collection of more
than 10,000 samples to characterize the site—a nonsensical approach.
To circumvent this problem, the hydrogeologist must be aware of hot spots and segre-
gate data populations that have significantly different statistical properties. For the above
example, the hot spot should be treated independently from the rest of the site, which
exhibits low levels of PCB impacts. Removing the two hot spot data points and rerunning
the VSP analysis will yield a much more reasonable number of samples that can be used
to evaluate conditions outside of the hot spot, where the extent of contamination is uncer-
tain. Within the hot spot, there is known contamination, and the question becomes how to
deterministically bound the hot spot, for example, with a grid system. VSP also supports a
hot spot identification tool, and the reader is referred to Matzke et al. (2010) for additional
information. To summarize, highly contaminated samples from a relatively small area
of the site should be removed from the data set used to determine how many samples are needed to evaluate the broader area of the site where the extent of contamination is unknown and remediation may not be necessary.

FIGURE 7.21
Additional samples recommended by VSP are presented as magenta squares. This chicken-pox extent of sampling is unreasonable and generally cost-prohibitive. Each sample has requisite work-plan costs, data-collection costs in the field, laboratory-analytical costs, data-validation costs, data-presentation costs, and regulatory-review and discussion costs. Spatial data interpolation is a much better alternative as described in Chapter 4.

FIGURE 7.22
Hot spot at 900 mg/kg is represented by the two large, red circles.

FIGURE 7.23
Summary statistics for the hot spot scenario, showing a significant increase in the variability of the data.

FIGURE 7.24
Because of a standard deviation that is significantly higher than the action level, VSP recommends that more than 10,000 samples are needed at the site.
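The effect of segregating hot spots can be illustrated numerically. The results below are hypothetical (they only roughly mimic the PCB example), and the sketch reuses marssim_sign_test_n from earlier; the point is the order-of-magnitude jump in standard deviation, and hence in sample size, when two 900 mg/kg values are retained.

import numpy as np

# Hypothetical PCB results (mg/kg); the two 900 mg/kg values are hot spot samples.
pcb = np.array([0.0, 0.0, 0.8, 1.5, 2.1, 3.3, 4.0, 4.8, 5.5, 6.2,
                7.4, 8.9, 11.0, 17.0, 900.0, 900.0])

sd_all = pcb.std(ddof=1)                     # roughly 306 mg/kg with hot spots included
sd_segregated = pcb[pcb < 100].std(ddof=1)   # roughly 4.8 mg/kg once they are segregated

print(marssim_sign_test_n(sd_all, 2.5, 0.10, 0.20))         # absurdly large sample size
print(marssim_sign_test_n(sd_segregated, 2.5, 0.10, 0.20))  # about 35, a workable design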

7.5.2 Implementing Sampling Plans


7.5.2.1 Data Collection
After determining the appropriate number of samples required for site characterization, the
next step is developing a detailed sampling plan that presents the field and laboratory meth-
ods of data collection. There are innumerable important concepts in translating a seemingly
simple number of samples needed to a real-life field investigation, including, but not limited to

• Selection of the appropriate drilling method
• Selection of the appropriate soil and groundwater sampling techniques
• Management strategies for investigation-derived waste (IDW)
• Selection of the appropriate laboratory analytical methods
• Resolution of site logistical problems

There are many drilling methods available to the modern-day hydrogeologist. As with
most applications, the selection of the appropriate drilling method depends on the data
quality objectives of the site. Among the questions that need to be answered when evaluat-
ing site-specific drilling requirements are as follows:

• Am I drilling in overburden (unconsolidated) or bedrock formations?
• Do I need to collect samples for visual classification and/or laboratory analysis, and if so, what are the quality criteria for these samples?
• Do I need to perform penetration testing to evaluate soil density and consistency?
• What are my cost and logistical limitations?

In general, the least expensive drilling method that is also widely used in professional
practice is the Geoprobe direct push system. A Geoprobe machine uses hydraulic power,
static force, and percussion to directly push sample cores and casing into the subsurface
(Geoprobe Systems 2011a). The primary advantages of this system are that decent-quality
soil cores can be rapidly obtained in overburden formations, and shallow, small-diameter
monitoring wells can be rapidly installed without generating IDW. A Geoprobe machine
is also small and track-mounted and, therefore, can access remote locations with uneven
topography and limited clearance (see photo in Figure 7.25). Example soil cores of over­
burden materials obtained from a Geoprobe system are presented in Figure 7.26. The qual-
ity of the sample depends on the formation in question; in some cases, Geoprobe sample
cores can collect intact cores that allow continuous classification and sampling. Figure 7.27
demonstrates a close-up photo of a Geoprobe soil core where dense nonaqueous phase
liquids (DNAPLs) were observed and sampled. A new trend in Geoprobe technology is
the use of in situ tools for down-hole chemical and physical parameter analysis, such as
soil conductivity and membrane interface probes, which can help evaluate soil type and
delineate groundwater contaminant locations in real time.
The obvious limitations of a direct-push system are that most sites have a practical
depth limitation of 50 ft or less, based on soil density, and bedrock drilling is not feasible
(Geoprobe Systems 2011b). Furthermore, only small-diameter wells can be installed, generally of 2 in. or less. Also, penetration tests are not readily conducted with a Geoprobe, and sample collection can be time consuming and of poor quality where soil density is high.

FIGURE 7.25
Photo of the National Oceanic and Atmospheric Administration’s Geoprobe unit driving sample cores into beach sediments. (From National Oceanic and Atmospheric Administration (NOAA), Wyckoff Co./Eagle Harbor Beach Investigation for Creosote, 2006. Available at http://response.restoration.noaa.gov/, accessed June 15, 2011. With permission.)

FIGURE 7.26
Photo of continuous Geoprobe sample cores of overburden material. (Courtesy of Woodard & Curran, Inc.)

FIGURE 7.27
Close-up photo of Geoprobe sample core showing accumulation of DNAPLs in a silty-sand layer immediately above a dense, continuous silt. Sample was collected approximately 20 ft below the water table. (Courtesy of Woodard & Curran, Inc.)
Where large-diameter, deep drilling is required in both unconsolidated and consoli-
dated formations, full-scale drilling rigs are required. Drilling technologies typically
defined as conventional include cable-tool, rock-coring, hollow-stem auger, and drive-and-
wash methods. The latter two examples can be fitted with a down-hole air hammer to drill
in bedrock formations. Examples of more modern, advanced (and generally more expen-
sive) technologies are air-rotary (or dual-rotary) and sonic drilling. The major decision
parameters regarding the use of drilling rigs are the quality of samples needed and IDW
and site access limitations. For example, rotary technologies pulverize soil and bedrock
and circulate and discharge cuttings to an above-ground cyclone. A dual-rotary rig, show-
ing the cyclone and a hay bale containment area is presented in Figure 7.28. Sonic and rock
coring methods, conversely, extract a solid core of unconsolidated or consolidated deposits, allowing much more resolute classification with depth. A mini sonic drill rig, which has the added advantage of being relatively compact and mobile, is presented in Figure 7.29. It may be very difficult to correctly log consolidated deposits when rotary cuttings are the exclusive sampling materials as demonstrated by Figure 7.30. When a mini sonic rig is used in the same formation, much more representative samples can be obtained as shown in Figure 7.31.

FIGURE 7.28
Photo showing operation of a dual-rotary rig. (Courtesy of Woodard & Curran, Inc.)
As mentioned above, site access is a major concern for large drill rigs. For example, the
Barber dual-rotary rig, which has the advantage of simultaneously advancing casing
outside of the drill bit, can weigh between 50,000 and 100,000 lb (Foremost Industries 2003).
Therefore, it may be necessary to build access roads for a rig of this size where topography
is steep as depicted in Figure 7.32.
The selection of the field sampling and laboratory analytical methods requires knowl-
edge of applicable regulatory requirements, both in terms of the physical field method
and the required reporting limits. Low-flow sampling has become the universal standard
of practice for groundwater sampling, and it is important to remember the underlying
concept that rather than low flow, a more appropriate term may be minimal drawdown as
the goal is to minimize stress in the well (U.S. EPA 2010). When sampling soil for volatile
organic compound (VOC) analysis, an important concept receiving increasing regulatory
attention is the minimization of the time that samples sit exposed to the atmosphere after
being removed from the ground. In general, it is advisable for the hydrogeologist to con-
sult with certified analytical laboratories to determine appropriate methods, sample con-
tainers and preservatives, and holding times.

FIGURE 7.29
Photo of a track-mounted, mini sonic drill rig. (Courtesy of Woodard & Curran, Inc.)

FIGURE 7.30
Rotary (pulverized) cuttings from a granite–gneiss formation. In some cases, these cuttings may be mistakenly
classified as silty, gravelly sand. (Courtesy of Woodard & Curran, Inc.)

FIGURE 7.31
Rock cores from a granite–gneiss formation collected with a mini sonic rig, enabling correct rock classification
and identification of fractures. (Courtesy of Woodard & Curran, Inc.)

FIGURE 7.32
Photo of dirt and gravel access road constructed for a dual-rotary rig. (Courtesy of Woodard & Curran, Inc.)

7.5.2.2 Real-Time Data Management, Analysis, and Visualization


As discussed in Chapter 3, developing a functional data management system is imperative
for successful execution of site-investigation projects. Data that are streaming back to the
office from the field and the lab must be validated and efficiently integrated into the site
geodatabase and visual processor (e.g., ArcGIS) for subsequent analysis. A site-investigation
strategy that is increasingly used in professional practice is real-time data management,
analysis, and visualization. In other words, data are entered into databases directly from
the field and instantly displayed on maps to support decision-making processes on a day-
to-day basis. Additionally, maps can be displayed on a Web server to enable clients, regula-
tors, and third-party stakeholders to review the data in real time to better understand the
data and contribute to site decisions.
One potential application of this technology is delineating contamination in soil and
tracking progress during site remediation. Field screening tools such as X-ray fluorescence
(XRF) instruments and photoionization detectors (PIDs) can be used in lieu of laboratory
analysis to further expedite data collection and visualization. For example, when exca-
vating soils contaminated with metals at a hazardous-waste site, an XRF can be used to
determine in real time if the excavation needs to extend further in the horizontal or verti-
cal directions. At the end of each day, field personnel can upload XRF data into the site
database with a thumb drive through a Web–database interface, shown in Figure 7.33. The
uploaded XRF data can be displayed on the Web utility along with laboratory analytical
data to confirm XRF results. This data display, shown in Figure 7.34, is essentially a win-
dow into the database, which is being updated behind the scenes.
To reiterate, the goal of the real-time data entry illustrated through Figures 7.33 and 7.34
is to inform the soil excavation process. Specifically, the data will determine whether addi-
tional excavation is needed to remove metal contamination. To track the remedial progress,
a Web map can be created through ArcGIS that is automatically updated with the database.
The associated Web map for the above example is presented in Figure 7.35. The excavation
area, abutting the coast line, is divided into grid cells that correspond to sample location
areas. Color coding is used to easily display which cells have been sampled and excavated or
proven to be clean by the field sampling. After a cell is excavated, samples that were collected
from that cell can be flagged as being excavated in the Web utility as shown in Figure 7.36.
This process automatically updates a risk assessment table in the database by removing the
soil samples flagged as excavated. Therefore, the risk assessment table is only representative
of samples that still remain in the ground and are indicative of concentrations that potential
receptors may encounter. In this manner, risk assessment can also be conducted in real time
to ensure that the excavation is protective of human health.
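Behind the Web utility, flagging a sample as excavated and refreshing the risk table reduce to an update and an aggregate query over the unflagged remainder. A minimal sketch, reusing the hypothetical xrf_results table from the example above:

import sqlite3

# Sketch: flag excavated samples and recompute exposure point averages
# over the samples that remain in the ground (hypothetical schema).
def mark_excavated(con, sample_ids):
    con.executemany("UPDATE xrf_results SET excavated = 1 "
                    "WHERE sample_id = ?", [(sid,) for sid in sample_ids])
    con.commit()

def remaining_risk_table(con):
    # Excavated samples are excluded, so the table reflects only
    # concentrations that potential receptors could still encounter.
    return con.execute("""
        SELECT location_id, AVG(lead_ppm) AS avg_lead_ppm, COUNT(*) AS n
        FROM xrf_results
        WHERE excavated = 0
        GROUP BY location_id
    """).fetchall()

con = sqlite3.connect("site_data.db")
mark_excavated(con, ["A9-0.5-1.0", "B8-0.5-1"])
for row in remaining_risk_table(con):
    print(row)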

[Figure 7.33 graphic: Web page with Data, Excavations, Map, and Upload tabs and a file-selection control for uploading field results.]

FIGURE 7.33
Screen shot of Web utility to upload XRF data to a geodatabase. (Courtesy of Ted Chapin, Woodard & Curran,
Inc.)

[Figure 7.34 graphic: Web data table listing, for each exposure point, the average lead concentration and sample count broken out by XRF and laboratory results.]

FIGURE 7.34
Screen shot of summary data at relevant exposure points, including field-based (XRF) and laboratory-based
measurements. This data screen is viewed in real time on the Web page. (Courtesy of Ted Chapin, Woodard &
Curran, Inc.)

FIGURE 7.35
Map showing grid cells that is viewed on the Web and updated in real time based on field and lab-entered
data. (Courtesy of Ted Chapin, Woodard & Curran, Inc. World Imagery courtesy of Esri®, i-cubed, USDA, AEX,
GeoEye, Getmapping, Aerogrid, IGN, IGP, and the GIS User Community.)

[Figure 7.36 graphic: Web utility with paired tables of location IDs, sample IDs, and starting/ending sample depths; an Excavate control moves selected samples from the "Select Samples To Be Excavated" list to the "Excavated Samples" list.]

FIGURE 7.36
Screen shot of utility to classify samples as being excavated in real time to update risk calculations. (Courtesy
of Ted Chapin, Woodard & Curran, Inc.)

All the above operations, demonstrated through Figures 7.33 through 7.36, are executed
on a Web site accessible to anyone with the requisite log-in and password. Handheld com-
puters with GPS devices, such as a Trimble, can also be used to facilitate real-time data col-
lection. These handheld instruments can also access Web utilities and update databases as
shown in Figure 7.37. The mobile ArcGIS utility, ArcPad, can be loaded onto these machines,
so that data can be updated and visualized in a GIS environment as shown by Figure 7.38.
This enables usage of complex ArcGIS tools, such as ModelBuilder, to perform mapping
and spatial data analysis. Sampling points or other relevant locations can be located with
GPS technology, uploaded to the database, and displayed on the ArcPad map—all on the
handheld device. An example operation created in ModelBuilder to immediately translate
X, Y points in a table to features on a map is displayed in Figure 7.39.
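The same X, Y-to-feature operation can also be scripted with arcpy rather than assembled graphically in ModelBuilder. The sketch below assumes an ArcGIS 10-era Python session with an arcpy license available; the geodatabase path, table name, and field names are hypothetical.

import arcpy

# Sketch: convert a table of GPS coordinates into a point feature class so
# that new sampling locations appear on the map (names are hypothetical).
arcpy.env.workspace = r"C:\project\site.gdb"
arcpy.env.overwriteOutput = True

sr = arcpy.SpatialReference(4326)  # WGS 84; substitute the project datum

# Build a temporary X, Y event layer from the table, then persist it as a
# feature class that the map document or ArcPad project can display.
arcpy.MakeXYEventLayer_management("gps_points", "x_coord", "y_coord",
                                  "gps_layer", sr)
arcpy.CopyFeatures_management("gps_layer", "sample_locations")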
In summary, the rapid expansion of Web and handheld computing tools has greatly
facilitated real-time data collection, analysis, and mapping. These tools can increase proj-
ect efficiency, reduce risk of error, and help clearly demonstrate important data and con-
cepts to clients, regulators, and other stakeholders. Once again, these operations primarily
reside in the ArcGIS environment, illustrating how important it is for hydrogeologists to
become competent in GIS technology. Alternative software tools and examples of real-
time data visualization are presented in Chapters 8 and 9 for groundwater remediation
and groundwater resource applications, respectively.

FIGURE 7.37
Example of data entry utility in ArcPad as seen on a handheld Trimble device. In this case, excavation quanti-
ties are being tracked by the truckload. (Courtesy of Ted Chapin and Aaron Townsley, Woodard & Curran, Inc.)

FIGURE 7.38
Example of an ArcPad map with site features and sample locations as seen on a handheld Trimble device.
(Courtesy of Ted Chapin and Aaron Townsley, Woodard & Curran, Inc.)

FIGURE 7.39
Design of a model to automatically convert X, Y locations into a feature class for display on ArcGIS maps. In
this manner, X, Y points can be displayed on a map in real time to inform site decision making. (Courtesy of Ted
Chapin, Woodard & Curran, Inc.)

Systematic planning and real-time data measurement are two of the three key elements
of the U.S. EPA’s Triad approach, which was created to help modernize and streamline site
investigation and remediation processes (U.S. EPA 2004). The third element is a dynamic
work strategy, which requires experienced field staff to make real-time decisions in the
field based on the objectives determined through systematic planning. A conceptual dia-
gram of the Triad approach is presented in Figure 7.40. The intent of the Triad approach is
to help regulators and consultants alike develop “an accurate CSM that delineates distinct
contaminant populations for which risk estimation and cost-effective remedial decisions
will differ” (U.S. EPA 2004).
Additionally, use of the Triad elements will help the hydrogeologist identify and man-
age uncertainties related to the CSM. The Triad approach is really the genesis of many of
the concepts discussed throughout this book and represents a major step by the U.S. EPA

FIGURE 7.40
Conceptual diagram showing the three elements of U.S. EPA’s Triad approach. (From United States
Environmental Protection Agency (U.S. EPA), Improving Sampling, Analysis, and Data Management for Site
Investigation and Cleanup, Office of Solid Waste and Emergency Response, 4 pp., 2004.)

to encourage the use of CSMs, real-time data collection, and computer analytical tools to
expedite site investigations and make them more accurate and less wasteful.

7.6 Example Visualizations for Site Investigation Data


After performing data collection at a site and entering the data into the data management
system, it is possible to perform quantitative and qualitative analysis of the data to reach a
study conclusion. Two major forms of data analysis for site investigation projects are contour-
ing and groundwater modeling as described in detail in Chapters 4 and 5. After performing
this data analysis, the hydrogeologist must clearly present the related conclusions to clients,
regulators, and other stakeholders. While written project summary reports are invariably
required, the best way to support technical decisions is through graphics such as maps,
geologic diagrams, and data charts and plots. Example graphics from real-life site investiga-
tion projects are presented and discussed in the following sections. The goal is to inform
the reader about the types of graphics that are generally used in professional practice and
to illustrate how well-conceived figures can clearly illustrate key hydrogeological concepts.

7.6.1 Plan-View Maps


Plan-view maps are obligatory figures that one will find in every site investigation report.
In general, plan-view maps are used for the following purposes:

• Depicting the regional physiographic setting of the site in question (e.g., using a regional-scale USGS topographic map)
• Displaying relevant site features such as roads, buildings, wells, and topography
(generally the elements that comprise a basemap)
• Presenting the locations and results of samples collected from groundwater, soil,
or surface water for field or laboratory analysis
• Presenting contour maps of potentiometric surface, contaminant concentration, or geologic layering (e.g., the bedrock surface)
• Displaying the results of analytical or numerical models

For investigations related to hazardous-waste site cleanup, plan-view maps with sampling
results are often the key figures associated with investigation reports. These figures are
often subject to rigorous review by regulators and stakeholders, each of whom may have
competing ideas about what these figures should contain. For example, some regulators
may want all data to be presented on the figures in tabular form so that they contain
exact numeric results for all relevant analytes and can be used for one-stop referencing
purposes. This is commonly accomplished through the use of chemical (chem) boxes, or
callout boxes, with data tables that point to the associated sampling locations. While there
is some value to having all the tabular data in a figure, this practice often leads to figures
that are practically illegible and make it very hard to discern the overall concept presented
in the figure. For example, the intent of the figure may be to show the overall nature and
extent of contamination relative to key hydrogeologic features or water-supply wells.
Excessive labeling can obscure such concepts as shown in Figure 7.41.

FIGURE 7.41
Figure with excessive labeling of sampling results at soil borings. It is very difficult to discern which sampling loca-
tions correspond to which chem boxes, and it is not immediately clear which results exceeded regulatory criteria.

FIGURE 7.42
Symbology can clearly depict sampling results as was performed in this example of PCB concentrations in
sediment. (From ENSR/AECOM, Sediment Sampling Summary Report 2004–2005: Marsh Island. New Bedford
Harbor Superfund Site, New Bedford Massachusetts, US Army Corps of Engineers. United States Environmental
Protection Agency Contract DACW33-00-D-0003-Task Order 12 Document No. 09000-350-720, 54 pp., 2006.
Available at http://www.epa.gov/nbh/data.html#1998RODESDs, accessed May 30, 2011.)

As described in Chapter 3, symbology can be used to display sampling results in a clearer, more elegant manner. Different shapes and colors can be used to represent sam-
ples based on measured concentrations. This makes excellent sense for site-investigation
projects as there are often one or two threshold concentrations that drive decision-making
processes. For example, one threshold number may represent the action level for soil
excavation, and another may represent the concentration above which soil is classified as
hazardous waste and requires special handling. Figure 7.42 uses different colors to rep-
resent five different tiers of PCB concentrations (ENSR/AECOM 2006). Similarly, Figure
7.43 uses different symbols to represent which samples exceeded criteria for the Toxicity
Characteristic Leaching Procedure (TCLP) test, which measured the suitability of soil
for disposal in a landfill (NYSDEC 2009). It is strongly recommended that the exces-
sive use of label boxes be avoided when creating figures; instead, symbology and labeled
sample IDs can be used to clearly convey the significance of the data with respect to site
Downloaded by [University of Auckland] at 23:45 09 April 2014

objectives while also providing for easy reference to companion tables that contain all
requisite data.
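Because such symbology is driven by one or two decision thresholds, the classification logic itself is easily scripted. A short sketch follows; the action-level and hazardous-waste thresholds for lead in soil are hypothetical values chosen for illustration.

# Sketch: assign symbology tiers from two decision-driving thresholds
# (the threshold values below are hypothetical).
ACTION_LEVEL_PPM = 400.0      # above this, soil requires excavation
HAZARDOUS_LEVEL_PPM = 1000.0  # above this, soil requires special handling

def symbology_tier(concentration_ppm):
    if concentration_ppm >= HAZARDOUS_LEVEL_PPM:
        return "red triangle"     # hazardous waste
    if concentration_ppm >= ACTION_LEVEL_PPM:
        return "orange circle"    # exceeds action level
    return "green circle"         # below criteria

for sample_id, conc in {"SB-1": 250.0, "SB-2": 620.0, "SB-3": 1480.0}.items():
    print(sample_id, conc, symbology_tier(conc))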

FIGURE 7.43
Figure that clearly displays which locations exceeded TCLP criteria. No concentration labeling is necessary.
(From New York State Department of Environmental Conservation (NYSDEC), RI/FS Scope of Work, Old Upper
Mountain Road Site, Site Number 932112, Routes 31 & 93, Lockport, NY, 39 pp., 2009. Available at http://www.dec.ny.gov/docs/regions_pdf/oldupperscope.pdf, accessed June 20, 2011.)

Contouring of the potentiometric surface of an aquifer is discussed at length in Chapter 4, and the presentation of these contours on plan-view maps is a required element of site-
investigation reporting. Potentiometric surface contours can be used to show the prevail-
ing direction of groundwater flow and assess the impact of changing site conditions on
hydrogeologic processes.
For example, Figure 7.44 shows the potentiometric surface of a groundwater basin in
Arizona prior to the initiation of pumping from two water-supply wells symbolized in
orange. Groundwater flow in the basin is bounded by mountain ranges to the west and
FIGURE 7.44
Potentiometric surface of a groundwater basin in Arizona prior to water-supply development at the two
wells represented by the orange crosshair symbols. The small blue circles represent water-level gaug-
ing locations used to generate the contours, which are presented in feet above mean sea level. World
Imagery Source: Esri®, i-cubed, USDA, USGS, AEX, GeoEye, Getmapping, Aerogrid, IGN, IGP, and the GIS User
Community.

north with flow moving toward the Santa Cruz River in the northwest corner of the map.
Figure 7.45 is a contour map of the same basin after the two wells have been pumping for
several years and shows a pronounced cone of depression. It is likely that discharge to
the Santa Cruz River is significantly reduced, and the effects of this pumping on surface-
water hydrology should be assessed. Figures 7.44 and 7.45 are drawn at the basin scale, and contours should reflect the primary boundary conditions acting on the
regional aquifer. As described in Chapter 4, honoring each of the potentiometric surface
measurements at the monitoring wells (represented by the blue circles) makes no practical
sense at this scale, and a nugget effect should be used. These contours can then form the
basis of a numerical groundwater model to assess the long-term impacts of this pumping.
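For hydrogeologists who script their own gridding, one way to approximate a nugget effect is a radial basis function fit with a nonzero smoothing parameter, which deliberately does not honor every measurement. The sketch below uses entirely synthetic data.

import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import Rbf

# Sketch: grid scattered head measurements into a smoothed potentiometric
# surface. The nonzero smooth parameter relaxes exact honoring of the data
# points, analogous to a nugget effect (all data here are synthetic).
rng = np.random.default_rng(0)
x, y = rng.uniform(0, 1000, 25), rng.uniform(0, 1000, 25)
head = 100.0 - 0.01 * x + rng.normal(0, 0.5, 25)  # regional gradient + noise

rbf = Rbf(x, y, head, function="thin_plate", smooth=1.0)
gx, gy = np.meshgrid(np.linspace(0, 1000, 100), np.linspace(0, 1000, 100))

cs = plt.contour(gx, gy, rbf(gx, gy), levels=10, colors="blue")
plt.clabel(cs, fmt="%.1f")
plt.scatter(x, y, c="k", s=10)
plt.title("Smoothed potentiometric surface (synthetic data)")
plt.show()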
FIGURE 7.45
Potentiometric surface of the groundwater basin shown in Figure 7.44 after several years of water-supply pump-
ing. World Imagery Source: Esri®, i-cubed, USDA, USGS, AEX, GeoEye, Getmapping, Aerogrid, IGN, IGP, and
the GIS User Community.

Contaminant concentration contour maps are also typical deliverables associated with site-investigation projects. The development of these maps is discussed at length in
Chapter 4. Figure 7.46 is a typical example of a contour map used in professional prac-
tice. Contours represent the sum of the tetrachloroethene (PCE) and trichloroethene (TCE)
concentrations in groundwater. Contours are not depicted in the contaminant source
area, represented by the purple hatched polygon, where concentrations over 1 mg/L were
measured. This is common practice as lognormal transformation and detrending are not
widely used, and without these two steps, it is very difficult to incorporate high concentra-
tion areas (hot spots) in such maps.
As with the potentiometric surface, concentration contour maps at different times can
be used to demonstrate changes in plume orientation. This concept is discussed further
in Chapter 8 in the context of monitored natural attenuation. Figure 7.47 depicts the extent
of the Ashumet Valley plume at the Massachusetts Military Reservation (MMR) in 1996
and in 2003 (AFCEE 2005). There have been significant changes in the overall extent of
the plume, most likely because of the operation of a pump-and-treat system. Over the
seven-year period, a portion of the plume became detached as highlighted in Figure
7.47. However, the plume also spread further to the south. A limitation of this map is the
absence of topographic or groundwater-elevation contours, which would better inform the
reader about why the plume has changed and answer key questions regarding plume sta-
bility. Labeling the concentration of the contours and potentially adding one more labeled
contour line would also significantly help demonstrate whether the plume is stable, con-
tracting, or expanding.

FIGURE 7.46
PCE and TCE concentration contours in a shallow bedrock aquifer. (Courtesy of Woodard & Curran, Inc.)

FIGURE 7.47
Outlines of the Ashumet Valley plume at MMR in 1996 and 2003. While it is clear that changes in the plume
extent have occurred, the absence of groundwater and/or topographic contours and pumping-well locations
makes it very difficult to discern the conceptual explanation for the changes. (From Air Force Center for
Environmental Excellence (AFCEE), Ashumet Valley Plume Outline, 2005. Available at http://www.mmr.org/
Cleanup/plumes/images/Ashumet.pdf, accessed May 13, 2011.)

7.6.2 Boring Logs and Cross Sections


Boring logs and cross sections are critical elements of hydrogeological investigations that
can be used to present a wide variety of data. The fundamental data elements of both logs
and cross sections are lithology and stratigraphy with the emphasis of both diagrams being
the accurate classification of soils and rocks and the accurate interpretation of their posi-
tion in the subsurface. Boring logs represent point measurements of the subsurface, and
cross sections interpolate these point measurements into a continuous two-dimensional
profile. This is similar to the interpolation of point measurements of hydraulic head into
a continuous potentiometric surface. Boring logs and cross sections can also be used to
present
• Measurements and contours of the potentiometric surface
• Field-screening results from PIDs and other instruments
• Soil and groundwater laboratory analytical results (i.e., contaminant concentrations)


• Relevant surface and subsurface features, such as surface water bodies, wells, and
buildings

Figure 7.48 is a typical boring log used in a hydrogeological site investigation. The boring
log presents a classification of the soil or rock at each depth interval along with a cor-
responding graphic. Well-construction information, in this case an open-hole bedrock

FIGURE 7.48
Example boring log for a bedrock well drilled using air rotary technology.

well, is also provided. Note that different soil classification methodologies are used in
professional practice with common examples including the Unified Soil Classification System (USCS), the United States Department of Agriculture Classification System, and the Burmister Classification System. In general, one should use a consistent classifi-
cation system for each boring log at a site and include more information than is pro-
vided simply by the USCS nomenclature. For example, solely classifying a soil as SM:
Silty Sand under the USCS system does not convey any information about the following
characteristics:

• The estimated fraction of silt (i.e., 20%–50%)
• The gradation of the sand
• The presence of trace amounts of gravel or clay
• The color, moisture content, density, and angularity of the soil
• The presence of odors or other indicators of contamination

Therefore, it is advisable to use a more robust classification system that incorporates the
above parameters. With knowledge of the above parameters, it is very easy to determine
the corresponding USCS classification, and commonly, the USCS term can be added to
the comprehensive soil description at the end in parentheses. For the reader’s benefit, a
soil classification guide prepared by the Alaska Department of Transportation and Public
Facilities is included in the companion DVD.
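As a simple illustration of that convention, a comprehensive description with the USCS symbol appended in parentheses can be assembled programmatically; the field names and example values in this sketch are hypothetical.

# Sketch: compose a comprehensive soil description and append the USCS
# symbol in parentheses (example values are hypothetical).
def soil_description(color, moisture, density, modifiers, primary, uscs):
    parts = [color, moisture, density] + modifiers + [primary]
    return "%s (%s)" % (", ".join(parts), uscs)

print(soil_description(
    color="brown", moisture="moist", density="medium dense",
    modifiers=["some silt", "trace gravel"],
    primary="fine to medium SAND", uscs="SM"))
# brown, moist, medium dense, some silt, trace gravel, fine to medium SAND (SM)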
Similar to the classification itself, soil boring logs should contain as much information as
possible regarding the drilling method, the well construction, field screening results, and
pertinent notes and observations, such as the presence of heaving sands and/or weathered
(rotten) rock.
After consolidating soil boring logs, the hydrogeologist can interpret the data to create
informative cross sections that become key elements of the CSM. Cross-section develop-
ment is an art form that requires a detailed understanding of both site-specific data and
the regional depositional environment. Knowledge and experience in geology, most nota-
bly surficial processes, are paramount. Because professional judgment and interpretation
are used to create cross sections, two different hydrogeologists may create significantly
different cross sections using the same data. This phenomenon is demonstrated in Figure
7.49, which is a cross section created in AutoCAD, a program commonly used to create dig-
ital cross sections. The connected, colored polygons represent the original interpretation of
stratigraphy; however, red-line markups indicate alternative, equally plausible interpreta-
tions. The question regarding which interpretation is correct depends upon which version
is more representative of the geological processes applicable to the site.
While AutoCAD sections can be made smoother than that of Figure 7.49, graphical pro-
duction programs such as Canvas can be used to make higher quality cross sections that
better reinforce interpretative concepts. For example, Figure 7.50 was created in Canvas
and clearly demonstrates how the two aquifers are separated by a confining unit that helps
maintain a higher hydraulic head in the deeper aquifer. Cross sections can also be used
to show contaminant concentration contours, which is very useful in demonstrating how
contaminant fate and transport depends on geologic structure. For example, Figure 7.51
shows TCE concentration contours in groundwater relative to soil layers and the bedrock
surface. The area of the highest TCE concentration (> 10 mg/L) sits at the interface of the
very dense sand and the bedrock, indicating the potential presence of DNAPLs. It is appar-
ent that a TCE plume exists in both the overburden and the bedrock. The likelihood of

FIGURE 7.49
Geologic cross section created in AutoCAD with small letters in large “tablecloth” format with alternative inter-
pretations marked in red. (Courtesy of Pall Corporation.)

[Figure 7.50 graphic: west–east cross section A–A′ (elevation in feet amsl) showing the S Clay, S Aquifer, D Confining Unit, D Aquifer, and bedrock, with monitoring wells and screen intervals, groundwater flow directions, and the 23 September 2009 potentiometric levels.]
FIGURE 7.50
Example cross section created in a graphical production program that is more visually appealing. Note that
question marks indicate uncertainty in the interpretation because borings were not advanced to the bedrock
surface at those locations. Another advantage of this format, as opposed to typical AutoCAD cross sections, is
that it will look the same printed in hard copy as it does on the computer screen (i.e., very large-size printing
paper is not required). “S” signifies shallow, and “D” signifies deep.

FIGURE 7.51
Geologic cross section showing TCE concentrations in groundwater. These types of figures are critical in pre-
senting the CSM for contaminant fate and transport. (Courtesy of Woodard & Curran, Inc.)

DNAPL in the bedrock and the documented existence of a fractured bedrock TCE plume
have important implications with respect to the feasibility of groundwater remediation to
drinking-water standards. This concept is discussed further in Chapter 8.
Regardless of the exact content and scale of a cross section, it should contain the follow-
ing common elements:

• Vertical and horizontal scales with the corresponding vertical exaggeration


• A legend identifying the relevant symbols and layers used in the cross section
• A locus map showing the planar extent of a cross section, either as a box in a cor-
ner of the section or as a separate figure
• Direction (e.g., east, west) corresponding to the end points of the cross section
• Labeled wells and soil borings used to generate the sections

Cross sections can also be used to present geophysical data that can enhance the site
investigation process. In general, subsurface geophysical methods can be used to collect
subsurface data along a transect at a density that would be impractical for a drilling rig.
The broad categories of geophysical methods are surface, borehole, and waterborne (USGS
2009). Examples of geophysical methods include

• Electrical resistivity sounding, logging, and tomography


• Ground-penetrating and borehole radar
• Electromagnetic measurement
• Flowmeter logging

One example application of geophysics is the use of electrical resistivity tomography to determine the presence and 3D orientation of fractures in the subsurface. Figure 7.52 pre-
sents a typical setup for electrical resistivity tomography. The results of this survey are pre-
sented in cross-section panels in Figure 7.53. Colored, filled contours represent resistivity

FIGURE 7.52
Setup for determining presence and 3D orientation of fractures using electrical resistivity tomography between
boreholes. (Courtesy of Peter Thompson, AMEC.)

FIGURE 7.53
Electrical resistivity tomography results shown in panels between boreholes. Concentrations of COCs at discrete sampling points are shown as spheres. (Courtesy of Peter Thompson, Scott Calkin, and Rod Rustad, AMEC.)

data, and contaminant concentrations at discrete sampling points (i.e., soil borings) are
presented as spheres. As shown in Figure 7.53, areas of high contaminant concentration
generally correlate to areas of low resistivity (i.e., conductivity increases because of the
contaminants). In this manner, the tomography can be used to efficiently map contamina-
tion in the subsurface.
In addition to mapping contamination, resistivity geophysics can be used to delineate
bedrock surfaces and identify high-yielding fractures. For example, Figure 7.54 is a resis-
tivity cross section that shows the top of the bedrock surface to range from 15 to 45 ft below
ground surface (gold area). The resistivity low in the profile correlated well with a mapped
photolineament and purported high-yield residential wells in the vicinity. A water-supply
well was drilled at the 410-ft mark along the profile line.
Electrical conductivity logging (e-logging) is similar to resistivity surveying but can
be accomplished with direct-push drilling technology. The primary use of e-logging is to continuously delineate hydrostratigraphy at a site or, more simply, to characterize the layering of overburden sediments. In general, fine-grained sediments (silts and clays)
have high conductivity, and coarse-grained sediments (sands and gravels) have low con-
ductivity. Figure 7.55 is an example electrical-conductivity log plotted against site stra-
tigraphy data obtained through conventional soil sampling and grain-size analysis. Data
from multiple e-logs can be interpolated into a cross section, as shown in Figure 7.56
(KGS 1999).
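This conductivity contrast lends itself to a simple first-pass interpretation of an EC log. In the sketch below, both the cutoff value and the readings are hypothetical; in practice the cutoff must be calibrated against co-located soil samples and grain-size data, as in Figure 7.55.

import numpy as np

# Sketch: first-pass lithology from an electrical conductivity log.
# The cutoff and readings are hypothetical and require site calibration.
EC_CUTOFF_MS_PER_M = 40.0

depth_ft = np.arange(0, 30, 5)
ec_ms_per_m = np.array([15.0, 22.0, 65.0, 80.0, 30.0, 18.0])

for d, ec in zip(depth_ft, ec_ms_per_m):
    lith = ("fine-grained (silt/clay)" if ec > EC_CUTOFF_MS_PER_M
            else "coarse-grained (sand/gravel)")
    print("%4.1f ft  %5.1f mS/m  %s" % (d, ec, lith))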
Hydrogeologists can also take advantage of the wealth of information available in the
public domain to create visually appealing and informative cross sections. For example,
Walsh (2009) explains how maps and cross sections published by the USGS can be over-
lain on three-dimensional images obtained from Google Earth. Figure 7.57 is an example
figure created by superimposing a soil map and three cross sections onto the Google Earth
window.

FIGURE 7.54
Example geophysical 2D resistivity cross section. (Courtesy of Peter Thompson and Scott Calkin, AMEC; data
courtesy of Northeast Geophysical Services, Inc.)

FIGURE 7.55
E-log for a soil boring advanced with direct push technology. (From Kansas Geological Survey (KGS), Hydro­
stratigraphic Characterization of Unconsolidated Alluvial Deposits with Direct-Push Sensor Technology,
Open File Report 99-40, 1999. Available at http://www.kgs.ku.edu/Hydro/Publications/OFR99_40/index.html,
accessed July 5, 2011.)

FIGURE 7.56
Electrical conductivity cross section created through interpolation of individual E-logs. Note that fine-
grained layers at the Hcynd3 location are significantly thinner. (From Kansas Geological Survey (KGS),
Hydrostratigraphic Characterization of Unconsolidated Alluvial Deposits with Direct-Push Sensor Technology,
Open File Report 99-40, 1999. Available at http://www.kgs.ku.edu/Hydro/Publications/OFR99_40/index.html,
accessed July 5, 2011.)

FIGURE 7.57
Creation of a three-dimensional model in Google Earth using USGS soil maps and cross sections. Arrow A
shows the location of the model in the places window, arrow B points to a corner handle used to resize the
model, and arrow C points to the center handle used to resize the model. (From Walsh, G. J., A Method for
Creating a Three Dimensional Model from Published Geologic Maps and Cross Sections, U.S. Geological
Survey Open-File Report 2009-1229, 16 pp., 2009.)

7.6.3 Graphs and Charts


Aside from the complex, stylized logs, cross sections, and maps that can be created using
modern computer technology, simple data graphs and charts are required elements of any
site-investigation deliverable. Graphs and charts can convey elements of the CSM in a clear
and concise manner and are generally used to

• Plot changes in water-level elevations at individual monitoring wells or within an aquifer (i.e., hydrographs)
• Show contaminant concentrations as a function of depth in the subsurface
• Plot changes in contaminant concentration over time
• Generate stream-rating curves that plot stream elevation (stage) as a function of flow
• Evaluate the effects of remedial interventions

Graphs can often be combined with plan-view figures to create nice visualizations. For
example, Figure 7.58 includes plots of contaminant concentrations as a function of depth
in the vadose zone for the boring locations shown in plan-view.
To reiterate, graphs are most useful in demonstrating key elements of the CSM. Figure
7.59 is a hydrograph of water levels in monitoring wells adjacent to a stream. Beaver

FIGURE 7.58
Figure showing vertical profiles of vadose zone contamination in milligrams per kilogram. The magnitude of
contamination is greater at locations SB-3 and SB-7; however, contamination is deepest at location SB-4. These
differences may be attributable to recharge variation at the site.

[Figure 7.59 graphic: hydrograph of water-level elevation (ft AMSL, 104.00–108.00) at wells MW-1 through MW-4 from 1996 through 2010, annotated with the beaver dam removal in winter 2008–2009.]

FIGURE 7.59
Hydrograph of water levels at monitoring wells in an unconfined aquifer adjacent to a stream before and after
the removal of large beaver dams.

dams are located within the stream and create losing reservoirs that buffer against
water-table recession and effectively increase the potentiometric surface across the site.
However, over the winter of 2008–2009, the beaver dams were removed because of local
flooding concerns. The dam removal had a significant and immediate effect on the aqui-
fer as water levels dropped to record lows at several locations in 2009 despite normal
precipitation.
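Hydrographs such as Figure 7.59 are straightforward to script. The following matplotlib sketch plots synthetic water levels for a single well and shades the dam-removal window; it is an illustration of the plotting approach, not the site data.

import numpy as np
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
from datetime import date

# Sketch: hydrograph with an annotated intervention window (synthetic data).
dates = [date(2007 + i // 12, i % 12 + 1, 1) for i in range(48)]
t = np.arange(48)
levels = 107.0 + 0.5 * np.sin(2 * np.pi * t / 12)  # seasonal signal
levels[t >= 25] -= 1.5                             # post-removal decline

fig, ax = plt.subplots()
ax.plot(dates, levels, "k-", label="MW-1")
ax.axvspan(date(2008, 12, 1), date(2009, 3, 1), color="0.85",
           label="Beaver dam removal")
ax.set_ylabel("Water Level Elevation (ft AMSL)")
ax.xaxis.set_major_formatter(mdates.DateFormatter("%m/%Y"))
ax.legend()
plt.show()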
Figure 7.60 is a typical plot of contaminant concentrations along a plume centerline at
a hazardous-waste site. The primary contaminants of concern (COCs) are chlorinated
VOCs (CVOCs). To demonstrate the effects of natural attenuation at the site, the graph
plots concentrations of parent VOCs and daughter VOCs. Parent VOCs represent chlori-
nated solvents in their original form, including TCE and PCE. Daughter compounds are
the degradation products of these solvents with common examples, such as cis-1,2-dichlo-
roethene and vinyl chloride. As shown in Figure 7.60, the percent composition of VOCs is
shifting toward daughter compounds moving downgradient from the source. This plot
clearly shows that chlorinated solvents are degrading in the source area and that natural
attenuation processes are being effective. Additional discussion regarding the demonstra-
tion of natural attenuation processes is presented in Chapter 8.
One key attribute of Figure 7.60 is the use of different line types for each constituent.
In this manner, despite being in black and white, it is still easy to distinguish the differ-
ent lines. The inclusion of many data sets in a graph is only useful in demonstrating an
overall trend that applies universally to all the data (e.g., concentrations are decreasing,
pumping rates in the basin are decreasing, and water levels are increasing; see Figure 7.61).
Otherwise, it is advisable to group data sets or to break up the graph into several different
plots, so that the data are legible and the original intention of the graphs is preserved.

[Figure 7.60 graphic: concentration or mass (and ORP, in mV) versus distance along the groundwater flow direction from background through the source to downgradient areas for PCE, TCE, cis-1,2-DCE, VC, and ethene.]

FIGURE 7.60
Theoretical concentrations of parent and daughter CVOCs along a plume centerline at a hazardous-waste site.
(Modified from The Interstate Technology & Regulatory Cooperation Work Group (ITRC), Natural Attenuation
of Chlorinated Solvents in Groundwater: Principles and Practices, Technical/Regulatory Guidelines, 1999.)

[Figure 7.61 graphic: groundwater elevation (865–875 feet amsl) at monitoring wells in the MW-1 through MW-13 series from 2007 through 2009.]
FIGURE 7.61
Groundwater elevations at a series of monitoring wells over time.

7.7 Toxic Gingerbread Men and Other Confounders


While conducting hydrogeologic investigations related to contaminated sites, it is impor-
tant to always consider confounding factors that may influence the collected data. In some
cases, the contaminant of interest may occur naturally in soil and groundwater at a site,
and it will be difficult to discern impacts that are solely related to site operations (see the
arsenic example in Chapter 1). Similarly, urban-fill material brought to a site may have
residual contamination, or groundwater and surface water at a site may be downgradient
of other hazardous-waste sites and be impacted by these external sources.
There are numerous confounding factors related to vapor intrusion, which is an emerg-
ing contaminant exposure pathway commonly associated with state-led sites. Vapor intru-
sion refers to the volatilization of organic contaminants from a groundwater plume and
subsequent migration in the vapor phase through the vadose zone and into overlying
structures. Vapors can enter structures such as single-family homes through cracks or
voids in floors and walls. Increased concern about this pathway has led to the reopening
of many formerly closed sites with the requirement of conducting soil vapor and indoor
air sampling for VOCs. Unfortunately, there are many household items and consumer
products that contain detectable levels of VOCs that complicate these investigations. For
example, TCE has been identified in household cleaners and polishes, electronics cleaners,
adhesives, oils, greases, lubricants, and fabric treatments (Doucette et al. 2009).
During an expansive vapor intrusion investigation associated with the Hill Air Force
Base in Utah, the contaminant 1,2-dichloroethane (1,2-DCA) was detected at numerous
residences that did not overlie contaminated groundwater. Room-by-room sampling and
emission chamber analysis at one such residence led the investigators to conclude that the
source of the 1,2-DCA was actually molded plastic Christmas ornaments manufactured
in China. A gingerbread man ornament contained the most 1,2-DCA; one such offender is
presented in Figure 7.62. Screening-level calculations indicate that the emission rates from
these ornaments alone can lead to indoor air 1,2-DCA concentrations of regulatory concern
(Doucette et al. 2009). For more information about the “case of the toxic gingerbread man”,
see Raloff (2009).

FIGURE 7.62
1,2-DCA emission rate from a single gingerbread man similar to the one shown was measured at 0.16 µg/min.
(Emissions data from Doucette, W. J. et al., Ground Water Monitor. Remediat., 30, 1, 65–71, 2009.)

Hydrogeologists must be aware of confounding factors, such as the gingerbread man, and quantitatively assess concentrations representative of background conditions inde-
pendent of the site in question. Statistical methods such as hypothesis testing in programs
such as ProUCL can be used to compare site data to background data collected at a location
known to be unaffected by site operations.
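Outside of ProUCL, the same comparison can be scripted directly. The sketch below applies the nonparametric Wilcoxon–Mann–Whitney test, one of the two-sample tests implemented in ProUCL, to hypothetical site and background arsenic concentrations.

from scipy.stats import mannwhitneyu

# Sketch: one-sided test of whether site concentrations exceed background
# (hypothetical arsenic data in mg/kg).
site = [12.1, 15.3, 9.8, 22.4, 18.7, 14.2, 11.5]
background = [8.4, 10.1, 7.9, 9.6, 11.2, 8.8, 10.5]

stat, p = mannwhitneyu(site, background, alternative="greater")
print("U = %.1f, p = %.4f" % (stat, p))
if p < 0.05:
    print("Site concentrations are statistically elevated above background.")
else:
    print("No significant difference from background at the 0.05 level.")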

References
Air Force Center for Environmental Excellence (AFCEE), 2005. Ashumet Valley Plume Outline. Available
at http://www.mmr.org/Cleanup/plumes/images/Ashumet.pdf, accessed May 13, 2011.
Doucette, W. J., Hall, A. J., and Gorder, K. A., 2009. Emissions of 1,2-dichloroethane from holiday dec-
orations as a source of indoor air contamination. Ground Water Monitor. Remediat. 30(1), 65–71.
ENSR/AECOM, 2006. Sediment Sampling Summary Report 2004–2005: Marsh Island. New Bedford
Harbor Superfund Site, New Bedford Massachusetts. US Army Corps of Engineers. United States
Environmental Protection Agency Contract DACW33-00-D-0003-Task Order 12 Document
No. 09000-350-720. Available at http://www.epa.gov/nbh/data.html#1998RODESDs, accessed
May 30, 2011.
Esri, Inc., 2010a. Georeferencing a CAD Dataset. ArcGIS 10 Help Library.
Esri, Inc., 2010b. Fundamentals for Georeferencing a Raster Dataset. ArcGIS 10 Help Library.
Esri, Inc., 2010c. Working with Basemap Layers. ArcGIS 10 Help Library.
Fetter, C. W., 2001. Applied Hydrogeology, Fourth Edition. Prentice-Hall, Inc., Upper Saddle River, NJ.
Foremost Industries, 2003. Benefits of Dual Rotary Drilling in Unstable Overburden Formations.
Available at http://pierregagnecontracting.com/images/dr_benefits.pdf, accessed May 12, 2011.
Geoprobe Systems, 2011a. What’s Behind Our Name. Available at http://geoprobe.com/whats-
behind-our-name, accessed June 15, 2011.
Geoprobe Systems, 2011b. Geoprobe® FAQs. Available at http://geoprobe.com/geoprobe-faqs,
accessed June 15, 2011.
The Interstate Technology & Regulatory Cooperation Work Group (ITRC), 1999. Natural Attenuation
of Chlorinated Solvents in Groundwater: Principles and Practices. Technical/Regulatory
Guidelines.

Kansas Geological Survey (KGS), 1999. Hydrostratigraphic Characterization of Unconsolidated Alluvial Deposits with Direct-Push Sensor Technology. Open File Report 99-40. Available at
http://www.kgs.ku.edu/Hydro/Publications/OFR99_40/index.html, accessed July 5, 2011.
MassGIS, 2011. Office of Geographic Information. Available at http://www.mass.gov/mgis/,
accessed May 20, 2011.
Matzke, B.D., Nuffer, L.L., Hathaway, J.E., Sego, L.H., Pulsipher, B.A., McKenna, S., Wilson, J.E.,
Dowson, S.T., Hassig, N.L., Murray, C.L., and Roberts, B., 2010. Visual Sample Plan Version 6.0
User’s Guide. United States Department of Energy, PNNL-19915, 255 pp. Available at http://vsp.pnnl.gov/docs/PNNL%2019915.pdf.
National Oceanic and Atmospheric Administration (NOAA), 2006. Wyckoff Co./Eagle Harbor Beach
Investigation for Creosote. Available at http://response.restoration.noaa.gov/, accessed June
15, 2011.
New York State Department of Environmental Conservation (NYSDEC), 2009. RI/FS Scope of Work.
Old Upper Mountain Road Site, Site Number 932112, Routes 31 & 93, Lockport, NY. Available at http://www.dec.ny.gov/docs/regions_pdf/oldupperscope.pdf, accessed June 20, 2011.


Raloff, J., 2009. Case of the Toxic Gingerbread Man. Science News. Web Edition: Saturday, November
21st, 2009. Available at http://www.sciencenews.org/view/generic/id/49897. Accessed April
10, 2011.
United States Environmental Protection Agency (U.S. EPA), 2004. Improving Sampling, Analysis,
and Data Management for Site Investigation and Cleanup. Office of Solid Waste and Emergency
Response, 4 pp.
United States Environmental Protection Agency (U.S. EPA), 2006. Guidance on Systematic Planning
Using the Data Quality Objectives Process. Office of Environmental Information, Washington,
DC, 110 pp.
United States Environmental Protection Agency (U.S. EPA), 2010. Low Stress (Low Flow) Purging
and Sampling Procedure for the Collection of Groundwater Samples from Monitoring Wells,
Revision 3. Quality Assurance Unit, US Environmental Protection Agency, Region 1, North
Chelmsford, MA, 30 pp.
United States Geological Survey (USGS), 2005. U.S. Geological Survey Manual: 120.1–Organization–
Creation, Mission, and Functions. Available at http://www.usgs.gov/usgs-manual/120/120-1.
html, accessed June 1, 2011.
United States Geological Survey (USGS), 2009. Geophysical Methods. USGS Groundwater
Information: Branch of Geophysics. Available at http://water.usgs.gov/ogw/bgas/methods.html, accessed July 5, 2011.
United States Geological Survey (USGS), 2011. USGS Publications Warehouse. Available at http://
pubs.er.usgs.gov/, accessed June 5, 2011.
Walsh, G. J., 2009. A Method for Creating a Three Dimensional Model from Published Geologic Maps
and Cross Sections. U.S. Geological Survey Open-File Report 2009-1229, 16 pp.
Woolfenden, L., and Koczot, K. M., 2001. Numerical Simulation of Ground-Water Flow and Assess­
ment of the Effects of Artificial Recharge in the Rialto–Colton Basin, San Bernardino County,
California. USGS Water Resources Investigation Report 00-4243, U.S. Department of the
Interior, U.S. Geological Survey, 156 pp.
8
Groundwater Remediation

8.1 Introduction
The cleanup of hazardous-waste sites with groundwater contamination is a major element
of professional hydrogeology in the United States. It is likely that every professional hydro-
geologist will work on a groundwater remediation project at some point in his or her career.
The long-term remediation of these sites is largely governed by the 1980 Comprehensive
Environmental Response, Compensation, and Liability Act (CERCLA), commonly known
as the Superfund program. States often have similar regulatory programs to address sites
not listed on the National Priorities List (NPL), such as the Massachusetts Contingency
Plan. To the general public, invoking the term Superfund conjures up images of burning
rivers, rusted 55-gallon drums leaking fluorescent fluids on the ground, contaminated
wells, and mutated aquatic life. While in many instances these stereotyped attributes
(mutations aside) are, in fact, accurate, often the perception of contamination is just as
important as the actual data defining the nature and extent of the contamination, and the
risks associated with any existing or potential exposures to the contamination [i.e., the
Conceptual Site Model (CSM)]. Similarly, the perception of environmental cleanup is often
more important than rigorous quantification of the costs and true environmental, social,
and economic benefits of complicated remediation projects. This is discussed further in
Section 8.5 in the context of sustainable remediation, a very important emerging concept.
Because of the Superfund program and similar state-led initiatives, groundwater reme-
diation has become a lucrative field with innumerable technology vendors offering solu-
tions for a wide variety of contaminants. In general, remediation technologies fall into one
of the following categories (U.S. EPA 2010):

• Source treatment remedies: In situ or ex situ treatment of soil, sediment, sludge, and/or nonaqueous phase liquids (NAPLs) at or near the point of contaminant
disposal. Examples of in situ technologies include soil vapor extraction (SVE), in
situ thermal treatment, bioremediation, in situ chemical oxidation, and flushing.
Technologies are often combined; for example, in situ thermal treatment typically
requires installation of SVE and multiphase extraction systems in addition to heat-
ing elements. Ex situ remediation primarily consists of soil excavation with subse-
quent treatment, incineration, and/or disposal.
• Source containment remedies: In situ containment of contaminated soil, sediment,
sludge, or NAPLs. Examples of technologies include landfill caps, barrier walls,
and solidification/stabilization.
• Groundwater treatment remedies: In situ treatment of contaminants primar-
ily existing in the dissolved aqueous phase at high concentrations within or immediately downgradient of the source area. Examples of technologies include
air sparging, permeable reactive barriers, bioremediation, and in situ chemical oxi-
dation (the latter two technologies are also used for source treatment).
• Pump and treat: Extraction and ex situ treatment of contaminated groundwater
to prevent further plume migration. Note that pump and treat was originally
conceived to be an effective means of groundwater treatment; however, years of
operational data demonstrate that it is better described as a plume containment
remedy with an ancillary benefit of enhanced groundwater flushing that results
in a minor reduction in overall cleanup time.
• Monitored natural attenuation (MNA): The reliance on natural attenuation fac-
tors, such as biodegradation, dissolution, dilution, sorption, and volatilization, to
achieve remedial objectives within a reasonable time frame compared to that of
other remedial technologies. MNA involves a detailed, controlled, and comprehensive monitoring system and is often paired with source and/or groundwater
treatment or containment to expedite site cleanup.

For a technical overview of each technology, the reader is referred to Appendix B of U.S.
EPA (2010). The following technologies are discussed in greater detail later in this chapter
with example data visualizations:

• Pump and treat (Section 8.2)
• In situ thermal treatment (Section 8.3.2)
• In situ chemical oxidation (Section 8.3.3)
• Bioremediation and monitored natural attenuation (Section 8.3.4)

As we learn more and more over time about the behavior of contaminants in the subsur-
face and the efficacy of various technologies, the preferred technology from regulatory
and commercial perspectives changes frequently. Trends in technology usage over time
can be measured through public record of decision (ROD) documents produced by the U.S.
EPA for each Superfund site. As described in U.S. EPA (2011a)

“A ROD contains site history, site description, site characteristics, community participation,
enforcement activities, past and present activities, contaminated media, the contaminants
present, scope and role of response action and the remedy selected for cleanup.”

As a result, RODs and other CERCLA decision documents [ROD amendments and expla-
nations of significant differences (ESDs)] provide informative snapshots in time of the
remediation marketplace, illustrating changes in the preferred remedial approaches and
individual technologies. Figure 8.1 presents the number of Superfund technology selec-
tions over time for the overall remedy categories most applicable to hydrogeologists (and
bulleted above). Note that in situ source control includes both in situ treatment and in situ
containment remedies. Ex situ source technologies are not depicted in Figure 8.1 because
they are more often used for soil excavations for contaminants that are not impacting
groundwater quality, such as polychlorinated biphenyls (PCBs). Summarized here are sev-
eral interesting concepts highlighted in Figure 8.1.

• The use of pump and treat increased dramatically in the late 1980s and was the
dominant remedial approach throughout the 1990s. Note that the early 1990s also

[Figure 8.1 graphic: number of selections (0–80) per fiscal year for in situ groundwater treatment, pump and treat, in situ source control, and monitored natural attenuation.]

FIGURE 8.1
Original graph illustrating total number of technology selections by fiscal year (FY) for the depicted remedy
categories at Superfund sites. Fiscal year 1985 includes all selections between 1982 and 1985. Raw data obtained
from Appendix A of U.S. EPA (2010) for all categories except MNA, for which raw data were obtained from
Figure 7 of U.S. EPA (2010) (2005–2008) and Appendix E of U.S. EPA (2007a) (1985–2004). Data for FY 1982–2004
are project-level data except for MNA, from which data are derived from ROD documents exclusively; data from
FY 2005–2008 are decision document-level data.

represented the time frame with the greatest number of RODs published, peaking
at 197 in 1991 (U.S. EPA 2010). The number of annual pump and treat selections
decreased dramatically between 1997 and 2001, but the frequency of selection
remained relatively constant throughout the 2000s. The late 1990s’ decline in
pump and treat selections reflects an increased understanding of the technology’s
limited efficacy in terms of mass removal and cleanup time reduction, while the
stabilization at approximately 20 selections per year in the 2000s reflects the ongo-
ing need for the technology as a containment measure to prevent further migra-
tion of contaminant plumes.
• After peaking in 1991 and immediately declining, the annual number of in situ
source control selections has remained relatively constant since 1993. The annual
number of in situ source control remedies has closely paralleled that of pump and
treat since 2001, and these two approaches are often used in tandem to reduce
cleanup time frames.
• The use of in situ groundwater treatment and MNA steadily increased throughout
the 1990s, and by the mid-2000s, these two technologies were selected more often
than in situ source control and pump and treat. Note that the reduced number
of MNA selections between 2001 and 2003 was at least partially triggered by the
1999 publishing of U.S. EPA guidance regarding the proper use of MNA (U.S. EPA
2007a). The rebound in 2004 was achieved after consultants and regulators alike
adjusted to the new MNA framework and figured out how to best incorporate the
technology.

In addition to the general increase in knowledge and understanding of physical, chemical, and biological processes, the increased selection of in situ groundwater treatment and
MNA was driven by the following factors:

1. In situ groundwater treatment with technologies such as in situ chemical oxidation can remove significantly more contaminant mass than conventional pump and
treat systems.
2. Cleanup time frames using MNA are often not remarkably different from those
with pump and treat systems or those in which source and/or groundwater treat-
ment have reached a point of diminishing returns.
3. MNA is often perceived by potentially responsible party (PRP) groups as a signifi-
cantly less costly alternative than active remediation. Note that this is not always
the case as MNA requires extensive monitoring over long periods of time.
4. PRP groups with experience operating pump and treat systems for years and
years with no end in sight became more receptive to innovative technologies with
higher capital costs to avoid perpetual operation of pump and treat systems.

The current distribution of remedial approaches selected in Superfund decision documents, as measured by the years 2005–2008, is indicative of a balanced approach in which technologies
can be selected or combined as appropriate to meet site-specific objectives. This represents sig-
nificant progress from the early 1990s when pump and treat was seemingly a default option.
Preferences regarding individual in situ technologies have also changed significantly
over the lifetime of Superfund as demonstrated in Figures 8.2 and 8.3 for source-control
remedies and groundwater-treatment remedies, respectively. Figure 8.2 demonstrates that
SVE remains the dominant in situ source-control option, but that its use has decreased sig-
nificantly since peaking in 1991. Aside from the reduced number of RODs, this decline was
likely caused by an increased understanding of the limitations of ambient-temperature
SVE, most notably the difficulty in treating contaminants with low volatility and those
that reside deep below the water table, such as dense NAPLs (DNAPLs). SVE also has dif-
ficulty removing contaminants within low-permeability strata.
The annual number of bioremediation, multiphase extraction, and solidification/stabi-
lization selections has remained relatively constant since the early 1990s; however, these
technologies now comprise an increased percentage of in situ source-control remedies.
Over the course of the 2000s, thermal treatment has emerged as a more prevalent technol-
ogy, rivaling the number of ambient temperature SVE selections in 2005. It is important to
remember that thermal treatment almost always requires an SVE system, and the technol-
ogy is therefore best thought of as thermally enhanced SVE. Therefore, even if the use of
thermal treatment increases significantly in the coming years, the in situ source-control
marketplace can still be classified as SVE-centric.
Figure 8.3 clearly demonstrates that the absolute number and distribution of technology
selections for in situ groundwater treatment have changed significantly over time. Similar
to SVE for source control, air sparging was the dominant groundwater technology dur-
ing the 1990s. However, unlike SVE, air sparging appears to be somewhat obsolete as its
frequency of selection was greatly surpassed by both bioremediation and chemical treat-
ment (most notably in situ chemical oxidation) in the 2000s. The limitations of air sparging
are very similar to those of SVE; in fact, air sparging typically requires an SVE system to
remove volatilized contaminants from the vadose zone, which is a limitation in its own
right. It remains to be seen if the increasing trend toward in situ groundwater remedies
continues into the new decade. As with air sparging and SVE, there may be a backlash against in situ chemical oxidation and bioremediation based on the increasing availability of performance data, which may not be as encouraging as originally anticipated.

[Figure 8.2: bar chart of the number of technology selections (0–40) by fiscal year; legend: Soil Vapor Extraction, Thermal Treatment, Bioremediation, Solidification/Stabilization, Multi-Phase Extraction.]

FIGURE 8.2
Original graph illustrating total number of technology selections by FY at Superfund sites for the five in situ source control technologies with the most selections between 2005 and 2008. Fiscal year 1985 includes all selections between 1982 and 1985. Note that these are also the five technologies with the most overall selections between 1982 and 2008 with the exception of thermal treatment, which has one less selection than chemical treatment during that time span. (Raw data obtained from the United States Environmental Protection Agency (U.S. EPA), Appendix A, Superfund Remedy Report, Thirteenth Edition, Office of Solid Waste and Emergency Response, EPA-542-R-10-004, 2010. Available at http://www.clu-in.org/asr/, accessed July 10, 2011.)

[Figure 8.3: bar chart of the number of technology selections (0–25) by fiscal year; legend: Air Sparging, Bioremediation, Chemical Treatment, Permeable Reactive Barrier, Multi-Phase Extraction.]

FIGURE 8.3
Original graph illustrating total number of technology selections by FY at Superfund sites for the five in situ groundwater remediation technologies with the most selections between 1982 and 2008. Fiscal year 1985 includes all selections between 1982 and 1985. (Raw data obtained from the United States Environmental Protection Agency (U.S. EPA), Appendix A, Superfund Remedy Report, Thirteenth Edition, Office of Solid Waste and Emergency Response, EPA-542-R-10-004, 2010. Available at http://www.clu-in.org/asr/, accessed July 10, 2011.)
Despite the continually evolving technological and political landscape, one thing that has remained constant in Superfund is the objective of restoring contaminated groundwater to its beneficial use. This beneficial use is often considered to be drinking water as measured by maximum contaminant levels (MCLs) to ensure that a pristine groundwater resource is available to future generations.

The rest of this chapter outlines important concepts and provides example data visualizations for established and emerging remedial technologies important to the current commercial marketplace. Concepts and visualizations are also provided for technical impracticability and alternative remedial metrics and endpoints. Lastly, the overall success of groundwater remediation in the United States is discussed in light of recent cost and performance data with perspectives on future objectives and approaches.

8.2 Pump and Treat


8.2.1 Introduction
Pump and treat has been widely applied in Superfund and state-led groundwater reme-
diation programs for more than 25 years. As of the end of the 2008 fiscal year, pump and
treat has been selected as a Superfund remedy a total of 798 times, making it far and away
the most prevalent groundwater remediation technology in existence. The next most fre-
quently used technology is SVE with 276 selections (U.S. EPA 2010). Pump and treat is a
long-term remedy, which can last for decades and, for some sites, is expected to last hun-
dreds of years (i.e., indefinitely). For this reason, there is increasing emphasis on avoiding
the installation of pump and treat systems wherever possible and limiting the construc-
tion of new pump and treat systems to sites where hydraulic containment is necessary to
prevent human and/or ecological receptors from being exposed to contaminants at con-
centrations that present an unacceptable risk. For example, a site where a contaminant
plume is threatening to impact off-site residential wells would benefit from a pump and
treat system that intercepts the plume prior to reaching the wells.
There is also increased emphasis in professional practice on optimizing the performance
of existing pump and treat remedies to reduce operation and maintenance (O&M) costs
and decrease cleanup time frames to the extent practicable. This section presents design
concepts and example visualizations of pump and treat implementation and highlights
common strategies for pump and treat optimization.

8.2.2 Design Concepts


To reiterate a previous point, the most commonly stated objectives of a pump and treat system
are to provide hydraulic containment of contaminated groundwater to prevent further migra-
tion and to expedite restoration of groundwater quality to applicable regulatory criteria, such
as MCLs. However, years of operational experience have proven that using pump and treat
to achieve the latter objective, groundwater quality restoration, is usually infeasible within a
reasonable amount of time (e.g., <10 years). Tailing and rebound effects can result in concen-
trations persisting above cleanup criteria for decades. Tailing is the progressively slower rate
of contaminant concentration decline resulting from pump and treat operation, and rebound
is the relatively rapid increase in concentration after pumping cessation (U.S. EPA 1996a). A
conceptual diagram of tailing and rebound effects is depicted in Figure 8.4 for reference.
Some of the physical and chemical processes that result in the impracticability of using pump
and treat as a pure remediation measure are listed below (Cohen et al. 1994):

• Contaminant desorption and NAPL dissolution
• Precipitate dissolution
• Groundwater velocity variation, preferential pathways, and dead-end pores—all
caused by heterogeneity
• Matrix diffusion
Matrix diffusion involves contaminant diffusion into and back-diffusion out of low-permeability media, such as clay and bedrock, and is a subject that has received considerable attention in academia. Additional discussion regarding the implications of matrix diffusion with respect to the remediation of sites with DNAPL contamination is provided in Section 8.4, and modeling examples are provided in Chapter 5.
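For readers who want a feel for the time scales involved, the short Python sketch below evaluates the classic one-dimensional analytical solution for diffusion from a constant-concentration boundary into a semi-infinite low-permeability layer, C(z,t)/C0 = erfc[z/(2·sqrt(Da·t))]. All parameter values (effective diffusion coefficient, retardation factor, and loading period) are illustrative assumptions rather than data from any particular site.

```python
# Minimal sketch of 1-D matrix diffusion into a low-permeability layer
# (constant-concentration boundary, semi-infinite medium). All parameter
# values are illustrative assumptions for a generic chlorinated solvent.
import numpy as np
from scipy.special import erfc

D_e = 1.0e-10            # effective diffusion coefficient in clay, m^2/s (assumed)
R = 2.0                  # retardation factor in the clay matrix (assumed)
D_a = D_e / R            # apparent diffusion coefficient, m^2/s
t = 30 * 365.25 * 86400  # loading period of 30 years, in seconds

z = np.linspace(0.0, 1.5, 7)                   # depth into the clay layer, m
rel_conc = erfc(z / (2.0 * np.sqrt(D_a * t)))  # C(z,t)/C0

for depth, c in zip(z, rel_conc):
    print(f"z = {depth:4.2f} m  C/C0 = {c:6.3f}")
# After ~30 years of loading, concentrations above 1% of the boundary
# value extend several tenths of a meter into the clay--mass that will
# later back-diffuse and sustain the plume after pumping begins.
```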
One illustrative example regarding the inability of pump and treat systems to restore
groundwater to MCLs in reasonable time frames involves one of the longest and deepest
horizontal wells in the world. Installation and design details of the horizontal well, which
is approximately 4500 ft long, are presented in Figure 8.5. The well was installed as part of
a pump and treat design to control the migration of 1,4-dioxane and restore groundwater
to the applicable drinking water standard of 85 µg/L. Historic groundwater samples col-
lected at the site contained 1,4-dioxane concentrations above 200,000 µg/L. Concentration
contours and the overall extent of 1,4-dioxane in groundwater are presented in Figure 8.6

FIGURE 8.4
Schematic of tailing and rebound effects. (Modified from Cohen, R. M. et al., Methods for Monitoring Pump and
Treat Performance, EPA Contract No. 68-C8-0058, Office of Research and Development, EPA/600/R-94/123, Ada,
OK, 114 pp., 1994; Keely, J.F., Performance Evaluation of Pump and Treat Remediations, USEPA/540/4-89-005, Robert
S. Kerr Environmental Research Laboratory, Ada, OK, 19 pp., 1989.)

FIGURE 8.5
Installation and design details of one of the longest horizontal wells in the world installed to extract contami-
nated water from approximately 100 ft below ground surface. Bottom left: daylighting of the drill bit after passing
through more than 2000 ft of subsurface sediments. Middle: custom-designed well screen. (Courtesy of Farsad
Fotouhi, Pall Corporation.)

at the time of horizontal well installation (top) and after three years of operation at flow
rates above 1000 gallons per minute (bottom). As shown in Figure 8.6, significant mass
removal was achieved with the elimination of plume concentrations above 10,000 µg/L.
However, concentrations above 1000 µg/L, which is over one order of magnitude above the
cleanup goal, persist across the plume extent after three years of pumping.

FIGURE 8.6
Plan-view concentration contour maps of the 1,4-dioxane plume in 2001, when the horizontal well was installed, and in 2004, after three years of operation. (Courtesy of Farsad Fotouhi, Pall Corporation.)

A time series plot of 1,4-dioxane concentrations at one of the vertical extraction wells
associated with the pump and treat design is presented in Figure 8.7 and demonstrates
significant concentration tailing after approximately one year of operation. The concen-
tration in this well continues to decline at a very slow rate and remains above 1000 µg/L
as of early 2011. The site geology is comprised of heterogeneous glacial sediments, and it
is likely that high concentrations of 1,4-dioxane diffused into less permeable sediments,
creating a secondary contaminant source capable of feeding highly permeable zones for decades.

FIGURE 8.7
Time-series plot of 1,4-dioxane concentrations at a vertical extraction well in the groundwater plume. (Courtesy of Farsad Fotouhi, Pall Corporation.)

Even with the installation of an advanced horizontal well costing more than
$1,000,000, attainment of the cleanup goal is not a realistic objective for this site unless
the applicable time frame is on the order of decades. It is also important to note that
1,4-dioxane is well suited to pump and treat remedies because it has very high solubility
and does not adsorb to soils. If this case study involved a more hydrophobic contaminant,
such as a chlorinated solvent, the initial pump and treat performance would have been
significantly worse.
Pump and treat’s optimal usage is as a plume containment, or management of migra-
tion, remedy. Two important terms with respect to plume containment are the pump and
treat system’s capture zone and radius of influence. These two terms are not synonymous,
and often there is significant confusion on this issue. The capture zone for a pumping well
is the three-dimensional volume (i.e., X, Y, Z) of the porous media within which ground-
water will flow to the pumping well for extraction and subsequent treatment. The radius
of influence of a pumping well is the farthest horizontal distance away from the well
where pumping causes a discernible aquifer response (e.g., a measurable drawdown). The
concept of a capture zone is illustrated in Figure 8.8 in plan view (top) and cross section
(bottom). The concept of a capture zone versus the radius of well influence is illustrated in
Figure 8.9. While MW-3 is within the radius of influence of pumping well PW (i.e., there
is measurable lowering of the hydraulic head), it is not within PW’s capture zone, and
it would be erroneous to conclude that contaminated groundwater at MW-3 would be
extracted by the pump and treat system.
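The plan-view geometry of a capture zone can be approximated with textbook expressions for a single fully penetrating well in uniform regional flow (e.g., Javandel and Tsang 1986): the capture zone width far upgradient is Q/(bKi), the width at the well is Q/(2bKi), and the downgradient stagnation point lies at Q/(2πbKi). The following Python sketch evaluates these expressions with hypothetical parameter values; it is a screening-level approximation, not a substitute for the three-dimensional analysis emphasized above.

```python
# Minimal sketch of the standard analytical capture-zone dimensions for a
# single fully penetrating well in uniform regional flow (e.g., Javandel
# and Tsang, 1986). All parameter values are hypothetical.
import math

gpm_to_cfd = 0.133681 * 1440  # gallons/min -> cubic feet/day

Q = 50 * gpm_to_cfd   # pumping rate, ft^3/day (50 gpm, assumed)
K = 25.0              # hydraulic conductivity, ft/day (assumed)
i = 0.004             # regional hydraulic gradient (assumed)
b = 30.0              # saturated thickness, ft (assumed)

q = K * i                           # regional Darcy flux, ft/day
W_max = Q / (b * q)                 # full capture width far upgradient, ft
W_well = Q / (2 * b * q)            # capture width at the well, ft
x_stag = Q / (2 * math.pi * b * q)  # downgradient stagnation point, ft

print(f"Maximum capture width: {W_max:7.0f} ft")
print(f"Width at the well:     {W_well:7.0f} ft")
print(f"Stagnation point:      {x_stag:7.0f} ft downgradient")
# The radius of influence (measurable drawdown) typically extends well
# beyond x_stag, which is why drawdown alone cannot demonstrate capture.
```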
Another important concept in the analysis of the hydraulic head distribution created
by a pump and treat system is the difference between drawdown as measured by the
water level in the pumping well and the theoretical drawdown, or formation loss, result-
ing from groundwater flow through the porous media. The difference between the actual
drawdown in the pumping well and the formation loss is termed the well loss, which is an
efficiency loss caused by factors such as poor well design, insufficient well development,
turbulent flow through the filter pack and screen, or unavoidable disturbance of the near-
well porous media during drilling. The water level based on the actual drawdown in the
pumping well is not representative of the true aquifer potentiometric surface and, there-
fore, should not be used to create contour maps to delineate capture zones. Consequently,
it is important to install monitoring wells or piezometers very close to extraction wells
in order to measure the true aquifer response to the pumping. The concept of well loss is also presented in Figure 8.9.

[Figure 8.8: plan-view (map) and cross-section diagrams of flowlines converging on a partially penetrating extraction well, with the capture zone shaded and hydraulic head contours labeled 970 through 988; the ground surface is shown in the cross section.]

FIGURE 8.8
Illustration of horizontal capture zone in plan view (top) and vertical capture zone in cross section (bottom), highlighting the importance of a three-dimensional approach to pump and treat system design and data analysis. (Modified from United States Environmental Protection Agency (U.S. EPA), A Systematic Approach for Evaluation of Capture Zones at Pump and Treat Systems: Final Project Report, Office of Research and Development, EPA 600/R-08/003, Washington, DC, 2008.)

FIGURE 8.9
Conceptual cross section illustrating the difference between the capture zone and the radius of influence of the pumping well. Note also the effects of well loss on the water level of the pumping well.

The design of pump and treat systems for optimal hydraulic containment involves specification of numerous parameters, the most important of which are defined and described next (U.S. EPA 2005):

• Well layout and design: The number, locations, and design specifications for all
extraction wells. Design specifications include well-construction materials, screen
size and interval, filter-pack parameters, and drilling method.
• Design flow rate: The individual extraction well flow rates and the total cumulative flow rate for the pump and treat system, calculated from estimated extraction rates necessary to achieve remedy goals (e.g., hydraulic containment). This value should be used to calculate the design mass removal rate and to evaluate discharge options, chemical usage requirements, and treatment system waste production (e.g., sludge).
• Hydraulic capacity: The maximum expected flow rate of a pump and treat system,
generally calculated by multiplying the design flow rate by a safety factor greater
than 1.0. This value should be used to size treatment equipment and well-field
pumps and piping but should not be used to calculate the design mass removal
rate.
• Design influent concentration: The expected mixed influent concentration for each
constituent or class of constituents from all extraction wells. This value should be
used in conjunction with the design flow rate to calculate the design mass removal
rate.
• Maximum influent concentration: The maximum expected mixed influent concen-
tration for each constituent or class of constituents, typically calculated by multi-
plying the design influent concentration by a factor of safety between 1.0 and 2.0.
The treatment process should be selected and designed to handle this concentra-
tion. This value should not be used to calculate the design mass removal rate.
• Design mass loading rate: The estimated mass loading rate (e.g., in pounds per day) from the extraction wells to the treatment plant of each constituent or class of constituents, calculated by multiplying the design flow rate by the design influent concentration (a worked example follows this list). This value should be used to estimate treatment plant materials and utilities requirements when producing cost estimates for various treatment options.
• Design discharge system: The number, locations, design flow rates, and specifica-
tions for the treated effluent discharge system. Treatment-system effluent can be
discharged to surface water (e.g., a stream) or groundwater. Examples of design
elements include surface water outfalls and/or diffusers, rapid infiltration basins,
injection wells, infiltration trenches, and vertical wick drains. The discharge sys-
tem should be designed to handle the hydraulic capacity of the treatment plant
and is typically the regulatory compliance point of the treatment system.
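As a simple illustration of how the parameters defined above interact, the Python sketch below combines a hypothetical design flow rate and design influent concentration into a design mass loading rate and applies safety factors to obtain the hydraulic capacity and maximum influent concentration. All input values are hypothetical.

```python
# Minimal sketch of how the design parameters defined above combine.
# All inputs are hypothetical; unit conversions are the standard
# 1 gal = 3.785 L and 1 lb = 453,592 mg.
design_flow_gpm = 100.0        # design flow rate, all wells combined
safety_factor_flow = 1.25      # hydraulic-capacity safety factor
design_influent_ug_l = 1900.0  # design (mixed) influent TCE, ug/L
safety_factor_conc = 1.5       # maximum-influent safety factor

hydraulic_capacity_gpm = design_flow_gpm * safety_factor_flow
max_influent_ug_l = design_influent_ug_l * safety_factor_conc

# Design mass loading rate in pounds/day (uses DESIGN values only,
# never the safety-factored ones):
l_per_day = design_flow_gpm * 3.785 * 1440
mass_mg_per_day = l_per_day * design_influent_ug_l / 1000.0
mass_lb_per_day = mass_mg_per_day / 453592.0

print(f"Hydraulic capacity:  {hydraulic_capacity_gpm:.0f} gpm (sizes pumps/piping)")
print(f"Max influent:        {max_influent_ug_l:.0f} ug/L (sizes treatment process)")
print(f"Design mass loading: {mass_lb_per_day:.2f} lb/day (sizes consumables)")
```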

Aquifer testing and groundwater modeling are the two primary methods of determin-
ing the above design parameters for a new pump and treat system and are typically used
in tandem. It is strongly recommended that a multiday (e.g., 72-hour) aquifer pumping
test be conducted to evaluate well yields and potential capture zones for the pump and
treat system. This is particularly important for unconfined aquifers with a delayed gravity
drainage response. Figure 8.10 depicts implementation of an aquifer pumping test at a hazardous-waste site. If groundwater discharge is selected for the pump and treat design,
it is also recommended to perform long-term infiltration or injection pilot tests to evalu-
ate aquifer mounding and determine acceptable hydraulic loading rates (e.g., gallons per
day per square foot for infiltration basins or gallons per minute for injection wells). Figure
8.11 depicts a long-term groundwater discharge pilot test to support the design of a rapid
infiltration basin for use in a pump and treat system, and Figure 8.12 depicts the setup of
a long-term pilot test of an injection-well design.
Drilling conducted during installation of the pumping test well and adjacent monitor-
ing wells can be used to obtain soil samples for grain-size analysis, which are critical in
designing an appropriate well screen and filter pack. An example gradation chart of site-
specific data and commercially available filter packs is presented in Figure 8.13 for refer-
ence. In addition, aquifer pumping test data can be quantitatively analyzed to determine
aquifer hydraulic properties and to calibrate a site-specific groundwater model.


One major barrier to the implementation of aquifer pumping tests in support of remedial
designs is the fact that groundwater is typically contaminated and, therefore, has myriad
management issues. Contaminated groundwater extracted during a pumping test can
either be stored, treated, and discharged onsite or transported off-site for subsequent dis-
posal at an appropriate facility. The permitting (e.g., for temporary groundwater or surface
water discharge) and additional infrastructure (e.g., frac tanks and temporary treatment
systems) can add significant costs to a project. Furthermore, there are many health-and-
safety concerns when handling contaminated groundwater, which require appropriate
management as demonstrated in Figure 8.14.

FIGURE 8.10
Photograph of a pumping test in support of a pump and treat design with a gray frac tank in the background for
water storage. The site is paved, and all wells are flush mounted, which is necessary for pump and treat systems
at operational industrial facilities.

FIGURE 8.11
Photograph of a long-term groundwater discharge test at a pilot-scale rapid infiltration basin (RIB). The pilot RIB
was excavated to a depth of 3 ft below ground surface and is approximately 100 ft long by 20 ft wide. The pilot test
flow rate of 75 gallons per minute only flooded a very small portion of the RIB, indicating that significant capacity
remains. The native coarse sand soils at the site are well suited for surface infiltration. Rip-rap was placed around
the RIB sidewalls and at the hose discharge point to prevent erosion. White PVC piezometers were installed within
and surrounding the RIB to monitor groundwater mounding during the test. (Courtesy of Woodard & Curran, Inc.)

FIGURE 8.12
Photograph of a 6-in. groundwater injection well fitted with a ball valve, flow meter, and upstream bag filters for a
long-term pilot test. Surface soils are fine sands that are not ideally suited for infiltration; however, the underlying
aquifer intercepted by the well’s screened interval contains highly conductive gravelly sand. During the pilot test,
sustained flow rates of up to 45 gallons per minute were injected into the well. (Courtesy of Woodard & Curran, Inc.)
FIGURE 8.13
Grain size distribution chart created to select a commercial filter pack and help size the well-screen slot size for a
pump and treat extraction well. A commercial #2 sand reasonably matches the shape of the site sample and has a D70
(70% retained or 30% finer) multiplier of approximately 4. A rule of thumb is that the D70 multiplier or ratio of the
filter pack D70 to the site sample D70 should be between 4 and 6. A #1 sand pack would be overly conservative (too
fine) and potentially reduce well efficiency. The D70 line is highlighted in green. Another rule of thumb is to select a
well-screen size that retains between 85% and 100% of the filter pack. In this case, a 40-slot (0.040-in. slot opening, line
highlighted in red) screen is selected, which will retain 95% of the filter pack. Note that while custom filter pack and
well-screen sizes can be ordered, it is often advantageous for the project schedule and budget to use commercially
available materials (i.e., #00, #0, #1, and #2 sands and slot sizes divisible by 10).
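The rules of thumb cited in the Figure 8.13 caption lend themselves to a simple calculation. The Python sketch below screens hypothetical commercial sand packs against a site-sample D70, flagging packs whose D70 multiplier falls within the recommended range of 4 to 6; the candidate pack properties are illustrative values, not vendor specifications.

```python
# Minimal sketch of the two rules of thumb in the Figure 8.13 caption.
# The site-sample D70 and the candidate commercial packs are hypothetical.
site_d70_in = 0.011   # site sample D70 (70% retained), inches (assumed)

# Hypothetical commercial packs: name -> (D70, slot size retaining ~95%)
packs = {
    "#1 sand": (0.030, 0.020),
    "#2 sand": (0.045, 0.040),
}

lo, hi = 4 * site_d70_in, 6 * site_d70_in  # acceptable filter-pack D70 range
print(f"Target filter-pack D70: {lo:.3f}-{hi:.3f} in.")

for name, (d70, slot) in packs.items():
    multiplier = d70 / site_d70_in
    ok = lo <= d70 <= hi
    print(f"{name}: D70 = {d70:.3f} in., multiplier = {multiplier:.1f}"
          f" -> {'OK' if ok else 'reject (too fine/coarse)'};"
          f" pair with a {slot:.3f}-in. slot screen")
```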

FIGURE 8.14
Photograph illustrating extensive use of personal protective equipment (PPE) in performing a pumping test at
a contaminated site.

Numerical groundwater modeling is extensively used in the design of pump and treat
systems as computer programs with graphical user interfaces (GUIs), such as Groundwater
Vistas and Processing MODFLOW, are readily available. Modeling can be used to deter-
mine the optimal locations and number of extraction wells and to determine necessary
flow rates to achieve hydraulic containment of the groundwater plume under different
seasonal conditions. Most importantly, modeling helps evaluate failure mechanisms that
cannot be properly assessed when limited field data exist. For example, Figure 8.15 depicts
a pump and treat design failure that a numerical groundwater model would easily identify.
The groundwater discharge design can also be incorporated into the groundwater
model and is necessary when groundwater discharge will be used to enhance contami-
nant flushing (through discharge upgradient of the plume) or to augment containment
(through discharge downgradient of the plume aimed at establishing a hydraulic barrier).
The most common means of assessing the viability of a pump and treat design with mod-
eling is forward and reverse particle tracking, which helps delineate horizontal and verti-
cal capture zones. Examples of particle tracking used to support a pump and treat design
are provided in Figures 8.16 and 8.17.
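Conceptually, forward particle tracking simply integrates the seepage-velocity field from each particle's starting position. The toy Python sketch below does this by explicit Euler integration for the analytical velocity field of one extraction well superimposed on uniform regional flow; programs such as MODPATH perform the same integration on a numerical head solution. All parameter values are hypothetical.

```python
# Toy forward particle-tracking sketch: explicit Euler integration of the
# analytical seepage-velocity field for one extraction well (at the origin)
# in uniform regional flow. Values are hypothetical.
import numpy as np

Q = 9625.0           # extraction rate, ft^3/day (~50 gpm, assumed)
b, n_e = 30.0, 0.25  # saturated thickness (ft), effective porosity (assumed)
q0 = 25.0 * 0.004    # regional Darcy flux (K*i), ft/day, in +x (assumed)

def seepage_velocity(x, y):
    """Superposition of regional flow and radial flow to the well."""
    r2 = x * x + y * y  # nonzero for all tracked positions here
    u = q0 - Q * x / (2 * np.pi * b * r2)
    v = -Q * y / (2 * np.pi * b * r2)
    return u / n_e, v / n_e  # divide Darcy flux by porosity

dt = 1.0  # time step, days
for y0 in (-2500.0, -400.0, 400.0, 2500.0):  # starting offsets, ft
    x, y = -2000.0, y0                       # release 2000 ft upgradient
    for step in range(20000):
        u, v = seepage_velocity(x, y)
        x, y = x + u * dt, y + v * dt
        if x * x + y * y < 625.0:            # within 25 ft of the well
            print(f"y0 = {y0:6.0f} ft: captured after {step * dt:.0f} days")
            break
    else:
        print(f"y0 = {y0:6.0f} ft: not captured (bypasses the well)")
```

With these assumed values the analytical capture width far upgradient is roughly 3200 ft, so the particles released 400 ft off the flow axis are captured while those released 2500 ft off the axis bypass the well, mirroring the forward-tracking patterns shown in Figure 8.16.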
Numerical fate-and-transport models are often used to help estimate cleanup time
frames for pump and treat systems. However, this is subject to considerable uncertainty
as the contaminant source term is very difficult to represent accurately, particularly
where NAPL or bedrock contamination exists. Therefore, the best application of fate-and-transport modeling in pump and treat design is creating comparative scenarios evaluating the efficacy of different design configurations. In this manner, the relative impact of different pump and treat designs (with or without source term remediation) on cleanup time frames can be assessed. An example of this process is provided in Section 8.4 in the context of technical impracticability.

FIGURE 8.15
Example graphic where flow will move between and beyond extraction wells (symbolized in red) despite the fact that limited aquifer head measurements in the pumping area are higher than the potentiometric surface at each pumping well, which is 360 ft. A numerical groundwater model can be used to evaluate the potential for this occurrence. (Adapted from United States Environmental Protection Agency (U.S. EPA), Pump and Treat Ground-Water Remediation: A Guide for Decision Makers and Practitioners, Office of Research and Development, EPA/625/R-95/005, Washington, DC, 90 pp., 1996a; Cohen, R.M. et al., Methods for Monitoring Pump and Treat Performance, EPA Contract No. 68-C8-0058, Office of Research and Development, EPA/600/R-94/123, Ada, OK, 114 pp., 1994.)

[Figure 8.16: plan-view particle-tracking map showing a surface water body at the top, the contaminant source area at the bottom, a 300-ft scale bar, and observation wells in the T1 through T5, TOW, TCS, and TDW series.]

FIGURE 8.16
Particle tracking simulation under ambient (nonpumping) conditions. Particles, which run through the contaminant source area, will ultimately discharge to the surface water body. Modeling for this figure and Figure 8.17 was conducted in Groundwater Vistas.

FIGURE 8.17
Reverse particle tracking (i.e., running the model backwards) for a circle of particles released around a single extraction well pumping at 20 gallons per minute. The source area is located entirely within the capture zone as delineated by the reverse particle flow paths, indicating that this flow rate and well location should be sufficient for the pump and treat design.
The treatment process design of a pump and treat system is just as important as the extraction and discharge design, which has been the subject of this section because it directly involves the hydrogeologist. Treatment plant design is typically the realm of environmental engineers with expertise in wastewater treatment.

8.2.3 System Optimization


Optimization of a pump and treat system can happen during the initial design stage or
after years of operation. The primary goals of pump and treat optimization are to reduce
the time frame of system operation and to increase system efficiency to lower annual oper-
ating costs and the cumulative life-cycle cost. Strategies for pump and treat optimization
are listed and briefly described below:
Simulation optimization: Most commercially available numerical groundwater model-
ing programs have optimization algorithms that can be used to design the most efficient
pump and treat system with the lowest total flow rate, least number of wells, and the best
overall performance. Simulation optimization involves automated running of hundreds
of scenarios to determine a pumping configuration that results in the optimal capture of
targeted groundwater or that minimizes the time required for groundwater to reach a
threshold concentration. While simulation optimization sounds like the perfect mathe-
matical solution, in practice, it can be very difficult to implement and can fail to adequately
represent site-specific limitations or secondary objectives of the pump and treat system.
For example, pumping restrictions may be necessary to prevent dewatering an adjacent
stream during low-flow conditions. While it is possible to include such constraints in an
optimization algorithm, this further increases complexity beyond the standard trial-and-
error approach. As with most computer modeling exercises, a combination of mathemati-
cal optimization and trial and error with professional judgment will generally yield the
best solution.
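At its simplest, the trial-and-error end of this spectrum can be automated as a screening loop. The Python sketch below searches candidate pumping rates for the lowest rate whose analytical capture width spans a hypothetical plume while honoring a maximum-rate constraint (for example, to limit stream depletion); a true simulation-optimization run wraps a numerical model in the same kind of loop. All values are hypothetical, and the capture-width expression is the same screening-level formula used earlier in this section.

```python
# Toy "trial-and-error" screening loop: find the lowest total pumping rate
# whose analytical capture width spans a 600-ft-wide plume while honoring
# a low-flow constraint near a stream. A real simulation-optimization run
# wraps a numerical model in the same loop; all values are hypothetical.
gpm_to_cfd = 0.133681 * 1440        # gallons/min -> cubic feet/day
b, K, i = 30.0, 25.0, 0.004         # thickness (ft), K (ft/day), gradient
plume_width_ft = 600.0              # target plume width (assumed)
q_max_gpm = 60.0                    # stream-depletion constraint (assumed)

for q_gpm in range(5, 65, 5):
    width = (q_gpm * gpm_to_cfd) / (b * K * i)  # far-field capture width, ft
    if width >= plume_width_ft and q_gpm <= q_max_gpm:
        print(f"Lowest feasible rate: {q_gpm} gpm "
              f"(capture width ~{width:.0f} ft)")
        break
```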
Optimization of equipment sizing: One common problem in pump and treat design is the
overdesign of equipment, which consists of sizing equipment to be excessively large to con-
servatively ensure that peak flows and concentrations can be accommodated. Overdesign
can lead to higher than necessary capital costs, gross inefficiency in energy usage, and
even poor system performance for process elements when operating concentrations and/
or flow rates are significantly lower than design values. For example, using 5-HP pumps
in extraction wells when 1-HP pumps will suffice can result in excess energy costs of more
than $10,000 a year (U.S. EPA 2005). Optimization of treatment-system sizing should be
conducted during both the initial design and periodic reviews of system operation. One
key step that can prevent overdesign of treatment equipment is the use of concentration
data obtained from monitoring wells during pumping conditions at design flow rates (i.e.,
through preliminary aquifer tests), rather than under ambient conditions (U.S. EPA 2005).
Example concentration differences at individual monitoring wells measured under pump-
ing and ambient conditions and the effect of these differences on the design influent con-
centration are presented in Figure 8.18.
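The arithmetic behind such estimates is straightforward, as the hedged Python sketch below illustrates; the wire-to-water efficiency, electricity rate, number of wells, and the simplification that pumps draw nameplate load continuously are all assumed values chosen only to reproduce the order of magnitude cited above.

```python
# Back-of-envelope sketch of the energy penalty of oversized well pumps.
# 1 HP = 0.746 kW; the wire-to-water efficiency, electricity rate, and
# number of wells are assumed, and continuous nameplate-load operation
# is a deliberate simplification.
hp_oversized, hp_right = 5.0, 1.0
n_wells = 2               # number of extraction wells (assumed)
efficiency = 0.60         # overall pump/motor efficiency (assumed)
rate_usd_per_kwh = 0.12   # electricity price (assumed)
hours = 24 * 365          # continuous operation

def annual_cost(hp):
    kw = hp * 0.746 / efficiency
    return kw * hours * rate_usd_per_kwh

excess = n_wells * (annual_cost(hp_oversized) - annual_cost(hp_right))
print(f"Excess energy cost: ${excess:,.0f} per year")
# Prints roughly $10,000/yr, consistent with the figure cited above.
```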
Phased extraction well construction: Installing and initializing operation of extraction wells
in phases can significantly increase design efficiency, particularly where limited predesign
data are present. With a phased approach, the aquifer response data from the first phase of
well installation and operation can be used to better design well locations and target flow rates of subsequent phases (U.S. EPA 1996a).

[Figure 8.18: bar chart of TCE concentrations (axis 0–60,000) at monitoring wells MW-1 through MW-4 (pumped at 20–30 gpm each) and the computed design influent under ambient versus sustained design pumping conditions; the ambient-based design influent (18,230) is nearly an order of magnitude higher than the pumping-based value (1,932).]

FIGURE 8.18
Graph illustrating how TCE concentrations at monitoring wells within a TCE plume adjacent to proposed extraction wells are significantly lower when sampled under sustained, 100 gallons per minute pumping conditions versus ambient conditions (for example, during a remedial investigation sampling event). If the ambient monitoring results are used for the pump and treat design, the actual influent concentration at system startup (and equivalent TCE mass loading rate) will be nearly an order of magnitude lower than the design value. This will result in a significantly overdesigned system with unnecessarily high operational costs. Such concentration reductions are commonly seen during sustained pumping because of changes in groundwater flow patterns, plume dilution with clean groundwater, and potential changes in redox conditions. (Adapted from United States Environmental Protection Agency (U.S. EPA), Cost-Effective Design of Pump and Treat Systems, Office of Solid Waste and Emergency Response, EPA 542-R-05-008, 2005. Available at http://www.clu-in.org/download/remed/hyopt/factsheets/cost-effective_design.pdf, accessed July 15, 2011.)
Adaptive and pulsed pumping: Adaptive pumping involves the design of the well field
such that operating conditions can be varied to minimize the buildup of stagnation zones.
Under adaptive pumping, wells are periodically shut down and operated at varying flow
rates. Pulsed pumping is similar, but it involves temporarily shutting down wells with
the specific objective of allowing contaminant concentrations to increase through dissolu-
tion, diffusion, and desorption before resuming pumping (U.S. EPA 1996a). Use of pulsed
pumping improves the ratio of contaminant mass removed to the volume of groundwater
pumped and treated, thereby improving system efficiency and lowering operating costs
(but not decreasing cleanup time frames).
Long-term operational changes: As illustrated in Figure 8.7, significant dissolved-phase con-
centration reductions are generally realized within a few years of pump and treat opera-
tion. This is especially true when a pump and treat system is combined with a short-term
in situ source area remediation. After this period of significant decline, tailing effects lead
to a prolonged pump and treat operation, with concentrations generally remaining above cleanup goals and changing very slowly over a long duration. However, this post-source-remediation/post-significant-decline period has a substantially lower influent concentration than
the design condition, and it is likely that the spatial distribution of contaminants is con-
siderably different as well. As a result, after several years of operation, it may be possible
to eliminate certain process treatment elements for contaminants that no longer exceed
regulatory standards, or it may be possible to shut down peripheral extraction wells and
focus more on remaining areas of relatively high concentration.
Alternatively, it may be more beneficial to replace original extraction wells with new
ones in locations better suited to the current nature and extent of contamination. If the
flow rate can be lowered significantly by removing wells, it may also become possible to
switch the mechanism of treated effluent disposal. For example, groundwater discharge
may become feasible at a lower flow rate, which would enable the elimination of certain
treatment elements as regulatory concentration standards for groundwater discharge are
generally significantly higher than those for surface water discharge, primarily as a result
of the absence of ecological receptors.
Optimization may also involve making the pump and treat monitoring program more
efficient by removing redundant wells and eliminating unnecessary analyses. As previ-
ously mentioned, the public-domain programs Monitoring and Remediation Optimization
System (MAROS) and Visual Sample Plan can be used to evaluate well redundancy. The
formalized framework for conducting a postconstruction pump and treat optimization
analysis is outlined in U.S. EPA (2007b), which also lists other strategies for consideration.

8.3 In Situ Remediation


8.3.1 Introduction
As discussed in Section 8.1, the use of in situ remediation technology has grown signifi-
cantly over the past 20 years. Currently, there are numerous options available to the hydro-
geologist to remove contaminant mass in situ from source zones and dissolved-phase
groundwater plumes. Technology screening and selection is a critical process required of
every remedial project, and during this phase, the optimal technology is selected consid-
ering the following factors:

• The hydrogeological CSM
• Protectiveness of human health and the environment
• Applicable regulatory standards
• Short- and long-term implementability and effectiveness
• Cost
• State and community acceptance
• Energy and other resource requirements

For Superfund sites, technology assessment and selection generally occur during the
Feasibility Study (FS) and/or predesign phases when treatability studies can be conducted
and site-specific cleanup levels developed, considering the above factors. The treatment
technology, cleanup criteria, and performance standards are finalized in the ROD. While
exact cleanup standards can vary significantly based on a site-specific risk characterization
and the beneficial use of groundwater, a general expectation of the Superfund program is
for remediation systems to achieve a 90% or greater reduction in the concentrations or mobility of the COCs (U.S. EPA 1989a).
There is no one-size-fits-all in situ technology, and the hydrogeologist should actively
review all technologies to determine which one best suits the CSM. The reader is referred to
U.S. EPA (2010) and the work of Kresic (2009) for a comprehensive survey of available source
and plume remediation technologies to consider at hazardous-waste sites. The remainder of
this section highlights a selection of groundwater in situ remediation technologies that are
increasingly used in the marketplace and are actively supported by the regulatory commu-
nity. In particular, in situ thermal treatment and in situ chemical oxidation are often touted
as technologies capable of achieving closure at DNAPL sites (U.S. EPA 2009a). Emphasis in
this section is placed on describing important remedial concepts for each technology and
presenting example data visualizations from real-life remediation projects.
8.3.2 In Situ Thermal Treatment


8.3.2.1 Design Concepts
Thermal remediation technology has its origins in the petroleum industry, where, for many
years, subsurface heating and steam injection have been used to enhance oil recovery from
heavy (high specific gravity) oil deposits, oil sands, and oil shales. The application of thermal technology to
groundwater remediation projects is a logical extension as, in many cases, contamination
is caused by NAPLs such as oils and chlorinated solvents. To reiterate an earlier point, in
situ thermal treatment is an enhanced form of SVE as volatilized contaminants must be
captured and treated in the vapor phase. Thermal treatment also typically involves multi-
phase extraction wells. Thermal treatment has many advantages over ambient-temperature
SVE as thermal heating of contaminated zones greatly enhances mass removal (United
States Army Corps of Engineers [USACE] 2009) by

• Increasing contaminant vapor pressures
• Decreasing NAPL viscosities
• Increasing contaminant solubilities and diffusivities
• Increasing biological activities

In addition, the thermal conductivity of soil is generally much less variable than conventional
remediation parameters such as soil permeability. Therefore, in situ thermal treatment can
remediate low-permeability materials, such as silt and clay, in which advective flow cannot
be established. Another important advantage of thermal heating is that the combined boiling
point of a volatile organic compound (VOC) immersed in water is lower than that of the VOC
in air as governed by Dalton’s law of partial pressures. For example, the boiling point of tetra-
chloroethene (PCE) is 121°C in air but only 87°C in water. This means that PCE will boil and
volatilize at a temperature lower than that of water (Beyke and Fleming 2005).
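This co-boiling behavior can be approximated from vapor-pressure correlations: the water/PCE pair boils where the sum of the two vapor pressures reaches atmospheric pressure (760 mmHg). The Python sketch below uses a standard Antoine correlation for water and a two-point Clausius-Clapeyron fit for PCE (anchored at an approximate 25°C vapor pressure and the 121°C normal boiling point); the fit is an approximation, not handbook data, and it lands close to the ~87°C value cited above.

```python
# Hedged sketch of Dalton's-law co-boiling: water and PCE boil together
# where the sum of their vapor pressures reaches 760 mmHg. Water uses a
# standard Antoine correlation; PCE uses a two-point Clausius-Clapeyron
# fit through (~25 C, ~18.5 mmHg) and the 121 C normal boiling point --
# an approximation, not vendor or handbook data.
import math

def p_water(t_c):
    """Antoine equation for water, mmHg (valid ~1-100 C)."""
    return 10 ** (8.07131 - 1730.63 / (233.426 + t_c))

# Clausius-Clapeyron fit for PCE: ln P = A - B / T(K)
T1, P1 = 298.15, 18.5    # ~25 C reference vapor pressure, mmHg (approx.)
T2, P2 = 394.15, 760.0   # normal boiling point, 121 C
B = math.log(P2 / P1) / (1.0 / T1 - 1.0 / T2)
A = math.log(P2) + B / T2

def p_pce(t_c):
    return math.exp(A - B / (t_c + 273.15))

# Bisection for the co-boiling temperature
lo, hi = 50.0, 100.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if p_water(mid) + p_pce(mid) < 760.0:
        lo = mid
    else:
        hi = mid
print(f"Estimated water/PCE co-boiling point: {mid:.1f} C")
# Prints roughly 88 C, close to the ~87 C cited in the text.
```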
The three primary commercially available forms of in situ thermal treatment are

• Electrical resistivity heating (ERH)
• Steam-enhanced extraction (SEE)
• Conductive heating

Each of these technologies is described below, highlighting important performance considerations.

Electrical Resistivity Heating


ERH has been demonstrated as an effective technology for the removal of volatile and
some semivolatile constituents from soil and groundwater. ERH heats the subsurface
by passing electrical current between electrodes through saturated or unsaturated soil
(United Facilities Criteria [UFC] 2006). The target temperature in the saturated zone is
typically the boiling point of water, which is also the maximum achievable temperature
with the technology. ERH heating evaporates VOCs in situ and steam-strips them from
the subsurface (Powell et al. 2007). At first, electrical current passes preferentially through
areas with high concentrations of chlorides, which are generally the most conductive soil
or groundwater zones. Less electrically conductive soil horizons heat up as water in the
high-conductivity zones is boiled away. When heat losses are minimal or well controlled, a
relatively uniform heating of the desired remediation region is achieved (UFC 2006).
The predominant design concerns for ERH systems are moisture loss and high ground-
water flows (Kingston et al. 2009). To avoid excessive soil drying and the resultant loss of
electrical conduction, ERH systems typically incorporate wetting systems around elec-
trodes. High groundwater seepage velocities in excess of several feet per day can cause
significant heat losses as warm water is continuously flushed out of the treatment area and
replaced with cool water. This limitation is typical of karst aquifers with preferential flowpaths, for example.
downgradient injection wells can help reduce excessive groundwater flux and associated
heat losses (Kingston et al. 2009).

Steam-Enhanced Extraction
SEE for hazardous-waste remediation involves the use of steam injection wells to create
a pressure gradient for recovery of NAPLs and to heat the subsurface to volatilize and
extract contaminants in the vapor phase (UFC 2006). Similar to ERH, SEE is capable of
heating the subsurface to a maximum temperature equal to the boiling point of water or
the steam distillation temperature at approximately 100°C. After NAPL has been displaced
and recovered in the liquid phase, pressure cycling may be used to enhance mass removal
in the vapor phase. Pressure cycling promotes contaminant volatilization by creating ther-
modynamically unstable conditions within soil pores (UFC 2006).
The primary design parameter for SEE is the permeability of the material to be treated as
this dictates injection flow rates and pressures. Unlike ERH, SEE can have difficulty treating
fine-grained soils because the permeability may be insufficient to conduct steam. However,
it may be possible to remediate a low-permeability zone if it is of limited thickness such that
steam can be injected above and below the zone, and heat (not steam) will conduct through it
(USACE 2009). It can also be difficult to treat shallow soils with SEE as steam can escape to the
surface. One rule of thumb is that the injection pressure should not exceed 0.5 psig/ft of over-
burden located above the injection screen. Injection at higher pressures may lead to formation
fracturing and surface escape of steam. In general, sites with thick clay zones or competent,
minimally fractured bedrock are not amenable to SEE treatment (Kingston et al. 2009).

Conductive Heating
As the name implies, this technology heats the subsurface primarily through conductive
heat transfer. Heat is provided at point-source vertical or horizontal heater units installed
in the subsurface, termed wells and blankets, respectively. Heaters are typically operated
at temperatures above 500°C, and heat spreads from these units through the subsurface by
thermal conduction and fluid convection (i.e., hot water flow; Kingston et al. 2009).

Unlike both ERH and SEE, thermal conductive heating is capable of heating the subsurface
to temperatures significantly greater than the boiling point of water. Therefore, in addition to
chlorinated solvents and light petroleum hydrocarbons, conductive heating can be used to
remediate high-boiling-point contaminants with low volatility, such as coal-tar products and PCBs (USACE 2009). Similar to ERH, sites with high groundwater velocities are chal-
lenging to remediate with conductive heating as heat losses may be excessive and prevent
achievement of the target temperature. Installation of a groundwater management system
for flux control can help mitigate this limitation (Kingston et al. 2009).

Performance Expectations
The performance of thermal remediation projects completed to date is comprehensively
reviewed in a state-of-the-practice overview presented in the work of Kingston et al. (2009
[overview report], 2010 [detailed final report]). With respect to remedial performance,
Kingston et al. (2009) states

“Despite the relatively large number of applications to date, there are limited data
on post-treatment monitoring. Of the 182 sites, there was sufficient documentation to
assess post-treatment groundwater quality improvements and source zone mass dis-
charge reductions for only 14 applications.”

This lack of available performance data makes it very difficult to define accurate perfor-
mance expectations. This problem is not necessarily limited to thermal remediation technolo-
gies as a majority of site-remediation projects are either confidential or have involved parties
that are very sensitive about the dissemination of site data and treatment results. Therefore, it
becomes very difficult to practice evidence-based remediation as the successes and failures of
previous remediation projects are not well documented, and too much reliance is placed on
the judgment of regulators and technology vendors. There is somewhat of a conflict of interest
here as both regulators and technology vendors have incentives to portray every remedia-
tion as a success regardless of its ability to efficiently meet the original objectives. While it is
acknowledged that entities responsible for remediating sites do not want it widely known that
they are involved (most likely because of fear of additional litigation), the practice of minimiz-
ing data distribution to the general public is very damaging to their own interests, and addi-
tional efforts should be expended to publish and consolidate data from remediation projects.
Based on the 14 applications with sufficient documentation, Kingston et al. (2009) con-
clude that in situ thermal treatment can achieve one to two orders of magnitude reduc-
tions in dissolved groundwater concentration and mass discharge when the source zone is
appropriately defined. Better performance can be achieved by overdesigning the system to
extend beyond the delineated source zone and by optimizing systems and allowing them
to run for longer durations (Kingston et al. 2009). However, extending the treatment zone
size and heating duration result in significant cost increases for a technology that already
typically costs several or more million dollars to implement. For example, SEE heating at the
Visalia Pole Yard site in Visalia, CA, lasted for approximately three years and resulted in
a thermal remediation cost of $21.5 million (USACE 2009). Additional discussion regarding
this site is provided in Section 8.5. In summary, while thermal remediation is able to effec-
tively treat DNAPLs and overcome many of the difficulties presented by soil heterogeneity
and contaminant partitioning, it is not a perfect solution capable of uniformly achieving
MCLs and rapid site closure, and when attempting to do so, costs can escalate significantly.
Given the lack of reliable performance data in the literature, one approach to estimat-
ing remedial performance and cost is thermal bench testing, which is a relatively simple
and inexpensive process.

FIGURE 8.19
Typical energy pie illustrating consumptive uses of energy during ERH remediation. (Courtesy of TRS Group, Inc.)

Figure 8.19 is a typical energy pie, illustrating the percentage of
contribution of different components to the total energy consumed during a typical ERH
application. While heat losses and the energy required to heat the subsurface typically
fall within consistent ranges, the energy required to boil VOCs is a factor of site-specific
geology and contaminants, and the site-specific remedial goals. A thermal bench test can
be used to quantify the approximate boiling energy required to reach remedial goals,
thereby enabling estimation of remedial costs and performance expectations. One bench-
testing approach employed by TRS Group, Inc., the leading ERH technology vendor, is to
measure concentrations of target VOCs in site soil samples after boiling off incremental
quantities of water from the samples. The boil-off quantities can easily be converted to
boiling-energy densities (in kilowatt hours per cubic yard), using the latent heat of evapo-
ration of water, and the result is a plot of target VOC concentrations versus boiling-energy
density as shown in Figure 8.20. This graph can then be used to estimate boiling energy requirements to achieve different levels of mass removal. Thermal bench testing is an important step in verifying the applicability of the technology to a site as pilot testing is typically not a realistic option for most sites because of its very high cost.

[Figure 8.20: plot of target VOC soil concentration (0–40, in mg/kg) versus boiling energy density (0–140 kWh/yd3), declining approximately linearly.]

FIGURE 8.20
Boiling energy plot for a VOC determined from thermal bench test results. The remedial goal is a soil concentration of 10 mg/kg, which would require a boiling energy density of approximately 100 kWh/yd3 based on linear interpolation.
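The interpolation described in the Figure 8.20 caption is essentially a one-line calculation, as the Python sketch below shows; the bench-test data pairs are hypothetical values consistent with the figure's axes, not the actual laboratory results.

```python
# Minimal sketch of the linear interpolation behind Figure 8.20. The
# bench-test pairs below are hypothetical values consistent with the
# figure's axes, not the actual laboratory data.
import numpy as np

energy = np.array([0.0, 25.0, 50.0, 75.0, 100.0, 125.0])  # kWh/yd^3
conc = np.array([38.0, 31.0, 24.0, 17.0, 10.0, 3.0])      # VOC, mg/kg

goal = 10.0  # remedial goal, mg/kg
# np.interp needs ascending x values, so interpolate on reversed arrays:
required = np.interp(goal, conc[::-1], energy[::-1])
print(f"Boiling energy density to reach {goal} mg/kg: ~{required:.0f} kWh/yd^3")
```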

8.3.2.2 Case Study


The ERH remediation at Fort Lewis, WA, is believed to be the most studied application
of in situ thermal remediation completed to date (Kingston et al. 2010). The remediation
was performed by TRS and consisted of thermal treatment of three light non-aqueous
phase liquid (LNAPL)/DNAPL areas (areas 1, 2, and 3) within the East Gate Disposal
Yard, a former landfill that was part of the site logistics center. The total remediation area
was approximately 1.5 acres. Maximum pretreatment TCE concentrations in ground­
water ranged from 10,000 to 100,000 µg/L, depending on the area (Bussey 2007). Site geol-
ogy generally consists of heterogeneous glacial outwash, till, and alluvial deposits with
groundwater typically encountered between 9 and 12 ft below the ground surface. NAPL
was observed at depths ranging from near the ground surface to as much as 52 ft below
ground surface. Hydraulic conductivity varies with soil type and ranges from 10 to 1000
ft per day. Gravel zones with high conductivity and seepage velocities presented a signifi-
cant challenge for the remediation because of heat losses, described further later in this
section (USACE 2007).
The performance standards for the Fort Lewis remediation were as follows (Beyke and
Fleming 2005):

• Minimize the time to implement the remedy while maximizing mass removal.
• Establish and verify that the subsurface reaches target temperatures of 90°C in the vadose zone and 100°C in the saturated zone.
• Maintain these target subsurface temperatures for a minimum of 60 days.
• Establish, maintain, and verify control of contaminant migration in groundwater,
soil vapors, and air emissions.
• Provide a system for near real-time data delivery, performance and compliance
monitoring, and project communications.

In total, 304 electrodes with colocated multiphase extraction wells, 62 monitoring wells,
58 temperature monitoring points, and 16 hydraulic control wells were installed for the
remediation. The maximum electrode depths below ground surface were 39, 52, and 37 ft
in areas 1, 2, and 3, respectively (USACE 2007). Extracted vapors were treated with a ther-
mal oxidizer unit. The ERH layout for area 2 is depicted in Figure 8.21. A conceptual depic-
tion of the ERH process is provided in Figure 8.22, and a photograph of the installed ERH
system at Fort Lewis is provided in Figure 8.23.
The heating durations at areas 1, 2, and 3 were 231, 172, and 107 days, respectively
(USACE 2007). The total energy consumption for the remediation was approximately
23,000 MW-hours, which is equivalent to the annual energy requirements of approximately
1800 single-family homes (Bussey 2007; U.S. EPA 2011b). In terms of performance, the ERH
remediation removed approximately 4500 kg of TCE from the subsurface before reaching
a point of diminishing returns (Bussey 2007). A chart of TCE mass removal from each of
the remedial areas is presented in Figure 8.24. In area 1, the maximum post-treatment con-
centration was less than 10 µg/L while the average post-treatment TCE concentrations in
areas 2 and 3 were less than 150 µg/L (Bussey 2007). As shown by the TCE concentration

FIGURE 8.21
ERH layout for Fort Lewis area 2, depicting locations of electrodes, hydraulic control wells, temperature moni-
toring points, and the treatment system. (From United States Army Corps of Engineers (USACE), Cost and
Performance Report: In Situ Thermal Remediation (Electrical Resistance Heating) East Gate Disposal Yard, Ft.
Lewis, WA, 2007.)

plot in Figure 8.25, minor rebound occurred at area 3 following the treatment; however,
this rebound was only temporary as concentrations continued to decline significantly in
the subsequent months, as shown by Figure 8.26.
As indicated by the average treatment zone temperature plot in Figure 8.25, the boil-
ing point of water was not achieved throughout the treatment zone for area 3, nor was
it in the other two areas. This was because of unexpectedly high seepage velocities (up
to 10 ft per day) through gravel seams in the treatment zone (Beyke and Fleming 2005).
After it became clear that high groundwater flux made attainment of the temperature
performance standard infeasible, a modified temperature goal was prescribed based on
the boiling temperature of TCE. The following excerpt is from the August 2007 Cost and
Performance Report for the Fort Lewis site (USACE 2007):

“The initial contract expectation for consistent heat-up throughout the defined treat-
ment area proved unrealistic. In all three areas hydrological and thermal equilibrium
were reached before the contract temperature specifications were achieved for all
depths. The intervals that proved the most difficult to heat-up were locations with the
highest groundwater flow velocities and lowest potential residual contamination.”

The modified temperature specification ensured that the most important remedial objec-
tive was achieved (removal of TCE DNAPL) while also allowing the remediation to proceed
Groundwater Remediation 451
Downloaded by [University of Auckland] at 23:45 09 April 2014

FIGURE 8.22
Conceptual cross section of an ERH application showing colocated multiphase extraction wells and electrodes,
and above-ground infrastructure. (Courtesy of TRS Group, Inc.)

FIGURE 8.23
Photograph of ERH system at Fort Lewis. Note that the surface was paved to minimize cooling effects of aerial
recharge. (Courtesy of TRS Group, Inc.)
452 Hydrogeological Conceptual Site Models

3,500
Area 1 Total: 2,981 kg
3,000 Area 2 Total: 1,334 kg
Area 3 Total: 1,132 kg
2,500
Total Kilograms

2,000

1,500

1,000
Downloaded by [University of Auckland] at 23:45 09 April 2014

500

0
0 28 56 84 112 140 168 186 224
Days of Treatment

FIGURE 8.24
Plot of cumulative TCE mass removal for each of the three ERH remedial areas at Fort Lewis. Note that for area
3 in particular, but also area 2 to a lesser extent, thermal treatment continued for many days despite diminished
TCE mass removal. (From United States Army Corps of Engineers (USACE), Cost and Performance Report: In
Situ Thermal Remediation (Electrical Resistance Heating) East Gate Disposal Yard, Ft. Lewis, WA, 2007.)

100000 90
C07
80
E03
10000 Average Temperature 70

60

1000 50

40

100 30

20

10 10

Date

FIGURE 8.25
Plot of groundwater TCE concentrations at two source-area wells before, during, and after the ERH remedia-
tion at area 3, using data from Kingston et al. (2010). The interval of thermal treatment is shaded in gray, and
the dashed black line represents the average temperature of the treatment zone, which peaked in the high 80s.

FIGURE 8.26
Bar graph of average TCE groundwater concentrations in the three treatment areas before, immediately after,
and 20 months after completion of the remediation. The continued decline at area 3 resulted in a nearly four-log
removal of TCE. (Courtesy of TRS Group, Inc.)

Contours of the remedial temperatures achieved for a target depth at area 2 are presented in Figure 8.27, and a plot of treatment-zone temperature during a typical ERH application is provided in Figure 8.28.
Note that if the target contaminant had been a more recalcitrant compound with a higher boiling point, such as naphthalene, the failure to reach the boiling point of water could have resulted in poor remedial performance, because steam stripping, rather than pure-compound boiling, would have been the primary contaminant-removal mechanism.
While the Fort Lewis remediation project was successful in removing significant
DNAPL mass, the total remedial cost was approximately $15 million, and TCE concen-
trations remain above the MCL, requiring continued operation of the site’s preexisting
pump and treat system. However, modeling indicates that the ERH application will likely
reduce the operating duration of the pump and treat system from centuries to decades
(Bussey 2007). While this is an important accomplishment, a 2009 (post-thermal remediation) cost and performance report prepared by the Environmental Security Technology Certification Program (ESTCP) using actual site data indicates that over this time frame (i.e., decades), in situ bioremediation may have achieved similar performance results at a lower life-cycle cost (ESTCP 2009). It is, therefore, always important to evaluate technologies other than in situ thermal remediation, taking into consideration the likelihood that MCLs will not be met in a few years even with aggressive thermal treatment.

FIGURE 8.27
Contours of treatment zone temperature for a target depth in area 2 created using data measurements from TRS
(2009). World Imagery Source: Esri®, i-cubed, USDA, USGS, AEX, GeoEye, Getmapping, Aerogrid, IGN, IGP, and
the GIS User Community.

[Plot: temperature (°C, 0–140) versus elevation (ft AMSL, 215–275), showing the coolest point, average temperature, hottest point, and temperature goal.]

FIGURE 8.28
Example graph of temperatures achieved during an ERH remediation. (Courtesy of TRS Group, Inc.)

8.3.3 In Situ Chemical Oxidation


8.3.3.1 Design Concepts
In situ chemical oxidation (ISCO) involves the application of chemical oxidants to the sub-
surface to break down groundwater or soil contaminants into innocuous substances, such
as carbon dioxide, water, or chloride. ISCO is based on redox reactions in which the oxi-
dant is reduced by accepting electrons released from the transformation (oxidation) of the
target organic contaminant. The major benefits of ISCO are that minimal remedial waste
material is generated, and treatment can be completed in a relatively short time frame (i.e.,
weeks or months versus years). In addition, when applied appropriately, ISCO can be a
very cost-effective mass removal technology. Key limitations of ISCO are that oxidation occurs primarily in the dissolved aqueous phase, so treatment kinetics may become desorption limited, and that the destruction of NAPLs, although possible, may require a cost-prohibitive oxidant dose (Interstate Technology and Regulatory Council [ITRC] 2005).
ITRC (2005) lists four main factors influencing whether a contaminant will be success-
fully oxidized in the field, presented and described next:

• Stoichiometry: dictating the theoretical mass of oxidant required to mineralize the target contaminants through electron-transfer balances (a dosing sketch follows this list). Note that ISCO with potassium permanganate, a common oxidant, involves direct electron transfer, whereas other oxidants such as persulfate and hydrogen peroxide act through free radicals such as the hydroxyl radical (OH•) and the sulfate radical (SO4•−).
• Kinetics: the most important factor at a microscale dictating the rate of oxidation
reactions based on temperature, pH, reaction byproducts, natural organic matter,
and the presence of oxidant scavengers.
• Thermodynamics: dictating whether an ISCO reaction will theoretically proceed
based on the oxidation potential of the reagent in question.
• Delivery technique: dictating how oxidants are introduced to the target treatment
zone. Typical delivery methods include injection through temporary or perma-
nent wells, gravity infiltration through trenches, direct mixing or tilling into soil,
and placement in natural or engineered fractures (Tsitonaki and Bjerg 2008). A pri-
mary limitation of ISCO is the difficulty in applying reagents to low-permeability
soils in the saturated zone as only very low injection rates can be achieved.
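To make the stoichiometry factor concrete, the short Python sketch below estimates the theoretical permanganate demand for TCE and PCE from an electron-transfer balance. The molecular weights and electron counts are standard values for complete mineralization to CO2 and chloride, but this is a screening calculation only; it deliberately ignores natural oxidant demand, which is discussed later in this section and often dominates the field dose.

```python
# Hedged sketch: stoichiometric (theoretical) permanganate demand from an
# electron-transfer balance. Electron counts assume complete mineralization
# to CO2 and chloride; verify against site-specific chemistry.

MW_KMNO4 = 158.03        # g/mol, potassium permanganate
E_PER_MNO4 = 3           # electrons accepted per MnO4-: Mn(VII) -> Mn(IV)

# contaminant: (molecular weight g/mol, electrons released on mineralization)
CONTAMINANTS = {
    "TCE": (131.39, 6),  # C2HCl3 -> 2 CO2 + 3 Cl-
    "PCE": (165.83, 4),  # C2Cl4  -> 2 CO2 + 4 Cl-
}

def stoichiometric_kmno4_kg(contaminant: str, mass_kg: float) -> float:
    """Theoretical KMnO4 mass (kg) to mineralize mass_kg of contaminant."""
    mw, electrons = CONTAMINANTS[contaminant]
    mol_contaminant = mass_kg * 1000.0 / mw
    mol_oxidant = mol_contaminant * electrons / E_PER_MNO4
    return mol_oxidant * MW_KMNO4 / 1000.0

if __name__ == "__main__":
    for name in CONTAMINANTS:
        ratio = stoichiometric_kmno4_kg(name, 1.0)
        print(f"{name}: {ratio:.2f} kg KMnO4 per kg contaminant")
    # TCE: ~2.41 kg/kg; PCE: ~1.27 kg/kg
```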

Evaluation of the above design factors is largely dependent on the selected oxidant.
The most common oxidants currently used to remediate groundwater are permanga-
nate (potassium or sodium), catalyzed hydrogen peroxide (CHP; e.g., modified Fenton’s
Reagent), and activated sodium persulfate. Each of these technologies is briefly described
below, highlighting important benefits and limitations.

Permanganate
Chemical oxidation of pollutants in drinking water with the permanganate ion (MnO4–)
has been used by engineers for decades. Permanganate, primarily potassium permanga-
nate (KMnO4) but also sodium permanganate (NaMnO4), is now the most widely used
ISCO reagent for groundwater remediation projects (Tsitonaki and Bjerg 2008). This is
most likely because of its ease of handling and use and its relatively low cost. Additional
benefits of permanganate are as follows:

• Permanganate is a highly stable oxidant that can persist in the subsurface for
months after an application. This facilitates permanganate distribution in ground-
water and provides time for diffusion into low permeability layers.
• Permanganate is effective over a wide range of pH.
• No activation method is required as oxidation occurs via direct electron transfer.
• The ability of permanganate to degrade chlorinated ethenes (PCE, TCE, and asso-
ciated degradation products) and other unsaturated organic compounds has been
extensively documented at the field scale (ITRC 2005).

The primary limitations of permanganate technology are as follows:

• Permanganate is ineffective at degrading chlorinated alkanes (e.g., 1,1,1-TCA), carbon tetrachloride, and aromatic compounds such as benzene.
• Creation of manganese oxide precipitates during permanganate injection can
reduce subsurface permeability and permanently foul injection wells.
• While sodium permanganate can be mixed at higher concentrations than potas-
sium permanganate, it is significantly more reactive and can result in hazardous
conditions such as heat evolution.
• Permanganate is highly reactive with natural organic matter. In diffusion-limited,
low-permeability settings, it is estimated that natural oxidant demand may con-
sume up to 90% of the applied permanganate (Tsitonaki and Bjerg 2008).

In addition, a potential problem with permanganate and all other ISCO reagents is that
treatment may mobilize metals because of redox reactions and/or pH changes. However,
this metal mobilization period is often short-lived and, therefore, rarely results in signifi-
cant migration beyond the treatment zone (ITRC 2005).

Catalyzed Hydrogen Peroxide


The CHP ISCO process involves the catalyzed decomposition of hydrogen peroxide (H2O2) by soluble iron, iron chelates, or iron minerals to produce the hydroxyl radical (OH•), which is one of the strongest oxidants in existence. A chain-propagating reaction of the hydroxyl radical with hydrogen peroxide produces other important reactive oxygen species, such as the perhydroxyl radical (HO2•), the superoxide radical anion (O2•−), and the hydroperoxide anion (HO2−). These diverse oxygen species represent a mixture of oxidants, reductants, and nucleophiles that can degrade almost all organic contaminants (Watts et al. 2006). Therefore, the major advantages
of catalyzed hydrogen peroxide are its great reactive strength and its versatility in degrading
a wide range of contaminants. Note that ITRC (2005) does list chloroform and pesticides as
recalcitrant to CHP remediation. Additional benefits of catalyzed hydrogen peroxide are as
follows (Watts et al. 2006):

• Compared to other ISCO technologies, CHP achieves significantly greater treatment of sorbed contaminants and DNAPLs, primarily as a result of the activity of superoxide. The rate of sorbed contaminant or DNAPL destruction can be up to 100 times faster with CHP than with technologies in which treatment is desorption or dissolution limited.
• CHP is insensitive to the presence of natural organic matter, and overall CHP
stability is a significantly more important design factor than soil oxidant demand.

• The range of oxygen species produced by the CHP process includes both oxidants
and reductants, increasing the likelihood of complete contaminant mineralization.

Significant limitations of CHP technology are as follows:

• CHP is the least stable of the conventional ISCO reagents and rapidly decomposes
in the subsurface. Half-lives for CHP during ISCO applications can vary signifi-
cantly but rarely exceed 48 hours (Watts 2011). This significantly limits the achiev-
able injection well radius of influence.
• Without proper stabilization, CHP ISCO can result in rapid and dangerous heat
and gas evolution, which can also cause unwanted contaminant migration.
• Iron mineral catalysts require pH adjustment (lowering) with acids.
• Carbonate ions can exert non-target demand on the hydroxyl radical.


• Overdosing of hydrogen peroxide can result in the scavenging of the hydroxyl
radical by the hydrogen peroxide itself (ITRC 2005).

The best current CHP stabilization practice to avoid unwanted gas and heat evolution is to
combine H2O2 with iron chelates to allow reactions to occur at a neutral pH. Proper stabi-
lization enables application at high concentrations (>1%), facilitating generation of super-
oxide, which has been proven critical in expanding the capability of CHP to remediate
compounds traditionally recalcitrant to oxidation and in enhancing destruction of NAPLs
and sorbed contaminant mass (Watts et al. 2006).

Activated Sodium Persulfate


Activated sodium persulfate (Na2S2O8) has recently emerged as a viable ISCO alternative
to permanganate and CHP by resolving the primary limitations of each of the aforemen-
tioned technologies:

• CHP’s instability and rapid decomposition in the subsurface


• Permanganate’s narrow treatment spectrum and rapid consumption by natural
organic matter

Activated sodium persulfate treats the same range of contaminants as CHP but is much
more stable with typical subsurface half-lives between 10 and 20 days. Activated sodium
persulfate is also less reactive with natural organic matter than permanganate, and its
chemical reactions have been shown to increase subsurface permeability in some instances.
On its own, the persulfate anion is a more powerful oxidant than both hydrogen peroxide and permanganate, although reaction kinetics can be slow for contaminants of interest such as TCE and PCE (Watts 2011). To further increase the oxidative strength of the reagent and enhance treatment kinetics, persulfate can be activated to produce the free sulfate radical (SO4•−), which is second only to the hydroxyl free radical in oxidative strength (ITRC 2005).
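The stability contrast among these reagents is easy to quantify with first-order decay arithmetic, as sketched below in Python. The half-lives used (2 days for CHP, 15 days for activated persulfate) are illustrative values drawn from the ranges quoted above; site-specific half-lives should come from bench or pilot measurements.

```python
# Hedged sketch: first-order persistence screening for ISCO reagents.
# Half-lives are illustrative mid-range values, not site-specific data.

def fraction_remaining(days: float, half_life_days: float) -> float:
    """Fraction of oxidant remaining after `days`, assuming first-order decay."""
    return 0.5 ** (days / half_life_days)

for name, t_half in [("CHP", 2.0), ("Activated persulfate", 15.0)]:
    profile = {t: fraction_remaining(t, t_half) for t in (1, 3, 7, 14)}
    print(name, {t: f"{f:.0%}" for t, f in profile.items()})
# After 7 days, CHP retains ~9% of its strength versus ~72% for persulfate,
# which is why persulfate can travel much farther from an injection well.
```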
There are different available methods of persulfate activation, including iron activa-
tion, hydrogen peroxide activation, and alkaline (i.e., base) activation. The most common
method in current use is alkaline activation, the success of which is dependent on pH and
is typically most effective when the pH is maintained above 10. Similar to CHP, alkaline-
activated persulfate has the advantage of producing multiple reactive species, including
reductants (Watts 2011).

Although soil oxidant demand is a less sensitive parameter for persulfate than for permanganate, it remains a design consideration and must be evaluated to ensure that non-target demand does not cause the application to fail.

Bench and Pilot Testing


Bench and pilot testing are critical steps in the ISCO design process. In general, the bench-
scale evaluation focuses on selection of the appropriate ISCO reagent, activation and/or
stabilization method, and design dose. Pilot-scale tests are conducted to verify the results
of the bench testing and, more importantly, to evaluate different oxidant delivery meth-
ods. Typical parameter tests conducted during bench testing with soil, groundwater, and
potentially NAPL samples collected from a site include the following:

• Acid and base buffering capacity: most relevant for CHP and alkaline-activated persulfate technologies, respectively
• Reagent stability: using different activation and/or stabilization methods
• Soil oxidant demand: using different reagent concentrations; most relevant for permanganate and activated persulfate technologies (a first-cut dosing sketch follows this list)
• Gas and heat evolution: using different reagent concentrations and stabilization
methods, typically performed for CHP
• Injection simulations: testing the efficacy of contaminant destruction using different
reagents at different concentrations with different additives
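As a worked illustration of how bench results feed the design dose, the sketch below combines the stoichiometric demand from the earlier permanganate example with a bench-measured soil (natural) oxidant demand. Every numeric input is a placeholder; bulk density, NOD, treatment-zone volume, contaminant mass, and the safety factor must all come from site-specific data.

```python
# Hedged sketch: first-cut total permanganate dose for a treatment zone.
# All inputs are placeholders standing in for bench-test and site data.

SOIL_BULK_DENSITY = 1700.0  # kg/m3, assumed

def total_oxidant_kg(contaminant_kg: float,
                     stoich_ratio: float,       # kg oxidant per kg contaminant
                     zone_volume_m3: float,
                     nod_g_per_kg_soil: float,  # natural oxidant demand (bench)
                     safety_factor: float = 1.5) -> float:
    """Stoichiometric demand plus soil oxidant demand, times a safety factor."""
    contaminant_demand = contaminant_kg * stoich_ratio
    soil_demand = zone_volume_m3 * SOIL_BULK_DENSITY * nod_g_per_kg_soil / 1000.0
    return (contaminant_demand + soil_demand) * safety_factor

# Example: 50 kg TCE (ratio ~2.4 for KMnO4), 2,000 m3 zone, NOD of 1 g/kg.
print(f"{total_oxidant_kg(50, 2.4, 2000, 1.0):.0f} kg KMnO4")
# The soil demand (3,400 kg) dwarfs the contaminant demand (120 kg) here,
# echoing the earlier point that NOD can consume most of the oxidant.
```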

ISCO pilot tests are best thought of as full-scale applications performed over a small por-
tion of the site, and they should be as representative of the full-scale approach as possible
(ITRC 2005). While a specific reagent may be conclusively selected based on the bench test
results, it is not uncommon for pilot tests to evaluate two different technologies over dif-
ferent portions of the site. For example, CHP may be pilot tested in a NAPL area, while per-
manganate may be pilot-tested in plume areas of high concentration. Typical parameters
evaluated during ISCO pilot testing are listed below, adapted from ITRC (2005):

• Oxidant concentrations: verifying the results of the bench test and determining field-scale modifications
• Injection rates, pressures, and volumes: determining the duration of the full-scale implementation in conjunction with the radius of influence and the oxidant concentration(s)
• Water-quality changes: including temperature, pH, specific conductance, and oxida-
tion–reduction potential
• Injection well radius of influence: note that this is not equivalent to the hydraulic radius of influence but is better thought of as the radius of ISCO reagent travel around a well or, alternatively, the radius of successful treatment around a well (a simple volumetric estimate is sketched after this list)
• Reagent stability: analysis for the ISCO reagents themselves or indicator parameters
can be used to evaluate achieved half-lives in the subsurface
• Gas and heat evolution and metal dissolution: evaluating unwanted effects of the ISCO
application
• Treatment efficiency and rebound effects: evaluating the suitability of the design in
achieving short- and long-term performance objectives, used to determine the
total amount of oxidant required
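As a design check on the pilot-measured radius of influence, a cylindrical pore-volume balance gives the theoretical radius reached by a given injection volume. The sketch below assumes ideal, uniform radial spreading through an effective porosity, an assumption that heterogeneity and preferential pathways will violate in the field, so treat the result as a bounding estimate.

```python
# Hedged sketch: theoretical injection radius from a pore-volume balance,
# assuming an ideal cylinder of uniform spreading (an optimistic geometry).
import math

GAL_TO_M3 = 0.003785

def theoretical_roi_m(volume_gal: float,
                      screen_length_m: float,
                      effective_porosity: float) -> float:
    """Radius (m) of the aquifer cylinder whose pore space equals the
    injected volume: V = pi * r**2 * b * n, solved for r."""
    v_m3 = volume_gal * GAL_TO_M3
    return math.sqrt(v_m3 / (math.pi * screen_length_m * effective_porosity))

# Example: 1,000 gal injected across a 3 m screen, effective porosity 0.25.
print(f"{theoretical_roi_m(1000.0, 3.0, 0.25):.1f} m")  # ~1.3 m
```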

Unfortunately, in many instances, bench and pilot testing are not conducted, and a
reagent is selected solely based on contaminant compatibility charts published in litera-
ture. More alarming is when consultants and/or contractors ignore factors such as reaction
kinetics, activation or stabilization requirements, total contaminant mass, and non-target
demand. Proceeding to a full-scale ISCO remedy without bench and pilot testing and blindly
using default quantities of CHP, for example, has a very low likelihood of success. One com-
mon justification for this practice is the lack of financial resources; however, default ISCO
remediation will ultimately increase costs over the correct, conceptual approach when regu-
lators demand additional rounds of reagent application and associated monitoring.
Figure 8.29 presents a three-dimensional visualization created in ArcScene of an unsuc-
cessful ISCO concept that resulted in considerable financial loss to the client. This example
illustrates the negative consequences of neglecting to perform a bench or pilot test to inform
the remedial design. Groundwater contamination was discovered when an old underground
storage tank was removed from the site, and significant historic leakage was observed. The
green polygon in Figure 8.29 represents the area excavated around the former underground
storage tank in an attempt to remove all LNAPL contamination. Unfortunately, monitoring
wells installed downgradient of the tank found that groundwater was impacted by petro-
leum hydrocarbons, and ISCO was selected as the optimal remedy to eliminate the resid-
ual contamination and restore site groundwater to drinking water-quality standards. The
brown layer represents the bedrock surface, which is above the water table (blue layer) for
most of the year. The bedrock surface nearly touches the former tank graveyard, indicating
the potential for leaked oil to have entered the fractured bedrock aquifer directly.
The client hired an ISCO contractor to remediate the site. The contractor did not develop a
thorough CSM, assumed that no petroleum mass remained in the tank excavation area, and
installed an injection trench, represented by the black polygon, in the overburden downgradi-
ent of the former tank area. CHP was selected as the ISCO reagent, yet no attempt was made
to quantify contaminant mass or to understand and manage CHP stability in the presence
of site soils. As previously stated, no bench testing or pilot testing was performed to better
understand treatment mechanisms in the complex fractured bedrock environment. Instead,
more than 1000 gallons of CHP were injected into the subsurface over three separate injec-
tion periods spanning multiple years. Despite these repeated efforts, hydrocarbon concentrations remained above cleanup standards in the groundwater, and the site remained in the remedial action stage with significant annual monitoring and reporting costs.

FIGURE 8.29
Three-dimensional visualization of an ISCO design created in ArcScene that did not meet remedial goals
despite multiple rounds of injections over the course of several years. Blue vertical lines are monitoring wells,
and brown vertical lines are soil boring locations.

It is highly likely that the majority of unstabilized CHP was rapidly consumed in the
vadose zone before even reaching the bedrock surface, resulting in minimal contact with
the residual petroleum source and minimal contaminant destruction. Furthermore, even if
the injection strategy were better conceived and resulted in direct contact with the plume,
residual LNAPL in the bedrock beneath the former tank would have recontaminated the
treated groundwater and resulted in significant rebound. This example represents a com-
mon occurrence in industry when contractors oversimplify complex CSMs and take a
default approach to remediation. An example of a conceptual approach to ISCO design
and implementation, with much better results, is presented in the following section.

8.3.3.2 Case Study


This case study describes the successful application of ISCO at a complex fractured bedrock
site in southern New England, completed by Woodard & Curran, Inc. The contaminant
source at the site consists of a former drum disposal area located on top of a hill. A solvents
plume, composed primarily of PCE, extends more than 2000 ft downgradient of the drum disposal area and discharges to a large glacial pond. The plume is located in both glacial
till and fractured bedrock materials. The CSM is presented in Figure 8.30, and a geologic
cross section through the source area is presented in Figure 8.31. The source area lies in a
significant groundwater recharge area with water table fluctuations of up to 10 ft, moving
above and below the bedrock surface into and out of the glacial till. The average pretreat-
ment source-area groundwater concentration of PCE was approximately 5000 µg/L in the
glacial till, and PCE was historically detected above 1000 µg/L in the fractured bedrock.
The Remedial Investigation/Feasibility Study (RI/FS) completed for the site developed a defensible, holistic approach to remediation that did not require a long-term pump and treat component.
FIGURE 8.30
Visual representation of the CSM illustrating contaminant release mechanisms, regional hydrogeology, and
locations of potential human and ecological receptors. (Courtesy of Woodard & Curran, Inc.)
FIGURE 8.31
Geologic cross section through the source area. The presence of residual DNAPL was suspected in the glacial
till and fractured gneiss unit. (Courtesy of Woodard & Curran, Inc.)

The conceptual design for the site is presented in Figure 8.32 and illustrates how the following elements are integrated into the cleanup:

• Source-area treatment of overburden soil and bedrock groundwater


• MNA of the downgradient plume
• Institutional controls to restrict usage of contaminated groundwater

FIGURE 8.32
Graphical depiction of the conceptual design of the soil and groundwater remedy illustrating fate-and-transport
mechanisms and key remedial components. A conceptual design should be prepared prior to completing the
detailed design for all remedial applications. (Courtesy of Woodard & Curran, Inc.)

The ROD selected ISCO with permanganate for source-area remediation, prescribing the
following remedial methods and objectives:

• Source-area soil: ISCO applied through soil mixing to reduce PCE and TCE concen-
trations to below prescribed standards and to facilitate seepage of reagents into
the fractured bedrock
• Source-area groundwater: ISCO applied through injection wells to remove 90% of
the residual dissolved contaminant mass and to reduce concentrations of PCE and
TCE to below their MCLs

The remedial design process was augmented by completion of ISCO bench and pilot test-
ing to evaluate parameters such as soil oxidant demand, injection well radius of influence,
and achievable injection flow rates. The first phase of the source area remediation was
mechanical mixing of potassium permanganate into the impacted vadose zone and inter-
mittently saturated glacial till materials. Mixing occurred over two application periods, as
additional vadose zone impacts were discovered during the first mixing period. A photo-
graph of the permanganate mixing process is provided in Figure 8.33. In total, more than
30,000 lb of potassium permanganate was mixed with impacted soils, and cleanup criteria
were satisfied after completion of the second mixing round.
The second phase of source-area remediation was installation of horizontal and vertical
groundwater injection wells in the source area and subsequent groundwater injection of

FIGURE 8.33
Photograph of direct permanganate mixing with PCE-contaminated glacial till. Note that surficial soils were
excavated and stockpiled to expose the glacial till. (Courtesy of Woodard & Curran, Inc.)

sodium permanganate. A photograph of horizontal well installation in the glacial till is provided in Figure 8.34. Some vertical injection wells screened the glacial till or bedrock
exclusively while others screened both formations. As a preliminary measure, 7000 gallons
of 4% sodium permanganate solution was injected into the subsurface. Figure 8.35 depicts
the remedial area, showing tanks with permanganate solution and injection wells in the
background. Figure 8.36 depicts a typical injection well-head installation. Approximately
eight months later, a polishing injection of another 7000 gallons was completed.
The results of the ISCO remediation are presented in Figures 8.37 and 8.38 for PCE.
Figure 8.37 depicts PCE concentrations at a well couplet in the middle of the source area
with one well screening the glacial till (overburden) and another screening the fractured
rock. As shown in Figure 8.37, the PCE concentration in the glacial till was reduced by
approximately two orders of magnitude with the second polishing injection successfully
mitigating concentration rebound. PCE concentrations in the fractured bedrock were
significantly reduced by the soil-mixing phase, demonstrating successful percolation of permanganate into the bedrock. However, concentrations rebounded significantly despite
the multiple groundwater injection rounds, indicating the persistence of PCE mass in the
fractured bedrock. Source-area PCE concentrations are now higher in the bedrock than in
the glacial till.
Figure 8.38 presents PCE concentrations at a well couplet in the center of the plume,
approximately 1500 ft downgradient of the source area. Concentrations in the glacial till have apparently been reduced significantly by the source-area remediation and are approaching the MCL of 5 µg/L. While the PCE concentration in the fractured bedrock has noticeably declined, it remains more than one order of magnitude above the MCL.
The well-conceived, well-executed ISCO application has accomplished the important
objective of stabilizing the PCE source area and retracting the downgradient plume
without requiring a pump and treat system. The total cost of the remediation, including

FIGURE 8.34
Photograph of horizontal injection-well installation in a gravel-filled trench in the glacial till. (Courtesy of
Woodard & Curran, Inc.)

FIGURE 8.35
Photograph of ISCO treatment area. The trailer contains a pump to inject the permanganate under pressure.
(Courtesy of Woodard & Curran, Inc.)

FIGURE 8.36
Photograph of injection well with associated piping and valve assembly. (Courtesy of Woodard & Curran, Inc.)

[Plot: PCE concentration (µg/L, log scale, 0.1–10,000) versus date, May 2008 to August 2011, for the overburden and bedrock wells; the MCL is marked.]

FIGURE 8.37
Graph of PCE concentrations in micrograms per liter before, during, and after remediation at an overburden
and bedrock well in the middle of the source area. The two vertical green bars represent the two soil-mixing
applications, and the two vertical yellow bars represent the two groundwater-injection applications.

operations, maintenance, and monitoring, was approximately $2.4 million, which is very
reasonable considering the size of the site and the extent of the contamination. Remedial
goals for source-area soils were definitely achieved; however, PCE concentrations remain
above MCLs, particularly in the fractured bedrock. It is likely that significant PCE mass
remains in the bedrock, which is very difficult to contact with ISCO injections, and matrix

[Plot: PCE concentration (µg/L, log scale, 1–1,000) versus date, September 2004 to July 2011, for the overburden and bedrock wells; the MCL is marked.]

FIGURE 8.38
Graph of PCE concentrations in micrograms per liter before, during, and after remediation at an overburden
and bedrock well in the downgradient plume area. The two vertical green bars represent the two soil-mixing
applications, and the two vertical yellow bars represent the two groundwater-injection applications. Evidence
of residual permanganate was found several hundred feet downgradient of the source area. Combined with the
relatively stable nature of concentrations prior to the ISCO, the rapid drop in overburden PCE concentration is
likely attributable to the ISCO application.

diffusion will likely make the attainment of MCLs impracticable in the former DNAPL
source area (see additional discussion in Section 8.4). MNA is a component of the overall
remedy, and the ISCO work completed to date has successfully set the stage for full tran-
sition to MNA. Completion of additional rounds of injections may achieve slightly more
mass removal, but the value of these applications would be questionable as the plume has
already been stabilized, exposure to human and ecological receptors has been mitigated,
and MCLs will not be reached in the bedrock in the short term.

8.3.4 Bioremediation and Monitored Natural Attenuation


8.3.4.1 In Situ Bioremediation
In situ bioremediation has been used to degrade petroleum hydrocarbons in groundwater
for more than 40 years. However, within the past 20 years (see Figure 8.3), use of the tech-
nology has rapidly increased as a result of research showing the technology’s potential
effectiveness in degrading a wide range of contaminants, including chlorinated solvents,
PCBs, dioxin, and metals (Hazen 2010). A groundbreaking study was published in Science
magazine by Maymó-Gatell et al. (1997) describing the isolation of a novel bacterium,
Dehalococcoides ethenogenes 195, capable of completely dechlorinating PCE to the innocu-
ous end product ethene under anaerobic conditions. The process of bacterial growth using
PCE as the terminal electron acceptor (TEA) is termed dehalorespiration (Magnuson et al.
1998). The identification of Dehalococcoides (Dhc) was a watershed moment in the advance-
ment of in situ bioremediation technologies and led to the proliferation of bioremediation
technologies in the 2000s.
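The reductive pathway PCE → TCE → cis-DCE → VC → ethene behaves like a first-order decay chain, which is why daughter products accumulate before ethene finally dominates. The sketch below integrates that chain with purely illustrative rate constants (not site-specific values) to show the characteristic succession of daughter-product peaks.

```python
# Hedged sketch: sequential first-order dechlorination chain
# PCE -> TCE -> cis-DCE -> VC -> ethene, integrated with a simple Euler
# scheme. Rate constants are illustrative only.

SPECIES = ["PCE", "TCE", "cDCE", "VC", "ethene"]
K = [0.03, 0.02, 0.01, 0.015]      # 1/day, assumed first-order constants

def simulate(c0: float, days: float, dt: float = 0.5):
    c = [c0, 0.0, 0.0, 0.0, 0.0]   # molar-basis concentrations
    for _ in range(int(days / dt)):
        new = c[:]
        for i, k in enumerate(K):
            transfer = k * c[i] * dt   # mass leaving species i this step...
            new[i] -= transfer
            new[i + 1] += transfer     # ...arrives as its daughter product
        c = new
    return c

for t in (0, 60, 180, 365, 730):
    print(t, "d:", {s: round(v, 1) for s, v in zip(SPECIES, simulate(100.0, t))})
# Intermediates (notably cis-DCE and VC) build up for months before ethene
# dominates, anticipating the "toxic intermediates" caution later in this
# section.
```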
In situ bioremediation strategies are classified either as natural attenuation strategies or
engineered strategies. MNA is described further in Section 8.3.4.2. Engineered bioremedi-
ation involves either biostimulation, bioaugmentation, or remedial designs that use both.
Biostimulation is the process of adding organic or inorganic compounds to the subsurface to
stimulate indigenous organisms (i.e., organisms that are already present) capable of degrad-
ing the target contaminant(s) (Hazen 2010). Biostimulation is useful when the required
organisms are present in insufficient numbers to degrade contaminants at the scale and rate
required for successful remediation. The primary biostimulation additives are substrates
(electron donors), nutrients, and buffering agents. Substrates provide an electron donor and
carbon source for cell growth. Hydrogen is the preferred electron donor for dehalorespira-
tion. Nutrients, such as nitrogen and phosphorus, and buffering agents are important in
establishing prime conditions for bacterial growth and metabolism (ITRC 2008).
Biostimulation additives can be gases, such as ambient air, oxygen, and methane; liq-
uids, such as lactic acid, molasses, vegetable oil, and hydrogen-release compound (HRC);
or solids, such as bulking agents like saw dust. Selection of the optimal additives depends
on the biogeochemistry of the hydrogeologic system with the most important parameters
being the redox states of the current environment and that of the target degradation path-
way. TEAs will be utilized in order of the amount of energy that can be derived from
their utilization (most to least). Oxygen is the preferred TEA, followed by nitrate, iron (III),
sulfate, and carbon dioxide—the latter being the necessary TEA for methanogenesis. The
implications of microbial energetics are that in order to stimulate dehalorespiration pro-
cesses in an aerobic environment, sufficient electron donors would be required to deplete
all TEAs preceding carbon dioxide such that methanogenic conditions favorable to Dhc
may develop (Hazen 2009). Figure 8.39 presents a summary of common geochemical pat-
terns found during anaerobic biodegradation of chlorinated solvents.

[Plot: concentration or mass and ORP (mV) versus distance and direction of groundwater flow (background, source, downgradient), showing curves for DO, NO3, SO4, dissolved iron, CH4, chloride, acetate, and electron donor (e.g., BOD as methanol).]

FIGURE 8.39
Common geochemical patterns relative to a chlorinated solvents source area that is degrading under anaer-
obic conditions. (Modified from Interstate Technology and Regulatory Council (ITRC), Natural Attenuation
of Chlorinated Solvents in Groundwater: Principles and Practices, Interstate Technology and Regulatory
Cooperation Work Group, In Situ Bioremediation Work Team, and Industrial Members of the Remediation
Technologies Development Forum (RTDF). Available at http://www.itrcweb.org/Documents/ISB-3.pdf, accessed
June 2, 2011.)

There are four primary requirements for successful implementation of biostimulation:

• The correct microorganisms must be naturally present (i.e., Dhc in the case of PCE).
• Additives must be able to stimulate the target microorganisms.
• Additive delivery to the target remediation area must be feasible.
• A carbon-to-nitrogen-to-phosphorus ratio of 100:10:2 should be achievable (Hazen
2010).

Bioaugmentation involves the injection of microbial cultures into the subsurface to degrade
target contaminants in situ and is required when the correct organisms are either not pres-
ent naturally or are present in such low numbers that biostimulation alone is infeasible
(ITRC 2008). The most commonly used organisms are pseudomonads for oil spills and Dhc
for chlorinated solvent remediation, both of which have several commercially produced
cultures (Hazen 2010). Bioaugmentation is particularly useful in controlled engineering
applications, such as recirculation systems, in which a highly selected culture can rapidly
increase degradation kinetics. A conceptual diagram of a recirculation design for in situ
bioremediation is presented in Figure 8.40. Bioaugmentation with Dhc cultures can often
take advantage of an existing anaerobic community to enhance dechlorination of PCE as
illustrated in Figure 8.41.
One common problem in evaluating the success of bioaugmentation applications is
that biostimulation additives are typically applied in addition to the microbial cultures.
Therefore, it is often difficult to prove that degradation was caused by the added culture
alone, rather than stimulated native populations. As a result, there is only one bacterium
that has been conclusively demonstrated to perform better than biostimulation: D. ethenogenes. The success of Dhc as a bioaugmentation additive is likely a result of its classification as a strict anaerobe, its sole applicability to established methanogenic environments, its common presence in nature, and its ability to penetrate the subsurface because of its small, irregular size (Hazen 2010).

FIGURE 8.40
Conceptual diagram of a groundwater recirculation design for in situ bioremediation. (Courtesy of Sam Fogel,
PhD, of Bioremediation Consulting, Inc.)

A graph illustrating the successful degradation of cis-1,2-dichloroethene to ethene after bioaugmentation of a fractured bedrock aquifer is presented in Figure 8.42.
The primary advantage of in situ bioremediation over other technologies is its relatively
low capital cost and O&M requirements. However, there are a number of limitations that
continue to present challenges in full-scale applications:

[Figure 8.41 diagram: nitrate reducers (NO3 + organics → N2); sulfate reducers (SO4 + organics → SH, lowering ORP); fermentors (+ organics → organic acids, H2); methanogens at low ORP supplying B12; Dhc + DCE + B12 + H2 + low ORP → ethene.]

FIGURE 8.41
Conceptual representation of how the existing anaerobic microbial community can provide electron donors
and nutrients for amended Dhc cultures to degrade dichloroethene (DCE) to ethene. (Courtesy of Sam Fogel,
PhD, of Bioremediation Consulting, Inc.)

[Plot: CVOC concentrations versus date (March through the following March), with the timing of the BCI culture amendment marked.]

FIGURE 8.42
Concentrations of CVOCs in a monitoring well in contaminated fractured rock before and after bioaugmenta-
tion of a Dhc culture provided by Bioremediation Consulting, Inc. (BCI). This is a full-scale recirculation system
installed for a Superfund site in Pennsylvania. BCI provided 80 L of Dhc culture that was also resistant to trichloroethane (TCA) and was grown in site groundwater to ensure acclimation. Cell density was about 1 × 10^11 cells/L. (Courtesy of Sam Fogel, PhD, of Bioremediation Consulting, Inc.)

• Aquifer permeability and preferential pathways inhibit distribution of biostimulation or bioaugmentation amendments.
• Performance is highly sensitive to aquifer geochemical conditions. For example,
low or high pH may inhibit biological activity.
• Biofouling can reduce the efficacy of injection and recirculation wells.
• A long time frame is required (several months or years) to develop conditions
capable of complete contaminant degradation.
• The potential exists for incomplete degradation and the buildup of toxic interme-
diates, such as vinyl chloride, when conditions are not optimal (ITRC 2008).

The most critical limitation of in situ bioremediation is its infeasibility for sites with low
hydraulic conductivity. Similar to ISCO, if low-permeability deposits will not accept injected
amendments or cultures at required flow rates, appropriate conditions for biodegradation
will not develop, and the application will fail. A general rule of thumb is that for in situ bioremediation to be a viable option, the hydraulic conductivity of the target formation should be at least 10^-4 cm/s (0.28 ft/day). This means that in situ bioremediation is generally not an option for sites with contamination in clay, silts, or glacial till. For bioaugmentation applications, conductivities may need to be an order of magnitude higher (i.e., 10^-3 cm/s) depending on the size and adherence properties of the amended organism (Hazen 2010).
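That rule of thumb is simple to encode as a screening check. The sketch below converts a measured hydraulic conductivity to consistent units and compares it against the thresholds quoted above; the thresholds are the only inputs taken from this section, and real feasibility also hinges on geochemistry.

```python
# Hedged sketch: screening hydraulic conductivity against the rule-of-thumb
# thresholds quoted above (1e-4 cm/s for biostimulation, 1e-3 cm/s for
# bioaugmentation). A pass here does not guarantee feasibility.

CM_S_TO_FT_DAY = 86400.0 / 30.48   # 1 cm/s expressed in ft/day (~2,835)

def screen_bioremediation(k_cm_s: float) -> str:
    """Classify a formation's K against the in situ bioremediation thresholds."""
    if k_cm_s >= 1e-3:
        return "amenable to biostimulation and bioaugmentation"
    if k_cm_s >= 1e-4:
        return "amenable to biostimulation; marginal for bioaugmentation"
    return "likely infeasible for injection-based bioremediation"

for k in (5e-3, 2e-4, 1e-6):
    print(f"K = {k:.0e} cm/s ({k * CM_S_TO_FT_DAY:.2f} ft/day): "
          f"{screen_bioremediation(k)}")
# 1e-4 cm/s converts to ~0.28 ft/day, matching the figure quoted above.
```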
Because of the number of variables that can affect selection of the appropriate enhance-
ment approach, bench testing and field pilot testing should be conducted prior to full-scale
design of an in situ bioremediation. Diagnostic indicators commonly evaluated during bio-
remediation bench and pilot testing include dissolved oxygen concentration and oxidation–
reduction potential (ORP), parent/daughter compound ratios, electron-donor concentrations,
concentrations of competing electron acceptors, and pH. Classifying the redox state of the
groundwater is an essential step in determining the feasibility of bioremediation as it dic-
tates what biodegradation processes are currently occurring and informs the design team
as to the level of effort required to manipulate redox potential to create conditions favorable
for degradation of the target contaminant(s). For example, if groundwater with chlorinated
solvent contamination is selected for bioremediation and is currently aerobic, electron-donor


addition is required until all oxygen and nitrate is depleted at a minimum (Hazen 2010).
To help hydrogeologists correctly classify groundwater redox processes, the USGS has
created a spreadsheet program available in the public domain as documented in the work
of Jurgens et al. (2009). This program classifies the overall redox category (i.e., aerobic or
anoxic) and the specific redox process (i.e., nitrate-reducing or sulfate-reducing) based
on electron acceptor concentrations in groundwater at the site in question. The USGS
spreadsheet establishes thresholds for quantifying the transition between these processes.
Concentration-based criteria for the redox classification performed by the spreadsheet
model are presented in Figure 8.43. The USGS spreadsheet is included in the compan-
ion DVD for the reader’s benefit. Determining redox potential is especially important for
MNA evaluations because it helps verify that intrinsic biodegradation processes are occur-
ring without the need for biostimulation.

Criteria for Inferring Process from Water Quality Data
(electron acceptors in mg/L; iron/sulfide is a dimensionless mass ratio)

Redox Category | Redox Process | Dissolved Oxygen | Nitrate, as N | Manganese | Iron | Sulfate | Iron/Sulfide Mass Ratio
Oxic | O2 | ≥0.5 | --- | <0.05 | <0.1 | --- | ---
Suboxic | Suboxic | <0.5 | <0.5 | <0.05 | <0.1 | --- | ---
Anoxic | NO3 | <0.5 | ≥0.5 | <0.05 | <0.1 | --- | ---
Anoxic | Mn(IV) | <0.5 | <0.5 | ≥0.05 | <0.1 | --- | ---
Anoxic | Fe(III)/SO4 | <0.5 | <0.5 | --- | ≥0.1 | ≥0.5 | no data
Anoxic | Fe(III) | <0.5 | <0.5 | --- | ≥0.1 | ≥0.5 | >10
Mixed (Anoxic) | Fe(III)-SO4 | <0.5 | <0.5 | --- | ≥0.1 | ≥0.5 | ≥0.3, ≤10
Anoxic | SO4 | <0.5 | <0.5 | --- | ≥0.1 | ≥0.5 | <0.3
Anoxic | CH4gen | <0.5 | <0.5 | --- | ≥0.1 | <0.5 | ---

FIGURE 8.43
Criteria and threshold concentrations used to classify redox processes in ground water. Redox process: O2, oxy-
gen reduction; NO3, nitrate reduction; Mn(IV), manganese reduction; Fe(III), iron reduction; SO4, sulfate reduc-
tion; CH4gen, methanogenesis. Notes: mg/L, milligram per liter; —, criteria do not apply because the species
concentration is not affected by the redox process. (From Jurgens, B. C. et al., An Excel Workbook for Identifying
Redox Processes in Ground Water. US Geological Survey Open-File Report 2009–1004. 2009. Available at http://
pubs.usgs.gov/of/2009/1004/, accessed November 12, 2010.)
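The criteria in Figure 8.43 translate directly into conditional logic. The sketch below implements a simplified subset of them; the USGS workbook remains the authoritative implementation because it also handles missing data and additional mixed categories.

```python
# Hedged sketch: simplified redox classification following the Figure 8.43
# criteria (all concentrations in mg/L). The USGS workbook (Jurgens et al.
# 2009) is the authoritative version; this omits its missing-data handling.

def classify_redox(do, no3_n, mn, fe, so4, fe_sulfide_ratio=None):
    if do >= 0.5:
        return "Oxic (O2 reduction)"
    if no3_n >= 0.5:
        return "Anoxic (NO3 reduction)"
    if mn >= 0.05 and fe < 0.1:
        return "Anoxic (Mn(IV) reduction)"
    if fe >= 0.1:
        if so4 < 0.5:
            return "Anoxic (methanogenesis)"
        if fe_sulfide_ratio is None:
            return "Anoxic (Fe(III)/SO4 reduction; ratio unknown)"
        if fe_sulfide_ratio > 10:
            return "Anoxic (Fe(III) reduction)"
        if fe_sulfide_ratio < 0.3:
            return "Anoxic (SO4 reduction)"
        return "Mixed anoxic (Fe(III)-SO4)"
    return "Suboxic"

# Low DO and nitrate, elevated iron, depleted sulfate -> methanogenic,
# i.e., the conditions favorable to Dhc dehalorespiration noted earlier.
print(classify_redox(do=0.2, no3_n=0.1, mn=0.3, fe=2.5, so4=0.2))
```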

8.3.4.2 Monitored Natural Attenuation


MNA is defined as the use of naturally occurring concentration-reducing processes to
achieve site-specific remediation goals. U.S. EPA (1999a) lists the following processes as
key components of MNA:

• Biodegradation: also termed intrinsic bioremediation or bioremediation that is occurring without any engineered enhancement
• Dispersion: more appropriately defined as mixing and dilution as a result of field-
scale heterogeneity (see related discussion in Chapter 5)
• Dilution
• Sorption
• Radioactive decay
• Chemical or physical processes resulting in contaminant stabilization, transformation, or destruction

A common misconception that environmental professionals and regulators alike have been battling for years is that MNA is a do-nothing alternative that is selected solely to
benefit the polluters. This is far from the truth as the U.S. EPA (1999a) directive regarding
the appropriate use of MNA specifically states that source control and long-term perfor-
mance monitoring are required elements of every MNA remedy to ensure protection of
human health and the environment. The directive goes on to state explicitly “Sites where
the contaminant plumes are no longer increasing in extent, or are shrinking, would be the
most appropriate candidates for MNA remedies” (U.S. EPA 1999a).
Therefore, the intention of MNA is to efficiently meet remedial objectives for sites where
it can be technically demonstrated that concentrations are already stable or decreasing,
and risks to human health and the environment are nonexistent or easily mitigated until
MNA is complete. Documenting the short- and long-term effectiveness of MNA at reach-
ing remedial goals requires extensive field investigation and rigorous quantitative analy-
sis. A cartoon produced by the Network for Industrially Contaminated Land in Europe
(NICOLE) illustrating this concept is provided in Figure 8.44.
The three required elements of an MNA feasibility and effectiveness evaluation are as
follows, adapted from Parsons (2009):

• Evaluation of plume stability


• Estimation of remedial time frame
• Evaluation of the sustainability of MNA processes

Plume Stability
Evaluation of plume stability involves quantification of both concentration-based and mass-
based metrics. Concentration-based methods are more traditional and typically include plots
of contaminant concentrations over time at key monitoring wells, contour maps of contaminant
concentrations at different times, and statistical trend analyses at individual wells. Mass-based
metrics are more computationally intensive and include changes in total plume mass over time
(zeroth spatial moment), changes in the location of the center of contaminant mass over time
(first spatial moment), and changes in the spread of the plume about the center of mass (second
spatial moment; Parsons 2009). Because of the labor-intensive nature of these calculations, com-
puter programs, such as MAROS, can be used to perform statistical trend analyses and spatial moment calculations.

FIGURE 8.44
Cartoon from TNO, NICOLE, and Loet van Moll (www.loetvanmoll.nl) explaining the MNA process and the
field, laboratory, and quantitative analysis required to document its efficacy. (With kind permission of Loet van
Moll.) For full cartoon booklet see Sinke and van Moll (2010).

See Chapter 3 for a description of MAROS and example tabular and graphical output. While MAROS is an excellent computational tool and can be used to make defensible arguments regarding plume stability, it has two significant limitations:

• The absence of a visual processor


• The inability to perform trend analyses using statistical methods for censored
data (i.e., data with nondetect results)

MAROS does not have a user-friendly, high-quality GUI capable of producing nice visualiza-
tions that easily demonstrate the results of concentration- and mass-based analyses. One way
around this limitation is to import MAROS results into a program such as ArcGIS to manually
create visualizations. An alternative to MAROS that overcomes this limitation entirely is the
Ricker MethodSM performed by Earth Consulting Group, Inc., and documented in the work of
Ricker (2008). This method integrates plume stability analysis with a robust visual processor
by performing calculations using grid math tools in Surfer by Golden Software (see discussion
in Chapters 3 and 4). As with MAROS, the Ricker MethodSM includes calculation of the total
plume mass and the center of mass at different time increments and trend analyses on these
parameters; however, equally important parameters, such as the overall plume area and aver-
age concentration, are also calculated for constituents of interest (Ricker 2008). More impor-
tantly, all these parameters are easily visualized in Surfer and other graphical processors such
that the results are immediately significant to the nonstatistician. Statistical results and data
visualizations created using the Ricker MethodSM are presented in Figures 8.45 through 8.48
for a site with TCE-contaminated groundwater.
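The zeroth and first spatial moments behind these mass-based metrics are straightforward to compute from a gridded concentration surface, as the NumPy sketch below shows. The porosity, saturated thickness, and cell geometry are placeholder assumptions, and the synthetic Gaussian plume exists only to exercise the function; MAROS and Surfer grid math perform the same integration on real grids.

```python
# Hedged sketch: zeroth moment (dissolved mass) and first moment (center of
# mass) of a gridded plume. Thickness and porosity are placeholder values.
import numpy as np

def plume_moments(conc_ug_l, x, y, cell_area_m2, b_m=5.0, porosity=0.25):
    """conc_ug_l: 2-D concentration grid; x, y: matching coordinate grids (m)."""
    # ug/L * 1,000 L/m3 -> ug/m3, times the saturated pore volume per cell.
    cell_mass_ug = conc_ug_l * 1000.0 * cell_area_m2 * b_m * porosity
    mass_kg = cell_mass_ug.sum() / 1e9                         # zeroth moment
    xc = float((cell_mass_ug * x).sum() / cell_mass_ug.sum())  # first moment
    yc = float((cell_mass_ug * y).sum() / cell_mass_ug.sum())
    return mass_kg, (xc, yc)

# Synthetic Gaussian plume on a 10 m grid, purely to exercise the function.
xs, ys = np.meshgrid(np.arange(0.0, 500.0, 10.0), np.arange(0.0, 300.0, 10.0))
conc = 450.0 * np.exp(-(((xs - 150.0) / 60.0) ** 2 + ((ys - 140.0) / 40.0) ** 2))
mass, (xc, yc) = plume_moments(conc, xs, ys, cell_area_m2=100.0)
print(f"mass ~ {mass:.1f} kg; center of mass ~ ({xc:.0f} m, {yc:.0f} m)")
# Trending mass, area, and center-of-mass position across monitoring rounds
# is the essence of the mass-based plume stability tests described above.
```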

[Figure 8.45 panels: December 2001 and March 2011 trichloroethylene isopleth maps annotated with plume area, plume average concentration, and plume mass; concentration scale 5–455 µg/L; symbols mark monitoring wells (with concentrations in µg/L) and the center of plume mass.]
FIGURE 8.45
Comparison of TCE concentration isopleth maps between December 2001 and March 2011 for a site where
MNA is part of the remedial design. Figures 8.45 through 8.47 apply to the same site and represent a successful
application of the Ricker MethodSM in demonstrating the efficacy of MNA. (Courtesy of Joe Ricker, PE of Earth
Consulting Group, Inc.)

FIGURE 8.46
TCE plume stability analysis completed as part of the Ricker MethodSM showing plots and associated Mann-
Kendall trend and linear regression statistics for the plume area, average concentration, and mass. (Courtesy of
Joe Ricker, PE of Earth Consulting Group, Inc.)

FIGURE 8.47
TCE plume center of mass analysis showing a visualization of the center of mass location over time and a
time-series plot and associated trend analysis of the distance from the contaminant source area (represented
by MW-4) to the center of mass. The uniformly decreasing trends shown in Figures 8.46 and 8.47 clearly dem-
onstrate that the plume can be classified as shrinking, or decreasing. (Courtesy of Joe Ricker, PE of Earth
Consulting Group, Inc.)

MNA processes are likely working effectively at the TCE site in question, and the plume
stability analysis leads to the following observations:

• Concentrations of TCE exhibit statistically significant decreasing trends at key monitoring wells.
• The total plume area, mass, and average concentration are decreasing over time.
• The plume’s center of mass is retracting as indicated by a decreasing trend in the
distance between the source area and the center of mass.

The absence of an appropriate method to account for nondetect data in trend analyses is a
limitation shared by many statistical packages available in the public domain, including
the robust ProUCL software. The significance of nondetect data is often entirely lost in
professional practice, and many experienced professionals often resort to substitution or
fabrication methods using one-half the laboratory reporting limit (Helsel 2005). When performing plume stability analysis, substituting one-half the reporting limit for a nondetect result can produce completely erroneous conclusions, leading to rejection of MNA as a viable alternative or, conversely, to its inappropriate acceptance.
For illustrative purposes, a Mann-Kendall trend analysis was conducted in ProUCL, using one-half the reporting limit for nondetect results, for a real-life data set of PCE concentrations over time at a monitoring well at a hazardous-waste site.

[Figure 8.48 legend: concentration difference (µg/L); Dec-01 and Mar-11 plume extents outlined; red shading indicates areas where concentrations increased from Dec-01 to Mar-11, and blue shading indicates areas where concentrations decreased.]

FIGURE 8.48
Visualization of the spatial difference in TCE plume concentration between the first and last sampling dates
completed as part of the Ricker MethodSM. This is generated by subtracting the recent grid file from the older
grid file and allows the user to evaluate portions of the plume that expanded versus portions that contracted.
For example, in the graphic shown, concentrations increased near the upgradient portion of the plume (not
unexpected because TCE is a daughter product of PCE, which is also present at the site). However, the vast
majority of the plume decreased in concentration and area, which is clearly seen in the graphic. This type of
analysis is very useful in demonstrating that the overall plume is clearly decreasing, even though one or some
wells are stable or increasing. This has always been one of the biggest drawbacks of well by well trend analysis
without visualization. (Courtesy of Joe Ricker, PE of Earth Consulting Group, Inc.)

The ProUCL output is presented in Figure 8.49 and indicates that there is statistically significant evidence of an increasing trend at the monitoring well. An alternative analysis of the same
data set was conducted in Minitab using the Kendall’s tau statistic to evaluate trend. As
described in Helsel (2005), Kendall’s tau is a robust test statistic that does not require any

FIGURE 8.49
ProUCL Mann-Kendall graphical output indicating that concentrations are increasing with visual and statisti-
cal significance.

distributional assumption (e.g., normality or log-normality) and can accommodate censored data at multiple different detection limits. The Kendall's tau statistic indicates that
there is a weak decreasing trend (p-value of 0.26), which completely contradicts the results
of the substitution method. It is important to note that ProUCL does have excellent meth-
ods of incorporating censored data for other statistical analyses, such as upper confidence
limit calculations and hypothesis testing. An example illustrating differences in hypoth-
esis testing results when using substitution versus appropriate methods is also included
in the companion DVD for reference.
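The censored-data logic Helsel (2005) advocates can be sketched in a few lines: treat each nondetect as the interval from zero up to its reporting limit and score a pair of observations only when the comparison is unambiguous. The sketch below implements that pairwise scoring with a tie-uncorrected normal approximation for the p-value; it is a teaching illustration, not a replacement for validated software such as Helsel's NADA package for R.

```python
# Hedged sketch: Mann-Kendall S statistic with nondetects treated as
# censored intervals instead of substituted values (after Helsel 2005).
# The normal approximation below omits the tie-correction term for brevity.
import math

def sign_censored(a, b):
    """Each observation is (value, censored); censored=True means '< value'."""
    va, ca = a
    vb, cb = b
    if not ca and not cb:             # detect vs detect: ordinary comparison
        return (vb > va) - (vb < va)
    if ca and cb:                     # two nondetects: always indeterminate
        return 0
    if ca and not cb:                 # '<va' followed by detect vb
        return 1 if vb >= va else 0
    return -1 if va >= vb else 0      # detect va followed by '<vb'

def mann_kendall_censored(series):
    """series: time-ordered list of (value, censored) tuples."""
    n = len(series)
    s = sum(sign_censored(series[i], series[j])
            for i in range(n - 1) for j in range(i + 1, n))
    var = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - math.copysign(1.0, s)) / math.sqrt(var) if s else 0.0
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return s, p

# Detections mixed with nondetects at elevated reporting limits ('<50', '<200').
data = [(12, False), (9, False), (50, True), (7, False), (200, True), (5, False)]
print(mann_kendall_censored(data))   # S = -6: weak decreasing trend, large p
```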
The reason for this discrepancy is that several of the samples collected at this well were
significantly diluted because of the presence of a different constituent at very high concen-
trations. The results of these diluted samples were nondetect but at a significantly elevated
reporting limit (50 or 200 µg/L, both well above historic detections and reporting limits).
Substituting one-half the reporting limit therefore resulted in erroneously high detections
that skewed the trend analysis and led to a completely erroneous interpretation. While
someone familiar with both the data and the Mann-Kendall method may catch this error,
when such analyses are conducted in batches for many different wells and analytes, it is
not uncommon for these mistakes to be made. The real mistake was the conceptual failure
to account for the significance of nondetect data in the planning of how the analysis was
to be conducted. The raw data used in this example are included in the companion DVD
for reference. The significance of this error and potential mitigation strategies are also
discussed in the work of Parsons (2009).

Remedial Time Frame


The two predominant methods of evaluating the remedial time frame required for
MNA are rate constant analysis and fate-and-transport modeling. Newell et al. (2002)
outline different methods of determining rate constants that are potentially useful in
determining when natural processes will result in the attainment of cleanup goals.
However, a very important distinction must be made between the overall attenuation
constant and the effective biodegradation constant. The latter metric is the rate of con-
centration decay that is solely attributable to biological processes, as opposed to the overall attenuation constant, which also reflects the myriad physical and chemical processes relevant to MNA. When linear regression is simply performed for a concentration plot at a moni-
toring well, the associated rate constant is more representative of an overall attenuation
constant rather than a biodegradation constant. The biodegradation constant should be
estimated using site-specific data and can be accomplished through laboratory micro-
cosm studies, field-based tracer studies, or during fate-and-transport model calibration
(Kresic 2009).
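Given an overall first-order attenuation constant from such a regression, the screening-level remedial time frame follows from the exponential decay formula, as sketched below with illustrative numbers. This is a back-of-the-envelope projection, not a substitute for fate-and-transport modeling, and a persistent source term would invalidate it.

```python
# Hedged sketch: screening-level remedial time frame from a first-order
# overall attenuation constant, C(t) = C0 * exp(-k*t). Inputs illustrative.
import math

def years_to_goal(c0_ug_l: float, goal_ug_l: float, k_per_yr: float) -> float:
    """Time for concentration to decay from c0 to the cleanup goal."""
    if c0_ug_l <= goal_ug_l:
        return 0.0
    return math.log(c0_ug_l / goal_ug_l) / k_per_yr

# Example: 5,000 ug/L PCE decaying toward the 5 ug/L MCL.
for k in (0.1, 0.3, 0.7):   # per year, spanning slow to fast attenuation
    print(f"k = {k}/yr -> {years_to_goal(5000.0, 5.0, k):.0f} years")
# ln(1000) ~ 6.9, so the projections are roughly 69, 23, and 10 years.
```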
Groundwater modeling is discussed at length in Chapter 5. The most important model
parameter when assessing MNA remedial time frames is the contaminant source term,
which is notoriously difficult to estimate. Source migration processes that merit consid-
eration in MNA modeling are identified in the work of Parsons (2009) and include verti-
cal leaching, dissolution, diffusion, volatilization and gas-phase diffusion, and gas-water
partitioning.

MNA Sustainability
Evaluation of the short- and long-term sustainability of MNA processes has only recently
been identified as an important step in conducting MNA feasibility and effectiveness
studies. Chapelle et al. (2007) identify quantification of mass and energy balances as the
primary means of assessing MNA sustainability. The mass balance refers to the mass of
contaminants in the system over time and is a measure of the short-term sustainability of
MNA that determines whether the rate of contaminant transformation exceeds the rate
of contaminant loading. The energy balance is a novel concept that measures the long-
term sustainability of MNA and is related to the amount of metabolizable organic carbon
in the groundwater system. Natural attenuation is sustainable in the long-term when the
pool of bioavailable organic carbon is large relative to the carbon flux required of electron
acceptors to drive biodegradation to completion (Chapelle et al. 2007). This speaks to the
importance of collecting electron-acceptor data and classifying redox conditions when
evaluating the viability of MNA at a site. A conceptual diagram of MNA sustainability is
provided in Figure 8.50.
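
As a back-of-the-envelope illustration of the two balances (all rates hypothetical), the short Python sketch below tests whether transformation outpaces loading (the short-term mass balance) and whether the bioavailable organic carbon pool can supply the demand over the remaining plume lifetime (the long-term energy balance).

# Toy sketch of the two MNA sustainability balances; every value is hypothetical.
loading = 2.0          # contaminant mass loading, kg/yr
transformation = 3.5   # contaminant mass transformed, kg/yr
carbon_pool = 500.0    # bioavailable organic carbon in the system, kg
carbon_demand = 4.0    # carbon consumed per year to drive biodegradation, kg/yr
plume_lifetime = 50.0  # years of treatment still required

short_term_ok = transformation > loading                      # no waste accumulation?
long_term_ok = carbon_pool > carbon_demand * plume_lifetime   # carbon pool sufficient?
print("short-term sustainable:", short_term_ok)
print("long-term sustainable:", long_term_ok)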
One model available in the public domain that can help determine MNA cleanup time
frames and assess MNA sustainability is Natural Attenuation Software (NAS), avail-
able for download at http://www.nas.cee.vt.edu/index.php and originally documented
in the work of Chapelle et al. (2003). A key parameter calculated by NAS is the natural
attenuation capacity, which is defined as the capacity of a system to absorb or trans-
form contaminants through dispersion, advection, biodegradation, sorption, volatil-
ization, and/or plant uptake. NAS utilizes code from SEAM 3D, which is a numerical model for three-dimensional solute transport and sequential electron acceptor-based bioremediation.
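
For orientation, the following hedged Python sketch evaluates the natural attenuation capacity idea using one common formulation: the steady-state, one-dimensional advection-dispersion equation with first-order decay, whose solution declines as C(x) = C0·exp(-NAC·x). All parameter values are hypothetical, and NAS itself performs a far more complete, electron acceptor-based analysis.

import math

v = 0.3          # groundwater seepage velocity, m/day (hypothetical)
alpha_L = 10.0   # longitudinal dispersivity, m
k = 0.002        # effective first-order decay rate, 1/day
D = alpha_L * v  # longitudinal dispersion coefficient, m^2/day

# Concentration decline per unit travel distance at steady state:
# NAC = (sqrt(v**2 + 4*k*D) - v) / (2*D), units 1/m.
nac = (math.sqrt(v**2 + 4.0 * k * D) - v) / (2.0 * D)

c0, mcl = 10000.0, 5.0  # ug/L source and target concentrations
x_needed = math.log(c0 / mcl) / nac
print(f"NAC = {nac:.4f} per m -> ~{x_needed:.0f} m of flow path to reach the MCL")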
FIGURE 8.50
Visual representation of the MNA sustainability evaluation: chemical wastes released to the environment by human activity either accumulate (waste loading exceeds transformation, and energy is insufficient to complete transformation to innocuous byproducts), making natural attenuation unsustainable, or do not accumulate (transformation exceeds waste loading, with sufficient energy to complete transformation), making natural attenuation sustainable. (Modified from Chapelle, F. H. et al., A Framework for Assessing the Sustainability of Monitored Natural Attenuation, US Geological Survey Circular 1303, 2007.)

8.4 Alternative Remedial Endpoints and Metrics


8.4.1 Motivation
Despite advances in technologies applicable to groundwater remediation (many termed
innovative technologies), aquifer restoration for sites with complex geologic and contami-
nant characteristics has rarely been achieved. Several national advisory panels have stud-
ied the difficulties associated with cleanup of contaminated groundwater, including the
particular problems posed by DNAPLs, and have issued summary reports of their find-
ings. In 1994, the National Research Council (NRC) completed the report “Alternatives for
Ground Water Cleanup.” This report recommended that sites be categorized according to
the “Relative Ease of Cleaning Up Contaminated Aquifers as a Function of Contaminant
Chemistry and Hydrogeology” and gave an example of such a categorization scheme
(NRC 1994, Table ES-1), which clearly indicates that DNAPLs are the most difficult type
of contaminant problem to clean up (NRC 1994, p. 5). Among other findings, this report included the following regarding “Setting Cleanup Goals” (NRC 1994, p. 18):

“Existing procedures for setting ground water cleanup goals do not adequately account
for the diversity of contaminated sites and the technical complexity of ground water
cleanup. . . Although the committee recognizes that different agencies must operate
under different authorities, all regulatory agencies should recognize that ground water restoration to health-based goals is impracticable with existing technologies at a large number of sites.”

The NRC findings also indicate that there is considerable uncertainty about whether goals
established under existing procedures are overprotective or underprotective of public
health and the environment and about the overall costs to society when these goals can or
cannot be achieved.
As mentioned above, DNAPLs present unique challenges to site remediation because
of their physical and chemical properties. As of 2003, there were no documented, peer-
reviewed case studies of DNAPL remediation below the water table where concentrations
were permanently reduced below MCLs (U.S. EPA 2003). The following excerpt describing
the general recalcitrance of DNAPLs to remediation is taken from U.S. EPA (2009a):

“Due to their specific gravity, DNAPLs tend to sink in the subsurface. Their migration
pathways tend to be complex and hard to predict due to the heterogeneous nature of the
underlying soil and fractured bedrock. As a result, a complicated DNAPL architecture
(shape and size) can develop that is made up of pools, ganglia, and globules in multiple
soil layers and bedrock fracture zones. And because of their low solubility, tendency to
displace water from larger soil pores, and tendency to diffuse into silt and clay [and the
bedrock matrix (added)], DNAPLs can release dissolved constituents for long periods of time
forming large groundwater plumes. Constituents in the migrating plume can diffuse into
aquifer materials under certain conditions only to back diffuse out at a later time.”

Matrix diffusion is the term given to the phenomenon in which DNAPL constituents diffuse into low-permeability layers only to back-diffuse into higher-flux zones once the concentration gradient has been reversed. Matrix diffusion and matrix storage of DNAPL constituents
are most problematic when transmissive zones comprise a small fraction of the aquifer’s
total volume, which is most often the case in fractured bedrock settings. Where significant
contaminant mass has diffused into the bedrock matrix, attainment of MCLs in bedrock
fractures may take hundreds of years or more. Case studies regarding matrix diffusion
and DNAPL contamination of bedrock are presented in Section 8.4.2, related to technical
impracticability.
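
A crude scoping calculation shows why matrix diffusion stretches cleanup times so dramatically. In the hedged Python sketch below (all parameter values hypothetical), the characteristic time for a solute to diffuse a distance L into, or back out of, the rock matrix is taken to scale as t ≈ R·L²/De.

# Back-of-envelope matrix-diffusion time scales; all values hypothetical.
De = 1e-10                 # effective diffusion coefficient in the matrix, m^2/s
R = 5.0                    # matrix retardation factor from sorption
seconds_per_year = 3.15e7

for L_cm in (1, 5, 10, 30):
    L = L_cm / 100.0                        # penetration depth, m
    t_yr = R * L**2 / De / seconds_per_year
    print(f"penetration depth {L_cm:3d} cm -> characteristic time ~{t_yr:,.0f} years")

Even centimeter-to-decimeter penetration depths imply years to centuries of back diffusion, consistent with the time frames cited above.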
Taking these challenges into consideration, the Executive Summary of a 2003 national
panel report on DNAPL remediation provides the following conclusions regarding
“Appropriate Metrics for Performance Assessment” (U.S. EPA 2003, p. xi):

“The Panel assessed the technical basis for using drinking water standards, such as
Maximum Contaminant Levels (MCLs), as the single performance goal for successful
DNAPL source-zone remediation and the use of chemical analyses in ground water
samples from monitoring wells as the primary metric by which to judge performance
of ground water remediation systems. Although an MCL goal may be consistent with
prevailing state and federal laws for all ground water considered a potential source of
drinking water and is a goal that is easily comprehended by the public, this goal is not
likely to be achieved within a reasonable time frame in source zones at the vast major-
ity of DNAPL sites. Thus, the exclusive reliance on this goal inhibits the application of
source depletion technologies because achieving MCLs in the source zone is beyond the
capabilities of currently available in situ technologies in most geologic settings.”

However, despite the recommendations of the panel, since 2003 there has been renewed
interest in DNAPL source-zone remediation because of recent experiences with aggressive
technologies such as in situ thermal remediation and in situ chemical oxidation (both of
which are profiled earlier in this chapter). In a 2009 status update report, the U.S. EPA
declares that there are now five DNAPL sites where MCLs have been achieved through
source-zone remediation (U.S. EPA 2009a). However, it is important to note that none of
these five sites involved contamination of bedrock groundwater, which remains the most
complex remedial scenario. In addition, it is questionable whether two of these sites should
even be classified as complex because of limited contaminant mass (Dry Clean USA No.
11502, Orlando, FL, and Pasley Solvents and Chemicals, Inc., Hempstead, NY). In general,
the most complex sites will have large, deep pools of DNAPL and/or discharges of DNAPL
to bedrock (U.S. EPA 2009a). The most complex project highlighted in the U.S. EPA report
is the Visalia Pole Yard remediation, the true benefits of which are examined in Section 8.5.
Regardless of these few successes profiled by the U.S. EPA, the recommendations of the
2003 national panel report remain valid and should be given further consideration. The
U.S. EPA (2000) estimates that site owners will spend billions of dollars over the next sev-
eral decades remediating chlorinated-solvent contamination. Similarly, the Department of
Defense has an estimated 3545 sites requiring further investigation and remediation, a num-
ber of which have high complexity (Deeb et al. 2011). Therefore, given the paucity of com-
plex DNAPL sites that have been remediated to drinking-water standards, there remains
great demand for alternative remedial endpoints and metrics that can be used to set real-
istic performance expectations while also protecting human health and the environment.
Deeb et al. (2011) present an overview of alternative remedial endpoints (i.e., other than
MCLs) discussed in published literature, listing the following available options:

• Technical impracticability (TI) waivers


• Other applicable or relevant and appropriate requirement (ARAR) waivers (greater
risk, interim measures, equivalent standard of performance, inconsistent applica-
tion of state standards, fund balancing)
• Alternative concentration limits (ACLs)
• Groundwater management/containment zones
• Groundwater reclassification/classification exemptions
• MNA over long time frames
• Adaptive site management
• Remediation to the extent practicable

The rest of this section highlights important concepts and visualizations related to TI
waivers, risk-based alternative endpoints, and mass flux as an emerging remedial metric.

8.4.2 Technical Impracticability


Figure 8.51 is a paraphrased e-mail from an independent technical reviewer of groundwa-
ter remediation efforts at a karst site in the United States. The example concerns a karstified carbonate aquifer and a spring impacted by concentrations of TCE, DCE, and VC significantly above MCLs. A pump and treat system is in place,
and the site environmental consultant has completed numerous other investigatory and
remedial measures, including several rounds of ISCO injections at multiple locations and
dye-tracer tests. The site consultant has contracted the independent reviewer for help in

Jane Doe, PG, PhD

To: jsmith@environmentalconsulting.com
Subject: Re: More injections?

Hi John,

I just glanced through the graphs you sent and saw some telling things (first of all, the presentation of data is very nice and illustrative – you did a great job on that!):

• Some wells responded more favorably, some seem not impacted at all by the injections.

• Unfortunately, the spring is responding only marginally to everything you are trying. I noticed cycling of the concentration at the spring which seems more related to the pumping than any injections – when you are pumping more, the concentrations at the spring are going down and vice-versa; this seems to indicate pumping is effective in containing a portion of the plume. However, the concentrations at the spring are still well above MCL regardless of pumping/no-pumping. You could try to estimate which areas were “arrested” by the pumping and did not migrate to the spring; maybe this is possible by looking at the hydraulic head data. Unfortunately, in karst this is not a slam-dunk. It also seems that the spring did not respond at all to the injections.

• You are getting accumulation of DCE and VC in quite a few places. This may indicate that less oxygenated portions of the aquifer are not favorable for degradation of the two (e.g., groundwater is not receiving enough oxygenated recharge from the land surface, is not moving as fast, etc.). This needs more analysis such as correlating DO-ORP data with the locations of these wells and then what else do we know about the wells (lower relative hydraulic conductivity, or less response to rainfall, or both). Well MW-XY is a good example of accumulation of the two COCs but the DO-ORP data show reasonably high values so the hypothesis does not hold here…

• Some wells like MW-YZ are only seeing what is happening upgradient and are not being remediated by the injections (Figure 8.52). This well shows the same TCE concentration before and after the injection, and a slug of DCE and VC that went through; the well quickly returned to its pre-injection levels, still significantly higher than MCL. Wells like this one can tell us something about how quickly things are moving through the aquifer if we can (with some degree of confidence) find out connections between the injection point(s) and the well; the timing of the breakthrough curve may tell us about the groundwater velocity.

I can probably add a few more things if I spend more time on the review (you have to tell me how much you can bear in terms of time/cost). However, the bottom line is this:

• If the measure of ultimate success is the spring below MCL, I think it cannot be achieved any time soon even with repeated injections. The spring graphs and the whole history tell us this.

FIGURE 8.51
Paraphrased e-mail from Jane Doe, PG, PhD (the independent technical reviewer, fictitious name), to John
Smith (the environmental consultant for the site, fictitious name) regarding potential future injections at the
karst site.
• There are portions of the aquifer still contributing significant concentrations to the groundwater and ultimately to the spring; those are currently not showing any response to the injection. It is important to figure out why this is happening. It may be that the stuff diffused into the rock matrix and is contributing back significantly; or the aquifer is tighter in places and the flow velocity is low, or both. One thing is certain though – so far no one seems to be able to point to the exact locations of these “contributors” (I do not want to use the word DNAPL since someone may want to kill me…). Consequently, repeating the injection yet again will not solve the problem for sure.

More questions than answers, but my immediate recommendation is this: don’t repeat the same injection protocol (same locations, etc.) without trying to answer four critical things (and these HAVE to be discussed with the client, and possibly the regulators knowing they are very adversarial to anyone bringing up technical impracticability…):

(1) What is the measure of ultimate success of remediation (MCL or ?)
(2) Can we guarantee it?
(3) How much money do we have to spend to achieve what we guarantee?
(4) Is there a reasonable alternative still protective of human health and the environment?

Please let me know how to proceed and when you want to have a call.

Thank you,
Jane

FIGURE 8.51 (Continued)

FIGURE 8.52
Graph of VOC concentrations at well MW-YZ referenced in the fourth bullet of the e-mail presented in Figure
8.51.

determining what the next remedial steps should be, specifically if another round of ISCO
injections is merited.
Following is an excerpt from a five-year review report for a Superfund site that illus-
trates a similar point:

“The TCE mass calculations in the ‘Final Comprehensive Groundwater FS for OU 1, SAIC, April 2008’ determined that there are millions of pounds of TCE in the SIA
[Southeast Industrial Area] aquifer system. The calculations determined that in dis-
solved phase, sorbed phase, free phase and diffused mass of the residuum interval at
the SIA, there could be 23,874,125 pounds of TCE. Even if these calculations are off by
50% there is a tremendous amount of contamination present in the SIA aquifer system.
If only one million pounds of TCE is present, removing 625 pounds/year would take
1600 years to remove one million pounds if the removal and flow rate remain constant
which is unlikely. If the removal rate could be increased to 5,000 pounds/year it would
take 200 years to remove one million pounds, which is likely only a fraction of the con-
taminants present. Considering the contaminant mass, complexity and heterogeneity
of the aquifer, and the limited success of previous remediation technologies, it appears
that the SIA aquifer system is going to remain grossly impacted for hundreds of years.
It would seem that a more effective course of action, rather than treatment actions that
have a low expectation of real, positive results, would be to focus on the protection of
off-site receptors by increased off-site monitoring” (USACE 2010).

What the above five-year review failed to emphasize is that this is a typical karst site with a major impacted karst spring (flow rate > 30 million gallons per day or 1.3 m³/s), hydraulic conductivity of deep preferential flow paths (>300 ft below ground surface) greater than 1,000 and even 10,000 ft/day, and contaminated groundwater detected
deep in the karst aquifer with COC concentrations several orders of magnitude higher
than MCLs. In spite of the overwhelming evidence of karst, the U.S. EPA is still character-
izing the impacted groundwater system as residuum, weathered bedrock, and competent
bedrock (Wischkaemper 2007). The failure to classify the system as karst is a fundamental
error in the CSM that obscures the remedial decision-making processes.
This site has been investigated for more than two decades now, including multiple itera-
tions and versions of the RI report, FS, groundwater-remediation pilot tests, and interim
correction measures including pump and treat, with the estimated cost to this point
upward of $100 million. Despite all this, the U.S. EPA has not yet issued a ROD for the
site. In other words, the agency has not yet decided what the cleanup of the site should be.
Additional investigations and pilot tests are being planned and are likely being performed
as this is being written, partially because previous pilot tests were qualified as “hastily
conceived and implemented” (Wischkaemper 2007).
The above two karst examples are classic cases in which remediation to MCLs is technically impracticable, meaning that remedial actions are either infeasible from an engineering perspective or unreliable in terms of meeting ARARs. One of the authors participated on a panel on technical impracticability held at a national groundwater conference co-organized by the U.S. EPA. Incidentally, the same $100 million site was a subject of discussion. After a remark by the author that the more than $100 million spent on the site so far could have been used to better the lives of the people living in the impacted community, including building libraries, schools, and possibly even a regional opera house (the community is drinking safe water provided by the PRP), one of the U.S. EPA regulators replied that people in Europe cannot drink water from their faucets because it is contaminated.

After this perplexing statement by the U.S. EPA regulator, the panel discussion changed
the subject.
The previous two examples are not exceptions. There are numerous other sites in the
United States contaminated with DNAPLs in difficult hydrogeologic settings such as fractured rock and karst aquifers, with a very similar history of failed attempts to clean groundwater to MCLs (drinking-water standards). The U.S. EPA recognized this early on and, in
1993, issued a directive titled “Guidance for Evaluating the Technical Impracticability of
Ground-Water Restoration” (U.S. EPA 1993).
The following citation is from a memorandum by Elliott P. Laws, assistant administra-
tor of the U.S. EPA dated July 31, 1995, and addressed to the regional administrators of all
10 U.S. EPA regions at the time and the directors of various programs within the regions:

“During our meeting, we discussed the fundamental changes that have occurred in the program’s approach to sites with contaminated groundwater where contamination may be ‘technically impracticable’ to restore to drinking water standards (e.g., where
contaminants such as dense non-aqueous phase liquids (DNAPLs) warrant our use
of a waiver of Federal and/or State clean-up standards (ARARs)). Based on the infor-
mation now available on the special problems associated with DNAPL sites, OSWER
expects that Technical Impracticability (TI) waivers will generally be appropriate for
these sites. These situations demand a flexible, phased approach to groundwater reme-
diation such as use of interim RODs, “no action” alternatives, natural attenuation, TI
waivers, etc.
To reiterate a major point of our discussion, I expect each Region to employ the TI
waiver in appropriate remedy selection documents this fiscal year. I am concerned with
preliminary data, indicating that about 30 out of 90 groundwater RODs planned for this
fiscal year address sites with DNAPLs, but fewer than 10 TI waivers of ARARs have
been planned for these RODs to date. I am concerned that these RODs may not fully
reflect the current state of information about sites with DNAPLs present.
Beginning immediately, RODs addressing DNAPL contamination that do not follow
the policy in favor of TI waivers at such sites must include written justification for that
departure from this policy. If you feel the data are incomplete on whether a TI waiver
is justified, or that there is insufficient time this fiscal year to coordinate ROD changes,
I am directing you to utilize an interim ROD or to postpone signing the ROD until
the data become available and/or sufficient coordination among Federal/State/Tribal/
community/PRP/other stakeholders can occur. I will adjust Regional Superfund accom-
plishment planning targets accordingly.
Our Superfund policy guidances recognize that we can protect our groundwater
resources and, at many sites, remediate large quantities of contaminated groundwater.
However, they also identify situations, such as those described above, where techni-
cal, time, and cost limitations demand a more limited approach. I want to be sure you
are taking command of these critical groundwater remedy selection decisions at both
Federal facility and non-Federal facility Superfund sites. I have asked the Headquarters
Superfund Regional Coordinators to follow up with Regional staff on this and other key
remedy selection issues (land use designation, presumptive remedies, and adherence to
lead policy) over the next few weeks” (Laws 1995).

In an extraordinary rejection of this policy statement, as of 2010 only 57 additional TI waivers for groundwater contamination had been issued since the first and only U.S. EPA TI
directive was published in 1993, equating to an annual average of 3.6 waivers per year
(Deeb et al. 2011). For reference, 20 TI waivers were issued prior to the directive’s issuance
in 1993 with an annual average of 3.3 waivers per year between 1988 and 1993. Thus, the

annual number of TI waivers issued is approximately equal before and after the TI direc-
tive. As the number of ROD documents has decreased over time, it is also important to
look at the TI statistics on a percent-selected basis. Figure 8.53 shows that the percent of
CERCLA decision documents granting TI waivers increased only by approximately one
percentage point in the late 1990s, which may or may not be attributable to the 1993 guid-
ance document. More alarmingly, in 2008 and 2009, the number and percent of TI waivers
granted dropped significantly. This recent trend proves that the U.S. EPA has so far com-
pletely ignored the recommendations of its own Ground Water Task Force (2007):

“The decision-making process involved in determining cleanup goals appropriate for a DNAPL source zone, and whether remediation efforts should be undertaken to remove
or treat the DNAPL source zone was a common theme in the comments received. Most
task force members agreed that the current Superfund guidance on technical imprac-
ticability (TI) should be updated. Also, there was support for identifying mechanisms
for acknowledging complex site conditions that would be useful in the decision-making
process for cleanup programs other than Superfund.
Recommendation. Develop guidance on how to acknowledge technical limitations
posed by DNAPLs in EPA cleanup decisions, including updated guidance on the use
of technical impracticability (TI) decisions in the Superfund program. The guidance
should also discuss mechanisms for acknowledging technical limitations posed by site
complexities other than DNAPLs.
Long-term cleanup goals for Superfund sites and RCRA Corrective Action facilities
do not always include attaining MCLs throughout the plume. For ground waters that
are not designated by states as current or future sources of drinking water, drinking
water standards are generally not used as cleanup levels and alternative cleanup goals
are typically established, such as control of sources and containment of the plume. Also,
where the remedy calls for on-site management of waste materials (such as a landfill),
cleanup levels generally do not need to be attained in ground water beneath the waste
management area. In such cases, attaining MCLs throughout the plume applies only to
that portion of the plume outside the waste management area.

FIGURE 8.53
TI waivers for groundwater contamination granted between 1988 and November 2010, using data published in
the work of Deeb et al. (2011).

Furthermore, both the Superfund and RCRA Corrective Action programs generally
allow alternative cleanup goals to be established at sites where attaining MCLs through-
out the plume is determined to be technically impracticable (TI). Both of these EPA
cleanup programs also establish alternate cleanup limits (ACLs) in lieu of MCLs, under
appropriate circumstances. However, ACLs defined under CERCLA are somewhat differ-
ent from those in RCRA Corrective Action. Some state cleanup programs have provisions
for establishing contaminated ground water containment or management zones. Within
such a zone, active cleanup of contaminated ground water may be deferred or may not be
required. The specifics of how containment or management zones are defined, and what
alternative cleanup goals are applied, differ from state to state.
For the reasons discussed above, sites where DNAPLs are present in the subsurface
are very difficult to clean up to drinking water standards. Cleanup technologies appli-
cable to these sites often include individual approaches or various combinations of
approaches intended to control migration of contaminants (containment), remove con-
taminants from the subsurface (extraction), or treat contaminants in place (in situ treat-
ment). Each of these technology types have been used (with varying degrees of success)
on DNAPLs in the source zone or on dissolved contaminants in the plume.”

Unfortunately, not only did the agency fail to update its own TI waiver policy and thus
withhold related technical assistance to working professionals, but it appears that any
references to TI, including related case studies, are completely absent from various U.S.
EPA Web sites. One cannot find any list of RODs where TI is a part of the remedy and can-
not use any search engine to learn more about the TI process or somehow stumble upon a
Superfund site with the issued TI waiver. The agency has mostly ignored TI in its annual
reviews of applied technologies or any other technical publication (U.S. EPA 2010). It is also
indicative that some U.S. EPA regions have issued only a handful of TI waivers in almost
20 years, and region 4 has yet to issue a single standing TI waiver. Interestingly, region 4
covers several main karst states, including Tennessee, Kentucky, Alabama, and Florida. It
therefore appears that U.S. EPA region 4 is afraid of setting a precedent by acknowledging that groundwater remediation at a karst site may be technically impracticable. This policy is,
for the most part, followed by the states in region 4, including at their own sites. The only
known document to provide a consolidated listing and description of sites with TI waivers
is that of Deeb et al. (2011), which is a study funded by the ESTCP through the Department
of Defense, which, as previously mentioned, has a great number of complex sites for which
cleanup would be funded by federal tax dollars.
The decreasing trend in annual TI waivers granted and the absence of substantive tech-
nical guidance regarding TI indicate that the U.S. EPA adopted an informal policy against
the use of TI waivers many years ago. This policy was formalized through two U.S. EPA
documents published in summer 2011. The first document, published in July 2011, is a
Groundwater Road Map that summarizes the groundwater evaluation and remediation
process at Superfund sites through a conceptual flowchart and related narrative (U.S. EPA
2011f). This road map document is included in the companion DVD for the reader’s ben-
efit. Surprisingly, TI documentation is an element of the road map. However, the road only
leads to TI after the following steps have been completed:

• Remedial design
• Remedial action
• Remedy operation complete with performance monitoring, five-year reviews, and
potential system optimization

• A feasibility evaluation for other technologies in the event that remedial goals are
not being achieved

In other words, the road map indicates that TI is a last resort option only to be considered
after remedial actions have been conducted. Even if the best available technology has a
very low probability of success in meeting MCLs (e.g., karst sites with DNAPL), some form
of a costly remedial effort is recommended.
The second document, published in September 2011, is a memorandum from the direc-
tor of the U.S. EPA Office of Superfund Remediation and Technology Innovation clarifying
the previously referenced July 31, 1995 memorandum from Elliott P. Laws, the former assis-
tant administrator of U.S. EPA. The following text is taken directly from the September
2011 memorandum (Woolford 2011):

“This (memorandum) is to clarify that: 1) the 1995 memorandum was intended to apply
only to remedy decisions made in Fiscal Year 1995 and 2) DNAPL contamination in and
of itself should not be the sole basis for considering the use of a TI waiver at any given
site. Improvements in science and technology have shown much progress in characteriz-
ing and successfully treating/extracting DNAPLs from subsurface areas. Recent remedy
decisions have selected a variety of in-situ technologies to address DNAPL contamination
such as in-situ chemical oxidation, in-situ bioremediation and in-situ thermal remedia-
tion. . .For the reasons stated above, the 1995 memorandum entitled, Superfund Groundwater
RODs: Implementing Change This Fiscal Year, July 31, 1995 (OSWER Directive 9335.3-03P)
should no longer be considered when making current site decisions (emphasis added).”

This memorandum rescinds the 1995 mandate that U.S. EPA regions utilize TI waivers at
DNAPL sites or provide written justification otherwise. While it seems as though the 1995
directive was never truly followed, U.S. EPA rejection of its policy prescription is now in
writing. In addition, the memorandum formally endorses the use of aggressive in situ tech-
nologies such as thermal remediation and ISCO despite limited evidence that these tech-
nologies lead to cost-effective, better outcomes in the long term. While this memorandum
is decidedly anti-TI, there is one important statement, quoted below, that conflicts with the
road map publication:

“In situations where groundwater restoration is unattainable from an engineering perspective, considering a TI waiver may be an appropriate part of the remedy selection process” (Woolford 2011).

In other words, a TI waiver can theoretically be obtained at the remedy selection phase
in lieu of aggressive remedial action. While DNAPL alone does not imply TI, “sufficient,
science-­based justification” can be used to invoke a TI waiver where appropriate (Woolford
2011). The U.S. EPA should therefore revise its road map to reflect that TI is a viable option
at the initial remedy selection phase. Regardless of this caveat, it is likely that the 2011
memorandum will further diminish the usage of TI at complex Superfund sites.
As a consequence of inadequate technical and regulatory guidance from U.S. EPA,
including inconsistencies between various regions regarding TI waivers, working profes-
sionals developing CSMs for groundwater remediation at difficult sites are left between a
rock and a hard place. It is therefore not surprising that many of them are referring to TI as
“PI” (political impracticability) and are producing for-internal-use-only flowcharts similar
to the one shown in Figure 8.54. Note that a colleague not working for either of the authors’

FIGURE 8.54
Political impracticability. Because of the lack of substantive guidance, regulations, or general policy, decisions
cannot be made without wasteful spending on excessive field investigation and data analysis.

consulting companies volunteered this figure; this colleague wishes to remain anonymous for fear of alienating some of the regulators he or she must interact with on a regular basis.
One of two criteria needs to be met in order to apply for a TI waiver: (1) engineering
infeasibility or (2) unreliability (U.S. EPA 1993). A remedial action can be considered infea-
sible from an engineering perspective if current engineering methods designed to meet
the ARARs cannot be reasonably implemented. An action can be considered unreliable if
it is shown that existing remedial alternatives are not likely to be protective in the future.
Together, these two criteria define technical impracticability from an engineering perspec-
tive. Furthermore, a TI waiver would only be granted based on demonstration that cleanup
cannot be achieved within a reasonable time frame using the best available technology.
As discussed by the U.S. EPA, a reasonable time frame for restoring groundwater to ben-
eficial uses depends on the particular circumstances of the site and the restoration method
employed. A comparison of restoration alternatives from the most aggressive to passive
will provide information concerning the approximate range of time periods needed to
attain groundwater cleanup levels. An excessively long restoration time frame, even with

the most aggressive restoration methods, may indicate that groundwater restoration is
technically impracticable from an engineering perspective (U.S. EPA 1996b). Notably, how-
ever, the various U.S. EPA regions have differing views on the use of TI waivers, including
the definition of reasonable.
It should be noted that a reasonable time frame is sometimes generically taken to be 100 years, based on the order-of-magnitude number provided by the U.S. EPA (1993).
However, there is no accepted definition of reasonable as applied to groundwater restora-
tion because it is dependent on the applicable technologies and site-specific conditions,
such as hydrogeology. This leads to confusing and contradictory interpretations of what
reasonable really means. In some instances, stakeholders have interpreted time scales on
the order of 600 years to be reasonable, whereas others have viewed time scales greater
than 30 years to be technically impracticable (Deeb et al. 2011).
While there is little that can be done if a cleanup time frame of several hundred years is considered reasonable, there are two primary methods of demonstrating technical impracticability at a site: field methods and modeling-based methods. Either approach can be applied before implementing a remedial measure. A case study in each method is presented below.

Case Study: Use of Field Methods to Demonstrate Technical Impracticability


A case study in DNAPL matrix diffusion is presented below from one of the earliest
known full-scale field studies on the subject completed by AMEC E&I, Inc. The following
is adapted from Thompson et al. (2000).
The Loring Air Force Base Superfund site is situated in northern Aroostook County, ME,
approximately 3 mi from the United States/Canada border. Extensive aircraft maintenance
and refueling operations along the flight line resulted in widespread chlorinated solvent
and jet-fuel contamination in groundwater over approximately 235 acres. An extensive site
investigation was conducted using a variety of methods, including the installation of 449
bedrock wells and borings completed between 1988 and 1997. Because of its large size, the
site is divided into many operating units (OUs), which are essentially subsites with distinct
cleanup plans. Most of the bedrock at the site consists of argillaceous limestone identi-
fied as the Carys Mills Formation and assigned to the upper Ordovician to lower Silurian
age. Bedrock is overlain by approximately 32 ft of glacial till. The DNAPL entry zone was
within a shallow river channel, and groundwater vertical gradients are upward.
An essential component of the field investigation was the collection and analysis of rock
core samples for VOC and matrix porosity analysis to quantify the contaminant mass dif-
fused in the rock matrix. The vertical distribution of contaminant mass in the rock matrix
within the residual DNAPL source area was determined by collecting methanol extracted
rock chip (MERC) samples from rock core adjacent to the fractures. Rock chip samples (see
photograph in Figure 8.55) were collected from the fracture margin for methanol extrac-
tion and chemical analysis of VOCs and organic carbon content. Rock cores (Figure 8.55)
were also collected from the fracture margin for determination of matrix porosity by a
geotechnical laboratory using American Petroleum Institute recommended methods.
A graph of VOC concentrations measured in the MERC samples is presented in Figure
8.56. Groundwater analytical results from low-flow groundwater straddle packer sampling
are consistent with rock matrix data and generally show a decreasing pattern of dissolved
VOC concentrations with increasing depth. Although a majority of the contaminant mass
is in the upper bedrock, groundwater impacts persist to depths of more than 200 ft into
bedrock and are a function of deep DNAPL penetration along several steep bedding plane
fractures.

FIGURE 8.55
Left: Rock chip samples for methanol extraction (MERC) and determination of matrix diffusion. Right: Rock
cores for determination of matrix porosity by helium and mercury intrusion methods. (Courtesy of Peter
Thompson, AMEC.)

Matrix porosity analysis was conducted for 43 rock samples initially selected from 11
cored boreholes within OU 12. The resulting porosity measurements ranged from 0.2%
to 2.4%. Eighty-eight percent of these values were less than 0.9%. A weak correlation of
decreasing matrix porosity with increasing depth was evident (sample depths ranged
between 20 and 300 ft below ground surface). Two methods of determining porosity were
used: low-pressure helium injection and high-pressure mercury injection. The mercury
method confirmed the results of the helium method.
Additional rock matrix samples were collected at a later date from a different part of
OU 12, termed the Quarry Site, where PCE DNAPL was inferred in the fractured bedrock.
Weathered rock matrix surrounding specific fractures exceeded 3% porosity and, when
extracted and analyzed, showed the presence of substantial contaminant mass in some
samples.

FIGURE 8.56
Concentration of COCs diffused into rock matrix; composite of six borings. (Courtesy of Peter Thompson,
AMEC.)

In 1999, the OU 12 ROD was finalized with remedial alternatives selected for 18 ground-
water plumes. This included two TI waivers based on the presence of DNAPL in fractured
bedrock at one site and matrix diffusion in weathered bedrock at another. The field inves-
tigation described above successfully demonstrated that the presence of significant VOC
mass in the rock matrix made remediation to MCLs infeasible. Without this conclusive,
field-based proof, it is likely that the TI waivers would not have been obtained, and some
form of aggressive in situ source remediation would have been erroneously applied.

Case Study: Use of Modeling to Demonstrate Technical Impracticability


Contaminant fate-and-transport modeling can be used to demonstrate technical impracti-
cability by evaluating potential differences in cleanup time frames after applying various
remedial measures. Typically, this involves an assumption regarding mass removal in the
source area. For example, the modeler may assume that in situ thermal remediation is able
to remove 90%–99% of the source area contaminant mass in one year of treatment. The
resulting time it takes for the groundwater plume to reach MCLs may then be compared
to that of the baseline alternative. In this manner, it may be determined if the remedial
intervention results in a reasonable cleanup time.
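
The arithmetic below is a deliberately simplified Python sketch of this comparison, assuming (hypothetically) that source-zone concentration declines first-order and scales linearly with remaining mass. Under those assumptions, even a 90%–99% mass removal shortens the time to MCLs by only ln(10)/k to ln(100)/k years, which for slow depletion rates is a modest fraction of the baseline time frame.

import math

k = 0.02       # effective source depletion rate, 1/yr (hypothetical)
c0 = 50000.0   # initial source-zone concentration, ug/L
mcl = 5.0      # cleanup goal, ug/L

def years_to_mcl(c_start):
    return math.log(c_start / mcl) / k

baseline = years_to_mcl(c0)
for removal in (0.90, 0.99):
    t = years_to_mcl(c0 * (1.0 - removal))
    print(f"{removal:.0%} removal: ~{t:.0f} yr vs. baseline ~{baseline:.0f} yr "
          f"(saves ~{baseline - t:.0f} yr)")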
The following text describes a real-life modeling exercise conducted to evaluate how
cleanup times are affected by an aggressive in situ remediation in a karst aquifer system.
At the site in question, a significant DNAPL mass resides in the residuum in multiple dispersed source areas, and dissolved-phase contaminant concentrations in the bedrock aquifer are several orders of magnitude above MCLs in these areas. A dissolved-phase plume
more than 1 mi in length has developed in the bedrock aquifer. No DNAPL was observed
in the bedrock aquifer during site investigation work; however, based on dissolved-phase
concentrations significantly above 10% of the contaminant solubility limit, it is highly
likely that DNAPL is present in the bedrock. To be consistent with the field investigation
findings and avoid potential controversy, all DNAPL was assigned to the residuum layer
in the model. Even with this liberal assumption, baseline modeling scenarios indicate that
it will take more than 1000 years for natural attenuation processes to reduce concentrations
to below MCLs across the site.
An alternative modeling scenario was conducted to simulate the effects of a highly suc-
cessful and aggressive in situ remediation. In this scenario, 90% of DNAPL mass and 100%
of the dissolved-phase mass was removed from all residuum source areas in year 1 (the
first year of the simulation). Once again, this is a very liberal assumption as these levels
of contaminant destruction in a karst residuum represent a highly optimistic remedial
outcome for any technology. Following the year-long remediation, the model predicts it
will take 28.5 additional years for all remaining DNAPL in the residuum to be depleted
by natural dissolution into infiltrating water from areal recharge. While this may seem like a major success, because of the extent and high concentration of the bedrock plume, dissolved-phase concentrations in the bedrock aquifer are predicted to remain significantly above the MCL well beyond 130 years. In some places, concentrations are still more
than two orders of magnitude above the MCL. The results of the source-remediation mod-
eling scenario are presented in Figure 8.57.
For the above site, the model was successful in demonstrating that cleanup could not be
achieved within a reasonable time frame using the best available technology. Therefore, the
aggressive source-zone remediation has not yet been selected, and emphasis was placed
on risk reduction and creating conditions favorable to long-term MNA. Note that this out-
come is by no means guaranteed for a different site with the exact same modeling results.
For example, a remedial measure estimated to reduce the cleanup time to 250 years may

FIGURE 8.57
Fate-and-transport simulation results for the remedial action alternative. All slides show dissolved concentra-
tions of the COC in the bedrock aquifer underlying the residuum. The width of the shown portion of the model
is approximately 5000 ft.

be considered reasonable and practicable versus a 500-year cleanup time estimate under
baseline conditions. It is not clear how a cleanup time of 250 years is reasonable. Consider
for example that 250 years ago the year was 1761, and the United States of America did not
even exist.
There are also definitive technical reasons why choosing between remedies with cleanup
time frames in the hundreds of years is fundamentally wrong. Economically, the majority
of the net present value of any remedial alternative will be captured in the first 30 years of
costs, indicating that long-term remedial cost differences 100 or more years in the future
494 Hydrogeological Conceptual Site Models

are irrelevant. The accuracy of numerical modeling at these time scales is also question-
able, as quoted below from U.S. EPA (1999b):

“The longer the time frame simulated, the greater the uncertainty associated with the
modeling result. While the time to reach remedial objectives at all points in the Joint Site
groundwater will likely be on the order of 100 years, simulations greater than the order
of 50 years into the future are generally not reliable or useful. EPA has used simulations
of 10–25 years for comparing remedial alternatives, even though the remedial action is
not complete in that time frame under any of the alternatives. This provides a measure
of each alternative’s relative performance and progress at 25 years toward meeting the
remedial objectives.”
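
A quick discounting calculation (annual cost and discount rate hypothetical) illustrates the net-present-value point: beyond roughly 30 years, additional years of remedial spending barely change a remedy’s present value.

annual_cost = 1_000_000.0   # $/yr operations and maintenance (hypothetical)
r = 0.05                    # real discount rate

def npv(years):
    return sum(annual_cost / (1.0 + r) ** t for t in range(1, years + 1))

for horizon in (30, 100, 250):
    print(f"{horizon:4d}-yr horizon: NPV = ${npv(horizon):,.0f}")
# At 5%, the 30-yr NPV (~$15.4M) is already more than three-quarters
# of the 250-yr NPV (~$20.0M).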

This U.S. EPA statement directly contradicts other ROD documents in which modeling is
used to justify selection of one remedial approach over another when both alternatives
have estimated cleanup times greater than 100 years. Inconsistent regulatory guidance
regarding technical standards of practice and, more fundamentally, inconsistent interpre-
tation of what constitutes a reasonable time frame will continue to hinder the advancement
of technical impracticability as a viable remedial alternative until social and economic fac-
tors are considered in the cleanup process. This concept is discussed further in Section 8.5.

8.4.3 Risk-Based Cleanup Goals


Human health and ecological risk assessment is an essential component of any site inves-
tigation and remediation project. In fact, the entire remedial process is predicated on the
assumption that the COCs present unacceptable risks to human health and/or the envi-
ronment. Risk assessment at any site is composed of four primary steps:

• Hazard identification (through data collection and evaluation)
• Exposure assessment
• Toxicity assessment (or dose-response assessment)
• Human health and ecological risk characterization (U.S. EPA 1989b)

The end result of a human health–risk assessment is a hazard index (HI) for noncarcinogens and an excess lifetime cancer risk (ELCR) for carcinogens. Typical risk limits for lifetime exposure are a cumulative HI of 1 and an ELCR between 10⁻⁴ and 10⁻⁶ (U.S. EPA 1991). An ELCR of 10⁻⁶ equates to one excess cancer per 1 million exposed individuals.
These ELCR thresholds are extremely conservative considering that in the United States
today, men have slightly less than a one in two lifetime risk of developing cancer, while
women have slightly more than a one in three lifetime risk of developing cancer (ACS 2011).
Additional discussion on the public health benefits of remediation is presented in Section
8.5. Ecological-risk characterization is generally more difficult to quantify at a site-specific
level as it often requires fish tissue analyses and other methods. Therefore, ecological-risk
characterization is often based on the comparison of contaminant concentrations in sur-
face water and sediment to established regulatory ecological benchmark values.
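
For readers unfamiliar with how the two metrics are assembled, the Python sketch below computes an HI and an ELCR for a single chemical via the drinking-water ingestion route using the standard chronic-daily-intake form; every exposure and toxicity value shown is hypothetical.

c = 0.05      # concentration in water, mg/L
ir = 2.0      # water ingestion rate, L/day
ef = 350.0    # exposure frequency, days/yr
ed = 26.0     # exposure duration, yr
bw = 80.0     # body weight, kg
rfd = 0.003   # reference dose, mg/kg-day (noncancer)
sf = 0.02     # cancer slope factor, (mg/kg-day)^-1

# Chronic daily intakes: noncancer averaged over the exposure duration,
# cancer averaged over a 70-yr lifetime.
cdi_nc = c * ir * ef * ed / (bw * ed * 365.0)
cdi_ca = c * ir * ef * ed / (bw * 70.0 * 365.0)

hi = cdi_nc / rfd      # hazard index (limit: 1)
elcr = cdi_ca * sf     # excess lifetime cancer risk (limit: 1e-6 to 1e-4)
print(f"HI = {hi:.2f}; ELCR = {elcr:.1e}")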
A fundamental element of the CSM and the human health– and ecological-risk assess-
ment is the conceptual exposure model, composed of the following elements:

• Contaminant release mechanisms to media of concern
• Contaminant migration pathways, describing the interconnection of relevant media
• Receptors and exposure pathways, such as ingestion, inhalation, dermal contact, and food chain accumulation

The conceptual exposure pathway may be presented as a traditional schematic/flowchart (Figure 8.58) or as a cartoon (Figure 8.59). The goal of these figures is to clearly demonstrate
the components of the conceptual exposure model and to determine whether or not a com-
plete exposure pathway exists that merits risk quantification.
At CERCLA sites, risk assessment is one of the two sources of chemical-specific prelimi-
nary remediation goals, which are long-term cleanup targets used during analysis and
selection of remedial alternatives. The other source is concentrations based on ARARs,
including MCLs set under the Safe Drinking Water Act (U.S. EPA 1991). Where current
or potential drinking water is not a beneficial use of groundwater at a site, risk-based
cleanup levels can represent an alternative remedial end point to MCLs. For example, risk-
based decision making has been successful in expediting the cleanup of Underground
Storage Tank (UST) sites that pose a low risk to human health and the environment (U.S.
EPA 1995). It is the hope that in the future, a similar risk-based approach may be used
for more complex CERCLA sites where contamination presents very low risk to human
health and the environment. Unfortunately, for the time being, it is unlikely that low risk
will be accepted by the regulators as justification for an alternative remedial approach or
end point for a site with groundwater that can be classified as a potential future drinking-
water source.
Regardless, there is increasing interest in using risk-based metrics where MCLs may
traditionally apply. One such mechanism is a greater risk ARAR waiver, which may be
obtained when remedial actions completed to meet an ARAR would result in greater risk
to human health and the environment than an ARAR waiver and selection of another

FIGURE 8.58
Schematic/flowchart of a conceptual exposure model for human health–risk assessment, tracing potential sources (marina releases and spills) through release mechanisms (infiltration/percolation, runoff, vapor emission), media (soil, groundwater, sediment, soil vapor), and exposure routes (incidental ingestion, inhalation, dermal contact, fugitive dust and soil vapor inhalation) to receptors (site visitors, construction workers, maintenance workers). Potentially complete exposure pathways are evaluated in the HHRA; the soil vapor pathway is deemed incomplete because no volatile COPCs are present. (Courtesy of Laura Smith, AMEC.)

FIGURE 8.59
Cartoon diagram of a conceptual exposure model for ecological-risk assessment. (Courtesy of Lisa Campe,
Woodard & Curran, Inc.)

alternative. A groundwater remediation site may be a candidate for a greater risk ARAR
waiver if one or more of the following conditions can be demonstrated (Deeb et al. 2011):

• Greater risk to drinking-water aquifer(s) resulting from potential contaminant mobilization or redistribution during remediation—a particular concern for DNAPL sites.
• Greater risk to nearby wetlands, agriculture, and ecosystems presented by a pump
and treat system that may cause dewatering and/or land subsidence.
• Greater risk to sensitive ecosystems in areas where remediation activities would
cause physical damage and otherwise be a disturbance.
• Greater risk posed by explosive hazards or other health-and-safety hazards asso-
ciated with the only suitable remedial technologies. For example, if ISCO with
catalyzed hydrogen peroxide is the only viable technology, yet it is demonstrated
that gas and heat evolution cannot be adequately controlled such that significant
hazards arise. Other examples would be unacceptable truck traffic, accident risk,
and fugitive dust emissions resulting from a soil excavation.
• Liner or capping requirements that reduce natural flushing and dissolution in the
source zone could potentially extend the time for groundwater to reach ARARs,
resulting in greater risk.

FIGURE 8.60
Remediation site workers must thoroughly protect themselves against exposure to contaminants at high con-
centrations. The risks they face may outweigh those of less intense, nonoccupational on- or off-site exposures.

Another critical and often overlooked example of a situation where greater risk may
apply is related to the occupational exposures of on- and off-site workers who perform
remediation activities. Remedial technologies involving the extraction of concentrated
contaminant streams can be very dangerous to implement and require significant health-
and-safety planning and expenditure (Figure 8.60). In some cases, the occupational risk
faced by these workers may be greater than the theoretical lifetime cancer risk of the tradi-
tional receptors (e.g., potential future consumption of contaminated groundwater) that is
driving remediation in the first place (Holland et al. 2011). Developing risk thresholds for
remediation workers and other stakeholders who may be directly or indirectly at risk of
injury or fatality because of the remediation project is an essential element of sustainable
human health–risk assessment, a term introduced in the work of Holland et al. (2011).
In sustainable human health–risk assessment, the remedial strategy is considered in
light of both the risks posed by the contamination and those presented by the remedial
actions themselves. Another key tenet of sustainable human health remediation is quan-
tification of the life-cycle human health risks associated with a remediation (for example,
risks caused by the manufacture and transportation of remedial materials and waste;
Holland et al. 2011). It is likely that interest in sustainable human health assessment will
greatly increase over the coming years. Additional discussion regarding sustainable reme-
diation concepts is provided in Section 8.5.

8.4.4 Mass Flux


Mass flux is an emerging remedial metric that quantifies source or plume strength at a
certain time and location. More often than not, mass discharge is actually the parameter
of interest; however, mass flux is the more commonly used term. Mass flux is a solute
flow rate measurement specific to an area, which is usually a subset of a plume cross section and is expressed as mass per time per unit area (e.g., g/day/m²). Mass discharge is

calculated by integrating these mass flux measurements across the defined plane or cross
section. Mass discharge, therefore, represents the total mass of the solute conveyed by
groundwater through a defined plane and is expressed in the useful units of mass per time
(e.g., g/day; ITRC 2010).
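
A minimal Python sketch of the transect calculation follows: within each cell of the cross section, mass flux is the Darcy flux times concentration (J = KiC), and mass discharge is the sum of flux times cell area. The layer properties mirror the hypothetical values shown later in Figure 8.62, with an assumed cell area.

cells = [
    # (label, K m/day, gradient m/m, concentration ug/L, cell area m^2)
    ("fine sand",      1.0, 0.003, 10000.0, 20.0),
    ("gravelly sand", 33.3, 0.003, 10000.0, 20.0),
    ("sand",           5.0, 0.003, 10000.0, 20.0),
]

total_mg_per_day = 0.0
for label, K, i, c_ugL, area in cells:
    q = K * i            # Darcy flux, m/day
    j = q * c_ugL        # mass flux, mg/day/m^2 (1 ug/L = 1 mg/m^3)
    total_mg_per_day += j * area
    print(f"{label:13s}: J = {j / 1000:5.2f} g/day/m^2")
print(f"mass discharge through transect = {total_mg_per_day / 1000:.1f} g/day")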
There are three primary methods of mass flux determination as adapted from ITRC
(2010):

• Transect methods, in which individual monitoring points are used to integrate concentration and flow data along the plume cross section of interest
• Well capture/pump test methods, which use flow and concentration data from
extracted groundwater
• Passive flux meters, which are new in situ devices designed to directly estimate mass flux within monitoring wells

Transect methods are the most commonly used because they are easier to implement in
the field, and data are less subject to significant temporal fluctuations. A conceptual dia-
gram of the transect method for mass flux and mass discharge measurement is provided
in Figure 8.61.
Potential uses of mass flux and mass discharge estimates include

• Evaluating source area strength, key to the overall CSM.


• Evaluating plume stability, which is a critical step in MNA feasibility analysis (i.e.,
mass flux can help determine whether MNA will work or not).
• Assessing the relative contribution of different source areas to a contaminant
plume. In this manner, sites may be prioritized in terms of the benefits of remedia-
tion to overall groundwater quality.
• Assessing the relative contribution of different geologic layers within a source
area to a contaminant plume as shown in Figure 8.62. In this manner, remediation
can be performed more efficiently by targeting strata of particular concern.


FIGURE 8.61
Conceptual diagram of the transect method of calculating mass flux and mass discharge. (Adapted from
Interstate Technology and Regulatory Council (ITRC), Use and Measurement of Mass Flux and Mass Discharge,
Technology overview document, 2010. Available at http://www.itrcweb.org/Documents/MASSFLUX1.pdf,
accessed July 20, 2011.)

[Figure 8.62 graphic: mass flux J = KiC computed for three layers downgradient of the source zone; fine sand (K = 1.0 m/day, J = 0.03 g/day/m2), gravelly sand (K = 33.3 m/day, J = 1 g/day/m2), and sand (K = 5.0 m/day, J = 0.15 g/day/m2).]

FIGURE 8.62
Conceptual cross section illustrating the use of mass flux in prioritizing treatment zones. The contaminant concentration (C = 10,000 µg/L) and hydraulic gradient (i = 0.003 m/m) are identical in all three layers. However, variation in hydraulic conductivity (K) leads to a range in mass flux estimates (J) that spans nearly two orders of magnitude, with the gravelly sand layer contributing the greatest contaminant flux to the downgradient groundwater plume. This analysis can help justify remediating the gravelly sand layer first and/or to the greatest extent. (Adapted from Interstate Technology and Regulatory Council (ITRC), Use and Measurement of Mass Flux and Mass Discharge, Technology overview document, 2010. Available at http://www.itrcweb.org/Documents/MASSFLUX1.pdf, accessed July 20, 2011.)

• Predicting and monitoring remedial performance, focusing on identifying the optimal transition points between remedial technologies (e.g., where in situ source remediation has reached a point of diminishing returns and transition to MNA is merited; ITRC 2010).
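The layer values published in Figure 8.62 can be reproduced directly from J = KiC once the concentration is converted to consistent units (10,000 µg/L = 10 g/m3). The short Python check below is merely a verification of the figure's numbers, not a general-purpose tool:

    # Verify the layer fluxes in Figure 8.62 using J = K * i * C.
    i = 0.003            # hydraulic gradient, m/m (identical in all layers)
    C = 10000 * 1e-3     # 10,000 ug/L expressed as g/m3 (1 ug/L = 0.001 g/m3)

    layers = {"Fine Sand": 1.0, "Gravelly Sand": 33.3, "Sand": 5.0}  # K, m/day

    for name, K in layers.items():
        J = K * i * C    # mass flux, g/day/m2
        print(f"{name:14s} K = {K:5.1f} m/day -> J = {J:.2f} g/day/m2")

The gravelly sand layer prints as 1.00 g/day/m2 (33.3 × 0.003 × 10 ≈ 1), confirming that hydraulic conductivity alone drives the nearly two-order-of-magnitude spread among the layers.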

Mass discharge is now an official alternative remedial metric as its use was specified in
the ROD for Superfund Site 12A located in Tacoma, WA, as quoted below from U.S. EPA
(2009b):

“The primary goals for the first tier of [Remedial Action Objective (RAO)] compliance
are to address residual sources, minimize the risk to receptors due to contaminated
surface soils and achieve a contaminant discharge reduction of at least 90% from the high
concentration source area [emphasis added] near the Time Oil building to the dissolved-
phase contaminant plume. Soil removal, ITR [in situ thermal remediation] and EAB [in
situ enhanced anaerobic bioremediation] will be considered complete and the Remedy
will be considered operational and functional when the tier 1 criteria have been met.
Once the tier 1 criteria have been met, the operations and maintenance of OU1 will be
turned over to the State of Washington.”

This specific ROD (U.S. EPA 2009b) is also more progressive in that it provides a frame-
work for discontinuing the existing pump and treat system and transitioning to MNA
following the mass discharge reduction and specifically mentions TI as an option if MNA
cannot be shown to comply with ARARs in a reasonable timeframe. However, consid-
eration of TI would only occur after implementation of the $16,210,000 net present value
source-remediation remedy and, therefore, is something of a moot point (U.S. EPA 2009b).
Regardless, this site strikes a positive chord by using mass discharge as an adaptive site-
management alternative end point in that reaching the 90% mass discharge reduction trig-
gers the transition from one remedial technology or approach to another (and, in this case,
from one regulatory approach to another).
In summary, mass flux and discharge monitoring over time could lead to better compli-
ance strategies, optimized remediation approaches, and updated long-term monitoring
guidance (Kram et al. 2011). However, the fine spatial and temporal resolution of monitoring along one or more transects can make mass flux estimation expensive. Furthermore,
the accuracy of the method is dependent not only on contaminant concentration data but
also on precise characterization of the groundwater flow field (i.e., hydraulic conductivity
and gradient are components of the calculation). For this reason, relative changes in mass
flux are commonly calculated as opposed to depending on absolute numbers.
One novel approach that can significantly improve the utility and accuracy of mass
flux estimation while also reducing the costs of data collection, analysis, and visualiza-
tion is presented in the work of Kram et al. (2011). The patented process, developed by
Groundswell Technologies, Inc., uses integrated environmental monitoring sensors,
telemetry, GIS, and geostatistical algorithms to calculate and visualize mass flux and dis-
charge in real time. This technology has great potential in evaluating remediation system
performance and contaminant discharges from aquifers to surface water receptors. The
software component, named Waiora, is a multimedia data acquisition, management, and
visualization tool with the ability to monitor sensors measuring hydraulic head and solute
concentrations through a Web interface (Kram et al. 2011).
The Waiora platform has a user-friendly GUI that can be used to produce contour maps,
time series plots, 2D and 3D visualizations and animations, transect slices, and statisti-
cal analyses. Waiora can also assist in groundwater model calibration and functions as a
document repository with sharing capabilities that can integrate historical data with real-
time measurements from sensors in the field.
Waiora calculates mass flux through the transect method described in ITRC (2010); how-
ever, the use of sensors and telemetry enables more accurate estimation than traditional
methods. This is because the high density of hydraulic head and solute measurements helps
capture the temporal variability inherent to the groundwater system. Traditional manual
sampling events may fail to capture seasonal effects that increase or reduce contaminant
concentrations or alter hydraulic head distributions. The continuous data processed by
Waiora enables integration of mass flux over time such that the average mass discharge
over durations relevant to the time scale of plume formation can be more accurately quan-
tified. The use of sensors and telemetry also significantly reduces data collection costs, for
example, labor, laboratory costs, and investigatory derived waste management costs, in the
long-term.
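The value of temporally dense monitoring can be demonstrated with a simple synthetic example. The sketch below (invented numbers, not output from the Waiora platform) constructs a daily mass discharge record containing a 30-day storm-driven flush and compares the period-average discharge computed from the full record against an average based on four quarterly grab samples:

    import numpy as np

    # Synthetic daily mass discharge record (g/day): a seasonal cycle plus
    # a short high-discharge flush. Purely illustrative numbers.
    t = np.arange(365)
    Md = 50 + 30 * np.sin(2 * np.pi * t / 365.0)
    Md[60:90] += 120                    # 30-day flush event

    # Dense (daily) record: total mass and time-weighted average.
    total_mass = Md.sum()               # g (1-day spacing, so sum == integral)
    avg_dense = Md.mean()               # g/day

    # Sparse record: four quarterly grab samples, as in manual monitoring.
    avg_sparse = Md[[0, 91, 182, 273]].mean()

    print(f"Total mass through the plane: {total_mass:,.0f} g")
    print(f"Average discharge, daily data: {avg_dense:.1f} g/day")
    print(f"Average discharge, quarterly grabs: {avg_sparse:.1f} g/day")

In this contrived record, the quarterly samples miss the flush entirely and understate the average mass discharge by roughly 16%, while the dense record captures both the event and the correct integrated mass.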
Examples of visualizations from a Waiora U.S. EPA Environmental Technology
Verification (ETV) project are presented in Figures 8.63 through 8.66. The pilot project
involved the use of water-level transducers and nitrate sensors to monitor nitrate treat-
ment within a bioreactor trench that receives wastewater discharge. Figure 8.63 depicts the
distribution of hydraulic head at the pilot site, represented by contours created by Waiora

FIGURE 8.63
Potentiometric surface contours and selected sensor time series charts of water-level elevation data created using Waiora. Hardware included Instrumentation Northwest level transducers and the WaveData telemetry platform. (Courtesy of Mark Kram, PhD, CGWP of Groundswell Technologies, Inc.)

FIGURE 8.64
Contours of nitrate concentration (in parts per million) created using Waiora. Hardware included Instrumentation Northwest nitrate ion selective electrodes and the WaveData telemetry platform. (Courtesy of Mark Kram, PhD, CGWP of Groundswell Technologies, Inc.)

FIGURE 8.65
Three-dimensional visualization of nitrate mass discharge in grams per second through a source control plane
created using Waiora. (Courtesy of Mark Kram, PhD, CGWP of Groundswell Technologies, Inc.)

using water-level data collected by sensors in the field. Similarly, Figure 8.64 presents con-
tours of nitrate concentration in groundwater interpolated from nitrate sensor measure-
ments collected in real time. Note that the contours presented in Figures 8.63 and 8.64
represent a snapshot in time; however, Waiora can automatically reproduce the contours
for any monitored time step and play back geospatial changes in these parameters. Figure
8.65 is a three-dimensional view of nitrate mass discharge through the control transect
of interest, located a few feet downgradient of the nitrate injection point and oriented perpendicular to the primary direction of groundwater flow. Mass discharge in Figure 8.65
is calculated using the surfaces presented in Figures 8.63 and 8.64 in addition to a site-
specific estimate of hydraulic conductivity. Temporal changes in mass discharge through
the transect can be tracked and visualized in a time series graph as shown in Figure 8.66.
Integrating this curve over time yields the total contaminant mass conveyed through the plane during the period of interest.
A similar sensor-based approach can be applied to numerous other contaminants, includ-
ing chlorinated solvents, such as TCE.

FIGURE 8.66
Time series chart of nitrate mass discharge (grams per second) through the control plane depicted in Figure
8.65, created using Waiora. (Courtesy of Mark Kram, PhD, CGWP of Groundswell Technologies, Inc.)

8.5 The Way Forward: Sustainable Remediation


The Visalia Pole Yard remediation project is often hailed by the U.S. EPA as an exemplary
success story—a fact that has likely contributed to the rapid growth of in situ thermal tech-
nology in the 2000s. For example, in a publication prepared for the U.S. EPA, Ryan (2010)
states

“The Visalia Pole Yard Superfund site attained all soil and groundwater remediation
goals, becoming one of the best examples to date of a site with massive quantities of
DNAPL in the saturated zone that has achieved and sustained drinking water stan-
dards following a source-mass depletion remedy.”
To remediate groundwater contaminated by creosote and pentachlorophenol to applicable standards, the following activities were conducted over a 31-year period:

• Groundwater pump and treat with discharge to publicly owned treatment works
(1975–1985)
• Construction and operation of an on-site groundwater treatment system to con-
tinue operating the pump and treat system (1985–1997)
• SEE (1997–2000)
• Installation and operation of an enhanced biodegradation system with continued
pump and treat operation (2000–2004)
• Excavation of shallow soils from 0–10 ft below ground surface (2006; U.S. EPA 2009c)

The final close-out report for the site listed the total remedial cost between 1996 and 2006
as approximately $30 million (more than two-thirds of which was the thermal remedi-
ation). Costs for the first 20 years of pump and treat operation and treatment-building
construction were not provided nor was the total energy consumed by the thermal reme-
diation (U.S. EPA 2009c). Without digging deeper, it may appear that the end result of clean
groundwater justifies this significant expenditure, and that the remediation was necessary
to protect human health and the environment and to restore beneficial use of the property
and its underlying groundwater. Unfortunately, as detailed in the ROD (U.S. EPA 1994)
and the final site close-out report (U.S. EPA 2009c), this was not the case.
First of all, no private or public drinking water was contaminated by the Visalia Pole
Yard site. The following text is taken directly from the ROD (U.S. EPA 1994):

“The primary contribution to risk for each of these populations (on- and off-site occu-
pational workers and off-site residents) is the estimated hypothetical future ingestion
of groundwater from the intermediate aquifer. On-site wells are used for groundwater
monitoring and treatment extraction purposes only. Thus, groundwater exposures evalu-
ated in this risk assessment are hypothetical [emphasis added].”

The authors acknowledge that contamination of potential future drinking-water resources has long been justification for requiring remediation to MCLs, and the above
quotation is only provided to illustrate that there were no actual exposures associated
with the site that exceeded risk thresholds at the time of the ROD. In fact, the general
public was not particularly interested in the cleanup in the first place as evidenced by the
“Community Relations Activities” section of the site close-out report (U.S. EPA 2009c):

“Community involvement activities included the development of a Community Relations Plan (CRP), prior to initiation of the RI/FS activities. The CRP included development of a community profile and a list of key local contacts. The community profile
indicated the surrounding area was mainly businesses that had little interest in the site cleanup
activities. Copies of the Draft ROD were made available at the local public library, DTSC
and USEPA Region IX Record Center. Notification of the issuance of the Draft ROD was
made. A Public Notice was also placed in the local newspaper. A Public Meeting was
held in Visalia, California on October 13, 1993, to provide information on the proposed
cleanup. There were no members of the public in attendance at the meeting [emphasis added].”

More disturbing is that the five-year review specifically stated, “There are no specific
redevelopment plans currently planned for the site.” Site-redevelopment opportunities
were actually severely limited by a Covenant to Restrict Use of Property, Environmental
Restriction, which was required as part of the ROD. The following text related to this covenant is taken from U.S. EPA (2009c):

“As remedial action objectives are based on industrial cleanup standards, prohibited
Site Uses include: residences, human hospitals, schools, and day care centers for chil-
dren. Prohibited Activities include: soil disturbance greater than ten feet below grade,
and the installation of water wells for any purpose [emphasis added].”

When considering all of the above information, it is unclear who or what exactly benefited
from the remediation other than the site environmental consultant and thermal remediation
vendor. The community was unaffected by the site and did not participate actively in site-
redevelopment plans (of which there were none), and future use of the groundwater that was
remediated to site-specific standards (because of a hypothetical future exposure risk) is strictly
prohibited by law. Additionally, the contaminated groundwater did not discharge to a surface
water feature or result in any known adverse ecological impact as far as the authors are aware.
It therefore seems that $30 million was spent simply to prove the point that remediation of
DNAPL sites to some standard is feasible. It is all the more puzzling that, even after cleanup, the use restrictions documented in the site closure report strictly prohibit any future use of groundwater underlying the site, including the installation of any wells.
The frequency with which sites, such as the Visalia Pole Yard, are aggressively remedi-
ated with unclear benefits leads to the question posed by many over the past 30 years: Are
Superfund cleanups really worth the cost? As of 2005, an estimated $35 billion in federal
monies and an unknown amount of private funding had been spent on Superfund cleanups, with remediation complete at only roughly half of the nearly 1600 sites. The average cost of
Superfund cleanups has been estimated at $43 million per site (Greenstone and Gallagher
2008). Two working papers produced by the Massachusetts Institute of Technology (MIT)
Department of Economics seek to answer the above question, evaluating the benefits of
Superfund from an economic (Greenstone and Gallagher 2008) and public health (Currie,
Greenstone, and Moretti 2011) perspective.
Greenstone and Gallagher (2008) assess the economic benefits of Superfund by com-
paring housing market outcomes in the areas surrounding the first 400 sites selected for
Superfund cleanup to the areas surrounding the 290 sites that narrowly missed quali-
fying for the program. The results of the study indicate that Superfund cleanups led to
changes in local residential property values that are statistically indistinguishable from
zero. This is also true for property rental rates, housing supply, total population, and the
types of individuals living near the sites. This conclusion indicates that economic benefits

of Superfund likely fall short of the high costs of the program. Potential explanations for
and implications of this phenomenon are provided in the paper:

“In our view, the most likely explanations are that the people that choose to live near
these sites do not value the clean-ups or that consumers have little reason to believe
that the clean-ups substantially reduce health risks. In either case, the results mean that
local residents’ gain in welfare from Superfund clean-ups falls well short of the costs.
Unless there are substantial benefits that are not captured in local housing markets,
less ambitious clean-ups like the erection of fences, posting of warning signs around
the sites, and simple containment of toxins might be a more efficient use of resources”
(Greenstone and Gallagher 2008).

It is important to note that the U.S. EPA partially funded a similar study (Gamper-Rabindran and Timmins 2011) as an apparent rebuttal to the findings of Greenstone and Gallagher (2008). The approach of Gamper-Rabindran and Timmins (2011) uses data from
Greenstone and Gallagher (2008) but applies slightly different conceptual and economic
methodologies. For example, there is greater focus on site deletion from the NPL as a criti-
cal element. Gamper-Rabindran and Timmins also state that their methods account for
within-tract heterogeneity that can detect benefits understated or entirely missed by the
median tract–level approach taken by Greenstone and Gallagher (2008). The findings of
the 2011 paper are:

“Our results at these three levels of analysis reveal that deletion, which signals the end
of cleanup activities, significantly raises the value of nearby owner-occupied houses on
average at the national level, but that there is considerable heterogeneity in this effect
across metro-specific housing markets. Our tract analysis finds that deletion of a site
raises housing values significantly at the lower deciles of the within-tract housing value
distribution—by 18.2% at the 10th percentile, 15.4% at the median, and 11.4% at the 60th
percentile. . . Our analysis of repeat sales data uncovers evidence of significant hetero-
geneity in the effects of Superfund site remediation across metro areas. We find that
deletion (measured relative to proposal) causes a sizable appreciation in housing values
in northern New Jersey (11.3%), but we find no statistically significant effect of dele-
tion (measured relative to proposal) for LA metro, southwestern Connecticut or metro
Boston. While the appreciation in New Jersey indicates that some neighborhoods do
recover post-cleanup, the lower prices in Boston at deletion relative to pre-discovery
(–6.1%) suggest, conversely, that stigma against contaminated sites and neighborhoods
can also persist despite cleanup. . . This heterogeneity suggests that to perform a cost-
benefit analysis of a particular candidate site, metro-specific estimates, which assess
the remediation of multiple sites in the relevant regional housing market, would be
appropriate.”

To summarize, Gamper-Rabindran and Timmins partially contradict the findings of Greenstone and Gallagher (2008) by concluding that there is significant economic
benefit to Superfund site deletion as measured by property values. However, this benefit
is found to vary significantly across metro regions and can be negligible or even negative
in the case of Boston, MA. Therefore, after evaluating both the MIT paper and the U.S.
EPA–funded study together, one can conclude that, at a minimum, economic factors, such
as the expected impact on property values, should be considered in the site- and remedy-
selection process for Superfund cleanups.

The authors acknowledge that there are numerous non-economic benefits to Superfund
remediation, most notably related to public health and the reduction or reversal of damages
to important natural resources. For example, Currie et al. (2011) found that Superfund clean-
ups can reduce the incidence of congenital anomalies by approximately 20%–25% within the
affected community. This was the first study to examine the impact of cleanups of hazard-
ous-waste sites on infant health, which has the important benefit of not requiring detailed
knowledge of environmental factors that may affect adult health, including lifetime smoking
behavior, lifetime exposure to ambient air pollution, and lifetime exposure to multiple hazard-
ous-waste sites (Currie et al. 2011). The reader is referred to U.S. EPA (2011c) for a comprehen-
sive listing of additional benefits of the Superfund program from the U.S. EPA’s perspective.
The work of Currie et al. (2011) clearly demonstrates the need to eliminate actual (not
hypothetical) contaminant exposures related to hazardous-waste sites. However, how
to most efficiently eliminate existing and prevent future exposures across the country is
beyond the realm of Superfund as we know it. Sustainable remediation is an emerging field
that has been specifically developed to counteract the irrational mind-set of the Visalia
Pole Yard cleanup and other similar projects. The concept was originally introduced by
the U.S. Sustainable Remediation Forum (SURF) in a 2008 white paper (SURF 2009) and
has since rapidly spread throughout the United States and the European Union (EU). In
its Summer 2011 issue, the Remediation Journal published the first guidance documents on
the subject, which together present a framework for sustainable remediation, detail neces-
sary steps in completing environmental footprint analysis and life-cycle assessments, and
identify metrics for incorporating sustainable practices in remediation projects (see Simon
(2011) for an overview).
Sustainable remediation is defined as a remedy or combination of remedies whose net
benefit on human health and the environment is maximized through the judicious use of
limited resources (SURF 2009). This is achieved by balancing economic growth, protection
of the environment, and social responsibility to improve the quality of life for current and
future generations (U.S. EPA 2011d). Thus, the triple bottom line of sustainable remedia-
tion is measured by environmental, social, and economic factors (Butler et al. 2011). A dia-
gram illustrating interconnections within the triple bottom line is presented in Figure 8.67.
The Sustainable Remediation Framework, defined by Holland et al. (2011), is character-
ized by the following key elements:

• Process-based implementation that includes several decision points necessary to achieve the final outcome. This is an alternative to the more rigid and traditional
goal-based implementation in which all focus is placed on meeting a specific regu-
latory requirement (e.g., MCLs).
• Future-use planning that manages the entire site cleanup process with the pre-
ferred end use or future use in mind. This includes a transition strategy that effi-
ciently moves the project from the remediation stage to the long-term use stage
after achieving remedial objectives.
• A tiered sustainability evaluation that maximizes the positive sustainability impacts of the project as measured through metrics such as carbon dioxide emissions, groundwater or energy use, local laborers trained, local suppliers utilized, or native species reintroduced to site habitat (see Butler et al. (2011) for additional metrics; a simple emissions calculation is sketched after this list).
• Development of a sustainable CSM that incorporates sustainable elements
as shown in Figure 8.68. The sustainable CSM can be used to answer complex

[Figure 8.67 graphic: three overlapping circles representing environmental, social, and economic factors, with intersections labeled "Social Justice?," "Costs versus Benefits?," and "Ecological Health?" and with "Sustainable Solutions" at the center.]
FIGURE 8.67
Sustainable remediation’s triple bottom line. Sustainable remediation attempts to achieve the optimal balance
between social, environmental, and economic factors. (Original figure based on concept presented in United States
Environmental Protection Agency (U.S. EPA), US and EU Perspectives on Green and Sustainable Remediation Part
2, CLU-IN Internet Seminar, Delivered March 15, 2011, 2011. Available at http://www.cluin.org/live/archive/#US_
and_EU_Perspectives_on_Green_and_Sustainable_Remediation_Part_2, accessed August 15, 2011.)

FIGURE 8.68
Expansion of the traditional CSM to include sustainability factors as proposed by Holland et al. (2011).

questions regarding the beneficial use of groundwater at the site, the feasibility
of reaching site closure, and how achieving remedial goals will change site risk.
• Implementation of sustainable remedial measures, including, but not limited to:
• Using in situ technologies that mimic natural processes and/or result in con-
taminant mineralization rather than phase transfer. For example, bioremedia-
tion, which can result in complete contaminant destruction in situ, is preferable
to thermal remediation during which phase change is often required to extract
contaminants from the subsurface and to potentially dispose of remediation
waste off site.
• Minimizing or eliminating emissions and natural resource (e.g., energy, water)
consumption. This includes minimizing transportation requirements when-
ever possible.
• Using renewable energy to power remedial operations.


• Recycling or reusing soil or demolition materials.
• Training and employing local workers.
• Conducting collaborative community events before, during, and after remedy
selection and implementation.
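As a minimal illustration of the emissions metric referenced in the tiered evaluation above, the following sketch estimates the carbon dioxide footprint of a hypothetical electrically powered treatment system. The power draw, operating period, and grid emission factor are all assumed values chosen for illustration only:

    # Rough CO2 footprint for an electrically powered remedy (illustrative).
    power_kw = 50.0          # assumed average power draw of the system, kW
    hours = 2 * 8760         # assumed two years of continuous operation
    kg_co2_per_kwh = 0.55    # assumed grid emission factor, kg CO2 per kWh

    energy_kwh = power_kw * hours
    co2_tons = energy_kwh * kg_co2_per_kwh / 1000.0  # kg -> metric tons

    print(f"Energy consumed: {energy_kwh:,.0f} kWh")
    print(f"Estimated emissions: {co2_tons:,.0f} metric tons CO2")

Estimates of this kind, compared across candidate remedies and tracked alongside the social and economic metrics of the triple bottom line, form the quantitative core of a tiered sustainability evaluation.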

Sustainability elements can also be incorporated into traditional site-assessment activi-


ties, such as risk assessment, which was discussed previously in Section 8.4.3. It is criti-
cal to remember the triple bottom line during sustainability evaluations such that social
and economic challenges faced by the community are given sufficient weight in decision-
making processes. For example, very low excess cancer risks caused by site contamination
(i.e., one in a million) are often used to justify site remediation while zero consideration
is given to the American Cancer Society (ACS) estimate that nearly two-thirds of can-
cer deaths in the United States in 2011 will be attributable to tobacco use, overweight or
obesity, physical inactivity, and poor nutrition—all of which are preventable causes (ACS
2011). Sustainable remediation approaches would also consider and potentially address
these health concerns in a manner that logically ties into the site cleanup. For example, site
reuse plans could incorporate a community health center, community gardens for cultiva-
tion of fresh produce, or exercise facilities and/or nature trails. Community involvement
activities could also include wellness and economic counseling. All of the above examples
would represent a more judicious use of monies typically spent in a Pyrrhic struggle to
reach MCLs.
In the United States, there has been significant recent interest in green remediation, a
practice defined and encouraged by the U.S. EPA. Green remediation considers all environ-
mental effects of cleanup actions and incorporates options to minimize the environmental
footprints of these actions (U.S. EPA 2011e). Thus, green remediation shares several core
elements with sustainable remediation, such as the minimization of energy use and emis-
sions during remediation. However, there are critical fundamental differences between
the two concepts that are easy to disguise because of the way that the terms “green” and
“sustainable” are often interchanged. For example, the following are key U.S. EPA perspec-
tives on green remediation, quoted directly from U.S. EPA (2011d):

• “Greener cleanups are not an alternative approach to setting cleanup levels and
selecting remedies;
• Cleaning up sites for reuse supports sustainable development; and

• Reducing the environmental footprint of a cleanup does not justify changing the
end point.”

Simply put, green remediation does not incorporate sustainability considerations in select-
ing the remedial approach or the preferred end use. Green remediation is strictly imple-
mentation related, and does not incorporate social and economic factors to address broader
land-management issues (U.S. EPA 2011d). This is unfortunate as the greatest opportunity
to realize sustainable outcomes lies in the early stages of remedial implementation, while
setting the remedial specification and strategy (NICOLE 2010). This is shown conceptually
in Figure 8.69. Site-specific data are needed to confirm this concept.
With these fundamental limitations, the potential exists for green remediation to become
a good example of what has been termed “LEED brain” in reference to misuse of the
Leadership in Energy and Environmental Design (LEED) building-certification program.
LEED brain is a term originally coined by Schendler and Udall (2005) to reflect the absur-
dity of LEED-certifying fundamentally non-sustainable projects because of the inclusion
of expensive green features, such as rooftop fuel cells and benches made of salvaged euca-
lyptus wood (Owen 2009). Owen (2009) presents an excellent example of LEED brain in his
analysis of the Philip Merrill Environmental Center of the Chesapeake Bay Foundation
(CBF), which was opened in 2001 and became the first building to be certified as LEED

[Figure 8.69 graphic: two project phases, "Setting the Remedial Strategy & Specification" (sustainable remediation only) and "Setting the Remedial Technical Approach" (sustainable and green remediation).]

FIGURE 8.69
Conceptual diagram illustrating that including sustainability considerations in the remedy selection phase,
during which performance/cleanup standards are also determined, is expected to result in the greatest overall
benefit to project sustainability. (Based on concept presented in Network for Industrially Contaminated Land in
Europe (NICOLE), NICOLE Road Map for Sustainable Remediation, 2010. Available at http://www.nicole.org/documents/DocumentList.aspx?l=2&w=n, accessed July 28, 2011.)

platinum. The CBF facility has innumerable green technologies, such as geothermal wells,
composting toilets, rainwater-collection systems, and showers for bicyclists. However, the
facility was constructed at a remote location along the Chesapeake Bay and is inaccessible
to mass transit. The previous CBF headquarters was in downtown Annapolis, MD, and the
relocation “turned all of the foundation’s eighty employees into automobile commuters”
(Owen 2009). Furthermore, the CBF is an hour-long drive from Baltimore or Washington,
D.C., which is where the majority of visitors will be coming from by car. This penny-wise,
pound-foolish approach to sustainability will not result in a net environmental benefit
and fosters the dangerous misconception that sustainability (as measured by the quantity
of green gadgets) is very expensive. An alternative (and much less expensive) building
in a downtown location accessible to mass transit would have been a much more sus-
tainable solution for the CBF, regardless of the number of composting toilets used in the
construction.

Despite being a potential example of LEED brain, green remediation accomplishes the
critical objective of making consultants, regulators, and stakeholders consider the over-
all environmental footprint of a cleanup. Additionally, green remediation encourages the
use of important technologies that undoubtedly play a role in sustainable development.
It is the opinion of the authors that green remediation is another evolutionary step in the
maturation of the remediation industry. Figure 8.70 is an expanded schematic of this evo-
lutionary process originally presented in SURF (2009). The authors hope that the remedial
community swiftly embraces sustainable remediation as it has great potential in applying

FIGURE 8.70
Conceptual diagram illustrating the evolution of societal thinking about waste and environmental cleanups.
This version has been expanded from the original to include the phase we are currently entering (Increased
Knowledge), in which green remediation practices are encouraged and alternative remedial end points and
metrics are at least being actively considered. The next logical evolutionary step is the full embrace of sus-
tainable remediation practices. (Based on concept presented in U.S. Sustainable Remediation Forum (SURF).
Remediat. J. 19, 5–114, 2009, doi: 10.1002/rem.20210.)

our expanded knowledge to solve environmental problems in an efficient manner that also contributes to the medical, social, and economic well-being of affected communities.
A transition toward sustainable remediation practices is also critical in ensuring the
long-term viability of environmental protection programs in the United States and the EU.
Ongoing budget and debt crises in both governments will continue to threaten financial
resources dedicated to environmental cleanups, and the remediation industry has to do
more with less in order to survive. One great danger is political embrace of the “Forget
About the Aquifer” approach, in which all attempts at groundwater restoration will be
abandoned in favor of cheaper water-supply alternatives (Ronen et al. 2011). The shortcom-
ings of remedial technologies and current cleanup ideologies highlighted in this chapter
are not meant to discourage further cleanups or disparage the idea that pristine environ-
mental resources have intrinsic value. Quite the contrary; the hope of the authors is to
galvanize support for holistic and technically sound approaches that can achieve better
remedial outcomes while also freeing up financial resources for assessment and proper
management of as many contaminated sites as possible.

References
American Cancer Society (ACS), 2011. Cancer Facts and Figures 2011. American Cancer Society,
Atlanta, GA.
Beyke, G., and Fleming, D., 2005. In-situ thermal remediation of DNAPL and LNAPL using electrical
resistance heating. Remediat. J. 15, 5–22, doi: 10.1002/rem.20047.
Bussey, T., 2007. Final Five-Year Review Report. Third Five-Year Review Report for Fort Lewis
CERCLA Sites, Pierce County, Washington, Prepared for the US Army Fort Lewis Department
of Public Works, Fort Lewis, WA, 42 pp.
Butler, P. B., Larsen-Hallock, L., Lewis, R., Glenn, C., and Armstead, R., 2011. Metrics for integrat-
ing sustainability evaluations into remediation projects. Remediat. J. 21, 81–87, doi: 10.1002/
rem.20290.
Chapelle, F. H., Widdowson, M. A., Brauner, J. S., Mendez, E., and Casey, C. C., 2003. Methodology
for Estimating Times of Remediation Associated with Monitored Natural Attenuation. U.S.
Geological Survey Water-Resources Investigations Report 03-4057.
Chapelle, F. H., Novak, J., Parker, J., Campbell, B. G., and Widdowson, M. A., 2007. A Framework
for Assessing the Sustainability of Monitored Natural Attenuation. U.S. Geological Survey
Circular 1303, 35 pp.
Cohen, R. M., Vincent, A. H., Mercer, J. W., Faust, C. R., and Spalding, C. P., 1994. Methods for
Monitoring Pump-and-Treat Performance. EPA Contract No. 68-C8-0058, Office of Research and
Development, EPA/6OO/R-94/123, Ada, OK, 114 pp.
Currie, J., Greenstone, M., and Moretti, E., 2011. Superfund Cleanups and Infant Health. Massachusetts
Institute of Technology Department of Economics Working Paper Series, Working Paper 11-02.
Available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1768233, accessed July 22, 2011.
Deeb, R., Hawley, E., Kell, L., and O’Laskey, R., 2011. Assessing Alternative Endpoints for Groundwater Remediation at Contaminated Sites. ESTCP Project ER-200832. Available at http://www.serdp.org/Program-Areas/Environmental-Restoration/Contaminated-Groundwater/
Persistent-Contamination/ER-200832, accessed August 12, 2011.
Environmental Security Technology Certification Program (ESTCP), 2009. In Situ Bioremediation of
Chlorinated Solvents Source Areas with Enhanced Mass Transfer. U.S. Department of Defense
Cost and Performance Report, Project: ER-0218, 73 pp.
512 Hydrogeological Conceptual Site Models

Gamper-Rabindran, S., and Timmins, C., 2011. Valuing the Benefits of Superfund Site Remediation:
Three Approaches to Measuring Localized Externalities. NSF ITR-0427889, U.S. EPA
Purchase Order EP09W001911. Available at http://www.epa.gov/superfund/accomp/pdfs/
Benefits%20Study%202011.pdf, accessed August 10, 2011.
Greenstone, M., and Gallagher, J., 2008. Does Hazardous Waste Matter? Evidence from the Housing
Market and the Superfund Program. Massachusetts Institute of Technology, Department of
Economics Working Paper Series, Working Paper 05-27. Available at http://papers.ssrn.com/
sol3/papers.cfm?abstract_id=840207, accessed July 22, 2011.
Ground Water Task Force, 2007. Recommendations from the EPA Ground Water Task Force. Office of
Solid Waste and Emergency Response, EPA 500-R-07-001, 29 pp.
Hazen, T. C., 2009. Biostimulation. LBNL-1691E. Available at http://cluin.org/techfocus/default.focus/sec/Bioremediation_of_Chlorinated_Solvents/cat/Overview/, accessed July 30, 2011.
Hazen, T. C., 2010. In Situ Groundwater Bioremediation. Chapter 13 in Part 24 of the Handbook of
Hydrocarbon and Lipid Microbiology. Springer-Verlag, Berlin, Heidelberg, ISBN: 978-3-540-77587-4. Available at http://cluin.org/techfocus/default.focus/sec/Bioremediation_of_Chlorinated_
Solvents/cat/Overview/, accessed July 30, 2011.
Helsel, D. R., 2005. Nondetects and Data Analysis. John Wiley & Sons, Hoboken, NJ, 250 pp.
Holland, K. S., Lewis, R. E., Tipton, K., Karnis, S., Dona, C., Petrovskis, E., Bull, L. P., Taege, D., and
Hook, C., 2011. Framework for integrating sustainability into remediation projects. Remediat. J.,
21, 7–38, doi: 10.1002/rem.20288.
Interstate Technology and Regulatory Council (ITRC), 1999. Natural Attenuation of Chlorinated
Solvents in Groundwater: Principles and Practices. Interstate Technology and Regulatory
Cooperation Work Group, In Situ Bioremediation Work Team, and Industrial Members of the
Remediation Technologies Development Forum (RTDF). Available at http://www.itrcweb.org/Documents/ISB-3.pdf, accessed June 2, 2011.
Interstate Technology and Regulatory Council (ITRC), 2005. Technical and Regulatory Guidance for
In Situ Chemical Oxidation of Contaminated Soil and Groundwater, Second Edition. ISCO-2,
Interstate Technology and Regulatory Council, In Situ Chemical Oxidation Team, Washington,
DC. Available at http://www.itrcweb.org, accessed June 10, 2011.
Interstate Technology and Regulatory Council (ITRC), 2008. In Situ Bioremediation of Chlorinated
Ethene: DNAPL Source Zones. Bioremediation of DNAPLs Team, BioDNAPL-3. Available at
http://cluin.org/techfocus/default.focus/sec/Bioremediation_of_Chlorinated_Solvents/
cat/Guidance/, accessed July 25, 2011.
Interstate Technology and Regulatory Council (ITRC), 2010. Use and Measurement of Mass Flux
and Mass Discharge. Technology overview document. Available at http://www.itrcweb.org/
Documents/MASSFLUX1.pdf, accessed July 20, 2011.
Jurgens, B. C., McMahon, P. B., Chapelle, F. H., and Eberts, S. M., 2009. An Excel® Workbook for
Identifying Redox Processes in Ground Water. U.S. Geological Survey Open-File Report 2009-
1004, 8 pp. Available at http://pubs.usgs.gov/of/2009/1004/, accessed November 12, 2010.
Keely, J. F., 1989. Performance Evaluation of Pump and Treat Remediations, USEPA/540/4-89-005,
Robert S. Kerr Environmental Research Laboratory, Ada, OK, 19 pp.
Kingston, J. T., Dahlen, P. R., Johnson, P. C., Foote, E., and Williams, S., 2010. Final Report: Critical
Evaluation of State-of-the-Art In Situ Thermal Treatment Technologies for DNAPL Source Zone
Treatment. ESTCP Project ER-0314, 1272 pp. Available at http://cluin.org/techfocus/default.
focus/sec/Thermal_Treatment%3A_In_Situ/cat/Guidance/, accessed January 7, 2011.
Kingston, J. T., Dahlen, P. R., Johnson, P. C., Foote, E., and Williams, S., 2009. State-of-the-Practice
Overview: Critical Evaluation of State-of-the-Art In Situ Thermal Treatment Technologies for
DNAPL Source Zone Treatment. ESTCP Project ER-0314. Available at http://cluin.org/techfocus/
default.focus/sec/Thermal_Treatment%3A_In_Situ/cat/Guidance/, accessed January 7, 2011.
Kram, M. L., Airhart, S., Tyler, D., Dindal, A., Barton, A., McKernan, J. L., and Gustafson, G., 2011.
Web-based automated remediation performance monitoring and visualization of contaminant
mass flux and discharge. Remediat. J., 21, 89–101, doi:10.1002/rem.20291.
Groundwater Remediation 513

Kresic, N., 2009. Groundwater Resources: Sustainability, Management, and Restoration. McGraw-Hill,
New York, 852 pp.
Laws, E. P., 1995. Memorandum. Subject: Superfund Groundwater RODs: Implementing Change This
Fiscal Year, July 31, 1995. EPA-540-F-99-005, OSWER-9335.5-03P, PB99-963220, Washington, DC.
Magnuson, J. K., Stern, R. V., Gossett, J. M., Zinder, S. H., and Burris, D. R., 1998. Reductive dechlo-
rination of tetrachloroethene to ethene by a two-component enzyme pathway. Appl. Environ.
Microbiol. 64, 1270–1275.
Maymó-Gatell, X., Chien, Y., Gossett, J. M., and Zinder, S. H., 1997. Isolation of a bacterium that
reductively dechlorinates tetrachloroethene to ethene. Science 276, 1568–1571.
National Research Council (NRC), 1994. Alternatives for Groundwater Cleanup. National Academy
Press, Washington, DC, 315 pp.
Network for Industrially Contaminated Land in Europe (NICOLE), 2010. NICOLE Road Map for
Sustainable Remediation. Available at http://www.nicole.org/documents/DocumentList.aspx?l=2&w=n, accessed July 28, 2011.
Newell, C. J., Rifai, H. S., Wilson, J. T., Connor, J. A., Aziz, J. A., and Suarez, M. P., 2002. Calculation and
Use of First-Order Rate Constants for Monitored Natural Attenuation Studies. Ground Water
Issue, EPA/540/S-02/500, US Environmental Protection Agency, National Risk Management
Research Laboratory, Cincinnati, OH, 27 pp.
Owen, D., 2009. Green Metropolis: Why Living Smaller, Living Closer, and Driving Less are the Keys to
Sustainability. Riverhead Books, New York.
Parsons Engineering Science, Inc. (Parsons), 2009. Field-Scale Evaluation of Monitored Natural
Attenuation for Dissolved Chlorinated Solvent Plumes. Air Force Center for Environmental
Excellence (AFCEE), Contract Number F41624-00-D-8024, Task Order 0024, Brooks City-Base,
TX, 455 pp.
Powell, T., Smith, G., Sturza, J., Lynch, K., and Truex, M., 2007. New advancements for in-situ treat-
ment using electrical resistance heating. Remediat. J. 17, 51–70, doi:10.1002/rem.20124.
Ricker, J. A., 2008. A practical method to evaluate ground water contaminant plume stability. Ground
Water Monitor. Remediat. 28(4), 85–94.
Ronen, D., Sorek, S., and Gilron, J., 2011. Rationales behind irrationality of decision making in
groundwater quality management. Ground Water, 2011, doi:10.1111/j.1745-6584.2011.00823.x.
Ryan, S., 2010. Dense Nonaqueous Phase Liquid Cleanup: Accomplishments at Twelve NPL Sites.
National Network of Environmental Management Studies Fellow, U.S. EPA, http://cluin.org,
84 pp.
Schendler, A., and Udall, R., 2005. LEED Is Broken: Let’s Fix It. Grist, October 26, 2005.
Simon, J. A., 2011. Editor’s perspective—US sustainable remediation forum pushes forward with
guidance on the state of the practice. Remediat. J. 21, 1–5, doi:10.1002/rem.20287.
Sinke, A., and van Moll, L., 2010. Natural Attenuation (Cartoon Booklet). Available at http://www.nicole.org/documents/documentlist.aspx?w=n&l=2, accessed July 28, 2011.
Thompson, P., Baker, P., Calkin, S., and Forbes, P., 2000. The Regional Bedrock Structure at Loring Air
Force Base, Limestone, Maine: The Unifying Model for the Study of Basewide Groundwater
Contamination. White Paper, AMEC E&I, Inc.
TRS Group, Inc. (TRS), 2009. Featured Site: Site Characteristics and Design Parameters, Figure 4.
Available at http://www.thermalrs.com/performance/featuredSites/ftLewis/featured_ft_
lewis_2.php, accessed July 20, 2011.
Tsitonaki, A., and Bjerg, P. L., 2008. In-Situ Chemical Oxidation: State of the Art. Schæffergarden,
Gentofte, ATV Jord og Grundvand, Kings Lyngby. Available at http://cluin.org/techfocus/
default.focus/sec/In_Situ_Oxidation/cat/Overview/, accessed July 7, 2011.
United Facilities Criteria (UFC), 2006. Design: In Situ Thermal Remediation. UFC 3-280-05. Available
at http://costperformance.org/remediation/pdf/USACE-In_Situ_Thermal_Design.pdf, accessed
January 10, 2011.
United States Army Corps of Engineers (USACE), 2007. Cost and Performance Report: In Situ Thermal
Remediation (Electrical Resistance Heating) East Gate Disposal Yard, Ft. Lewis, WA, 36 pp.
514 Hydrogeological Conceptual Site Models

United States Army Corps of Engineers (USACE), 2009. Design: In-Situ Thermal Remediation.
Manual 1110-1-401536, 226 pp. Available at http://www.clu-in.org/techfocus/default.focus/
sec/Thermal_Treatment%3A_In_Situ/cat/Guidance, accessed January 7, 2011.
United States Army Corps of Engineers (USACE), 2010. Five-Year Review Report for OU 1 SIA
Groundwater Interim Remedial Action, OU 2 SIA Soils, OU 3 Ammunition Storage Area Soils
and Groundwater, Anniston Army Depot, Calhoun County, Alabama, EPA ID: 321002027,
Mobile, AL, Prepared for U.S. EPA Region 4, Atlanta, GA.
United States Environmental Protection Agency (U.S. EPA), 1989a. Treatability Studies under
CERCLA: An Overview. Publication No. 9380.3-02FS, Office of Solid Waste and Emergency
Response, 6 pp.
United States Environmental Protection Agency (U.S. EPA), 1989b. Risk Assessment Guidance for
Superfund, Vol. I, Human Health Evaluation Manual (Part A), Interim final, EPA/540/1-
89/002, Office of Emergency and Remedial Response, US Environmental Protection Agency,
Washington, DC.
United States Environmental Protection Agency (U.S. EPA), 1991. Risk Assessment Guidance for
Superfund, Vol. I, Human Health Evaluation Manual (Part B, Development of Risk-Based
Preliminary Remediation Goals), Interim, EPA/540/R-92/003, Office of Emergency and
Remedial Response, U.S. Environmental Protection Agency, Washington, DC.
United States Environmental Protection Agency (U.S. EPA), 1993. Guidance for Evaluating
the Technical Impracticability of Ground-Water Restoration, OSWER Directive 9234.2-5,
EPA/540-R-93-080, September.
United States Environmental Protection Agency (U.S. EPA), 1994. EPA Superfund Record of Decision:
Southern California Edison, Visalia Pole Yard Superfund Site, Visalia, California. EPA ID:
CAD980816466.
United States Environmental Protection Agency (U.S. EPA), 1995. Use of Risk-Based Decision
Making in UST Corrective Action Programs, OSWER Directive 9610.17, Office of Solid Waste
and Emergency Response, 20 pp.
United States Environmental Protection Agency (U.S. EPA), 1996a. Pump-and-Treat Ground-Water Remediation: A Guide for Decision Makers and Practitioners. Office of Research and
Development, EPA/625/R-95/005, Washington, DC, 90 pp.
United States Environmental Protection Agency (U.S. EPA), 1996b. Presumptive Response Strategy
and Ex-Situ Treatment Technologies for Contaminated Ground Water at CERCLA Sites, Final
Guidance. OSWER Directive 9288.1-12, EPA 540/R-96/023.
United States Environmental Protection Agency (U.S. EPA), 1999a. Use of Monitored Natural
Attenuation at Superfund, RCRA Corrective Action, and Underground Storage Tank Sites.
Directive 9200.4-17P, Office of Solid Waste and Emergency Response, 41 pp.
United States Environmental Protection Agency (U.S. EPA), 1999b. EPA Superfund Record of
Decision: Montrose Chemical Corp. and Del Amo. EPA ID: CAD008242711 and CAD029544731
OU(s) 03 & 03, Los Angeles, CA, 03/30/1999, Dual Site Groundwater Operable Unit II: Decision
Summary, EPA/ROD/R09-99/035.
United States Environmental Protection Agency (U.S. EPA), 2000. Engineered Approaches to In Situ
Bioremediation of Chlorinated Solvents: Fundamentals and Field Applications. EPA 542-R-00-
008. Available at http://cluin.org/download/remed/engappinsitbio.pdf, accessed August 12,
2011.
United States Environmental Protection Agency (U.S. EPA), 2003. The DNAPL Remediation Challenge:
Is There A Case For Source Depletion? Report Prepared by an Expert Panel to the Environmental
Protection Agency, Office of Research and Development, Publication EPA/600/R-03/143.
Available at http://www.epa.gov/ada/download/reports/600R03143/600R03143.pdf.
United States Environmental Protection Agency (U.S. EPA), 2005. Cost-Effective Design of Pump
and Treat Systems. Office of Solid Waste and Emergency Response, EPA 542-R-05-008. Available at
http://www.clu-in.org/download/remed/hyopt/factsheets/cost-effective_design.pdf, accessed
July 15, 2011.
Groundwater Remediation 515

United States Environmental Protection Agency (U.S. EPA), 2007a. Treatment Technologies for
Site Cleanup: Annual Status Report (Twelfth Edition). Office of Solid Waste and Emergency
Response, EPA-542-R-07-012. Available at http://www.clu-in.org/asr/, accessed July 10, 2011.
United States Environmental Protection Agency (U.S. EPA), 2007b. Optimization Strategies for Long-
Term Ground Water Remedies (with Particular Emphasis on Pump and Treat Systems). Office
of Solid Waste and Emergency Response, EPA 542-R-07-007. Available at http://www.clu-in.org/download/remed/hyopt/542r07007.pdf, accessed July 17, 2011.
United States Environmental Protection Agency (U.S. EPA), 2008. A Systematic Approach for
Evaluation of Capture Zones at Pump and Treat Systems: Final Project Report. Office of
Research and Development, EPA 600/R-08/003, Washington, DC, 38 pp.
United States Environmental Protection Agency (U.S. EPA), 2009a. DNAPL Remediation: Selected
Projects Where Regulatory Closure Goals Have Been Achieved. Office of Solid Waste and
Emergency Response, EPA 542/R-09/008, 52 pp. Available at http://www.clu-in.org/s.focus/c/pub/i/1719/, accessed July 25, 2011.
United States Environmental Protection Agency (U.S. EPA), 2009b. Amendment #2 to the Record of
Decision for the Commencement Bay–South Tacoma Channel Superfund Site, Operable Unit 1,
Well 12A, EPA Region 10.
United States Environmental Protection Agency (U.S. EPA), 2009c. Final Close Out Report: Southern
California Edison Visalia Pole Yard Superfund Site, Visalia, Tulare County, California. EPA Region 9.
United States Environmental Protection Agency (U.S. EPA), 2010. Superfund Remedy Report,
Thirteenth Edition. Office of Solid Waste and Emergency Response, EPA-542-R-10-004.
Available at http://www.clu-in.org/asr/, accessed July 10, 2011.
United States Environmental Protection Agency (U.S. EPA), 2011a. Record of Decision. Available at
http://www.epa.gov/superfund/cleanup/rod.htm, accessed July 24, 2011.
United States Environmental Protection Agency (U.S. EPA), 2011b. Green Power Equivalency
Calculator Methodologies. Available at http://www.epa.gov/greenpower/pubs/calcmeth.htm, accessed January 5, 2011.
United States Environmental Protection Agency (U.S. EPA), 2011c. Beneficial Effects of the Superfund
Program. Office of Superfund Remediation and Technology Innovation, EPA Contract EP W-07-
037. Available at http://www.epa.gov/superfund/accomp/pdfs/SFBenefits-031011-Ver1.pdf,
accessed August 10, 2011.
United States Environmental Protection Agency (U.S. EPA), 2011d. US and EU Perspectives on Green
and Sustainable Remediation Part 2. CLU-IN Internet Seminar, Delivered March 15, 2011.
Available at http://www.cluin.org/live/archive/#US_and_EU_Perspectives_on_Green_and_
Sustainable_Remediation_Part_2, accessed August 15, 2011.
United States Environmental Protection Agency (U.S. EPA), 2011e. Introduction to Green Remediation.
Office of Superfund Remediation and Technology Innovation, Quick Reference Fact Sheet.
Available at http://cluin.org/greenremediation/, accessed August 15, 2011.
United States Environmental Protection Agency (U.S. EPA), 2011f. Groundwater Road Map: Recommended
Process for Restoring Contaminated Groundwater at Superfund Sites. OSWER 9283.1-34, 31 pp.
U.S. Sustainable Remediation Forum (SURF), 2009. Sustainable remediation white paper—integrat-
ing sustainable principles, practices, and metrics into remediation projects. Remediat. J. 19,
5–114, doi: 10.1002/rem.20210.
Watts, R. J., 2011. Enhanced Reactant-Contaminant Contact through the Use of Persulfate In Situ
Chemical Oxidation (ISCO). Strategic Environmental Research and Development Program
(SERDP) Project ER-1480. Available at http://cluin.org/techfocus/default.focus/sec/In_Situ_
Oxidation/cat/Guidance/, accessed July 10, 2011.
Watts, R. J., Loge, F. J., and Teel, A. L., 2006. Improved Understanding of Fenton-Like Reactions
for the In Situ Remediation of Contaminated Groundwater, Including Treatment of
Sorbed Contaminants and Destruction of DNAPLs. Strategic Environmental Research and
Development Program (SERDP). Available at http://cluin.org/techfocus/default.focus/sec/
In_Situ_Oxidation/cat/Guidance/, accessed July 10, 2011.

Wischkaemper, K., 2007. Technical Impracticability, So Far: Anniston Army Ammunition Depot in
Anniston, AL. TSP Semiannual Meeting in Las Vegas, November 7, 2007. Available at www.epa.gov/tio/tsp/download/2007_fall_meeting/wed-wischkaemper.pdf.
Woolford, J. E., 2011. Memorandum. Subject: Clarification of OSWER’s 1995 Technical Impracticability
Waiver Policy. OSWER- #9355.5-32, Washington, DC.
9
Groundwater Supply

9.1 Integrated Water Resources Management


When developing a groundwater source for water supply, it is always recommended to
include evaluation of the related impacts on the resource (aquifer) and, more broadly,
on the water-supply management in the watershed as a whole. At the most basic level,
groundwater supply can be defined as a process that secures enough water of suitable
quality to meet demand, provided that this demand is reasonable and that there is no
waste of water. At the same time, it cannot be overemphasized that groundwater is a key
element of the overall hydrologic cycle. It is inseparable from surface water resources
because it provides baseflow to and sustains aquatic life in surface water streams, lakes,
and wetlands and sometimes in the aquifer itself, such as in the case of karst environ-
ments (caves and conduits). Withdrawal of groundwater may affect surface water flows
and quality and vice versa. Surface water may become groundwater at some point, and
the same water may again emerge as surface water after flowing through a groundwater
system for miles and centuries. Upstream and upgradient withdrawal of surface water
and groundwater, respectively, and upstream (upgradient) wastewater discharges will
affect downstream users, water availability, and water quality. Whenever applicable,
groundwater management should therefore be seamlessly integrated with the manage-
ment of surface water, storm water, used water (wastewater), and rainwater, thus con-
stituting integrated water resources management (IWRM; see Figure 9.1). As discussed
by Rogers and Hall (2003), IWRM eschews politics and the traditional fragmented and
sectoral approach to water and makes a clear distinction between resource manage-
ment and the water service delivery functions. It should be kept in mind, however,
that IWRM is itself a political process because it deals with reallocation of water, the
distribution of financial resources, and the implementation of environmental goals. The
political context affects political will and political feasibility. Detailed discussion on
various aspects of groundwater management and water governance in general is pro-
vided by Kresic (2009).
Accurate and systematic measurements of key hydrologic cycle and climate elements
are paramount to fully understanding the quantities of available water and anticipating
future climatic changes that may impact water supplies. Unfortunately, records of air
temperature and precipitation, the most important direct measures of climate, go back
only several hundred years in Europe and less than that in other parts of the world. The
situation is even worse with hydrologic measurements of stream or spring flows and
worse yet with records of groundwater levels, the two most important direct measures
of freshwater budget. Even though the time record of direct climatic and hydrologic



FIGURE 9.1
Cartoon from ICLEI, Freiburg (Germany), SWITCH-project, and Loet van Moll (www.loetvanmoll.nl) depicting a water city of the future emphasizing IWRM,
multiobjective planning, engineering efficiency, and sustainability. (Copyright Loet van Moll—Illustraties, Aalten, Netherlands. Published with kind permission.)

measurements is increasing, it is becoming more and more evident that 100 years or so is
still too short to calculate the statistics necessary for a more accurate probability analysis
of extreme climate events, such as floods and droughts. For example, it was during a wet
period in the measured hydrologic record that the 1922 Colorado River Compact estab-
lished the basic apportionment of the river between the Upper and Lower Colorado River

FIGURE 9.2
Aerial view of the July 2004 drought conditions of Lake Powell, located in southern Utah on the Colorado River.
(Courtesy of U.S. Bureau of Reclamation; available at http://www.usbr.gov/lc/region/g5000/photolab.)

Basins in the United States. At the time of compact negotiations, it was thought that an
average annual flow volume of about 21 million acre-feet (MAF; 1 acre-foot equals 1233
m3 and, conceptually, is equal to the volume of water that would cover 1 acre of land to a
depth of 1 ft) was available for apportionment. Subsequently, a 1944 treaty with Mexico
provided a volume of water of 1.5 MAF annually for Mexico. From the measured hydro-
logic data now available, it became apparent that the river’s average annual natural flow
had been overestimated, resulting in overallocation of its water and many political and
societal problems in the region. The reservoirs on the Colorado River used to manage
water supply of the region have been under increasing stress for years now (Figure 9.2).
Major users taking this water for granted, such as the state of California and the city of
Las Vegas, are struggling to find alternative sources of water supply. The focus, in most
cases, is on groundwater because, as in many other countries, the surface waters of the
United States are largely developed with little opportunity available to increase stor-
age along main rivers as few suitable sites remain for dams, and there is increased con-
cern about the environmental effects of reservoirs. The surface waters of the nation also
receive and assimilate, to a large degree, significant quantities of point- and nonpoint-
source contaminants (Anderson and Woosley 2005). Table 9.1 includes some comparative
features of groundwater and surface water resources that should be considered when
planning for IWRM.

TABLE 9.1
Comparative Features of Groundwater and Surface Water Resources

Feature                    Groundwater Resources          Surface Water Resources
                           (Aquifers)                     (Reservoirs)

Hydrological Characteristics
Storage Volumes            Very large                     Small to moderate
Resource Areas             Relatively unrestricted        Restricted to water bodies
Flow Velocities            Very low                       Moderate to high
Residence Times            Generally decades/centuries    Mainly weeks/months
Drought Propensity         Generally low                  Generally high
Evaporation Losses         Low and localized              High for reservoirs
Resource Evaluation        High cost and significant      Lower cost and often less
                           uncertainty                    uncertain
Abstraction Impacts        Delayed and dispersed          Immediate
Natural Quality            Generally (but not always)     Variable
                           high
Pollution Vulnerability    Variable natural protection    Largely unprotected
Pollution Persistence      Often extreme                  Mainly transitory

Socioeconomic Factors
Public Perception          Mythical, unpredictable        Aesthetic, predictable
Development Cost           Generally modest               Often high
Development Risk           Less than often perceived      More than often assumed
Style of Development       Mixed public and private       Largely public

Source: Tuinhof, A. et al., Groundwater Resource Management: An Introduction to Its Scope and
Practice, Sustainable Groundwater Management: Concepts and Tools, Briefing Note Series,
Note 1, GW MATE (Groundwater Management Advisory Team), The World Bank,
Washington, DC, 6 pp., 2002–2005. With permission.

The biggest challenge for IWRM is and will continue to be coping with two seem-
ingly incompatible imperatives: the needs of ecosystems and the needs of growing
population. The shared dependence on water of both makes it natural that ecosystems
must be given full attention within IWRM. At the same time, however, the Millennium
Declaration 2000, agreed upon by world leaders at the United Nations, involves a
set of human livelihood imperatives that are all closely water-related, with the most
important goal being to halve, by 2015, the population suffering from poverty, hun-
ger, ill-health, and lack of safe drinking water and sanitation. A particularly crucial
question will be the water-mediated implications for different ecosystems of meeting the
needs of a growing population: food, biomass, employment, and shelter (Falkenmark 2003).
The most fundamental task of IWRM is the realization, by all stakeholders, that balanc-
ing and compromise are necessary in order to sustain both humanity’s and the planet’s life
support systems. Therefore, a watershed-based approach should have a priority with the
following goals (Falkenmark 2003):

• To satisfy societal needs while minimizing the pollution load and understanding
the water consumption that is involved
• To meet ecological minimum criteria in terms of fundamental ecosystem needs,
such as secured (uncommitted) environmental flow in the rivers, secured flood-
flow episodes, and acceptable river water quality
• To secure hydro-solidarity between upstream and downstream societal and eco-
system needs

On a more technical level, one of the most important roles of hydrogeologists is to edu-
cate the public and water professionals alike about the importance of groundwater and its
invisible role in the watershed and the hydrologic cycle as a whole. Often, water-resource
managers and decision makers have little background in hydrogeology and thus a limited
understanding of the processes induced by pumping groundwater from an aquifer. Both
irrational underutilization of groundwater resources (compared to surface water) and
excessive complacency about the sustainability of intensive groundwater use are thus still
commonplace (Tuinhof et al. 2002–2005).
Groundwater (and water in general) management is commonly divided into supply-
side management and demand-side management. This division is more for technical
and administrative purposes, however, because the two aspects are interdependent.
Overall water management is not just about making sound engineering and economic
decisions. In many cases, it is disproportionately influenced by policies that favor
growth of one or more groups of water users—urban, industrial, or agricultural—
without much regard for the sustainability of water use or the environmental impacts.
The worst possible outcome of failed policies is an uncontrolled spiral of increas-
ing demand, causing increasing groundwater withdrawals, which, in turn, results
in unsustainable depletion of the groundwater resource and overall environmental
degradation. Where groundwater is viewed as both an economic and a public good
and its use is overseen by most, if not all, stakeholders, it is less likely that this spiral
would continue unchecked. In contrast, when selling water is viewed only as a source
of profit for the water purveyor, possibly shared by others through tax revenues, for
example, it is more likely that an unsustainable use of groundwater resources will
continue.

Figure 9.3 illustrates application of an integrated numeric model used as a decision sup-
port system (DSS) for water-resources management on a regional, watershed scale. Such a
model can be continuously updated as new field information becomes available and can
be used in real time to make short- and long-term predictions based on estimated climatic
input. It can also be used to evaluate various engineering projects for development, aug-
mentation, protection, and restoration of both surface water and groundwater. Finally, it
can be used in support of new regulations aimed at balancing demand and supply and
competing interests of various water users.
An integrated hydrologic model called GSFLOW (for groundwater and surface-water
flow) has recently been developed by the United States Geological Survey (USGS) to simu-
late coupled groundwater and surface water resources (Markstrom et al. 2008). The new
model is based on the integration of the Precipitation-Runoff Modeling System (PRMS)
and MODFLOW. Additional model components were developed and existing compo-
nents were modified to facilitate integration of the models. Methods were developed to
route flow among the PRMS Hydrologic Response Units (HRUs) and between the HRUs
and the MODFLOW finite-difference cells. PRMS and MODFLOW have similar modular

[Figure 9.3 is a flow diagram: supply-side inputs (investigations, extraction, monitoring,
meteorology), GIS databases (maps, land use/cover), and demand-side inputs (population/
census data, growth projections) feed integrated surface water, groundwater, and climate
models; scenarios, optimization, artificial recharge, and regulations link model output to
decision making for the environment/ecosystems, public health, risk (floods, droughts),
industry, agriculture, energy, and urban supply.]
FIGURE 9.3
Application of integrated models used as a DSS for water-resources management on a regional, watershed scale.
Decisions regarding various water uses can be made based on any combination of modeling and field data
related to water supply and demand.

programming methods, which allows for their integration while retaining independence
that permits substitution of additional PRMS modules and MODFLOW packages. Both
models have a long history of support and development.
PRMS is a modular, deterministic, distributed-parameter, physical-process watershed
model used to simulate and evaluate the effects of various combinations of precipitation,
climate, and land use on watershed response. Response to normal and extreme rainfall
and snowmelt can be simulated to evaluate changes in water-balance relations, streamflow
regimes, soil–water relations, and groundwater recharge. PRMS simulates the hydrologic
processes of a watershed using a series of reservoirs that represent a volume of finite or
infinite capacity. Water is collected and stored in each reservoir for simulation of flow,
evapotranspiration, and sublimation. Flow to the drainage network, which consists of
[Figure 9.4 is a schematic: precipitation, air temperature, and solar radiation drive plant-
canopy interception, evaporation, sublimation, transpiration, and snowpack processes;
throughfall, rain, and snowmelt feed a soil-zone reservoir (recharge and lower zones) and
an impervious-zone reservoir that produce surface runoff to streams or lakes; subsurface
and groundwater reservoirs receive recharge and route interflow and groundwater
discharge to streams or lakes, with residual flow to a groundwater sink.]
FIGURE 9.4
Schematic diagram of a watershed and its climate inputs (precipitation, air temperature, and solar radiation)
simulated by the PRMS. (From Markstrom, S. L. et al., GSFLOW-Coupled Ground-Water and Surface-Water Flow
Model Based on the Integration of the Precipitation–Runoff Modeling System (PRMS) and the Modular Ground-Water
Flow Model (MODFLOW-2005), U.S. Geological Survey Techniques and Methods 6-D1, 240 pp., 2008. Modified
from Leavesley, G. H. et al., Precipitation–Runoff Modeling System—User’s Manual, U.S. Geological Survey Water-
Resources Investigations Report 83-4238, 207 pp., 1983.)

FIGURE 9.5
Hydrologic response units discretized for Sagehen Creek watershed near Truckee, CA, used in a surface
water model. (From Markstrom, S. L. et al., GSFLOW-Coupled Ground-Water and Surface-Water Flow
Model Based on the Integration of the Precipitation–Runoff Modeling System (PRMS) and the Modular
Ground-Water Flow Model (MODFLOW-2005), U.S. Geological Survey Techniques and Methods 6-D1, 240
pp., 2008.)

stream-channel and detention-reservoir (or simple lake) segments, is simulated by surface
runoff, interflow, and groundwater discharge (Figure 9.4).
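As a minimal illustration of this reservoir concept (a sketch only, not PRMS code; the
parameter values are hypothetical), a single linear reservoir that releases a fixed fraction
of its storage each day can be written as follows:

```python
# Minimal linear-reservoir sketch of the storage concept used by watershed
# models such as PRMS (illustration only, not PRMS code; values hypothetical).

def simulate_reservoir(inflow, k=0.1, s0=0.0):
    """Route a daily inflow series (e.g., mm over an HRU) through one linear
    reservoir that releases the fraction k of its storage each day."""
    storage, outflow = s0, []
    for q_in in inflow:
        storage += q_in          # add the day's recharge/inflow
        q_out = k * storage      # release proportional to current storage
        storage -= q_out
        outflow.append(q_out)
    return outflow

# A 10-mm recharge pulse on day 1 drains as a classic recession curve.
print([round(q, 2) for q in simulate_reservoir([10.0] + [0.0] * 9)])
```

Chaining several such reservoirs (soil zone, subsurface, groundwater) and partitioning
their outflows reproduces, in caricature, the structure shown in Figure 9.4.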
GSFLOW simulates flow within and among three regions. The first region is bounded
on top by the plant canopy and on the bottom by the lower limit of the soil zone, the
second region consists of all streams and lakes, and the third region is the subsurface
zone beneath the soil zone. PRMS is used to simulate hydrologic responses in the first
region (Figure 9.5), and MODFLOW-2005 is used to simulate hydrologic processes in
the second and third regions (Figure 9.6), including one-dimensional unsaturated flow
and groundwater discharge to the land surface, three-dimensional saturated ground-
water flow and storage, and groundwater interactions with streams and lakes (see
Figure 9.7).
The area of a watershed is discretized into a network of HRUs. The discretization
can be based on hydrologic and physical characteristics, such as drainage boundaries,
land-surface altitude, slope, and aspect; plant type and cover; land use; distribution of

FIGURE 9.6
Hydraulic conductivity values used in a groundwater model of the Sagehen Creek watershed near Truckee,
CA. (From Markstrom, S. L. et al., GSFLOW-Coupled Ground-Water and Surface-Water Flow Model Based on
the Integration of the Precipitation–Runoff Modeling System (PRMS) and the Modular Ground-Water Flow
Model (MODFLOW-2005), U.S. Geological Survey Techniques and Methods 6-D1, 240 pp., 2008.)

precipitation, temperature, and solar radiation; soil morphology and geology; and flow
direction. Each HRU is assumed to be homogeneous with respect to these hydrologic and
physical characteristics and to its hydrologic response. A water balance and an energy
balance are computed daily for each HRU. GSFLOW allows simulations using only
PRMS or MODFLOW-2005 within the integrated model for the purpose of initial cali-
bration of model parameters prior to a comprehensive calibration using the integrated
model. The model boundaries are defined using standard specified-head, specified-flow,
and head-dependent boundary conditions to account for inflows to and outflows from
the modeled region.

FIGURE 9.7
Inflows to and outflows from a lake as represented in GSFLOW. Grid shows finite-difference cells for lakes and
groundwater. (From Markstrom, S. L. et al., GSFLOW-Coupled Ground-Water and Surface-Water Flow Model
Based on the Integration of the Precipitation–Runoff Modeling System (PRMS) and the Modular Ground-Water
Flow Model (MODFLOW-2005), U.S. Geological Survey Techniques and Methods 6-D1, 240 pp., 2008.)

9.2 Groundwater Supply


Groundwater supply is a critical element of water-resources management, which should
have a clearly stated objective. This is true for any level of management, starting with a
local water agency or water purveyor and ending at the national (federal) level. The man-
agement objective should include the establishment of threshold values for readily mea-
sured quantities, such as groundwater levels, groundwater withdrawal (pumping) rates
and quality, land-surface subsidence, and changes in surface-water flow rates and quality,
where they impact or are impacted by groundwater withdrawal (see Figure 9.8). When a
threshold level is reached, the rules and regulations require that groundwater extraction
be adjusted or stopped to prevent exceeding that threshold.
Groundwater supply and management objectives may range from entirely qualitative
to strictly quantitative. At a local level, each objective would have a locally determined
threshold value, which can vary greatly. For example, in establishing a management objec-
tive for groundwater quality, a water utility may simply choose to establish an average
value of total dissolved solids as the indicator of whether a management objective is met,
and another agency may choose to have no constituents exceeding the maximum contami-
nant levels for public drinking-water standards. While there is great latitude in establish-
ing management objectives, local managers should remember that the objectives should
serve to support the goal of a sustainable supply for the beneficial use of the water in their
particular area (DWR 2003).
Sustainable groundwater development has become increasingly important to IWRM
even in areas where utilization of surface-water resources has long been the top priority.
Following are some of the reasons for this trend in regions where both surface-water and
groundwater resources are available:

FIGURE 9.8
Top: Cross section illustrating groundwater flow at an actual site where the flow regime is influenced by canals
and natural drainage features. Bottom: Change of groundwater flow directions after pumping is initiated at
a nearby water-supply well. Note that one of the canals is eventually completely dewatered by the pumping.

• Groundwater development requires lower capital investment and simpler distri-
bution systems because it can be executed in phases and closer to end users.
• Surface-water intakes and storage (reservoirs) are more vulnerable to seasonal
fluctuations in recharge and periods of drought; they are also more vulnerable to
the projected impacts of climate change.
• Evaporative loss from surface-water reservoirs is large, especially in semiarid and
arid regions, whereas such loss from groundwater systems (storage) is mostly neg-
ligible or nonexistent.
• The environmental and societal (e.g., community relocation) impacts of dams and
associated surface-water reservoirs are incomparably less acceptable to the gen-
eral public than just a generation ago.

• The general quality of surface-water bodies and their sediments has been impacted
by point- and nonpoint sources of contamination to a much greater extent and
for longer periods of time, requiring more expensive drinking-water treatment
involving the use of a variety of chemicals.
• Surface-water supplies are more vulnerable to accidental or intentional
contamination.
• The ability of surface water systems to balance daily and seasonal periods of peak
demand and low demand is limited. In contrast, water wells can simply be turned
off and on, and their pumping rates can be adjusted as needed.

Figure 9.9 illustrates the common evolution of groundwater resources development and
the associated stages based on the impacts of the hydraulic stress (groundwater extrac-
tion) on the system. The condition of excessive and unsustainable extraction (3A–Unstable
Development) is also included. For this case, the total abstraction rate (and usually the
number of production wells) will eventually fall markedly as a result of near irreversible
degradation of the aquifer system itself (Tuinhof et al. 2002–2005).

FIGURE 9.9
States of groundwater resource development in a major aquifer and their corresponding management needs. (From
Tuinhof, A. et al., Groundwater Resource Management: An Introduction to Its Scope and Practice, Sustainable
Groundwater Management: Concepts and Tools, Briefing Note Series, Note 1, GW MATE (Groundwater
Management Advisory Team), The World Bank, Washington, DC, 6 pp., 2002–2005. With permission.)

FIGURE 9.10
USGS monitoring well located at the headquarters of the National Ground Water Association in Columbus,
OH. Water levels are recorded in real time and transmitted via satellite to the USGS processing center. Data are
available online 15 minutes after recording.

In order to be effective or even possible, the groundwater supply must rely on monitor-
ing of static (nonpumping) and pumping water levels in the aquifer, spring and surface-
water flows, water quality, and their spatial and temporal changes (see Figure 9.10). All
monitoring data and data generated during resource evaluation, development, and exploi-
tation (operations and maintenance) should be stored and organized within an interactive
geographic information system (GIS) database for quantitative analyses and easy visual-
izations (see Chapter 3).

9.2.1 Groundwater Quantity


In general, the quantity of groundwater available for water supply should be derived from
the overall water budget of the watershed portion of interest (see Chapter 2 and Figure
2.33). However, as discussed earlier, quite a few elements contributing to the water budget
cannot be measured directly and are estimated with uncertainty. This includes the effec-
tive groundwater recharge rate, which is the most important such element. In addition, not all
groundwater stored in or flowing through the porous media is readily available for
feasible extraction in most cases (this, of course, should not be the goal of any groundwater-
based water supply to begin with). Therefore, the most common approach is to calculate
groundwater flow rates that can be obtained from a specific groundwater extraction design,
such as a well or a well field. This is accomplished by performing one or more aquifer
pumping tests and by analyzing the test results to obtain key hydrogeologic parameters of
the porous media: the hydraulic conductivity (transmissivity after multiplying by aquifer
thickness) and the storage properties (specific yield for unconfined aquifers and storage
coefficient for confined aquifers). These parameters are required for calculating optimum
pumping rates, radius of influence, and long-term impacts of groundwater withdrawal.
The design and analyses of aquifer tests are, with varying detail, described in most hydro-
geology textbooks (e.g., see Freeze and Cherry 1979; Kresic 2007) and quite a few specialty
texts (e.g., Kruseman et al. 1991). There are also various commercial computer programs
for interpretation of aquifer tests, such as AQTESOLV (HydroSOLVE 2002). Spreadsheets
for aquifer test analysis produced by the USGS that are available in the public domain are
included in the companion DVD for the benefit of the reader.
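As a minimal illustration of the kind of analytical solution these programs implement
(a sketch with hypothetical parameter values, not a substitute for proper test
interpretation), the classic Theis solution for a fully penetrating well in a confined
aquifer can be evaluated directly:

```python
# Sketch of the classic Theis solution for drawdown around a fully penetrating
# well in a confined aquifer; all parameter values here are hypothetical.
import numpy as np
from scipy.special import exp1  # well function W(u) = E1(u)

def theis_drawdown(Q, T, S, r, t):
    """Drawdown (m) at radius r (m) and time t (d) for pumping rate Q (m3/d),
    transmissivity T (m2/d), and storage coefficient S (dimensionless)."""
    u = r**2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)

# Drawdown 50 m from a well pumping 1000 m3/d after 0.1, 1, and 10 days
print(theis_drawdown(Q=1000.0, T=500.0, S=2e-4, r=50.0,
                     t=np.array([0.1, 1.0, 10.0])))
```

Estimating T and S is then a matter of matching such theoretical curves to the observed
drawdown, which is what the curve-matching routines in the programs above automate.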
Many different analytical solutions for aquifer pumping tests are available for a vari-
ety of hydrogeologic conditions, such as the presence of leaky aquitards below or above
the pumped aquifer (see Figure 9.11), delayed gravity drainage in unconfined aquifers,
partially penetrating wells, wells with large diameters and presence of bore skin on the
well walls, aquifer anisotropy, and fractured aquifers, including dual-porosity approaches
and fractures with skin. It therefore cannot be overemphasized that a thorough under-
standing of the site-specific hydrogeologic characteristics is crucial for any meaningful

FIGURE 9.11
Aquifer pumping test drawdown and recovery data at an observation well used to determine aquifer transmis-
sivity (T) and storage coefficient (S) utilized in a numeric groundwater flow model. (Courtesy of AMEC E&I,
Inc.)

interpretation of any aquifer test results. Most importantly, different analytical methods
developed with very different assumptions regarding the underlying physical process
can produce very similar results, and it is up to the interpreting hydrogeologist to decide
which method is the most appropriate and makes the most hydrogeologic sense. The worst
option is to let a computer program perform automated curve matching and accept the
results without any critical analysis. Numeric groundwater models are being increasingly
utilized not only for quantification of groundwater flow in a system but also for the analy-
sis of aquifer pumping tests because they can simulate heterogeneity, anisotropy, and the
varying geometry of the system as well as the presence of any boundaries to groundwater
flow. Various hydrogeologic assumptions can be changed and tested in a numeric model
until the field data are matched and the final conceptual model of the underlying hydro-
geologic conditions is selected.
Aquifer testing should always include two parts for both the newly developed and the
existing wells as shown in Figure 9.12. The first part of the test, which has three steps,
is designed to determine well characteristics, such as well loss and the need for possi-
ble redevelopment. The duration of each step should be the same, usually not more than
six to eight hours. Data recorded during the first step are used to initially estimate the
transmissivity and the storage coefficient of the aquifer. The size of the pump and the
long-term pumping rate for the second part of the test are selected based on drawdown
development during the three-step test. The second part of the test should be performed
after a complete recovery of the hydraulic head in the well and with a maximum feasible
pumping rate. Duration of this part of the test, which is designed to determine the overall
aquifer transmissivity for an extensive radius of influence, depends on specific project
requirements and may vary from 24 hours to several weeks in case of aquifer develop-
ment for major water-supply projects. Long-term pumping with a maximum rate is nec-
essary for uncovering aquifer characteristics that may not be apparent from a short test.
This includes delayed gravity drainage, distant boundaries, leakage through (or from) the
adjacent aquitards, the presence of dual-porosity media, and changes in storage. Both the
drawdown and the recovery data should be used to find the aquifer parameters. At least
one monitoring well near the pumping well should be available to analyze the test results.
However, it is preferable to have several monitoring wells at increasing distances from the

Test for aquifer


Test for well
characteristics
Pumping rate

characteristics

Time
Drawdown

FIGURE 9.12
Pumping-rate hydrographs and drawdown curves for a pumping test designed to determine well and aquifer
characteristics. Left: Three-step test for determining well efficiency (well loss) and optimum pumping rate for
the long-term test. Right: Long-term test for determining aquifer transmissivity (hydraulic conductivity) and
storage parameters.

pumping wells and in different directions, thus enabling an evaluation of possible aquifer
anisotropy and heterogeneity, including the location of any hydraulic boundaries that may
be present (e.g., see Kresic 2007).
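One common way to reduce the three-step data (a standard method in the literature, not
necessarily the only one) is Jacob's step-drawdown equation, sw = BQ + CQ², in which BQ
is the aquifer (formation) loss and CQ² the turbulent well loss; fitting B and C amounts to
a linear regression of sw/Q on Q. A sketch with hypothetical step data:

```python
# Fit Jacob's step-drawdown equation s_w = B*Q + C*Q**2 to hypothetical
# three-step test data; s_w/Q = B + C*Q is a straight line in Q.
import numpy as np

Q = np.array([500.0, 1000.0, 1500.0])  # step pumping rates (m3/d)
s = np.array([2.10, 4.60, 7.50])       # stabilized drawdown per step (m)

C, B = np.polyfit(Q, s / Q, 1)         # slope = C, intercept = B
print(f"B = {B:.2e} d/m2, C = {C:.2e} d2/m5")

# Well efficiency at a candidate long-term rate: the share of total drawdown
# that is formation loss rather than turbulent well loss.
Q_op = 1200.0
print(f"Efficiency at {Q_op:.0f} m3/d: "
      f"{B * Q_op / (B * Q_op + C * Q_op**2):.0%}")
```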
Real-time management of groundwater resources is becoming increasingly important
because of greater stresses on existing systems and the apparent greater frequency of
extreme weather events such as floods and droughts. Water managers must be able to rap-
idly assimilate and visualize data obtained from the field in order to make critical decisions
regarding allocations under difficult circumstances. To assist in this process, Groundswell
Technologies, Inc., has created a Groundwater Basin Storage Tracking (GBST) module for
automated water resources management within its Waiora software platform (described in
Chapter 8). This Web-based application uses sensors and telemetry to continuously moni-
tor and visualize changes in aquifer volume (storage) between any two selected time steps.
Cumulative storage volumes are automatically calculated and displayed and become read-
ily available for detailed analyses and export to reports. Applications of the GBST module
include groundwater-supply monitoring, water banking, aquifer storage and recovery,
and storm water infiltration accounting. Basin-scale tracking of groundwater resources
is critical to the overall management framework as it covers a wider area than conven-
tional aquifer tests used to site supply areas and, therefore, allows assessment of regional
impacts that have greater uncertainty. Figure 9.13 presents example visualizations of the
GBST module created in Waiora.
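The GBST algorithms themselves are proprietary, but the underlying bookkeeping can be
sketched: for an unconfined aquifer, the storage change between two time steps is
approximately the head change in each grid cell times the cell area times the specific
yield, summed over the basin (all values below are hypothetical):

```python
# Conceptual sketch of a basin storage-change calculation between two time
# steps from interpolated water-level grids (all values hypothetical).
import numpy as np

cell_area_m2 = 100.0 * 100.0    # 100 m x 100 m grid cells
specific_yield = 0.15           # dimensionless, unconfined aquifer

head_t1 = np.array([[10.2, 10.5], [10.8, 11.0]])  # water table, time 1 (m)
head_t2 = np.array([[10.0, 10.1], [10.5, 10.9]])  # water table, time 2 (m)

dV = np.sum((head_t2 - head_t1) * cell_area_m2 * specific_yield)
print(f"Storage change: {dV:,.0f} m3 ({dV / 1233.0:,.2f} acre-ft)")
```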
When evaluating the quantity of spring water available for water supply, it is important
to include a measure of spring discharge variability. For example, a spring may have a
very high average discharge, but it may be dry or just trickling most of the year. The pre-
vailing practice in most countries is to evaluate springs based on the minimum discharge
recorded over a long period, typically longer than several hydrologic years (a hydrologic
year is defined as spanning all wet and dry seasons within a full annual cycle). The sim-
plest measure of spring variability is the ratio of the maximum to minimum discharge:

Iv = Qmax / Qmin

Springs with the index of variability (Iv) greater than 10 are considered highly variable, and
those with Iv < 2 are sometimes called constant or steady springs.
Meinzer (1923) proposed the following measure of variability expressed as a percentage:

V = (Qmax − Qmin) / Qav × 100 (%)

where Qmax, Qmin, and Qav are maximum, minimum, and average discharge, respectively.
Based on this equation, a constant spring would have a variability of less than 25%, and a
variable spring would have a variability of greater than 100%.
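Both measures are trivial to compute from a discharge record, as in this sketch (the
monthly mean flows below are hypothetical):

```python
# Compute both spring variability measures from a discharge record; the
# indices are dimensionless, so the flow units do not matter.
def variability_indices(q):
    q_max, q_min = max(q), min(q)
    q_av = sum(q) / len(q)
    iv = q_max / q_min if q_min > 0 else float("inf")  # dry spring: infinite
    v_pct = (q_max - q_min) / q_av * 100.0             # Meinzer variability
    return iv, v_pct

monthly_means = [320, 280, 150, 90, 60, 45, 40, 55, 110, 180, 240, 300]
iv, v = variability_indices(monthly_means)
print(f"Iv = {iv:.1f} (>10: highly variable), V = {v:.0f}% (<25%: constant)")
```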
A more exact and preferable quantitative analysis is illustrated with Figures 9.14 through
9.16, provided there is a sufficiently long time series of spring discharge. Although in this
particular case, it would not be possible to accurately estimate the real natural discharge of
the spring because Edwards Aquifer is heavily pumped for water supply, the figures pro-
vide very useful related information. Hydrographs of the average monthly discharge of
Comal Springs in Texas for May and August for the 73-year-long period of record show the

FIGURE 9.13
(Top) Interpolated water-level distributions for a selected time step. Well symbols represent where water-level
data have been collected and contours represent interpolated distributions. Note that tick marks along the
bottom of the image represent additional time steps that can be viewed using playback and mouse controls.
(Courtesy of Mark Kram, PhD, CGWP of Groundswell Technologies, Inc.). (Bottom) Interpolated contours rep-
resenting the change in groundwater basin storage between two selected time steps in acre-feet. An estimate
of the volumetric storage change is displayed along the bottom of the frame. The contour map displays relative
change, so users can understand hydraulic elasticity and specific areas of intense and modest change. (Courtesy
of Mark Kram, PhD, CGWP of Groundswell Technologies, Inc.)

impact of several droughts, which are compounded by increased pumpage from the aqui-
fer. Note that May typically has the highest recorded daily flows and August the lowest.
During the drought of the 1950s, the springs were dry from June to November of 1956. This
example illustrates how using some average values, even when having an unusually long
period of record, could lead to erroneous conclusions about available secure discharge
rates for any given time. For example, Figure 9.15 shows the theoretical probability that the
average spring discharge in August would be less than 50 cfs is about 4%, and the prob-
ability that the spring would be dry (discharge equal to 0 cfs) is about 2%–3%. However,


FIGURE 9.14
Average monthly discharge in cubic feet per second at Comal Springs, TX, for May (bold line) and August
(dashed line). Horizontal lines indicate average May and August discharge for the entire 73-year-long period of
record. (From Kresic, N., Groundwater Resources: Sustainability, Management and Restoration. McGraw Hill, New
York, 852 pp., 2009. With permission.)

we know from the record that the spring went dry in August 1956. It should be noted again
that this probability analysis also reflects historic artificial groundwater withdrawals from
the system and, therefore, should not be used alone for any planning purposes. In other
words, such withdrawals may change in the future, and their impact would have to be
accounted for in some quantitative manner.
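Figure 9.15 is based on a fitted theoretical extreme-value distribution; as a simpler
empirical check, non-exceedance probabilities can be assigned with the Weibull plotting
position, P = m/(n + 1), where m is the rank of a value in ascending order and n the
record length. A sketch with hypothetical August mean flows:

```python
# Empirical non-exceedance probabilities via the Weibull plotting position
# P = m / (n + 1); the August mean flows (cfs) below are hypothetical.
flows_cfs = [0.0, 35.0, 60.0, 120.0, 180.0, 210.0, 250.0, 280.0, 310.0, 350.0]

ranked = sorted(flows_cfs)
n = len(ranked)
for m, q in enumerate(ranked, start=1):
    print(f"Q = {q:6.1f} cfs   P(non-exceedance) = {m / (n + 1):.2f}")
```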
Similar probability analysis can also be performed for daily spring flows, and various
percentiles of the probability distribution for individual months can be combined to pro-
duce graphs, such as the one shown in Figure 9.16. This graph can be displayed with a
similar plot of historic and current (recent) monthly precipitation, which is very useful


FIGURE 9.15
Extreme value probability distributions of average monthly flows in May and August for Comal Springs. (From
Kresic, N., Groundwater Resources: Sustainability, Management and Restoration. McGraw Hill, New York, 852 pp.,
2009. With permission.)

[Figure 9.16 plots daily flow in cfs (log scale) by month, with curves for the maximum and
minimum daily flows and for the 1%, 5%, 10%, 30%, 50%, 70%, 90%, 95%, and 99% percent-
of-time-exceeded values.]

FIGURE 9.16
Daily flow duration curves for each month. (Modified from U.S. Army Corps of Engineers (USACE), Hydrologic
Frequency Analysis, Engineering manual 1110-2-1415, Washington, DC, 1993. Available at http://140.194.76.129/
publications/eng-manuals/.)

when anticipating likely spring flows in the near future. For example, if the spring dis-
charge is significantly dependent on precipitation, the currently measured spring flow is
less than the 10th percentile (i.e., the historically observed flow is higher 90% of the time),
and the same is true for the recent precipitation, it may be necessary to impose some type
of restriction on spring-water use before the hydrologic and meteorological conditions
improve (Kresic and Bonacci 2010).
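The monthly percentiles behind such a graph can be computed directly from a daily
discharge record, as in this sketch (the record below is synthetic, generated only to make
the example self-contained):

```python
# Build the monthly flow-duration percentiles behind a plot like Figure 9.16
# from a daily record; the record here is synthetic.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
dates = pd.date_range("1990-01-01", "2009-12-31", freq="D")
flow = np.exp(1.5 * np.sin(2 * np.pi * dates.dayofyear / 365.25)
              + rng.normal(3.0, 0.5, len(dates)))   # seasonal signal + noise

df = pd.DataFrame({"flow_cfs": flow}, index=dates)
# The 0.10 quantile is the flow exceeded 90% of the time, and so on.
duration = (df.groupby(df.index.month)["flow_cfs"]
              .quantile([0.10, 0.50, 0.90]).unstack())
print(duration.round(1))
```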
Regression between different hydrologic variables that are correlated in some manner
and represented with a sufficient number of data is a simple and efficient quantitative
method for estimating groundwater availability in some cases. In the hydrology of springs,
this refers to finding a simple or multiple regression equation describing spring-flow rate
using an observed time series of variables known to influence it. For example, Figure 9.17
shows plots of daily flows at the Comal and San Marcos springs (both springs are draining
Edwards Aquifer in Texas), daily pumpage from the aquifer in the San Antonio area, the
aquifer levels at the Bexar County Index Well J17, and the precipitation in San Antonio. As
can be seen without any quantitative analysis, Comal Springs, which are closer to well J17
and San Antonio, shows a better correlation with the pumpage and the aquifer levels. In
fact, the simple regression model of the Comal Springs discharge based on the J17 water
levels (Figure 9.18) appears almost perfect, judging from the model correlation coefficient
(r = 0.978). This is also the main reason why the aquifer levels measured at index well
J17 are used as the key quantitative threshold parameter for aquifer management; when
this level drops below certain values, successively more stringent restrictions on aquifer
pumpage are imposed in order to protect the flow of the springs.
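The fitted model in Figure 9.18 has the form Q = (a + b × h)², so this type of regression
amounts to fitting a straight line to the square root of spring flow versus the aquifer
level. A sketch with hypothetical level/flow pairs (not the actual Comal/J17 record):

```python
# Fit a regression of the Figure 9.18 form, Q = (a + b*h)**2, by linear
# least squares on sqrt(Q); the level/flow pairs below are hypothetical.
import numpy as np

level_ft = np.array([645.0, 655.0, 662.0, 668.0, 675.0, 683.0, 690.0, 698.0])
flow_cfs = np.array([230.0, 270.0, 305.0, 340.0, 380.0, 430.0, 470.0, 530.0])

b, a = np.polyfit(level_ft, np.sqrt(flow_cfs), 1)
r = np.corrcoef(np.sqrt(flow_cfs), a + b * level_ft)[0, 1]
print(f"sqrt(Q) = {a:.3f} + {b:.4f} * h,   r = {r:.3f}")
print("Predicted flows (cfs):", ((a + b * level_ft) ** 2).round(0))
```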


FIGURE 9.17
Daily flows at Comal and San Marcos Springs (in cubic feet per second) versus pumpage from the Edwards
Aquifer in the San Antonio area (in million gallons per day), daily aquifer level at the Bexar County Index
Well J17, and daily precipitation in San Antonio. (Reprinted from Groundwater Hydrology of Springs: Engineering,
Theory, Management, and Sustainability, Kresic, N., Modeling, edited by N. Kresic and Z. Stevanović, pp. 166–230,
Copyright 2010, with permission from Elsevier.)

[Figure 9.18 plots Comal Springs flow (cfs) against the aquifer level at well J17 (ft asl),
with the fitted model Comal = (-74.563 + 0.1376 × J17)² and r = 0.9778.]

FIGURE 9.18
Simple regression model of the Comal Springs flow on aquifer level at the Bexar County Index Well J17 with
the 95% prediction limits. (Reprinted from Groundwater Hydrology of Springs: Engineering, Theory, Management,
and Sustainability, Kresic, N., Modeling, edited by N. Kresic and Z. Stevanović, pp. 166–230, Copyright 2010, with
permission from Elsevier.)

9.2.2 Groundwater Quality


Managing or predicting future quality of raw groundwater, before it is extracted from an
aquifer, is a much more complex task compared to evaluating or managing its quantity.
There are many sources, past and present, of potential or actual groundwater contami-
nation and an infinite variety of natural and anthropogenic contaminants (Kresic 2009).
It is often very difficult to pinpoint every single source of groundwater contamination
and even more difficult to quickly restore the resource to beneficial use. In most cases,
the quality of the resource cannot be directly controlled by the end user because of legal,
financial, and other constraints. A simple example is contamination that is (or has been)
occurring miles away, outside the jurisdiction of the utility that extracts groundwater,
and is affecting (or may be affecting in the near future) a well field for public water sup-
ply. Even when the sources of contamination are well defined and the legal authority for
groundwater restoration is clearly established, it may take years before any measures to
mitigate the situation are taken. One common reason is the high cost of groundwater
remediation, which can prohibit small and large users alike from attempting to solve
the problem on their own. This is the main reason why in some societies, such as the
United States, where the legal rights of both water users and alleged polluters are highly
protected, exorbitant sums of money are spent each year on litigation over groundwater
contamination.
In some complex hydrogeologic environments, aquifer restoration to pristine or near-
pristine conditions (legally defined as all contaminants being present below their
maximum allowed concentrations) may not be technically feasible (see Chapter 8). This
fact is often not acceptable to some stakeholders, and there are many examples of ground-
water users with false hopes, waiting for someone else to pay for solving their groundwa-
ter contamination problem. In such cases, the someone else should be, whenever possible,
considered as something else, including at least two options: (1) groundwater treatment to
drinking-water standards after extraction from the aquifer, and (2) innovative approaches
to overall water management, including water reuse and public outreach (education)
regarding groundwater-resource (aquifer) protection.
Oftentimes a source of aquifer contamination may be unknown to or underestimated by
a public or private entity developing a groundwater resource. One such example is illus-
trated in Figure 9.19. Several years of initial monitoring at a proposed extraction site under
short-term pumping or static conditions showed acceptable water quality, including con-
centrations of constituent #1 at or below its drinking water equivalency level (or guidance
level) and concentrations of constituent #2 below its secondary drinking-water standard.
However, upon full-scale initiation of pumping from the well for water supply, concen-
trations of both constituents increased rapidly as shown in Figure 9.19. In a worst-case
scenario, the well may need to be abandoned if concentrations of constituent #2 exceed its
secondary standard.
This example highlights the importance of expansive water-quality monitoring under
long-term pumping conditions around a proposed extraction site before full-scale devel-
opment of the resource in addition to rigorous identification of potential contaminants
(including nontraditional contaminants with secondary standards) and quantification of
their mass in the aquifer system. It is also important to be aware of emerging contaminants
that may jeopardize the future usage of a groundwater supply. At any time, previously
unregulated contaminants may be added to state or federal drinking water criteria or
existing criteria may be modified to lower concentration thresholds for already-regulated
contaminants (two example cases for the reader to explore independently are related to

FIGURE 9.19
Concentrations of two chemical constituents in a water-supply well over time. Continuous pumping of the well
for water supply began in 1997 as indicated by the dashed black vertical line.

arsenic and 1,4-dioxane). Either of these actions may compromise a well’s viability and
increase monitoring and operational costs. For this reason, many public and private enti-
ties are focused on identifying pristine resources for groundwater development. However,
as undeveloped, pristine sources are becoming increasingly rare and inaccessible, appro-
priate risk characterization and management of more vulnerable resources are becoming
key elements of sustainable groundwater development.
Nonpoint-source groundwater contamination, which requires both regulatory and local
land-use changes in order to restore aquifers to their natural condition, is caused worldwide
by the use of pesticides and fertilizers. For example, the United Kingdom Environment
Agency reported that pesticides were found in more than a quarter of groundwater moni-
toring sites in England and Wales in 2004—in some cases exceeding applicable drinking-
water limits. Atrazine is a weed killer used mainly to protect maize (corn) crops, and it
was used in the past to maintain roads and railways. It has been a major problem, but since
nonagricultural uses were banned in 1993, concentrations in groundwater have gradually
declined. A complete ban on all use of atrazine (and simazine, another pesticide) in the
United Kingdom was planned to be phased in between 2005 and 2007 but has been delayed.
As noted by the United Kingdom Environment Agency, even when banned, pesticides can
remain a problem for many years after they were last used (Environment Agency 2007).
Some other European countries have also banned the use of atrazine: France, Sweden,
Norway, Denmark, Finland, Germany, Austria, Slovenia, and Italy. In contrast, the United
States Environmental Protection Agency (U.S. EPA) has concluded that the risks from atra-
zine for approximately 10,000 community drinking-water systems using surface water are
low and did not ban this pesticide, which continues to be the most widely used pesticide
in the United States. Incidentally, as stated by the U.S. EPA, 40,000 community drinking-
water systems using groundwater were not included in the related study, and private wells
used for water supply were not mentioned in the U.S. EPA’s decision to allow continuous
use of atrazine (U.S. EPA 2003).
The United Kingdom Environment Agency also reported that in 2004 almost 15% of mon-
itoring sites in England (none in Wales) had an average nitrate concentration that exceeded
50 mg/L, the upper limit for nitrate in drinking water (for comparison, groundwater

naturally contains only a few milligrams per liter of nitrate). Water with high nitrate levels
has to be treated or diluted with cleaner water to reduce concentrations. More than two-
thirds of the nitrate in groundwater comes from past and present agriculture, mostly from
chemical fertilizers and organic materials. It is estimated that more than 10 million tons
of organic material per year is spread on the land in the United Kingdom. More than 90%
of this is animal manure; the rest is treated sewage sludge, green waste compost, paper
sludge, and organic industrial wastes. Other major sources of nitrate are leaking sewers,
septic tanks, water mains, and atmospheric deposition. Atmospheric deposition of nitro-
gen makes a significant contribution of nitrate to groundwater. A study in the Midlands
of England concluded that approximately 15% of the nitrogen leached from soils came
from the atmosphere. The United Kingdom Environment Agency estimates that 60% of
groundwater bodies in England and 11% in Wales are at risk of failing the European Water
Framework Directive objectives because of high nitrate concentrations (Environment
Agency 2007). In general, nitrate is believed to be the most widespread groundwater con-
taminant worldwide (Kresic 2009).
Figure 9.20 shows results of a modeling effort aimed at better understanding the mecha-
nisms of nitrate leaching to groundwater and predicting its future impacts. A GIS and
Microsoft Excel tool, which models the slow movement of historically leached nitrate
down through the unsaturated zone of the Chalk aquifer, has been developed for Wessex
Water Services, Ltd., in the United Kingdom. Seasonal and short-term variations in nitrate
were simulated by linking to groundwater level and bypass recharge variations. Model
predictions closely matched short- and long-term trends and gave confidence in the use of
the model for predicting nitrate concentrations in the years to come assuming a number of
nitrate leaching scenarios. Wessex Water used the findings of the modeling tool to assess
the likely success of active catchment management (i.e., storm water runoff capture and
treatment) as an alternative to blending or treatment in the short and long term.

[Figure 9.20 plots, for 1998–2005, site-measured average nitrate and modeled NO3 (as a
function of groundwater level alone, and of groundwater level and recharge) in mg/L N,
together with total recharge at the water table and estimated bypass recharge in mm/d.]

FIGURE 9.20
Model-simulated (red line) trends in historic groundwater nitrate concentrations used to predict future impacts
assuming a number of nitrate leaching scenarios to the underlying chalk aquifer in England. (Courtesy of
AMEC E&I, Inc.)

9.2.2.1 Protection of Groundwater


As discussed by Hötzl (1996), it is practical to distinguish between resource and source
protection, although both concepts are closely related to each other—it is impossible to
protect a resource without also protecting the source. In European countries, for exam-
ple, groundwater is considered a valuable resource that must be protected, and activities
endangering its quality are forbidden by law. The European Water Framework Directive
(The European Parliament and the Council of the European Union 2000) emphasizes that
water is not a commercial product like any other but a heritage that must be protected,
defended, and treated as such. Thus, the directive demands the protection of groundwater
and surface-water resources. The highest priority is to protect the groundwater used for
drinking-water supply. The source may be a captured spring, a pumping well, or any other
groundwater extraction point. The European Groundwater Directive of 2006 extends the
concept of overall resource protection in detail (The European Parliament and the Council
of the European Union 2006).
The following discussion of the importance of groundwater resources and their vulner-
ability is based on material provided by the U.S. EPA's Ground Water Task Force (GWTF
2007).
Groundwater use typically refers to the current use(s) and functions of groundwater as
well as future, reasonably expected use(s). Groundwater use can generally be divided into
drinking-water, ecological, agricultural, industrial/commercial, and recreational uses or
functions. Drinking water use includes both public supply and individual (household
or domestic) water systems. Ecological use commonly refers to groundwater functions,
such as providing baseflow to surface water to support habitat and also to the fact that
groundwater (most notably in karst settings) may also serve as an ecologic habitat in
and of itself. Agricultural use generally refers to crop irrigation and livestock watering.
Industrial/commercial use refers to water use in any industrial process, such as for cool-
ing water in manufacturing, or commercial uses, such as car-wash facilities. Recreational
use generally pertains to impacts on surface water caused by groundwater; however,
groundwater in karst settings can be used for recreational purposes, such as cave div-
ing (see Figure 2.92). All of these uses and functions are considered beneficial uses of
groundwater. Furthermore, within a range of reasonably expected uses and functions,
the maximum (or highest) beneficial groundwater use refers to the use or function that
warrants the most stringent groundwater cleanup levels.
Groundwater value is typically considered in three ways: for its current uses, for its future
or reasonably expected uses, and for its intrinsic value. Current use value depends, to a
large part, on need. Groundwater is more valuable where it is the only source of water,
where it is less costly than treating and distributing surface water, or where it supports
ecological habitat. Current use value can also consider the costs associated with impacts
from contaminated groundwater on surrounding media (e.g., underlying drinking-water
aquifers, overlying air—particularly indoor air—and adjacent surface water). Future or
reasonably expected value refers to the value people place on groundwater they expect
to use in the future; this value will depend on the particular expected use or uses (e.g.,
drinking water, industrial). Society also places an intrinsic value on groundwater, which
is distinct from economic value. Intrinsic value refers to the value people place on the fact
that clean groundwater exists and will be available for future generations, irrespective of
current or expected uses. While the value of groundwater is often difficult to quantify, it
will certainly increase as the expense of treating surface water increases and as existing
surface water and groundwater supplies reach capacity with continuing development.

Groundwater vulnerability refers to the relative ease with which a contaminant intro-
duced into the environment can negatively impact groundwater quality and/or quantity.
Vulnerability depends to a large extent upon local conditions including, for example,
hydrogeology, contaminant properties, size or volume of a release, and location of the
source of contamination. Shallow groundwater is generally more vulnerable than deep
groundwater. Private (domestic) water supplies can be particularly vulnerable because
they are generally shallower than public water supplies, regulatory agencies generally
require little or no monitoring or testing for these wells (see arsenic discussion in Chapter
1), and homeowners may be unaware of contamination unless there is a taste or odor
problem. Furthermore, vulnerability can change over time. For example, anthropogenic
activities, such as mining or construction, can remove or alter protective overburden, thus
making underlying aquifers more vulnerable.
The protection of groundwater resources is achieved by prevention of possible contami-
nation, by remediation of already-contaminated groundwater, and by detection and pre-
vention of unsustainable extraction. The prevention aspect includes pollution-prevention
programs or control measures at potential contaminant sources, land-use control, and
public education. Some examples of prevention measures include the following:

• Mandatory installation of devices for early detection of contaminant releases,
such as leaks from underground storage tanks at gas stations, and landfill leach-
ate migration
• Banning of pesticide use in sensitive aquifer recharge areas
• Land-use controls that prevent an obvious introduction of contaminants into the sub-
surface, such as from industrial, agricultural, and urban untreated wastewater lagoons
• Land-use controls that minimize the interruption of natural aquifer recharge,
such as the paving of large urban areas (urban sprawl)
• Management of urban runoff that can contaminate both surface and groundwater
resources (for example, see U.S. EPA 2005)

Remediation of already-contaminated groundwater is the second key aspect of resource
protection (see Chapter 8). In its publication titled “Protecting the Nation’s Ground Water:
EPA’s Strategy for the 1990s: The Final Report of the EPA Ground-Water Task Force,” the
U.S. EPA stated that groundwater remediation activities must be prioritized to limit the
risk of adverse effects to human health first and then to restore currently used and rea-
sonably expected sources of drinking water and groundwater whenever such restora-
tions are practicable and attainable. The agency also states, “Given the costs and technical
limitations associated with groundwater cleanup, a framework should be established that
ensures the environmental and public health benefit of each dollar spent is maximized.
Thus, in making remediation decisions, EPA must take a realistic approach to restora-
tion based upon actual and reasonably expected uses of the resource as well as social
and economic values.” Finally, given the expense and technical difficulties associated with
groundwater remediation, the agency emphasizes early detection and monitoring, so that
it can address the appropriate steps to control and remediate the risk of adverse effects of
groundwater contamination to human health and the environment (U.S. EPA 1991).
The fundamental concept of groundwater vulnerability is that some portions of the
underlying aquifer (groundwater resource) are more vulnerable to contamination from the
land surface than others. Vrba and Zaporozec (1994) emphasize that the vulnerability of

groundwater is a relative, unmeasurable, dimensionless property, and they make the dis-
tinction between intrinsic (natural) and specific vulnerability. The intrinsic vulnerability
depends only upon the natural properties of an area, such as characteristics of the porous
media, and recharge. It is independent of any particular contaminant. Specific vulnerability
takes into account the fate-and-transport properties of a contaminant. Simplified, this means
that, for example, an aquifer may be vulnerable to an improper disposal or spill of chlori-
nated solvents at the land surface even though the groundwater flow directions and the
presence of a low-permeable overlying aquitard may be protective against nonpoint-source
nitrate contamination. Another example is a thick, unsaturated zone (e.g., > 300 ft) in arid
climates that may be highly protective of the underlying unconfined aquifer simply because
of the insignificant present-day aquifer recharge that cannot facilitate migration of a con-
taminant through such a thick vadose zone all the way down to the water table. However,
if there were some land-use practices, such as waste disposal in artificial ponds, which can
facilitate contaminant migration, these aquifers would be considered vulnerable.


Although most definitions and methods for mapping groundwater vulnerability con-
sider only contamination aspects, there are also development-based aspects of ground-
water protection and vulnerability, such as overexploitation and aquifer mining (Vrba and
Zaporozec 1994). Maps depicting time-dependent quantities of groundwater available for
extraction and the associated development of drawdown (decrease in water levels) are a
very useful tool for groundwater management of nonrenewable groundwater resources
and other stressed aquifer systems.
It is important to distinguish between the protection of groundwater resources in general,
which is supported by vulnerability maps, and the protection of a drinking-water source,
which is supported by mapping the source zone protection area (or, as commonly referred
to in the United States, delineation of the wellhead protection zone). Although these two
concepts are closely related, they usually involve different scales, and the associated maps
have different objectives. A wellhead protection zone is an area within which there is a
complete pathway between any given location at the water table (top of the unconfined
aquifer) and the groundwater extraction point (water well). This zone is usually defined by
the length of time required for all water particles inside the zone to be extracted, i.e., cap-
tured by the well (e.g., a 5- or 10-year capture zone). In contrast, groundwater vulnerability is a qualitative concept expressing the relative likelihood that contamination introduced at the land surface will reach the water table. In some
cases, the objectives of resource and source protections are merged by creating a map, or
several map overlays, that can be used to present both the general vulnerability of an area
and the wellhead protection zones for the existing groundwater supplies (Kresic 2009).
Most methods used to delineate wellhead protection zones address the residence-time
criterion. This criterion is based on the assumption that

• Nonconservative contaminants subject to various fate-and-transport processes (e.g., sorption, diffusion, degradation) may be attenuated after a given time in the subsurface.
• Detection of conservative contaminants (not subject to attenuation) entering the
wellhead protection area will give enough lead time for the public water supply
entity to take necessary action, including groundwater remediation and/or devel-
opment of a new (alternative) water supply.
• Detection of any contaminants already in the wellhead protection area would
require an immediate remedial action.

The most critical decision regarding an appropriate residence time is made by the stake-
holders in each individual case although 5-, 10-, and 20-year wellhead capture zones have
been most widely used to delineate certain subzones of various land-use restrictions
within the main wellhead capture zone (for example, see Kraemer et al. 2005).
Wellhead protection zone delineation methods can be divided into the following three
categories: (1) nonhydrogeologic, (2) quasihydrogeologic, and (3) hydrogeologic. The non-
hydrogeologic method is a selection of an arbitrary fixed radius or fixed-shape area
around the well(s) in which authorized personnel implement some type of strict protec-
tion such as limited access. This method does not consider the residence-time criterion.
Quasihydrogeologic methods use very simple assumptions, which, in many cases, do
not have much in common with the site-specific hydrogeologic conditions. Because they
include the application of certain equations, it may appear to nonhydrogeologists that
such methods must have some credibility. When involved in the application of quasihydrogeologic methods, hydrogeologists should clearly explain the limitations of the various unrealistic assumptions and their implications for the final wellhead zone delineation. For
example, these methods do not consider aquifer heterogeneity and anisotropy, stratifica-
tion or the presence of confining layers, vertical flow components, interference between
multiple wells screened at different or the same depths in the same well field, or depth to
screen of individual wells. Three common quasihydrogeologic methods include (1) cal-
culated fixed radius, (2) a well in a uniform flow field, and (3) groundwater modeling not
based on hydrogeologic mapping (Kresic 2007). None of these methods should be applied
in any of the situations listed above.
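To make the simplicity of these quasihydrogeologic methods concrete, the sketch below evaluates the calculated fixed radius (a volumetric equation) and the capture-zone dimensions for a well in a uniform flow field. All parameter values are hypothetical, and the equations embody exactly the simplifications criticized above: homogeneous, isotropic, two-dimensional flow with no well interference.

```python
import math

# Hedged sketch (hypothetical values) of two quasihydrogeologic delineation
# methods. These equations ignore heterogeneity, anisotropy, vertical flow,
# and well interference, which is why the text cautions against them.

Q = 500.0 * 192.5          # pumping rate: 500 gpm expressed in ft3/day
n, b = 0.25, 50.0          # effective porosity (-) and aquifer thickness (ft)
t = 5 * 365.25             # residence-time criterion: 5 years, in days

# (1) Calculated fixed radius: the cylinder of aquifer needed to supply the
#     volume pumped over time t, r = sqrt(Q*t / (pi*n*b)).
r_cfr = math.sqrt(Q * t / (math.pi * n * b))

# (2) Well in a uniform flow field with Darcy flux K*i through thickness b:
K, i = 100.0, 0.005        # hydraulic conductivity (ft/day) and gradient (-)
x_stag = Q / (2 * math.pi * K * b * i)   # downgradient stagnation point (ft)
w_max = Q / (K * b * i)                  # asymptotic capture-zone width (ft)

print(f"Calculated fixed radius: {r_cfr:,.0f} ft")
print(f"Stagnation point: {x_stag:,.0f} ft; maximum capture width: {w_max:,.0f} ft")
```

Both answers look authoritative but hinge entirely on the assumed parameter values, which is precisely why such results should never be presented without the caveats discussed above.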
Numeric models based on the results of hydrogeologic mapping are the best tools for
the delineation of wellhead capture zones and groundwater resources management in
general. Regardless of the effort and resources invested into hydrogeologic mapping for
any purpose, there will inevitably remain a level of uncertainty as to the true represen-
tative hydrodynamic characteristics of the groundwater flow system analyzed. Numeric
models provide for quantitative analysis of this uncertainty and enable decision makers
to analyze different “what if” scenarios related to source management and protection.
Numeric models can take into account all important aspects of the three-dimensional
hydrogeologic mapping and convert them into a quantitative description of the flow sys-
tem. Consequently, they can delineate complex aquifer volumes and areas contributing
water to well fields (see Chapter 5).
Karst aquifers are particularly vulnerable to contamination. Contaminants can easily
enter conduits and may be transported rapidly over large distances in the aquifer. Processes
of contaminant retardation and attenuation often do not work effectively in karst systems.
Therefore, karst aquifers need special protection and attention. A detailed knowledge of
karst hydrogeology is a precondition for the delineation of source protection areas in karst
(see Chapter 2). However, karst drainage areas contributing water to a single spring or a
large supply well can be very large and/or extensive and convoluted because of anisotropy
(e.g., presence of karst conduits); it is therefore often unrealistic to designate maximum
protection for the entire system as the resulting land-use restrictions would not be accept-
able to some stakeholders. In addition, many public and private water systems lack the financial and other resources needed to adequately characterize karst hydrogeology in their service areas. An equally important problem is the applica-
tion of inadequate tools, such as use of numeric models not capable of simulating the real
physical nature of groundwater flow in karst, as illustrated with the following example.
Figure 9.21 shows groundwater flow paths and velocities in a portion of the Edwards
Aquifer near San Antonio, TX, as modeled by the USGS numeric groundwater flow model
FIGURE 9.21
Map showing tracer test results, faults (in black), and modeled flow paths. Injection sites are shown with yellow
circles, and monitored wells are shown with smaller dotted circles. Tracer test velocities range from 80 ft/day
to more than 12,000 ft/day. The groundwater travel velocity between arrows placed on the modeled flow paths,
as calculated from the USGS’s numeric model of Edwards Aquifer, TX, is 1 mi per year. Dye-tracing results are
presented in the work of Schindel et al. (2009). (Courtesy of Geary Schindel, Edwards Aquifer Authority.)

together with the information obtained from a number of dye-tracing tests performed in
the same area. The numeric model was developed using MODFLOW and an assumption
that the karstic Edwards Aquifer can be modeled as an equivalent porous medium (see
also Chapter 5). However, as can be seen from Figure 9.21, the model completely fails to
match either the groundwater flow directions or the velocities observed in the field.

9.2.3 Groundwater Extraction


Groundwater extraction (withdrawal) is generally accomplished by the following: installation
of individual water wells or well fields, construction of underground dams (reservoirs), and
regulation of springs. In addition to classical vertical wells, groundwater extraction can be per-
formed in a number of other ways, including with horizontal and slanted wells, collector wells,
infiltration galleries, drainage galleries, trenches, and drains (Kresic 2007, 2009).
Regardless of the selected means of groundwater extraction, the final design and loca-
tion is usually a compromise made after considering various factors:

• Capital cost
• Vicinity to future users
• Existing groundwater users and groundwater permits
• Hydrogeologic characteristics and depth to different water-bearing zones
(aquifers)
• Required flow rate of the water-supply system and expected yield of individual wells
• Well drawdown and radius of well (well field) influence
• Interference between wells in the well field
• Water treatment requirements
• Energy cost for pumping and water treatment and general O&M costs
• Aquifer vulnerability and risks associated with the existing or potential sources
of contamination
• Interactions with other parts of the groundwater system and with surface water
• Options for artificial aquifer recharge, including storage and recovery
• Societal (political) requirements
• Existence or possibility of an open water market
The above factors are not all-inclusive and are not listed in order of importance; sometimes just
one or two factors are all that are needed for proceeding with the final design. However, as
the development and use of groundwater resources is becoming increasingly regulated in the
United States and in many other countries, it is likely that most of these factors will have to be
addressed as part of a well-permitting process. Even in cases where permitting requirements
are absent, it is prudent to consider most, if not all, of the listed factors because they ultimately
define the long-term sustainability of any new well or well field (Kresic 2009).
Wells have been used for centuries for domestic and public water supply throughout the
world. Their depth, diameter, and construction methods vary widely, and there is no such
thing as a “one size fits all” approach to construction of wells.
Well design, installation, and well-construction materials should conform to applicable
standards. In the United States, the most widely used water-well standard is the American
National Standards Institute (ANSI)/American Water Works Association (AWWA) A100 stan-
dard, but the authority to regulate products for use in or contact with drinking water rests
with individual states, which may have their own standard requirements. Local agencies may
choose to impose requirements more stringent than those required by the state (AWWA 1998).
Answers to just about any question regarding well design can be found in the classic
1000-page book Groundwater and Wells by Driscoll (1986). Other exhaustive reference books
on well design are Handbook of Ground Water Development by Roscoe Moss Company (1990)
and Water Well Technology by Campbell and Lehr (1973). Following is a brief discussion
of key concepts for consideration in well design. Various public-domain publications by
United States government agencies provide useful information on the design and installa-
tion of water-supply and monitoring wells (e.g., U.S. EPA 1975, 1991; USBR 1977).
The design elements of vertical water wells include the following: drilling method, bor-
ing (drilling) and casing diameter, depth, well screen, gravel pack, well development, well
testing, and selection and installation of the permanent pump. Whenever possible, a well
design should be based on information obtained by a pilot boring drilled prior to the main
well bore. Geophysical logging and coring (sample collection) of the pilot boring provide
the following information: depth to and thickness of the water-bearing intervals in the
aquifer, grain size, permeability of the water-bearing intervals, and physical and chemical
characteristics of the porous media and groundwater. Unknown geology and hydrogeol-
ogy of the formation(s) to be drilled may result in the selection of an improper drilling
technology, sometimes leading to a complete abandonment of the drilling location because
of various unforeseen difficulties such as flowing sands, collapse of boring walls, or loss of
drilling equipment in karst cavities.

The expected well yield, the well depth, and the geologic and hydrogeologic characteristics
of the porous media (rock) all play an important role in selecting the drilling diameter and the
drilling method. Deep wells or thick stratification of permeable and low-permeable porous
media may require drilling with several diameters and the installation of several casings of
progressively smaller diameter called telescoping casing. This is done to provide stable and
plumb boreholes in deep wells and to bridge difficult or undesirable intervals (e.g., flowing
sands, highly fractured and unstable walls prone to caving, thick sequences of swelling clay).
Ultimately, the expected well capacity is the parameter that will define the last drilling
diameter sufficient to accommodate the screen diameter, including thickness of any gravel
pack for that capacity. The relationship between the two diameters is not linear—doubling
the screen diameter will not result in doubling the well yield. For example, for the same
drawdown and radius of influence, an increase in diameter from a 6-in. well to a 12-in. well
will yield only 10% more water.
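This nonlinearity follows from the steady-state Thiem equation, in which the yield for a fixed drawdown is proportional to 1/ln(R/r_w). The quick check below uses an assumed radius of influence, not a value from any cited study.

```python
import math

# Verifying the ~10% figure with the Thiem equation, Q = 2*pi*K*b*s / ln(R/r_w):
# for fixed drawdown s and radius of influence R, yield scales as 1/ln(R/r_w).
# R is an illustrative assumption.

R = 500.0                    # radius of influence (ft), assumed
r6, r12 = 0.25, 0.5          # radii of 6-in. and 12-in. wells (ft)

ratio = math.log(R / r6) / math.log(R / r12)
print(f"Q(12 in.) / Q(6 in.) = {ratio:.3f}")   # ~1.10, i.e., about 10% more water
```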
The well screen is arguably the most important part of a well because this is where
groundwater enters the well and where the efficiency of an otherwise good design may
be compromised, including loss of the entire well. Casing and screens both stabilize the
formation materials, and screens, in addition to admitting water, allow proper well
development. It has been generally accepted that the screens of public water-supply wells
should be made of high-quality stainless steel (see Figure 9.22). During the process of well

FIGURE 9.22
Installation of continuous slot (Johnson) well screen and gravel pack into a large-diameter water-supply well in
upstate New York. The well is completed adjacent to a major river in alluvial deposits containing large gravel
and boulders, requiring use of the cable-tool drilling method.

development, the finer materials from the productive water-bearing zones and any fines
introduced by the drilling fluid are removed, so that only the coarser materials are in
contact with the screen. In formations where the porous media grains surrounding the
screen are more uniform in size (homogeneous) and are graded in such a way that the
fine grains will not clog the screen, the developed aquifer materials will form a so-called
natural pack, consisting of grains coarser than those farther away from the well bore. Such wells
are called naturally developed wells. In contrast, when the targeted aquifer (formation)
intervals are heterogeneous and have predominantly finer grains, it may be necessary to
place an artificial gravel pack around the screen intervals (Figure 9.22). This gravel pack
(also called filter material) will allow proper well development and prevent the continuous
entrance of fines and screen clogging by the fines during well operation. The placement of
a gravel pack also makes the zone around the well screen more permeable and increases
the effective hydraulic diameter of the well.
The size of well screen openings depends on the grain size distribution of the natural
porous media. When natural well development is not possible, the size of screen openings
is also dependent on the required gravel pack characteristics (gravel pack grain size and
uniformity). The percentage of openings, the screen diameter, and the screen length should
all be selected simultaneously to satisfy the following criteria: maximize well yield, maxi-
mize well efficiency by minimizing hydraulic loss at the screen, and provide for structural
strength of the screen, i.e., prevent its collapse resulting from formation pressure.
Proper well development will improve almost any well regardless of type and size,
whereas without development, an otherwise excellent well may never be satisfactory. As
discussed by U.S. EPA (1975) and Driscoll (1986), in any well drilling technology, the per-
meability around the borehole is reduced. Compaction, clay smearing, and driving fines
into the wall of the borehole occur in the cable tool drilling method. Drilling fluid invasion
into the aquifer and formation of a mud cake on the borehole walls are caused by the direct
rotary method. Silty and dirty water often clog the aquifer in the reverse rotary drill-
ing method. In consolidated formations, compaction may occur in some poorly cemented
rocks, where cuttings, fines, and mud are forced into fractures, bedding planes, and other
openings, and a mud cake forms on the wall of the borehole.
There are various methods of well development, and their selection depends primarily
on the applied drilling technology and the formation characteristics. In practice, however, equipment availability and the driller's preference often play an unjustifiably important role. It is often impossible to anticipate how a well will respond to certain types of
development and how long it will take to achieve adequate development. General methods
of well development are pumping, surging, fracturing, and washing, each of which has
several variations (U.S. EPA 1975). It is always recommended that at least two methods
be applied for best results. Because a lump-sum basis for well development may result in
unsatisfactory work, it is better to provide for development on a unit price per hour basis
and continue until the following conditions have been met (AWWA 1998):

1. Sand content should average no more than 5 mg/L for a complete pumping cycle
of 2-hour duration when pumping at the design discharge capacity.
2. No less than 10 measurements should be taken at equal intervals to permit plot-
ting of sand content as a function of time and production rate and to determine the
average sand content for each cycle.
3. There should be no significant increase in specific capacity during at least 24 hours
of development.
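As an illustration only, the first two criteria lend themselves to a simple field check; the sand-content measurements below are invented for the example.

```python
# A minimal sketch (hypothetical field data) of checking the first two AWWA
# criteria quoted above: at least 10 evenly spaced sand-content measurements
# over a 2-hour pumping cycle, averaging no more than 5 mg/L.

sand_mg_per_L = [12.0, 9.5, 7.0, 6.0, 5.5, 4.0, 3.0, 2.5, 2.0, 1.5]  # one per 12 min

enough_readings = len(sand_mg_per_L) >= 10
avg = sum(sand_mg_per_L) / len(sand_mg_per_L)

print(f"Readings: {len(sand_mg_per_L)}, average sand content: {avg:.1f} mg/L")
print("Criterion met" if enough_readings and avg <= 5.0 else "Continue development")
```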

Figure 9.23 shows discharge of clear water from a properly developed, large-diameter,
deep well designed for public water supply in Phoenix, AZ. Wells like this have enabled an
unprecedented growth of urban areas and agricultural land in many arid environments
around the world. At the same time, however, a relatively small number of people fully
understand the complexity, importance, and cost of a properly designed and constructed
FIGURE 9.23
Large-diameter water-supply well in Phoenix, AZ, during performance of an aquifer pumping test. (Courtesy
of Chris Legg.)

large-capacity well used for public water supply. For nonhydrogeologists and those who
are not in a related water supply profession, a well usually means a nondescript hole
in the ground that somehow produces water. On the other hand, hydrogeologists and
groundwater professionals think of wells in many different contexts, and some of them
spend lifetimes trying to better understand and design them. Unfortunately, the continu-
ous advances in well design, flexibility in selecting the most feasible locations for certain
uses, and general affordability in terms of well installation costs and energy cost for pump
operation are the main reasons why there is a growing concern regarding sustainability of
groundwater use in many countries. The most obvious consequence of an indiscriminate
use of water wells and subsequent lowering of the hydraulic heads in aquifers is cessation
of spring flows documented around the world (Kresic 2010). It is the hope of the authors
that this trend will not continue as younger generations become more and more aware of
many aspects of groundwater sustainability and preserve jewels such as the one shown in
Figure 9.24.

FIGURE 9.24
Spa pool fed by thermal water issuing from the historic Octagon Spring (inset) in The Homestead resort, Warm
Springs, VA, established in 1766. Archeological evidence indicates use of the mineral and thermal springs in the
area dating back to 7000 BC.

9.3 Groundwater Sustainability


The determination of the sustainable use of groundwater is not solely a scientific, engi-
neering, or managerial question. Rather, it is a complex interactive process that considers
societal, economic, and environmental values and the respective consequences of different
decisions. One commonly held, and inaccurate, belief when estimating water availability
and developing sustainable water supply strategies is that groundwater use can be sus-
tained if the amount of water removed is equal to recharge—often referred to as safe yield.
However, there is no volume of groundwater use that can be truly free of any adverse con-
sequence, especially when time is considered. The safe yield concept is therefore a myth
because any water that is used must come from somewhere. It falsely assumes that there
will be no effects on other elements of the overall water budget. Bredehoeft et al. (1982)
and Bredehoeft (2002) provide illustrative discussions about the safe yield concept and the
related water budget myth.
In order to examine the safe yield myth more carefully, an analogy is made comparing
an aquifer and a reservoir behind a dam on a river. If withdrawals from a reservoir equal
inflows, the river below the dam will be dry because there will be no outflow from the res-
ervoir. The same principle can be applied to a groundwater reservoir. If pumping (with-
drawal) equals inflow (recharge), the outflows (subsurface flow or discharge to springs,
streams, or wetlands) from the aquifer will decrease and may eventually reach zero,
resulting in some adverse consequence at some point in time. The direct hydrologic effects
will be equal to the volume of water removed from the natural system, but those effects
may require decades to centuries to manifest. Because aquifer recharge and groundwater
withdrawals can vary substantially over time, these changing rates can be critical informa-
tion for developing groundwater management strategies (Anderson and Woosley 2005).
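The reservoir analogy can be reduced to a minimal water-budget model. In the sketch below, a linear-reservoir assumption (natural discharge proportional to storage) stands in for the aquifer; all parameter values are illustrative.

```python
# Minimal linear-reservoir sketch (hypothetical parameters) of the safe yield
# myth: if pumping P equals recharge R, natural discharge D = k*S (baseflow to
# springs and streams) must decline toward zero as storage S is drawn down.

R = 10.0        # recharge (volume per time step, illustrative units)
k = 0.05        # discharge coefficient (1/time step), illustrative
P = 10.0        # pumping rate set equal to recharge (the "safe yield")
S = R / k       # predevelopment steady-state storage, where D = R

for step in range(200):
    D = k * S                          # discharge to springs and streams
    S = max(S + R - D - P, 0.0)        # water budget for one time step

print(f"Discharge after 200 steps: {k * S:.2f} (was {R:.2f} before pumping)")
```

Pumping at exactly the recharge rate drives the simulated natural discharge toward zero, which is the deferred adverse consequence that the safe yield concept ignores.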
With an increased demand for water and pressures on groundwater resources, the
decades-long debate among water professionals about what constitutes safe withdrawal
of groundwater has now changed into a debate about sustainable use of groundwater.
The difference is not only semantic, and confusion has occasionally resulted. For example,
there are attempts to distinguish between safe yield and sustainable pumping when the
latter is defined as the pumping rate that can be sustained indefinitely without mining or
dewatering the aquifer. Devlin and Sophocleous (2005) provide a detailed discussion of
these and other related concepts.
What appears most difficult to understand is that the groundwater system is a dynamic
one—any change in one portion of the system will ultimately affect its other parts as
well. Even more important is the fact that most groundwater systems are dynamically
connected with surface water. As groundwater moves from the recharge area toward the
discharge area (e.g., a river), it constantly flows through the saturated zone, which is the
groundwater storage (reservoir). If another discharge area (such as a well for water supply)
is created, less water will flow toward the old discharge area (river). This fact seems to be
paradoxically ignored by those who argue that groundwater withdrawals may actually
increase aquifer recharge by inducing inflow from recharge boundaries (such as surface
water bodies!) and, therefore, result in sustainable pumping rates. Although such ground-
water management strategy may be safe or sustainable for the intended use, a separate question is whether it has consequences for the sustainable use of the surface-water system, which is now losing water to, rather than gaining it from, the groundwater system (Kresic 2009).
Dependence of communities or regions solely on groundwater in storage is a manage-
ment strategy that is not sustainable for future generations. When it is obvious that the
natural aquifer recharge cannot offset the reduction in groundwater storage in any mean-
ingful way over a reasonable time, prudent groundwater management must also consider
strategies that rely on used water for aquifer recharge. Two general groups of groundwater
systems fall into the category of nonrenewable:

• Unconfined aquifers in areas where contemporary recharge is very infrequent and of small volume and the resource is essentially limited to static groundwater storage reserves
• Confined portions of large aquifer systems, where groundwater development
intercepts or induces little active recharge and the hydraulic head falls continu-
ously with groundwater extraction

Both groups involve the extraction of groundwater that originated as recharge in a distant
past, including during more humid climatic regimes. The volumes of such groundwater
stored in some aquifers are enormous. For example, the total recoverable volume of fresh
water in the Nubian Sandstone Aquifer System (NSAS) in North Africa is estimated at
about 15,000 km3, and the present rate of annual groundwater extraction is 2.17 km3. For
comparison, the combined volume of water stored in the Great Lakes of North America is
22,684 km3.
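For a sense of scale, a back-of-envelope calculation with the figures just quoted (ignoring growth in demand, declining well yields, and quality deterioration, all of which would shorten the practical lifetime):

```python
# Back-of-envelope arithmetic with the NSAS figures quoted above; this only
# illustrates scale and overstates the practically usable lifetime.

recoverable_km3 = 15_000      # estimated recoverable fresh water in the NSAS
extraction_km3_per_yr = 2.17  # present annual extraction rate
great_lakes_km3 = 22_684      # combined storage of the Great Lakes

print(f"Nominal depletion time: {recoverable_km3 / extraction_km3_per_yr:,.0f} years")
print(f"NSAS volume relative to the Great Lakes: {recoverable_km3 / great_lakes_km3:.0%}")
```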
The term groundwater sustainability in the case of nonrenewable systems has an
entirely social rather than physical (engineering, scientific) context. It implies that full con-
sideration must be given not only to the immediate benefits but also to the negative socio-
economic impacts of development and to the “what comes after” question—and thus to time
horizons longer than 100 years (Foster et al. 2002–2005).
There are two general situations under which the utilization of nonrenewable ground-
water occurs: planned and unplanned. In the planned scenario, the management goal is
the orderly utilization of groundwater reserves stored in the system with little preexisting
development. The expected benefits and predicted impacts over a specified time frame
must be specified. Appropriate exit strategies need to be identified, developed, and imple-
mented by the time that the groundwater system is seriously depleted. This scenario must
include balanced socioeconomic choices on the use of stored groundwater reserves and on
the transition to a less water-dependent economy. A key consideration in defining the exit
strategy will be identification of the replacement water resource, such as desalination of
brackish groundwater. In an unplanned situation (Figure 9.25), a rationalization scenario
is needed in which the management goal is to achieve hydraulic stabilization (or recovery)
of the aquifer or utilize groundwater reserves in a more orderly way by minimizing qual-
ity deterioration, maximizing groundwater productivity, and promoting social transition
to a less water-dependent economy (Foster et al. 2002–2005).
Saudi Arabia is a good example of two main stages in exploitation of nonrenewable
groundwater: initially very rapid, large-scale, and unrestricted development for all uses,
subsequently supplemented by desalinated water and treated wastewater. Saudi Arabia
has become the largest producer of desalinated water in the world. Desalination currently meets approximately 50% of total domestic and industrial demand,
with the rest met from groundwater resources (Abderrahman 2006). However, irrigated
agriculture is still the largest user of the nonrenewable groundwater in Saudi Arabia,
where food security concerns have the highest priority. This is also true in other parts
of the world, including some of the most developed countries, such as the United States
and Australia (Figures 9.26 and 9.27), and some of the poorest sub-Saharan countries in
Africa.

[Figure 9.25: schematic plot of average aquifer water level versus time. Unplanned mining produces rapid depletion with an uncertain trajectory; with knowledge of contemporary recharge rates and available groundwater storage, evaluation of groundwater resources and management options leads to a rationalization scenario of groundwater management with orderly depletion, general stabilization, or gradual recovery.]
FIGURE 9.25
Targets for nonrenewable groundwater resource management in rationalization scenarios following indis-
criminate and excessive exploitation. (From Foster, S. et al., Utilization of Non-Renewable Groundwater: A
Socially Sustainable Approach to Resource Management, Sustainable Groundwater Management: Concepts
and Tools, Briefing Note Series, Note 11, GW MATE (Groundwater Management Advisory Team), the World
Bank, Washington, DC, 6 pp., 2002–2005. With permission.)

FIGURE 9.26
Center-pivot irrigation using groundwater is widespread throughout the United States, including large areas in
the arid West as shown here.

FIGURE 9.27
Center-pivot irrigation in Colorado using the Low Elevation Spray Application system. This type of application
uses less water and reduces evaporation compared to traditional methods. (Courtesy of Gene Alexander, USDA
Natural Resources Conservation Service.)

As mentioned earlier, artificial aquifer recharge is becoming a focal point of groundwater
management in many regions. The sustainable use of groundwater, surface water, storm
water, and recycled (used) water relies increasingly on storing water in the subsurface.
Two terms have gained popularity in describing the concept of artificial aquifer recharge:
managed aquifer recharge (MAR) as a more general description for a variety of engineer-
ing solutions (Figure 9.28) and aquifer storage and recovery (ASR), which describes injec-
tion and extraction of potable water with dual-purpose wells. Artificial aquifer recharge
should not be confused with induced aquifer recharge, which is a response of the surface-
water system to groundwater withdrawal adjacent to a stream (lake).
The important factors that have to be considered for any scheme of artificial aquifer
recharge are as follows:

• Regulatory requirements.
• The availability of an adequate source of recharge water of suitable chemical and
physical quality.
• Geochemical compatibility between recharge water and the existing groundwater
(e.g., possible carbonate precipitation, iron hydroxide formation, mobilization of
trace elements).
• The hydrogeologic properties of the porous media (soil and aquifer) must facil-
itate desirable infiltration rates and allow direct aquifer recharge. For example,
existence of extensive low-permeability clays in the unsaturated (vadose) zone may
exclude a potential recharge site from future consideration.

FIGURE 9.28
Possible options for managed aquifer recharge or MAR. (Modified from International Association of Hydro­
geologists (IAH), Managing Aquifer Recharge, Commission on Management of Aquifer Recharge, IAH-MAR,
12 pp., 2002.)

• The water-bearing deposits must be able to store the recharged water in a reason-
able amount of time and allow its lateral movement toward extraction locations
at acceptable rates. In other words, the specific yield (storage) and the hydraulic
conductivity of the aquifer porous media must be adequate.
• Presence of fine-grained sediments may have the advantage of improving the
quality of recharged water because of their high filtration and sorption capacities.
Other geochemical reactions in the vadose zone below recharge facilities may also
influence water quality.
• An engineering solution should be designed to facilitate efficient recharge when there
is an available surplus of water and efficient recovery when the water is most needed.
• The proposed solution must be cost-efficient, environmentally sound, and com-
petitive to other water-resource development options.

Aquifers that can store large quantities of water and do not transmit them away quickly are
best suited for artificial recharge. For example, karst aquifers may accept large quantities of
recharged water but, in some cases, tend to transmit them very quickly away from the recharge
area. This may still be beneficial for the overall balance of the system and the availability of
groundwater downgradient from the immediate recharge sites. Alluvial aquifers are usually
the most suited to storage because of the generally shallow water table and vicinity to source
water (surface stream). Sandstone aquifers are, in many cases, very good candidates because of
their high storage capacity and moderate hydraulic conductivity (Kresic 2009).
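Screening a candidate spreading-basin site often begins with a simple Darcy estimate of infiltration capacity, assuming vertical flow under a unit gradient; the values below are illustrative assumptions, not design numbers.

```python
# Screening-level Darcy estimate (illustrative values) of how much water a
# spreading basin might accept, assuming vertical flow at unit gradient
# through the vadose zone: Q = K * i * A with i ~ 1.

K = 1.0             # vertical hydraulic conductivity of basin soils (ft/day)
A = 10.0 * 43_560   # basin area: 10 acres expressed in ft2
i = 1.0             # unit vertical hydraulic gradient (free drainage)

Q_ft3_day = K * i * A
Q_mgd = Q_ft3_day * 7.48 / 1_000_000   # 1 ft3 = 7.48 gal
print(f"Potential infiltration: {Q_ft3_day:,.0f} ft3/day (~{Q_mgd:.1f} MGD)")
```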

One example of artificial aquifer recharge on a grand scale is a program by the Central Arizona Project (CAP) developed more than a decade ago (Figure 9.29a). The program has
been instrumental in helping protect groundwater supplies. Today, more than 5 million
people call Arizona home, and this number is expected to exceed 8 million in less than 20
years. Arizonans use almost 8 million acre-feet of water every year, and water resource
leaders use a variety of tools to preserve and protect the state’s water supplies. Use of
groundwater is highly regulated by state law but continues to be relied upon for irriga-
tion, municipal, and domestic uses. CAP is a 336-mi-long system that brings more than 1.5
million acre-feet of Colorado River water to customers in the central and southern area of
the state. CAP delivers water to cities for drinking, to agricultural and Native American
communities for farming, and to recharge projects where it is stored underground for
future use. CAP currently maintains six recharge projects: Avra Valley, Pima Mine Road,
Lower Santa Cruz, Agua Fria, Hieroglyphic Mountains, and Tonopah Desert. In addition
to replenishing Arizona’s depleted groundwater supplies, CAP’s recharge program is
helping diminish the impacts of groundwater overdraft, including subsidence; improv-
ing water quality by natural filtration; and firming Arizona’s water supply by providing a
reserve of water to be recovered during prolonged droughts.
It is important, however, to remember that the ultimate source of water for CAP is the
Colorado River, a highly stressed and vulnerable resource. Securing water supply for
Arizona’s burgeoning population comes at the expense of the environment and down-
stream users. The consequences of CAP and other major Colorado River diversions in the
USA are presented in Figure 9.29b, a reality faced by the people of Mexico and the inevi-
table outcome of unsustainable development.

FIGURE 9.29a
Recharge basins of the Agua Fria Recharge Project in central Arizona. (Courtesy of Philip Fortnam, Central
Arizona Project.)

FIGURE 9.29b
Top: Rivers run to the sea but sometimes do not make it there. The mighty Colorado River runs dry at the
USA/Mexico border in January 2009. Bottom: The Colorado River delta 5 miles north of the Sea of Cortez,
Mexico, also in January 2009. (Photos courtesy of Pete McBride, United States Geological Survey; avail-
able at http://gallery.usgs.gov/photos/10_15_2010_rvm8Pdc55J_10_15_2010_1 and http://gallery.usgs.gov/
photos/10_15_2010_rvm8Pdc55J_10_15_2010_0.)

9.3.1 Sustainable Groundwater Use Case Study: Plant Washington


Plant Washington is an 850-MW coal-fired power plant proposed for construction by
Power4Georgians in Washington County, GA. An innovative approach of conjunctive use
of surface water and groundwater has been proposed as part of the water-supply permit-
ting process for the plant. Surface water from the Oconee River will be the primary water-
supply source, whereas during drought conditions, when the river flow decreases below
Georgia Environmental Protection Division (EPD)–designated levels, Plant Washington
will rely on groundwater. Plant water requirements of about 13.5 MGD will be satisfied by
withdrawal from the Oconee River when streamflows exceed the monthly nondepletable
flow requirement that protects in-stream flows and downstream users. Whenever daily
streamflow passing the intake falls short of the nondepletable flow requirement, with-
drawal from the river ceases, and the plant continues to operate using its onsite water stor-
age pond for up to 12 days before converting to groundwater withdrawal. Groundwater
use continues until onsite storage is refilled and river flows return to levels exceeding the
nondepletable flow requirement at which time river withdrawals resume and groundwa-
ter withdrawals cease (Neal et al. 2011).
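The switching logic described above can be sketched as a simple daily decision rule. The nondepletable flow value below is an invented placeholder (the actual EPD-designated levels vary by month), and the storage-refill condition is simplified.

```python
# Minimal sketch (hypothetical numbers, simplified refill logic) of the
# conjunctive-use rule described above: river withdrawal when flow exceeds
# the nondepletable requirement, then up to 12 days from the onsite storage
# pond, then groundwater until river flows recover.

DEMAND = 13.5          # plant demand (MGD)
NONDEPLETABLE = 250.0  # placeholder nondepletable flow requirement (MGD)

def daily_source(river_flow_mgd, pond_days_used, on_groundwater):
    """Return (source, pond_days_used, on_groundwater) for one day."""
    if river_flow_mgd - DEMAND >= NONDEPLETABLE:
        return "river", 0, False                    # river supply; pond refills
    if not on_groundwater and pond_days_used < 12:
        return "pond", pond_days_used + 1, False    # draw down onsite storage
    return "groundwater", pond_days_used, True      # convert to the well field

# Example: a 20-day spell with river flow below the nondepletable requirement.
state = (0, False)
for day in range(1, 21):
    source, *state = daily_source(200.0, *state)
    if day in (1, 12, 13, 20):
        print(f"day {day:>2}: {source}")
```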
Analysis of streamflow records provides estimates of the expected frequency and duration of groundwater use. During normal streamflow years (i.e., the 2-year return interval), river withdrawals and onsite storage are sufficient to supply the plant without using groundwater. On average, once every 5 years the plant will rely on groundwater for about four months, and once every 20 years for about eight months.
In addition to conjunctive use of the river and groundwater, the strategy includes onsite
reuse of all process-generated wastewater. The only discharge from the plant is approxi-
mately 1.55 MGD of noncontact cooling tower blowdown going to the Oconee River
after seven to eight cycles of reuse. To prevent any potential water-quality impact in the
Ogeechee River, all stormwater—even exceeding a 500-year storm event—will be collected
onsite and used as part of the integrated water management strategy (Neal et al. 2011).
As part of the permitting process, a regional three-dimensional, transient groundwater flow
model has been developed to evaluate impacts of the proposed groundwater withdrawals on
the existing groundwater users, surface water features (streams, ponds, and wetlands), and
sustainability of the groundwater resource for projected future beneficial uses, including dur-
ing extreme droughts. In order to minimize potential adverse impacts of the 16 proposed Plant
Washington wells, their locations and design parameters, including pumping rates and depths
of individual well screen intervals within the aquifer, were optimized with the model.
Because of the expressed public concerns regarding apparent decreasing trends of the
Cretaceous Aquifer water levels (see Figure 9.30), the Georgia EPD required that the model
be based on conservative assumptions, including simulation of unlikely succession of
droughts without any periods of above-average aquifer recharge. Accordingly, the model
incorporated the following assumptions:

• The groundwater supply system would continuously operate at the highest per-
mitted monthly average rate of 16.12 MGD.
• Washington County would only receive average rainfall for 4 years followed by a
drought that would occur in the fifth year, which coincides with the anticipated
operation of Plant Washington’s groundwater supply system.
• During the 50-year life expectancy of Plant Washington, a 100-year extreme
drought will occur. The conditions of a 100-year drought are more extreme than
the recent, severe 2007–2008 drought or the drought of record in 1981–1982.

[Figure 9.30: dual-panel hydrograph for well 23X027, 1978–2011: daily water-level elevation (feet amsl) and pumping rate (MGD) versus date in the upper panel, with monthly precipitation (inches) in the lower panel.]

FIGURE 9.30
Hydrograph of daily water-level elevation at the USGS monitoring well 23X027 in Sandersville, GA versus
monthly precipitation in Sandersville and annual permitted groundwater withdrawal from the Cretaceous
Aquifer in the greater Sandersville area. (From Kresic, N. et al., Sustainable Groundwater Use for Power
Generation in Georgia, Case Study. Meeting Competing Demands with Finite Groundwater Resources,
Presented at the Groundwater Protection Council Annual Forum, Atlanta, GA, September 24–28, 2011. Courtesy
of AMEC E&I, Inc.)

• Despite the actual nonincreasing trend, the model assumes a 2% annual population
growth with a proportional increase of domestic water usage from Washington
County’s aquifers.
• Agricultural water demands, that is, the withdrawals from the underlying
Cretaceous aquifer, would increase by 20% during future drought periods. (Note
that for the drought of the year 2000, water consumption was 11% higher than
in 2005, a nondrought year; therefore, a 20% increase in agricultural pumping is
considered conservative.)

As can be seen from Figures 9.30 and 9.31, the model assumptions are overly conservative
because the most obvious aquifer recharge episodes (reflected by the increasing hydraulic
head at the USGS index well) are the result of above-average precipitation and not caused
by any cyclic operation of the existing water wells in the area (Kresic et al. 2011).
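An analysis like that shown in Figure 9.31 can be sketched with a lagged Pearson correlation. The two monthly series below are synthetic stand-ins for the Sandersville precipitation and well 23X027 records used in the study.

```python
import numpy as np

# Sketch of a precipitation/water-level cross-correlation of the kind shown
# in Figure 9.31, using synthetic monthly series (not the project data).

rng = np.random.default_rng(0)
months = 240
precip = rng.gamma(shape=2.0, scale=2.0, size=months)   # monthly precipitation

# Synthetic water level responding to precipitation with a ~3-month delay.
level = 200 + 0.5 * np.roll(precip, 3) + rng.normal(0, 0.2, months)

def lagged_corr(x, y, max_lag=12):
    """Pearson correlation of y against x for each lag in months."""
    return {k: float(np.corrcoef(x[:months - k], y[k:])[0, 1])
            for k in range(max_lag + 1)}

corr = lagged_corr(precip, level)
best = max(corr, key=corr.get)
print(f"Strongest response at a lag of {best} months (r = {corr[best]:.2f})")
```

A clear correlation peak at a lag of a few months, as in Figure 9.31, supports the interpretation that recharge episodes at the index well are driven by above-average precipitation rather than by well operations.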
The transient numeric model developed for this project is the most complex such model
required by the EPD for groundwater permitting in the state of Georgia to date. The model
dimensions are 48.3 mi × 36 mi, and it consists of eight layers comprising 1,008,000 cells.
All permanent surface streams are simulated in the model with fine cell size (Figure 9.32)
to accurately simulate surface water–groundwater interactions. The overly conservative,
low recharge rates and the lowered hydraulic heads at the external model boundaries dur-
ing the simulated 100-year drought are shown schematically in Figures 9.33 and 9.34.

FIGURE 9.31
Cross-correlation between monthly precipitation in Sandersville and average monthly water-level elevation at
the USGS monitoring well 23X027 in Sandersville, GA. (From Kresic, N. et al., Sustainable Groundwater Use for
Power Generation in Georgia, Case Study. Meeting Competing Demands with Finite Groundwater Resources,
Presented at the Groundwater Protection Council Annual Forum, Atlanta, GA, September 24–28, 2011. Courtesy
of AMEC E&I, Inc.)

FIGURE 9.32
Detail of the numerical groundwater flow model showing surface water features, existing wells, and permit-
ted Plant Washington wells (shown in red). (From Kresic, N. et al., Sustainable Groundwater Use for Power
Generation in Georgia, Case Study. Meeting Competing Demands with Finite Groundwater Resources,
Presented at the Groundwater Protection Council Annual Forum, Atlanta, GA, September 24–28, 2011. Courtesy
of AMEC E&I, Inc.)

FIGURE 9.33
Schematic presentation of simulated conservative future recharge in the groundwater flow model. (From Kresic,
N. et al., Sustainable Groundwater Use for Power Generation in Georgia, Case Study. Meeting Competing
Demands with Finite Groundwater Resources, Presented at the Groundwater Protection Council Annual
Forum, Atlanta, GA, September 24–28, 2011. Courtesy of AMEC E&I, Inc.)

The groundwater modeling results show that Plant Washington’s groundwater sup-
ply system will cause small but reasonable impacts to groundwater as a result of using
this resource for an average of four months once every five years. Under extreme drought
conditions, these impacts would increase but are still considered acceptable. The ground-
water model predicts a steady, small decline in groundwater elevations resulting from
the assumed 2% per year increase in Washington County population. By the year 2063,
without any pumping associated with Plant Washington, the model predicts that the

FIGURE 9.34
Simulated recharge and change in the hydraulic head elevation at the general head boundary prior, during,
and following a 100-year drought. (From Kresic, N. et al., Sustainable Groundwater Use for Power Generation
in Georgia, Case Study. Meeting Competing Demands with Finite Groundwater Resources, Presented at
the Groundwater Protection Council Annual Forum, Atlanta, GA, September 24–28, 2011. Courtesy of AMEC
E&I, Inc.)

groundwater level at the USGS Sandersville well 23X027 would drop by only 13 ft below
its current level (Figure 9.35). When the model includes Plant Washington’s impacts, future
drawdown at the USGS Sandersville well would increase by approximately 1 ft. During a
100-year drought, the model predicts that Plant Washington’s impact would result in an
additional 2 ft of drawdown at this well compared to the impact by all other users during
the drought.
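Predictions of this kind come from the calibrated transient MODFLOW model, not from an analytical formula. Still, a single-well Theis calculation offers an order-of-magnitude cross-check; every parameter below is an assumption for illustration, not a Plant Washington value.

```python
import math
from scipy.special import exp1   # Theis well function W(u) = exp1(u)

# Order-of-magnitude analytical check with the Theis solution,
# s = Q / (4*pi*T) * W(u), u = r**2 * S / (4*T*t). All values are
# illustrative assumptions, not the calibrated model parameters.

Q = 1_000_000 / 7.48   # pumping rate: 1 MGD expressed in ft3/day
T = 5_000.0            # transmissivity (ft2/day)
S = 2e-4               # storativity of a confined aquifer (-)
r = 5_280.0            # distance from the pumping well: 1 mi (ft)
t = 8 * 30.0           # eight months of continuous pumping (days)

u = r**2 * S / (4 * T * t)
s = Q / (4 * math.pi * T) * exp1(u)
print(f"u = {u:.2e}; predicted drawdown at 1 mi after 8 months: {s:.1f} ft")
```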
The model also shows that, after cessation of pumping by the Plant Washington wells,
the potentiometric surface recovers before the next simulated drought and continues to
follow the general trend caused by the simulated increasing groundwater withdrawals by
other users (without the influence of Plant Washington wells pumping). As illustrated in
Figure 9.36, during the entire simulated period, including during simulated groundwater
withdrawal by the Plant Washington wells, the predicted potentiometric surface at well
23X027 remains 100 ft above the top of the confined Cretaceous aquifer, thus retaining its
full saturated thickness.


Because the wells are located away from the river and withdraw only from the bottom of the Cretaceous aquifer, the model predicts that Plant Washington groundwater withdrawals in the western portion of the county will not result in water losses from the Oconee River.
The Ogeechee River, Williamson Swamp Creek, and other surface water bodies located
near the plant will be protected from Plant Washington’s withdrawals by the Cretaceous
Aquifer’s confining clay layers. The model predicts that the impacts on these water bodies
may not even be measurable when Plant Washington’s groundwater supply system is in
use (i.e., resulting drawdown of less than 1 in.). The impact on several existing agricultural
wells completed in the Cretaceous aquifer near the pumping center at Plant Washington
would be acceptable because the maximum additional drawdown outside of the property
boundary at the end of the simulated 100-year drought would be less than 15 ft (Figure 9.37).
Likely because of the very conservative model assumptions and close cooperation
between major stakeholders and EPD during all phases of the groundwater model

[Figure 9.35: simulated hydrograph of water-level elevation (feet amsl) at USGS well 23X027, 1995–2065, showing the USGS datalogger record and three predicted curves: no Plant Washington pumping with average recharge and increasing pumping by other users; Plant Washington wells not pumping under drought conditions; and Plant Washington wells pumping during the 100-year drought.]

FIGURE 9.35
Modeled potentiometric surface at the USGS monitoring well 23X027 with and without Plant Washington Wells
pumping for the recharge and boundary conditions shown in Figures 9.33 and 9.34. (From Kresic, N. et al.,
Sustainable Groundwater Use for Power Generation in Georgia, Case Study. Meeting Competing Demands
with Finite Groundwater Resources, Presented at the Groundwater Protection Council Annual Forum, Atlanta,
GA, September 24–28, 2011. Courtesy of AMEC E&I, Inc.)

[Figure 9.36: simulated hydrograph of water-level elevation (ft amsl) at well 23X027, 1996–2060, plotted between land surface and the top of the Cretaceous Aquifer, showing USGS datalogger data and predictions with Plant Washington wells pumping and not pumping.]

FIGURE 9.36
Modeled potentiometric surface at the USGS monitoring well 23X027 stays well above the top of the confined
Cretaceous Aquifer for the entire simulated time. (From Kresic, N. et al., Sustainable Groundwater Use for
Power Generation in Georgia, Case Study. Meeting Competing Demands with Finite Groundwater Resources,
Presented at the Groundwater Protection Council Annual Forum, Atlanta, GA, September 24–28, 2011. Courtesy
of AMEC E&I, Inc.)

FIGURE 9.37
Detail of the model-predicted maximum additional drawdown in the vicinity of Plant Washington from
pumping of the proposed wells during the simulated 100-year drought. (From Kresic, N. et al., Sustainable
Groundwater Use for Power Generation in Georgia, Case Study. Meeting Competing Demands with Finite
Groundwater Resources, Presented at the Groundwater Protection Council Annual Forum, Atlanta, GA,
September 24–28, 2011. Courtesy of AMEC E&I, Inc.)

development and its subsequent use for evaluating numerous groundwater withdrawal
scenarios, third parties did not challenge the permit issued by EPD.

9.3.2 Sustainable Groundwater Use: Conclusion


It is the hope of the authors that the concept of sustainable groundwater use illustrated in
Figure 9.38 will remain just that—a concept never translated into reality.

FIGURE 9.38
Top: Initial abundance provided by a prolific, nonrenewable aquifer. Bottom: Possible final outcome of ground-
water withdrawal from the nonrenewable aquifer. (Courtesy of Marin Kresic.)

References
Abderrahman, W. A., 2006. Saudi Arabia Aquifers. In Non-Renewable Groundwater Resources: A
Guidebook on Socially Sustainable Management for Water-Policy Makers. IHP-VI, Series on
Groundwater No. 10, edited by S. Foster and D. P. Loucks, UNESCO, Paris, pp. 63–67.
Anderson, M. T., and Woosley, L. H., Jr., 2005. Water Availability for the Western United States–Key
Scientific Challenges. U.S. Geological Survey Circular 1261, Reston, VA, 85 pp.
American Water Works Association (AWWA), 1998. AWWA Standard for Water Wells: American
National Standard. ANSI/AWWA A100-97, AWWA, Denver, CO.
Bredehoeft, J. D., 2002. The water budget myth revisited: why hydrogeologists model. Ground Water
40(4), 340–345.
Bredehoeft, J. D., Papadopulos, S. S., and Cooper, H. H., 1982. Groundwater—The Water Budget
Myth. In Studies in Geophysics, Scientific Basis of Water Resource Management. National Academy
Press, Washington, DC, 127 pp.
Campbell, M. D., and Lehr, J. H., 1973. Water Well Technology. McGraw-Hill, New York, 681 pp.
Devlin, J. F., and Sophocleous, M., 2005. The persistence of the water budget myth and its relation-
ship to sustainability. Hydrogeol. J. 13, 549–554.
Department of Water Resources (DWR), 2003. California’s Groundwater. Bulletin 118, Update 2003,
State of California, The Resources Agency, Department of Water Resources, 246 pp.
Driscoll, F. G., 1986. Groundwater and wells. Johnson Filtration Systems Inc., St. Paul, MN, 1089 pp.
Environment Agency, 2007. Underground, Under Threat: The State of Groundwater in England
and Wales. Environment Agency, Almondsbury, Bristol, 23 pp. Available at http://www.environment-agency.gov.uk/.
Falkenmark, M., 2003. “Water Management and Ecosystems: Living with Change.” TEC Background
Papers No. 9, Global Water Partnership Technical Committee (TEC), Global Water Partnership,
Stockholm, Sweden, 50 pp.
Foster, S., Nanni, M., Kemper, K., Garduño, H., and Tuinhof, A., 2002–2005. Utilization of Non-
Renewable Groundwater: A Socially Sustainable Approach to Resource Management.
Sustainable Groundwater Management: Concepts and Tools, Briefing Note Series, Note 11, GW
MATE (Groundwater Management Advisory Team), the World Bank, Washington, DC, 6 pp.
Freeze, R. A., and Cherry, J. A., 1979. Groundwater. Prentice Hall, Englewood Cliffs, NJ, 604 pp.
Ground Water Task Force (GWTF), 2007. Recommendations from the EPA Ground Water Task Force;
Attachment B: Ground Water Use, Value, and Vulnerability as Factors in Setting Cleanup Goals.
EPA 500-R-07-001, Office of Solid Waste and Emergency Response, pp. B1–B14.
Hötzl, H., 1996. Grundwasserschutz in Karstgebieten. Grundwasser 1(1), 5–11.
HydroSOLVE, 2002. AQTESOLV for Windows, User’s Guide. HydroSOLVE, Inc., Reston, VA, 185 pp.
International Association of Hydrogeologists (IAH), 2002. Managing Aquifer Recharge. Commission
on Management of Aquifer Recharge, IAH-MAR, 12 pp.
Kraemer, S. R., Haitjema, H. M., and Kelson, V. A., 2005. Working with WhAEM2000: Capture Zone
Delineation for a City Wellfield in a Valley Fill Glacial Outwash Aquifer Supporting Wellhead
Protection. EPA/600/R-00/022, United States Environmental Protection Agency, Office of
Research and Development, Washington, DC, 77 pp.
Kresic, N., 2007. Hydrogeology and Groundwater Modeling, Second Edition. CRC Press, Taylor & Francis
Group, Boca Raton, FL, 807 pp.
Kresic, N., 2009. Groundwater Resources: Sustainability, Management and Restoration. McGraw Hill,
New York, 852 pp.
Kresic, N., 2010. Modeling. In Groundwater Hydrology of Springs: Engineering, Theory, Management, and
Sustainability, edited by N. Kresic and Z. Stevanović, Elsevier, Amsterdam, pp. 166–230.
Kresic, N., and Bonacci, O., 2010. Spring Discharge Hydrograph. In Groundwater Hydrology of Springs:
Engineering, Theory, Management, and Sustainability, edited by N. Kresic and Z. Stevanović,
Elsevier, Amsterdam, pp. 129–163.

Kresic, N., and Stevanović, Z. (eds.), 2010. Groundwater Hydrology of Springs: Engineering, Theory,
Management, and Sustainability, Elsevier, Amsterdam, 573 pp.
Kresic, N., Kennedy, J., Ledbetter, L., and Alford, D., 2011. Sustainable Groundwater Use for Power
Generation in Georgia, Case Study. Meeting Competing Demands with Finite Groundwater
Resources. Groundwater Protection Council Annual Forum, Atlanta, GA, September 24–28,
2011.
Kruseman, G. P., de Ridder, N. A., and Verweij, J. M., 1991. Analysis and Evaluation of Pumping
Test Data (Completely Revised 2nd edition). International Institute for Land Reclamation and
Improvement (ILRI) Publication 47, Wageningen, The Netherlands, 377 pp.
Leavesley, G. H., Lichty, R. W., Troutman, B. M., and Saindon, L. G., 1983. Precipitation–Runoff
Modeling System—User’s Manual. U.S. Geological Survey Water-Resources Investigations
Report 83-4238, 207 pp.
Markstrom, S. L., Niswonger, R. G., Regan, R. S., Prudic, D. E., and Barlow, P. M., 2008. GSFLOW-
Coupled Ground-Water and Surface-Water Flow Model Based on the Integration of the
Precipitation–Runoff Modeling System (PRMS) and the Modular Ground-Water Flow Model
(MODFLOW-2005). U.S. Geological Survey Techniques and Methods 6-D1, 240 pp.
Meinzer, O. E., 1923. The Occurrence of Ground Water in the United States with a Discussion of
Principles. U.S. Geological Survey Water-Supply Paper 489, Washington, DC, 321 pp.
Neal, L., Ledbetter, L., and Alford, D., 2011. An Integrated Water Management Strategy for Power
Generation: A Central Georgia Case Study. Meeting Competing Demands with Finite
Groundwater Resources. Groundwater Protection Council Annual Forum, Atlanta, GA,
September 24–28, 2011.
Rogers, P., and Hall, A. W., 2003. Effective Water Governance. TEC Background Papers No. 7, Global
Water Partnership Technical Committee (TEC), Global Water Partnership, Stockholm, Sweden,
44 pp.
Roscoe Moss Company, 1990. Handbook of Ground Water Development. John Wiley & Sons, New York,
493 pp.
Schindel, G., Johnson, S., Hoyt, J., Green, R. T., Alexander, E. C., and Kreitler, C., 2009. Hydrology of the
Edwards Group: A Karst Aquifer Under Stress. A Field Trip Guide for the USEPA Groundwater
Forum November 19, 2009, San Antonio, TX, 57 pp.
The European Parliament and the Council of the European Union, 2000. Directive 2000/60/EC of
the European Parliament and the Council of 23 October 2000 Establishing a Framework for
Community Action in the Field of Water Policy (EU Water Framework). Official Journal of the
European Union, 22 December, p. L 327/1.
The European Parliament and the Council of the European Union, 2006. Directive 2006/118/EC
on the Protection of Groundwater Against Pollution and Deterioration. Official Journal of the
European Union, 27 December, pp. L 372/19-31.
Tuinhof, A., Dumars, C., Foster, S., Kemper, K., Garduño, H., and Nanni, M., 2002–2005. Ground-
water Resource Management: An Introduction to Its Scope and Practice. Sustainable
Groundwater Management: Concepts and Tools, Briefing Note Series, Note 1, GW MATE
(Groundwater Management Advisory Team), The World Bank, Washington, DC, 6 pp.
U.S. Army Corps of Engineers (USACE), 1993. Hydrologic Frequency Analysis. Engineering manual
1110-2-1415, Washington, DC. Available at http://140.194.76.129/publications/eng-manuals/.
USBR, 1977. Ground Water Manual. U.S. Department of the Interior, Bureau of Reclamation,
Washington, DC, 480 pp.
U.S. EPA, 1975. Manual of Water Well Construction Practices. EPA-570/9-75-001, Office of Water
Supply, Washington, DC, 156 pp.
U.S. EPA, 1991. Protecting the Nation’s Ground Water: EPA’s Strategy for the 1990s. The final report
of the EPA Ground-Water Task Force, 21Z-1020, Office of the Administrator.
U.S. EPA, 2003. Atrazine Interim Reregistration Eligibility Decision (IRED). Q&As—January.
Available at http://www.epa.gov/pesticides/factsheets/atrazine.htm#q1, accessed January
23, 2008.

U.S. EPA, 2005. National Management Measures to Control Nonpoint Source Pollution from Urban
Areas. EPA-841-B-05-004, United States Environmental Protection Agency, Office of Water,
Washington, DC.
Vrba, J., and Zaporozec, A. (eds.), 1994. Guidebook on Mapping Groundwater Vulnerability, International
Contributions to Hydrogeology Vol. 16. International Association of Hydrogeologists (IAH), Swets
& Zeitlinger Lisse, Munich, 156 pp.