

The Nugget Effect:
In describing the variability of an
ore deposit

S-J SIMMONDS*

*Department of Geology, Rhodes University, Grahamstown, 6140, South Africa

Dissertation submitted in partial fulfilment of the requirements for the degree of Master of Science (Exploration Geology) at Rhodes University.

December, 2009.

Abstract
The nugget effect is a geostatistical term used to describe the variability seen between
samples that are closely spaced. The nugget effect is composed of a geological
component, which can be thought of as inherent, and a sampling component, which is
not fixed. In this work the nugget effect is defined only in terms of the descriptive stages of characterising an ore body, and although reference is made to the resource estimation process, estimation itself is excluded from this work.

The geological contribution to the nugget effect is described in terms of geological continuity for quartz-gold reefs, kimberlites and placer deposits, as these are all subject to a high nugget effect (>50%). The geological nugget effect is attributed to the
heterogeneous distribution of grain sizes, grades and small to large scale structures
inherent in these deposits. The sampling contribution to the nugget effect is due to
error which can be introduced at any stage of the sampling campaign from collecting
the principal sample to the assay procedures used in an analytical laboratory. Gy’s
Theory of Sampling allows for the accurate prediction of the errors introduced during
sampling, and proposes mathematical formulae which can be used to calculate these
errors. The nugget effect in this context is the minimum sampling error acceptable for the project at hand. A high nugget effect can be identified during the
statistical analysis of the sampling data. In a classical statistical analysis a sample
population with a high nugget effect will usually show a positively skewed
distribution. In conducting a spatial analysis of a sample population the variogram is
used. The nugget effect in variography is defined as the discontinuity seen at the origin, where variability is still observed between sample pairs at negligible separation distances.

The geological contribution to the nugget effect cannot be removed; however, a sound
knowledge of the geology of the area under scrutiny will ensure that all the risks
associated with the deposit are accounted for when estimating the resource. Reducing
sampling error presents the best opportunity for reducing the nugget effect through the
use of Gy’s Theory of Sampling. In order to adequately describe statistical populations subject to a high nugget effect, the data can be transformed to fit a lognormal distribution, or the data can be declustered. These techniques allow
the underlying population characteristics to be seen without the biasing influence of
erratic high values (outliers).

The presence of a high nugget effect is often predictable to a large extent based on the geology and mineralization of a given area; therefore, the best strategy for reducing its influence is to prepare for it.

TABLE OF CONTENTS
1.0 Introduction....................................................................................................... 7

2.0 Geological component of the Nugget Effect .................................................. 11

2.1 Quartz-Gold Reefs ............................................................................................... 13


2.1.1 Epithermal Gold ............................................................................................................. 13
2.1.2 Mesothermal Gold .......................................................................................................... 21
2.2 Placers ................................................................................................................... 26
2.3 Kimberlites ........................................................................................................... 35
3.0 Sampling component of the Nugget Effect .................................................... 43

3.1 Sampling Error .................................................................................................... 44


3.2 Databases and data quality ................................................................................. 62
4.0 Data analysis ................................................................................................... 63

4.1 Classical Statistical Analysis ............................................................................... 64


4.2 Spatial Analysis .................................................................................................... 71
5.0 Recommendations for reducing the Nugget Effect ....................................... 78

5.1 Geological Nugget Effect ..................................................................................... 78


5.2 Sampling Nugget Effect ....................................................................................... 83
5.3 Data Quality ......................................................................................................... 91
5.4 Classical statistics analysis .................................................................................. 92
5.5 Spatial statistical analysis and variography ...................................................... 94
6.0 Final remarks .................................................................................................. 96

Acknowledgments........................................................................................................ 97

References ................................................................................................................... 98

LIST OF FIGURES
Figure 1: Simplified flow chart showing the basic resource estimation process. Figure taken from
Olssen (2009). _____________________________________________________________________ 8
Figure 2: Examples of mineral nuggets taken from a Google Image search (2009). From left to right:
A nugget of gold; coarse-grained cassiterite; a diamond sitting within a piece of kimberlite; and a
rounded nugget of platinum from a placer deposit. _______________________________________ 10
Figure 3: Gold fill of an open quartz vein in an epithermal Au-Ag style deposit. Example taken from
Edie Creek. Figure taken from Corbett (2002b).__________________________________________ 15
Figure 4: The main geometry of epithermal gold deposits related to the permeability created by
structural, hydrothermal and lithological controls. Figure taken from Hedenquist et al. (2000). ____ 16
Figure 5: Map of the main low- and high-sulphidation deposits of the circum-Pacific area (a) and
Europe-Central Asia. Figure taken from Hedenquist et al. (2000). ___________________________ 17
Figure 6: The geological setting and characteristics of high- and low-sulphidation epithermal deposits.
A possible genetic link is suggested between some epithermal and porphyry-type deposits. Figure taken
from Hedenquist et al. (2000). ________________________________________________________ 18
Figure 7: Generalised geology of the Choquelimpie main ore body (previous page) in map and cross-
sectional view (this page). Figures taken from Bisso et al. (1991). ____________________________ 21
Figure 8: Schematic illustration of the elements of deep, intermediate and shallow portions of a gold-
quartz vein system. Figure taken from Stephen and Peters (1993). ___________________________ 22
Figure 9: On a regional scale the main mineralised banded iron formation shows reasonable
continuity (left). On the scale of drill-core intersections a complex relationship between folding, micro-
faulting and veining is observed (top-right). However, on an outcrop-scale the main mineralized vein
systems can be recognised as Group IIA and B vein systems (bottom-right). The photographs are of the
banded iron formation hosted gold deposits of the Kraaipan Greenstone Belt in South Africa, and the
map is taken from Hammond and Moore (2006). Photographs taken by the author (2009). ________ 23
Figure 10: The main types of structures hosting lode-gold are sketched above. Note the variability in
the scale bar for each of the types. Not included are replacement-style deposits, such as can be seen in
banded iron formations. This example is taken from the Menzies-Kambalda area in Western
Australia (Witt, 1993). ______________________________________________________________ 25
Figure 11: Schematic illustration of the processes involved in the progressive formation of placer
deposits. The placer types preferentially formed are: 1 = eluvial deposits; 2 = saprolite and regolith; 3
= laterite; 4 = rock glacier; 5 = glacial moraine; 6 = talus slope; 7 = colluvial deposit; 8 =
paleochannel and deep lead; 9 = constricted channel and braided river; 10 = headwater creek; 11 =
glaciofluvial deposit; 12 = some proximal alluvial placers; 13 = some distal alluvial placers; 14 =
beach; 15 = dunes; 16 = tidal deltas; 17 = cheniers; 18 = washover sands; and 19 = some marine
deposits. Figure taken from Garnett and Bassett (2005). ___________________________________ 32
Figure 12: The idealised kimberlite pipe, or classic pipe structure includes a root, diatreme and crater
zone. The present erosion levels at Orapa pipe in Botswana, Jagersfontein at Kimberley and the
Bellsbank pipes in South Africa are also shown (left). Kimberlites are believed to have originated at
depths of around 150-200km and to have sourced their diamonds from the mantle (right). The image on
the right was sourced from a Google Image search (2009), the image on the left is taken from Kirkley
et al. (1991). _____________________________________________________________________ 36
Figure 13: The three kimberlite classes are presented as idealised models. RVK = resedimented
volcaniclastic kimberlite; PK = pyroclastic kimberlite; mTK = massive tuffisitic kimberlite; TFK =
transitional-facies kimberlite; MK = magmatic kimberlite; HK = hypabyssal kimberlite. Figure taken
from Skinner (pers. comm., 2009). ____________________________________________________ 37
Figure 14: Five solid rendered wire-frame models illustrating both the variability in the sizes and
shapes of the Ekati kimberlite pipes, and the level of smoothing carried out when modelling these
pipes. Figure taken from Dyck et al. (2004). _____________________________________________ 40
Figure 15: The four generalised internal geological models of the Ekati kimberlite pipes are illustrated
above as vertical section. Figure taken from Dyck et al. (2004). _____________________________ 41
Figure 16: The contrasting models of the 169 kimberlite are presented. In model A by Berryman et al.
(2004) the main kimberlite body is presented as an in-fill in a reverse champagne-glass geometry,
while in model B the kimberlite has a more irregular shape and is dominated by reworked kimberlite
(Kjarsgaard, Leckie and Zonneveld, modified after Kjarsgaard et al., 1997; reported in: Kjarsgaard et
al., 2007). In model B the Westgate Formation Shales form an overlying cap which is thought to
preserve the domal-shaped kimberlite tephra cone. Figure taken from Kjarsgaard et al. (2007). ___ 42

Figure 17: The main search techniques employed in a sampling campaign are a systematic grid
pattern (A); random-stratified (B); and non-random stratified procedure (C). Images taken from
Google image search (2009). ________________________________________________________ 47
Figure 18: Schematic illustration of the sample preparation chain and sample size reduction phases.
On the left is a sample reduction diagram showing the relationship between sample size and weight.
On the right is a summary of the typical processes a sample will go through in the preparation chain.
Figure (left) is taken from Gy (1979) and the figure on the right is modified after Olssen (2009). ___ 48
Figure 19: Based on the origin of the error, Gy has classified the main sampling errors into two main
groups: ‘incorrect’ sampling errors which are based on poor design or operation of the sampling
equipment, and ‘correct’ sampling errors which are statistical. Figure taken from Minkkinen (2004). 51
Figure 20: Relationship between precision, accuracy and bias using the analogy of the target. In this
example the bulls-eye or centre of the target represents the ideal situation where there is no bias and
the result is accurate and precise. Figure taken from internet source [2]. ______________________ 53
Figure 21: ISO/IEC 17025 Requirements for Analytical Laboratories. Taken from internet source [3].
________________________________________________________________________________ 54
Figure 22: Estimation of the particle shape factor f is carried out by comparing the particle’s
critical dimension d with a cube of the same side length. Figure taken from Minkkinen (1987). _____ 57
Figure 23: Schematic illustration of some of the textural relationships between gold and other
(gangue) minerals. The examples are taken from Getchell Mine in Nevada, USA, which is a Carlin-type
deposit from an epithermal system. Figure modified after Bowell et al. (1999). _________________ 58
Figure 24: Graphs showing the main styles of population distribution, namely: normal, negatively
skewed and positively skewed population distributions. The populations are illustrated using
histograms, CDF’s (cumulative distribution functions) and probability plots. Figure modified after
Olssen (2009). ____________________________________________________________________ 68
Figure 25: An example of multiple sample populations taken from the Panda kimberlite pipe on the
Ekati Property, Lac de Gras kimberlite field in the east-central portion of the Slave Province in the
Northwest Territories of Canada. Figure modified after Dyck et al. (2004). ____________________ 69
Figure 26: Scatter-plots of 100 U versus V values. The actual 100 data pairs are plotted in (a) while in
(b) the V value indicated by the arrow has been plotted as 14ppm instead of 143ppm. The purpose of
graph (b) is to illustrate the usefulness of scatter-plots in detecting errors in the data. Figure taken
from Isaaks and Srivastava (1989). ____________________________________________________ 70
Figure 27: Two ways to display the spatial relationships between sample values are to construct
contour maps (left) and indicator maps (right). Both were created using 100 data for the
variable/attribute V. The contour lines are 10 ppm apart and range from 0 to 140ppm. In the indicator
maps the indicators are defined using the cut-off values as defined above for the variable V. Figures
taken from Isaaks and Srivastava (1989). _______________________________________________ 72
Figure 28: Examples of how data can be paired to obtain an h-scatterplot. Samples are paired in a
given direction and based on a specific separation distance (Isaaks & Srivastava, 1989). _________ 73
Figure 29: Examples (a) through (d) are h-scatterplots for four different separation distances, but for
the same direction, in this instance the direction is northeasterly. With increasing separation distances
the sample pairs become more diffuse and the similarity between pairs declines. In (a) the separation
distance is 1m, in (b) 2m, in (c) 3m and in (d) 4m (Isaaks & Srivastava, 1989). _________________ 74
Figure 30: The points on a variogram are compiled from h-scatterplots of the sample pairs for a given
sample separation or lag. The sample separation at which little/no correlation between sample pairs is
evident is referred to as the range and the variogram value at which this occurs is referred to as the
sill. Figure modified after Olssen (2009). _______________________________________________ 75
Figure 31: The variogram has been modelled using a spherical model. The nugget is the discontinuity
at the origin for very small separation distances and the total sill represents the total inherent
variability in the data. Figure modified after Olssen (2009). ________________________________ 76
Figure 32: Figures 11a through to 11c are presented with figure captions from Corbett and Burrell
(2001). __________________________________________________________________________ 80
Figure 33: Schematic summary of the macroscopic components of the different emplacement products
infilling the three main types of kimberlite pipes in Canada. Such models can be used to guide
extrapolations and interpretations of sample data when modelling a new kimberlite body. Figure taken
from Scott Smith and Smith (2009). For further details on the abbreviations used in this diagram the
reader is referred to the source article. _________________________________________________ 81
Figure 34: Summary of the qualitative macroscopic petrography of the main textural types of
kimberlite in Figure 33. Figure taken from Scott Smith and Smith (2009). For further details on the
abbreviations used in this diagram the reader is referred to the source article. __________________ 82

Figure 35: The variation of precision with the number of gold particles contained in a sample. Figure
taken from Nichol et al. (1994). _______________________________________________________ 87
Figure 36: The variations of sample weights containing 20 particles of gold according to particle size
and gold concentration. Gold particles are flakes in which the diameter is five times the thickness.
Figure taken from Nichol et al. (1994). _________________________________________________ 87
Figure 37: Schematic qualitative relationship between resource variability (complexity) and sampling
(drilling) density. Figure modified from Dyck et al. (2004). _________________________________ 90
Figure 38: Simplified flow chart summarising the ‘standard best practice’ for dealing with skewness in
geostatistical analysis. Figure taken from Kerry and Oliver (2007). __________________________ 93

1.0 Introduction
In the field of mineral resource estimations the objective is to predict the
characteristics of an ore body, and make a confident statement about the
mineralization it contains. Within a given deposit the distribution of ore grades will
always have a mixed character which is part structured and part random (Matheron,
reported in: Journel & Huijbregts, 1978). “On the one hand, the mineralising process
has an overall structure and follows certain laws, either geological or metallogenic: in
particular, zones of rich and poor grades will always exist, and this is possible only if
the variability of grades possesses a certain degree of continuity. Depending upon the
type of ore deposit, this degree of continuity will be more or less marked, but it will
always exist” (Matheron, reported in: Journel & Huijbregts, 1978). The fact that there
will always be some level of continuity in a given deposit allows for the prediction of
ore distributions, but requires that both the random (chaotic) and structured
(consistent) features of a deposit are well understood (Matheron, reported in: Journel
& Huijbregts, 1978).

Mineral resource estimation is concerned with the level of confidence to which grades can be predicted for three-dimensional (3D) volumes of a given ore body, a process which is largely based on computer modelling (Olssen, 2009). The process involves making
predictions for un-sampled locations based on the available data from sampled
locations (Isaaks & Srivastava, 1989). From this modelling process a set of figures is produced for the tonnes and average grade of the deposit, which is known as a Mineral Resource and is compiled in accordance with a recognised reporting code standard such as JORC or SAMREC (Snowden, 2001; Olssen, 2009). However, before
reaching the computer modelling stage and the final figures of the Mineral Resource,
there are a number of steps involved which help to address the random and structured
features of a deposit. The resource estimation process is summarised in Figure 1.

Figure 1: Simplified flow chart showing the basic resource estimation process. Figure taken
from Olssen (2009).

The accurate estimation of un-sampled locations is done using the tools of geostatistics. It is the language used to characterise the spatial distribution and
variability of natural phenomena (Journel & Huijbregts, 1978). The term was first
defined in 1962 by G. Matheron, who stated that, “geostatistics is the application of
the formalisation of random functions to the reconnaissance and estimation of natural
phenomena” (reported in: Journel & Huijbregts, 1978). This approach incorporates
both the structured and random features of a given natural phenomenon (such as an
ore body) and characterises them spatially. This is the main difference between
geostatistics and classical statistical methodology: the latter does not contain any
spatial information, while the former “offers a way of describing the spatial continuity
that is an essential feature of many natural phenomena” (Isaaks & Srivastava, 1989).

This approach characterises a particular natural phenomenon in terms of the spatial distribution of one or more variables. Journel and Huijbregts (1978) explain it as follows:
“Let z(x) be the value of the variable z at the point x. The problem is to represent the variability of the function z(x) in space (when x varies). This representation will then be used to solve such problems as the estimation of the value z(x0) at a point x0 at which no data are available, or to estimate the proportion of values z(x) within a given field that are greater than a given limit (e.g., cut-off grade).”

This approach is probabilistic, and if used in conjunction with geological knowledge, can be a very accurate tool for modelling the mineralization of an ore body. As stated
by Rendu (1981), “the theory of geostatistics covers a branch of applied statistics
aimed at a mathematical description and analysis of geological observations.” Its
application under the umbrella of ‘natural phenomenon’ is endless and this paper
focuses solely on its practical applications in describing ore bodies.
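To make the formulation above concrete, the short sketch below estimates a value at an un-sampled location and the proportion of samples above a cut-off grade. It is a minimal illustration only: the sample coordinates and grades are synthetic, the cut-off is arbitrary, and inverse-distance weighting is used merely as a stand-in for the kriging estimators applied in practice.

```python
# Minimal sketch (synthetic data): estimate z(x0) at an un-sampled point and the
# proportion of sample values above a cut-off grade. Inverse-distance weighting
# is a stand-in here for the kriging estimators used in real resource work.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic sample locations (x, y in metres) and grades z(x) in g/t
coords = rng.uniform(0, 100, size=(50, 2))
grades = rng.lognormal(mean=0.0, sigma=1.0, size=50)   # positively skewed, as is typical

def idw_estimate(x0, coords, values, power=2.0):
    """Inverse-distance-weighted estimate of z at the un-sampled point x0."""
    d = np.linalg.norm(coords - x0, axis=1)
    if np.any(d < 1e-9):                      # x0 coincides with an existing sample
        return values[np.argmin(d)]
    w = 1.0 / d**power
    return np.sum(w * values) / np.sum(w)

x0 = np.array([55.0, 40.0])                   # an un-sampled location
cutoff = 1.5                                  # illustrative cut-off grade, g/t

print("estimated z(x0):", idw_estimate(x0, coords, grades))
print("proportion of values above cut-off:", np.mean(grades > cutoff))
```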

The nugget effect is a geostatistical term, which is used to describe the variability of
an ore body. In the past the term was used loosely by mining professionals and
geologists; now, however, “geostatistics has given the term a full scientific definition
and conceptual clarification” (François-Bongarçon, 2004). The variability
encompassed by the nugget effect is partly attributed to the geology of the ore body, a
component that can be thought of as inherent (also referred to as small-scale
variability or microstructures); and partly due to error introduced during sampling
through either poor design of the sampling campaign or the use of inappropriate
techniques (Dominy et al., 2001; 2002; 2003; François-Bongarçon, 2004).

The word ‘nugget’ is defined by the Oxford English Dictionary as, “a lump of gold or
other metal found ready-formed in the earth” (Internet source [1]). A nugget is
considered a rare find, and a deposit will usually have a wide variety of grain sizes,
and variable concentrations of the precious metal/mineral being sought (Dominy et
al., 2003). The term can be better understood by considering a football field strewn
with diamonds. A given sample may contain 10 diamonds, and the sample next to it
may contain none. This discrepancy between samples considerably complicates the
estimation process due to the high variability of the deposit.
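A quick way to see why adjacent samples can differ so sharply is to simulate the analogy. The sketch below assumes the stones are scattered at random with some fixed average density (a Poisson model); the density and number of samples are invented for illustration and are not taken from any deposit discussed here.

```python
# Minimal sketch of the 'field strewn with diamonds' analogy: with a fixed
# average stone density, equal-sized adjacent samples still return very
# different counts. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(1)

mean_stones_per_sample = 2.0                            # assumed average density
counts = rng.poisson(mean_stones_per_sample, size=20)   # 20 adjacent samples

print("stone counts per sample:", counts)
print("samples containing no stones:", int(np.sum(counts == 0)))
print("relative variance (var/mean^2):", counts.var() / counts.mean() ** 2)
```

Even though every sample "sees" the same underlying density, the counts swing between zero and several stones, which is exactly the sample-to-sample discrepancy described above.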

During the spatial analysis of an ore body the nugget effect is defined by Dominy et
al. (2003) as, “a quantitative term describing the level of variability between samples
at very small distances apart.” A deposit with a low nugget effect should have low variability, with a largely homogeneous and predictable distribution in terms of lithology
and grade. As the geometry and distribution of the mineralization in a deposit
becomes more complex and variable, the deposit becomes harder to predict when
estimating for areas where no samples are available. In such a deposit the nugget
effect is said to be high, and the distribution of mineralization trends towards
randomness (Dominy et al., 2003). In essence, “this is a term used to describe how
well sampling results can be reproduced by repeated sampling at the same location”
as, “finely disseminated mineralization will tend to give easily reproducible results
but heterogeneous mineralization will be sensitive to the method of sampling and
could give variable results from a single location” (Snowden, 2001).
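The reproducibility idea can also be expressed numerically: when closely spaced samples still disagree, the average squared difference between sample pairs (the experimental semivariance) does not fall to zero as the separation shrinks. The sketch below builds a synthetic one-dimensional transect as a smooth trend plus uncorrelated noise and evaluates the semivariance at a few lags; the transect, noise level and lags are all invented for illustration and do not come from any data set cited here.

```python
# Minimal sketch: a non-zero experimental semivariance at the shortest lag is
# the numerical signature of the nugget effect. Synthetic 1-D transect data.
import numpy as np

rng = np.random.default_rng(2)

x = np.arange(0.0, 100.0, 1.0)                        # sample positions, 1 m apart
z = np.sin(x / 15.0) + 0.5 * rng.normal(size=x.size)  # spatial structure + 'nugget' noise

def semivariance(x, z, lag, tol=0.5):
    """Average of 0.5*(z_i - z_j)^2 over pairs separated by roughly `lag` metres."""
    sep = np.abs(x[:, None] - x[None, :])
    mask = np.triu(np.abs(sep - lag) <= tol, k=1)     # each pair counted once
    i, j = np.nonzero(mask)
    return 0.5 * np.mean((z[i] - z[j]) ** 2)

for lag in (1, 2, 5, 10, 20):
    print(f"gamma(h = {lag:2d} m) = {semivariance(x, z, lag):.3f}")
# gamma at h = 1 m is already well above zero; that offset is the nugget.
```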

This paper is concerned with defining the nugget effect in terms of its principal
components, geological variability and variability introduced through sampling error.
The focus is on deposits with high nugget effects, as these are the most challenging to constrain (Dominy et al., 2001; 2002; 2003). The deposits chosen to
illustrate this term are quartz-gold reefs (epithermal and mesothermal), placer
deposits, and kimberlites (Figure 2).

Figure 2: Examples of mineral nuggets taken from a Google Image search (2009). From left
to right: A nugget of gold; coarse-grained cassiterite; a diamond sitting within a piece of
kimberlite; and a rounded nugget of platinum from a placer deposit.

These deposits are described in terms of their geology and continuity (structured and
random features) in turn, and then the sampling component of the nugget effect is
discussed. Identifying a high nugget effect during both a classical statistical and a
geostatistical (spatial) analysis of a deposit is also considered. This is followed by a
review of the methods that can be used to reduce the nugget effect during the early
stages of resource characterisation. The actual process of resource estimation will not
be discussed herein. This work is concerned with the descriptive stages of defining an
ore body prior to estimation. In Figure 1 these descriptive phases are encapsulated within ‘informing data’ and ‘data analysis’, and although the resource estimation process is not reviewed herein, this work follows the steps that would be required prior to a resource estimation.

2.0 Geological component of the Nugget Effect


As part of the definition of the nugget effect, reference is made to the inherent
variability of the ore deposit. In this section the geological contribution to the nugget
effect will be reviewed. One of the important measures of this contribution is the
assessment of continuity. The term ‘continuity’ can be applied to both the geology
and the grade of the deposit in question, but should be scale-specific. Dominy et al.
(2003) define geological continuity in terms of, “geometric continuity of the
geological structure(s) hosting mineralization,” while grade continuity is defined as
that which exists within a particular zone of geological characteristics and/or grade
cut-off. Dominy et al. (2003) suggest that continuity be reported in terms of a global
scale, which refers to large-scale continuity (hundreds to thousands of meters), and at
local scales (tens of meters). One of the common features of high nugget effect ore
bodies is the inconsistency of grade continuity on all scales compared to geological
continuity. These topics will be further discussed in Chapter 4 which deals with
methods of mitigating or reducing the nugget effect.

When conducting resource estimations the geological modelling phase is very important. The first phase of geological modelling is to attempt to represent the ore
body in three-dimensional space. This is done through the creation of wire-frames based on sampling data, usually using a software package, and necessarily involves a degree of extrapolation. Once the body has been defined geometrically, an
attempt is made to define what are referred to as ‘domains’. Domains are created
based on a number of criteria including (Dominy et al., 2002):

• Grade continuity and variability
• Geological continuity and variability
• Effects of faulting and folding
• Definition of ore envelope as well as assay hangingwall and footwall
• Barren or low grade internal zones
• Metallurgical characteristics
• Ore mineralogy, chemistry and petrography

A domain should be constructed primarily on the basis of geological homogeneity, but statistically it should represent a single population which has a consistent mean
and variance (this is further explained in the section on Statistical Analyses). In terms
of the resource estimation, the relevant geological controls on mineralization need to
be identified as these will help to determine the level of spatial continuity in the ore
body (Snowden, 2001). The style of mineralization will inform the design of the
drilling or sampling program, but in turn the geological model informs sampling by
indicating the optimal sampling or drill density based on the geometry of the ore
body. Snowden (2001) defines the optimal spacing as, “that which satisfies an
acceptable risk profile at any given stage of the project for the minimum cost. The
competent person needs to decide whether sampling is close enough for an accurate
measure of continuity to be made.” What this means in practical terms is that the bank
balance often dictates the amount of drilling to be done, when in fact resolving
continuity (geology and geometry) should be the primary consideration of drill
spacing (Dominy et al., 2002).
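As a rough illustration of the ‘single population’ requirement described above, the sketch below computes a few summary statistics for candidate domains. The domain names, grade populations and the idea of flagging a domain on its coefficient of variation and skewness are all illustrative assumptions, not a prescribed workflow.

```python
# Minimal sketch: basic per-domain statistics as a first check that a candidate
# domain behaves like a single population. Domain names and grades are synthetic.
import numpy as np

rng = np.random.default_rng(3)

domains = {
    "oxide_zone":    rng.lognormal(0.2, 0.6, 200),
    "fresh_reef":    rng.lognormal(1.0, 0.4, 200),
    "mixed_contact": np.concatenate([rng.lognormal(0.2, 0.6, 100),
                                     rng.lognormal(1.0, 0.4, 100)]),  # two populations mixed
}

for name, g in domains.items():
    cv = g.std() / g.mean()                                   # coefficient of variation
    skew = ((g - g.mean()) ** 3).mean() / g.std() ** 3        # simple moment skewness
    print(f"{name:13s}  n={g.size}  mean={g.mean():.2f}  CV={cv:.2f}  skew={skew:.2f}")

# A noticeably higher CV or skewness than neighbouring domains is a prompt to
# re-examine whether the 'domain' actually mixes two geological populations.
```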

The required level of continuity information will be dictated by the level to which the
resource is being estimated, and will change from Inferred (low-confidence) to
Measured (high confidence) Resource estimations. The implications of poor
continuity resolution are, however, far reaching. Economic forecasting is extremely
difficult for resources only defined at an Inferred level, while resources defined from
too few samples will produce unrepresentative estimates which may mask
complexities in the ore body (Dominy et al., 2002). In the same way the domaining
process should take cognisance of mineralogical boundaries as well as grade
boundaries, rather than focussing purely on lithological continuities. Structural
characterisation of the ore body is equally important to the resource estimation
process and will ultimately affect the accuracy of resource estimations. Domaining
can therefore be viewed as an attempt to characterise continuity for a given ore body,
and in that regard it also characterises the inherent variability of the body.

Understanding the inherent variability of deposits characterised by a high nugget
effect is best done through the use of examples. Gold-quartz reefs, placer deposits and
kimberlites are three examples of highly variable ore bodies which display little to no
spatial correlation between sample points at close distances. These are not the only
deposit types susceptible to a high nugget effect and serve only as a guide to illustrate
inherent variability and its contribution to the nugget effect. It is important to note that
the geological contribution to the nugget effect is a fixed feature which cannot be
reduced (Dominy et al., 2003).

2.1 Quartz-Gold Reefs


Mineralised quartz reefs are hosted in numerous geological settings around the world
(Dominy et al., 2003). They include mesothermal greenstone-hosted shear zones (e.g.
Barberton and Kraaipan greenstone belts, R.S.A), granite-hosted systems (e.g.
Charters Towers, Australia), volcanic-hosted epithermal systems (e.g. Nevada, USA:
Round Mountain, Comstock Lode, Midas and Sleeper, as well as numerous Andean
deposits), and slate-belt terrains (e.g. Central Victorian Goldfields, Australia;
Dolgellau Gold-Belt, UK). In the following section, examples from mesothermal and
epithermal systems will be used to illustrate the aspects of reef geology which affect
the mineral resource estimation process and reliability, as pertains to the nugget
effect.

2.1.1 Epithermal Gold


The term ‘epithermal’ was first presented by Lindgren (1933) in his ore deposit
classification scheme to refer to deposits formed at shallow crustal levels (Robb,
2005). These deposits have been found in various parts of the world, with over 137
recorded for the southwest Pacific by 1995 (White et al., 1995). These are essentially
hydrothermal systems which undergo sudden physical and chemical changes when
they reach shallow crustal depths. The main changes occur due to the transition from
lithostatic to hydrostatic pressure, which results in boiling of the hydrothermal fluid
(White & Hedenquist, 1990). Other factors include the interactions of this deeper
crustal fluid with near-surface waters, permeability changes and fluid reactions with the host rocks (White & Hedenquist, 1990). As stated by White and Hedenquist
(1990), “these changes near the surface are the reason that an 'epithermal' ore
environment exists, as they affect the capacity of the hydrothermal fluid to transport
metals in solution. Focussing of fluid flow near the surface, in conjunction with
changes which decrease the solubility of metals in the fluid, will then result in metal
deposition within a restricted space.”
The conditions of formation have been constrained to between 160 and 270°C and
depths of between 50 and 1000m and their associated pressures (White et al., 1995;
Hedenquist et al., 2000; Robb, 2005). These deposits are generally characterised by
fine-grained quartz, calcite and intense brecciation, and display textures ranging from
open-space filling to comb-structures and crustification (Porter, 1988). These deposits
are hydrothermal in origin and contain characteristic hydrothermal alteration, with the
fluids implicated containing both magmatic and meteoric components (White et al.,
1995). The fluids are generally weakly saline, with less than 5 wt% NaCl and temperatures of 200-300°C. These deposits form in oceanic and continental arc
environments and are linked to igneous activity, but can occur in many different
settings (White & Hedenquist, 1990).

The conditions of formation are inferred from evidence of boiling and concomitant
rapid cooling within the epithermal ore zones, such as hydraulic fracturing, colloform
textures in quartz and the presence of advanced argillic alteration in the wall rocks
(Hedenquist et al., 2000). This process of boiling favours gold precipitation from the
bisulphide complex (Hedenquist et al., 2000). The three main types of epithermal
mineralization can be summarised as:
• Open-vein type (Figure 3): This type forms when the boiling and hydrothermal deposition occur at depth;
• Hot-Spring/Sinter type: This type usually includes a component of open-vein type mineralization at depth, but boiling and hydrothermal deposition also occur at or near the surface; and
• Sediment-hosted disseminated and replacement types: This type occurs when the geothermal fluid interacts with chemically reactive carbonate rocks.

Figure 3: Gold fill of an open quartz vein in an epithermal Au-Ag style deposit. Example
taken from Edie Creek. Figure taken from Corbett (2002b).

The geometry of these deposits can be classified into three broad classes based on the
main controls: structural, hydrothermal and lithological. The geology and structural
character of the area will produce primary and secondary permeability to facilitate
fluid flow, while the fluid itself may also induce permeability in the rocks. In essence
the facilitator of permeability becomes the primary control on the mineralization style
and geometry (Hedenquist et al., 2000).

In the structural class, the mineralization occurs as stockworks, vein swarms, and thin low-angle or massive veins (Figure 4). The latter are referred to as ‘bonanza’ type
deposits due to their high grades but low volumes. In the hydrothermal class,
geometry is controlled by hydrothermal breccias and the secondary porosity created
by dissolution, while in the lithological class mineralization can be dispersed in
ignimbrites or clastic sediments as a result of encountering an impermeable layer such
as an aquitard (Hedenquist et al., 2000). This permeability contrast can manifest in
other ways where mineralization is linked to unconformities and structural
discontinuities which produce replacement-type deposits. Mineralization can also be
dispersed in diatreme breccias such as at the Montana Tunnels deposit in Montana,
USA (Hedenquist et al., 2000).

Figure 4: The main geometry of epithermal gold deposits related to the permeability created
by structural, hydrothermal and lithological controls. Figure taken from Hedenquist et al.
(2000).

In recent years increased understanding of these deposits has led to the recognition of two
main epithermal deposit types which incorporate the features previously mentioned.
The main mineralising systems are referred to as ‘high-sulphidation’ and ‘low-
sulphidation’ which differ primarily based on the sulphidation state of the sulphide
assemblages and their associated gangue mineralogy, as summarised in Table 1
(White et al., 1995; Hedenquist et al., 2000; Robb, 2005). These should be considered
as end-members in the epithermal system, and there are numerous variations between
the two (Figure 5) (Hedenquist et al., 2000). These deposits form as a result of the
circulation of fluids in and around volcanoes, with high-sulphidation (or kaolinite-
alunite style) deposits forming proximal to the volcano, and low-sulphidation (also
known as adularia-sericite) deposits forming in a more distal setting, although they
may also form within the volcanic edifice during the waning stages of active
magmatism (Figure 6). These two types may or may not be spatially associated, and
often either one or the other is present at any given location (Robb, 2005). The third
class of epithermal deposit is referred to as the Carlin-type gold deposit. However, as the styles of mineralization are similar to those described for low- and high-
sulphidation systems, this class will not be dealt with separately.

Table 1: Summary of the main characteristics of high and low-sulphidation systems. Table taken from
White et al. (1995).

Figure 5: Map of the main low- and high-sulphidation deposits of the circum-Pacific area (a)
and Europe-Central Asia. Figure taken from Hedenquist et al. (2000).

Figure 6: The geological setting and characteristics of high- and low-sulphidation epithermal
deposits. A possible genetic link is suggested between some epithermal and porphyry-type
deposits. Figure taken from Hedenquist et al. (2000).

White et al. (1995) have identified four geothermal settings for epithermal deposits of
the high- and low-sulphidation varieties based on a review of the deposits of the
southwest Pacific region. These include silicic depressions; andesitic strato-
volcanoes; cordilleran volcanics and oceanic island arcs (summarised in Table 2). As
stated by White et al. (1995), even when the geology and controls are well
understood, “we do not yet fully understand why some systems produce ore, and
some, apparently identical, appear to be barren.” The two main sources of variability
identified by White et al. (1995) were wall-rock alteration and ore distributions,
which appear to be unique for each deposit.

Table 2: Characteristics of different geothermal settings for epithermal deposits. LS=low sulphidation
and HS=high sulphidation. Table taken from White et al. (1995).

2.1.1.1 Continuity
The main control on ore distribution and geometry in epithermal deposits, whether
low- or high-sulphidation types, is often structural. Their location in the near-surface
environment produces much of the variability observed in their distribution due to
strong permeability contrasts present in this environment, as well as the interplay
between lithological, structural and hydrothermal controls (Hedenquist et al., 2000).
In terms of grade continuity, the highly variable styles of mineralization will produce
poor continuity on a local scale, but may show broad-scale continuity. On the local scale several styles of mineralization may be present for a single deposit type, and
their relationship to each other may be complex.

In the example of the epithermal Au-Ag deposit of Choquelimpie in northern Chile, the main mineralization is extracted from siliceous veins and hydrothermal breccia (Bisso et al., 1991). The veins cross-cut the brecciation, suggesting emplacement at a
later date but both mineralization styles have been subjected to supergene enrichment
processes which have upgraded Ag preferentially to Au. Unravelling the sequence of
events and the timing of mineralization is a complex process which is not easily done
without detailed geological knowledge of the ore deposit and its structural (veins and
faulting), lithological (host-rock) and hydrothermal (breccia) controls (Figure 7).
Grade continuity on a local scale is poor, with discontinuous breccia bodies hosting much of the mineralization; however, on a broader scale continuity can be defined in terms of a N 60° E-trending zone which extends over 2km (Bisso et al., 1991).

Figure 7: Generalised geology of the Choquelimpie main ore body (previous page) in map
and cross-sectional view (this page). Figures taken from Bisso et al. (1991).

2.1.2 Mesothermal Gold


These deposits are commonly referred to as lode-gold, orogenic gold or Archaean
gold deposits, and although, in the strictest sense, they exceed the depths originally
used to define a mesothermal deposit, the term is still widely accepted and will be
used herein (Groves et al., 1998). These deposits are similar to those described for the
epithermal system (Figure 8). They contain comparable grades of Au, but generally
less Ag, with mineralization controlled largely by structures and hosted in quartz-
veins systems.

Figure 8: Schematic illustration of the elements of deep, intermediate and shallow portions of a gold-quartz vein system. Figure taken from Stephen and Peters (1993).

The important differences are the greater temperatures and crustal ranges of
mesothermal gold deposits which can extend to depths in excess of 20km at
temperatures over 300°C (Groves et al., 1998; Robb, 2005). In general these deposits
show much greater continuity based on their structural characteristics. However, they
often contain spatially restricted high grades which contribute to the high nugget
effect associated with these deposits (Dominy et al., 2003). They contain fault
systems which are continuous for hundreds to thousands of meters, often with
complex arrays of discrete veins, reefs or lodes and larger fault planes which may
show movement and discontinuities based on interactions with their host-rocks
(Figure 9) (Dominy et al., 2003).

Figure 9: On a regional scale the main mineralised banded iron formation shows reasonable
continuity (left). On the scale of drill-core intersections a complex relationship between
folding, micro-faulting and veining is observed (top-right). However, on an outcrop-scale the
main mineralized vein systems can be recognised as Group IIA and B vein systems (bottom-
right). The photographs are of the banded iron formation hosted gold deposits of the Kraaipan
Greenstone Belt in South Africa, and the map is taken from Hammond and Moore (2006).
Photographs taken by the author (2009).

These deposits are hydrothermal and typically emplaced around depths of 15-20km
but can extend to the surface (Groves et al., 1998). These deposits are primarily
hosted in accreted terranes or collision orogenic settings with associated regional
metamorphism rarely exceeding the amphibolite facies (Groves et al., 1998). The host
lithologies for mesothermal gold are highly variable and include most rock types,
ranging from banded iron formations to greywacke-pelites and are commonly
associated with greenstone belts from the Archaean, although some Proterozoic and
Phanerozoic examples have been found (Groves et al., 1998). These deposits share no consistent link to plutonic heat sources as the driving mechanism for the hydrothermal fluids; the fluids themselves and the metal sources are largely heterogeneous. The mineralization occurs in much the same form as that of epithermal systems.
Brecciation is not common and mineralization tends to favour quartz veins, although
it is occasionally hosted as disseminations or stockworks (Groves et al., 1998).

The areas in which these deposits form are generally characterised by prolonged and
complex structural and deformation histories with mineralization linked to second or
third-order structures (Groves et al., 1998). The ore-shoots carrying mineralization are
typically lobe-shaped planar structures hosted within areas of structural dilation which
have unpredictable, and often asymmetrical, grade distributions (Peters, 1991). The
thickness of the ore shoots is largely controlled by the host-rock lithology with the
thickest veins associated with wall-rock replacement and the thinnest often associated
with rocks deformed in a ductile manner (Figure 10). The mineralised quartz-veins
generally contain sulphides and carbonates which are characteristic, as is wall-rock
hydrothermal alteration which typifies these deposits (Groves et al., 1998).

2.1.2.1 Continuity
In general the quartz-reef ore bodies are associated with the fault planes of these areas
of intense crustal shortening, but show substantially less continuity, although these
narrow bodies follow a roughly parallel trend to the faulting (Dominy et al., 2003).
These deposits show substantial discontinuities on the local scale, and this often
causes difficulties when trying to predict ore distributions. Where drilling is unable to
resolve issues of continuity and distribution, surface outcrops and historical data may
provide an indication (Stephen & Peters, 1993; Dominy et al., 2003). Grade
continuity in these reefs varies laterally, vertically and across the body as a result of
the complex and extended structural/deformation history of the area in question.

Figure 10: The main types of structures hosting lode-gold are sketched above. Note the
variability in the scale bar for each of the types. Not included are replacement-style deposits,
such as can be seen in banded iron formations. This example is taken from the Menzies-
Kambalda area in Western Australia (Witt, 1993).

Different episodes of deformation may have produced different styles of mineralization or different grades over the evolution of the system, with structures
potentially obscuring one another (Dominy et al., 2003). When this happens on a
small-scale (vein-scale) it can potentially increase the irregular distribution of
mineralization in the reef. On a broader scale, however, such disturbances are more easily recognised and accounted for from drill core or surface outcrops (Dominy et
al., 2003). The main structural factors in mesothermal systems that will impact on
grade continuity can be summarised as (Stephen & Peters, 1993; Dominy et al.,
2003):

• Post-mineralization structures: potential for dilution or disruption by potentially barren structures. Also potential for duplication in some instances.
• High-grade veins cross-cutting earlier veins within the reef: late veins are usually discontinuous with erratic gold distributions.
• Barren veins within the reef structure: recognising their distribution and orientation will help define the levels of dilution in the reef and thus assist in continuity assessments.
• Geological characteristics of gold-bearing veins within the reef: correlations can be made based on structural characteristics, textures or mineral assemblages which may help in identifying areas of potentially high-grade material, and conversely, barren veins.

2.2 Placers
Garnett and Bassett (2005) state that, “placers have been mined since man first used
metal…(by) every mining method, wet, dry, mechanical and by hand, open-pit, and
underground.” Their extensive history has resulted in a plethora of information
regarding formation and morphology, as well as their distribution in space and time
(Garnett & Bassett, 2005). Placer deposits are formed from the concentration of dense
(or heavy) detrital material during active sedimentation processes. The processes that
form placer deposits are easily identified through the study of the modern
environment. The complexity of the dynamics involved in transporting and depositing
placers, however, means that it is extremely difficult to predict their distributions
(Robb, 2005).

During sedimentation, sorting of the light particles from the heavy occurs at various
scales ranging from regional systems such as alluvial fans, to intermediate and small-
scale features including point-bars and bedding laminae (Robb, 2005). Macdonald (1983) summarises their formation as “the results of natural processes that link the
properties of sedimentary particles with the physical behaviour of flowing streams of
air and water in the form of wind, rivers, waves, tides and ocean currents. Fluid
energy is translated into solid movement when impact and viscous forces build up
sufficiently to overcome the inertia of the particles at rest; deposition takes place
wherever the gravitational forces prevail.” Placer minerals therefore need to be
resistant to both the physical and chemical processes which would break them down
during the weathering of the source rock, or during subsequent transport and
reworking (Macdonald, 1983; Garnett & Bassett, 2005).

The main commodities extracted from placers are gold, platinum group minerals
(PGM, commonly sperrylite and isoferroplatinum), cassiterite (SnO₂), diamonds,
zircon and the Ti-bearing constituents of mineral sands which include ilmenite and
rutile. Other commodities found in placers include the tungstate minerals scheelite
and wolframite, native copper, magnetite, chromite and semi-precious stones (Table
3) (Garnett & Bassett, 2005). Although the PGMs and gold may also be concentrated during hydromorphic dispersion, this will not be discussed as it is not
considered part of the classic placer deposit, and commonly constitutes only a minor
concentrating mechanism.

Placer deposits have been exploited on all continents, save for Antarctica. They have also been found in all the climatic zones of the globe at
elevations ranging from over 5000m in the Andes to 140m below present sea level off
the coast of Namibia (Garnett & Bassett, 2005). The main depositional environments
of placer deposits can be summarised as continental, transitional and marine settings
(Table 4) (Macdonald, 1983).

Table 3: Summary of the primary economic minerals and their provenances. Taken from Macdonald
(1983).

Table 4: Summary of the main placer depositional environments, the processes involved and their
distinguishing characteristics. Modified after Macdonald (1983).

Depositional setting | Environment | Processes | Features of distinction

Continental | Eluvial | Percolating waters, chemical and biological reactions, heat, wind and rain | Weathering in situ of soluble material; some surface material may be removed by sheet flow, rivulets and wind.

Continental | Colluvial | Surface creep, wind, rainwash, elutriation, frost | Downslope movement of weathered rock, controlled mainly by gravity; poorly sorted.

Continental | Fluvial | Flowing streams of water | Wide range of depositional land forms, mostly within kilometres of the source rocks; particle size decreases and sorting increases with increasing distance from source; only physically and chemically stable particles persist.

Continental | Desert | Wind with minor stream flow; heat and frost | Wind is the principal transporting agent; flash flooding may produce some channels.

Continental | Glacial | Moving streams of ice and melt waters | Unsorted and unstratified; moraines and till in outwash plains are usually upgraded by stream action or along shorelines to produce economic concentrations.

Transitional | Strandline | Waves, currents, winds, tides | The most resistant minerals concentrate; the main concentrating mechanisms are waves, currents, winds and tides.

Transitional | Coastal aeolian | Wind and rain splash | Placers formed in sand blown up from beaches; dunal systems develop from both stationary and transgressive systems.

Transitional | Deltaic | Waves, currents, winds, tides and channel flow | Formed around river mouths carrying large sediment loads; seaward margins favour reworking of sediments, and repetitive vertical sequences are common.

Marine | Drowned placers | Eustatic, isostatic and tectonic movements; net rise in sea level | Submerged continuation of the adjacent land, which provides the bedrock geology; modified by marine action and potentially diluted by terrestrial sediments.

Placers usually have a relatively complex history which begins with the erosion of the
source rock(s) in which the desirable minerals are hosted, and ends with the
deposition of these minerals. However, this cycle is subject to numerous episodes of
further erosion, sorting, deposition and reworking (Macdonald, 1983; Garnett &
Bassett, 2005). In some instances, such as in the case of the Namibian marine mega-
placer, reworking of the deposit is an ongoing process which is still actively modifying the placer.

The processes thought to be key in initiating the formation of a placer are tectonic
activity, a change in climate, and a relative change in sea-level (Macdonald, 1983).
These events can occur alone or in combination, although, as stated by Afabasyev and Boris (1984, reported in: Garnett & Bassett, 2005), the ideal situation for placer formation involves tectonic stability and concomitant deep weathering followed by a
period of uplift and climatic change. The effects of these events are effectively
summarised in Table 5 after Garnett and Bassett (2005).

The concentration factor involved from source to final deposition of a placer deposit
is quite variable. A concentration factor of 500-1000 is estimated for the Indonesian
cassiterite-bearing alluvial deposits, while 20-1000 is estimated for the central African
and Nigerian equivalents. In general, precious metals fall within the higher category,
with a concentration factor of about 1000 estimated for these placer deposits.
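To make explicit what a concentration factor of this order implies, it simply scales the source-rock grade up to the placer grade; the worked figures below are hypothetical and are not taken from the sources cited.

```latex
% Hypothetical worked example: a concentration factor C relates the placer
% grade to the source-rock grade.
\[
  g_{\text{placer}} = C \, g_{\text{source}}, \qquad
  g_{\text{source}} = 0.005~\text{g/t Au},\; C = 1000
  \;\Longrightarrow\;
  g_{\text{placer}} = 1000 \times 0.005 = 5~\text{g/t Au}.
\]
```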

From the information gathered by Garnett and Bassett (2005), it appears that most of
the world’s placers formed during the Cenozoic, with over half falling into the
Pliocene-Pleistocene age categories. However, some diamond deposits were
developed during the Phanerozoic, while some placers formed during the glacial
cycles of the Ordovician, Devonian and Permo-Carboniferous.

Table 5: The key events that initiate the formation of placer deposits, their effects and their consequences are listed below. Taken from Garnett and Bassett (2005).

Figure 11: Schematic illustration of the processes involved in the progressive formation of
placer deposits. The placer types preferentially formed are: 1 = eluvial deposits; 2 = saprolite
and regolith; 3 = laterite; 4 = rock glacier; 5 = glacial moraine; 6 = talus slope; 7 = colluvial
deposit; 8 = paleochannel and deep lead; 9 = constricted channel and braided river; 10 =
headwater creek; 11 = glaciofluvial deposit; 12 = some proximal alluvial placers; 13 = some
distal alluvial placers; 14 = beach; 15 = dunes; 16 = tidal deltas; 17 = cheniers; 18 = washover
sands; and 19 = some marine deposits. Figure taken from Garnett and Bassett (2005).

The oldest preserved tin placer deposit is postulated to have formed during the
Permian (Yim, 2000, reported in: Garnett & Bassett, 2005). Based on the main
mechanism which concentrates the placer, they can be divided into two main classes
or groups. Those that form from depositional processes are termed accumulation
placers, while those that form through erosional processes are termed lag deposits
(Garnett & Bassett, 2005). These placer types are often the result of repetitive
sedimentation processes and reworking which modify the placer continuously and
produce highly unique ore-bodies (Figure 11).

2.2.1 Continuity
Placer deposits form in a wide variety of settings across the globe, with highly
variable dimensions, therefore only general statements can be made regarding their
continuity, both in terms of size and grade distributions.

In general, placers can be said to be elongate in a downstream or downslope direction, although beach deposits and some eluvial and colluvial deposits
do not conform to this generalisation. Eluvial deposits usually have limited horizontal
dimensions extending only a few hundred meters, while colluvial deposits may be

32
hundreds of kilometres wide and extend for several kilometers (Garnett & Bassett,
2005). Glacial deposits are generally very small if they have not been reworked and
represent only a small portion of the moraines for a given area. In alluvial systems the
extent of placer deposits can be highly variable, with little or no relation to the size of
the valley which hosts these deposits. The Klondike district of Canada hosts a number
of gold-bearing creeks within an area of approximately 2000 km². However, only a
small portion of this area contains placer deposits (Garnett & Bassett, 2005). In
contrast, the Kinta Valley alluvial tin deposits of Malaysia, which cover a similar
area, are riddled with placers. The gold placer-hosting Lena and Amur river basins of
Mongolia and southern Siberia extend over an estimated 2000 km in a 300 km wide
belt. The longest single alluvial deposit is thought to occur in the gold-bearing
Kolyma basin of the Russian Far East; it is ~30 km long, with widths ranging from
tens of metres to over a kilometre in places.

In an alluvial system, the mineralised portion, which may or may not be covered by
considerable overburden, can vary considerably in terms of thickness and may be
located either in the valley or the flood plains of a given system (Garnett & Bassett,
2005). Due to their highly resistant nature, diamonds in alluvial systems generally
show the greatest lateral continuity, but often form discontinuous bodies with an
irregular downstream distribution. Beach deposits show considerable lateral
continuity, with the Namibian raised and enclosed beach deposits extending over
100 km along the coastline. The strandline mineral concentrations of Western
Australia occur as a series of “elongated lenses up to several kilometres long, 15 to
200 m wide, and 1 to 3 m thick” (Garnett & Bassett, 2005).

Grain sizes in placer deposits can be said to decrease with increasing distance from
source in a downstream direction. A relationship often exists between the grade and
the sediment granulometry of the placer, with fluvial deposits hosting higher grades in
coarser sediments such as cobbles, gravels or coarse sands, as is largely the case for
gold, cassiterite, platinum, diamonds and tin fluvial placer deposits (Garnett &
Bassett, 2005). However, exceptions occur, and economic concentrations of gold have
been found in sand. In the case of mineral sand beach deposits, they are made up
almost entirely of sand-sized material which rarely exceeds 250 μm (Garnett &
Bassett, 2005). Minerals such as cassiterite, which are brittle by nature, are reduced

33
during transport to finer grain sizes through processes such as attrition. While
diamonds whose crystal shape suffers from imperfections will often also disintegrate
during transport, and while those whose shape is more octahedral will be preserved,
multiple phases of reworking and transport of the latter will often result in rounding
and abrasion of these diamonds (Garnett & Bassett, 2005). Emeralds suffer from the
same vulnerability as diamonds. However, their imperfections result in a much faster
disintegration, while emeralds and rubies are able to persist through most fluvial
conditions (Macdonald, 1983). Precious metals display more distinct modification
through changes in their shape, with gold particles showing the widest range (Garnett
& Bassett, 2005). The fineness (parts per thousand) of PGM is also highly variable
with averages of 600-900 reported, while for gold the average is about 800 with
fineness increasing with distance from source.

The grade of a placer deposit depends largely on the local conditions specific to the
deposit, and the amount of barren overburden that may disrupt its continuity on all
scales. Placers are considered highly erratic and variable with heterogeneous shapes,
sizes and grades common. “The greatest horizontal change in grade invariably is
exhibited in transverse section, although a non-uniform distribution may also be
expected longitudinally, commonly with significant grade alterations” (Garnett &
Bassett, 2005).

In fluvial deposits, there is generally an alignment of higher-grade deposits oriented in
the dominant flow direction but within a wider zone of lower-grade material. These
zones, however, may be highly irregular in shape or even discontinuous or branching.
Garnett and Bassett (2005), however, state that regardless of the high levels of
variability inherent in placers, most show a vertical grade profile which can be used as
a somewhat predictive tool when creating geological or conceptual models for placer
deposits. This takes the form of a grade decline at a consistent rate in a particular
direction which, in the case of lag placer deposits, is expressed as a rapid decline in
grade with depth. In some cases this may be extreme, with up to a 50%
decrease in mineral content for every 0.2 m increase in depth from the surface in the
case of river bars, and 85% within 2 m of the surface for placers located in marine
dispersal settings (Garnett & Bassett, 2005). Accumulation placer deposits tend to
show the opposite grade trend with grade increasing with depth from the surface. This
is the case with mineral-sand beach deposits, gold and cassiterite, whose highest
concentrations are usually located against bedrock, or the interface with older
sediments. Diamonds tend to follow the same pattern, particularly in a marine setting,
with concentrations focussing on the interface with bedrock where crevasses, joints
and gullies form excellent trap sites for these placers (Garnett & Bassett, 2005).
Diamond placers in a beach environment tend to be located near the low-tide level
where high-energy surf conditions sort the more spherical clasts from the discoidal
ones. The former are concentrated on the active toe (seaward limit) of the beach,
while the latter are deposited higher in the beach profile. Over time these produce the
gravel beach deposits that characterise the aggressive Namibian coastline (Corbett,
2002a).

2.3 Kimberlites
Kimberlites are defined as “volatile-rich, potassic ultrabasic, igneous rocks which
occur as small volcanic pipes (<1 km diameter), dykes and sills” (Skinner & Clement,
1979, reported in: Field et al., 2008). The composition of kimberlites points to a
mantle source for these rocks, which often reached the surface in violent, gas-
charged eruptions (Robb, 2005). In general, kimberlites are inequigranular bodies
comprised of macrocrysts set in a fine-grained matrix which may contain primary
phenocrysts of olivine and potentially include phlogopite, calcite, serpentine,
diopside, monticellite, altered melilite, apatite, spinel, perovskite and ilmenite (Field
et al., 2008). The macrocrysts are anhedral and dominated by olivine, although
phlogopite, picroilmenite, chromian spinel, magnesium garnet, diopside and enstatite
may also be present. Mantle-derived and crustal xenoliths are sometimes present, and
in a few rare instances, diamonds are brought to the surface in the kimberlites as well
(Kirkley et al., 1991; Field et al., 2008).

Their geometry is characterised by a crater zone at surface, a diatreme zone and a
basal root zone which contains dykes and sills (Figure 12). This is described as a
carrot-shaped geometry which tapers at depth towards the root zone.

Figure 12: The idealised kimberlite pipe, or classic pipe structure includes a root, diatreme
and crater zone. The present erosion levels at the Orapa pipe in Botswana, and at Jagersfontein,
Kimberley and the Bellsbank pipes in South Africa, are also shown (left). Kimberlites are
believed to have originated at depths of around 150-200 km and to have sourced their
diamonds from the mantle (right). The image on the right was sourced from a Google Image
search (2009), the image on the left is taken from Kirkley et al. (1991).

Not all kimberlites display the full geometry, and based on the shape of the kimberlite
pipe and its internal geology, kimberlites have been divided into three distinct classes
(Figure 13) (proposed by Field & Scott Smith, 1999, reported in: Skinner & Marsh,
2004).

 Class 1: These are referred to as the ‘classical pipes’, which contain crater and
diatreme zones, as well as a root zone. These are generally steep and deep
pipes, of which the Kimberley pipes of South Africa are an example. Their
origin is still under debate, with both magmatic and phreatomagmatic
processes implicated. Skinner and Marsh (2004) proposed a transitional zone
between the diatreme and root zones for this class.
 Class 2: This class is characterised by relatively shallow crater zones. Their
origin is less disputed and a phreatomagmatic origin is proposed, with
subsequent infilling by magmatic or sedimentary processes. Examples of these
are seen in the Fort à la Corne pipes in Canada.
 Class 3: This class is characterised by steep sides and infilling by
resedimented material. The infill is dominated by volcaniclastic kimberlite, as
well as lesser pyroclastic kimberlite. Their emplacement is thought to have
been dominated by magmatic processes, with lesser phreatomagmatic
processes also involved. Examples of this type of kimberlite are the Jwaneng pipe
in Botswana and the Lac de Gras Ekati pipes in Canada.

Figure 13: The three kimberlite classes are presented as idealised models. RVK =
resedimented volcaniclastic kimberlite; PK = pyroclastic kimberlite; mTK = massive
tufficitic kimberlite; TFK = transitional-facies kimberlite; MK = magmatic kimberlite; HK =
hypabyssal kimberlite. Figure taken from Skinner (pers. comm., 2009).

Skinner and Marsh (2004) have shown that all three classes can be present in the same
geological setting, which suggests that emplacement of the different classes is a
function primarily of compositional differences between the kimberlites, and not their
geological setting. Examples where all three classes are found in the same geological
setting include NE Angola and Siberia (Skinner & Marsh, 2004).

Based largely on their isotope geochemistry, kimberlites can be divided into two main
groups. The Sr-Nd isotopic signature of Group 1 kimberlites tends to be slightly
depleted relative to Bulk Earth, while the Group 2 kimberlites show a relative
enrichment in their Sr-Nd isotopic signatures (Field et al., 2008). The Pb isotopic
signature for Group 1 is radiogenic, while that for Group 2 is unradiogenic. The
remainder of the features that distinguish these two
Groups are summarised in Table 6. Each group shows a distinct age range for
emplacement, and petrographic characteristics unique to the group. Group 1
kimberlites are characterised by a dominance of monticellite with abundant
groundmass opaques and perovskite, while the Group 2 kimberlites are dominated by
phlogopite and coarse diopside (Field et al., 2008). In general the diamondiferous
kimberlites of Group 2 occur as dykes, while the economically significant kimberlites
of Group 1 occur as pipes, particularly in southern Africa (Field et al., 2008).

Table 6: Summary of the primary features which classify kimberlites into Group 1 and Group 2.
Adapted from Field et al. (2008) and Skinner (1989).

Isotopes
  Group 1: Depleted relative to Bulk Earth; radiogenic Pb
  Group 2: Enriched relative to Bulk Earth; unradiogenic Pb
Age
  Group 1: ~70-1950 Ma (RSA)
  Group 2: ~114-200 Ma
Primary magmatic minerals
  Group 1: Olivine, monticellite, calcite, phlogopite, spinel, perovskite, apatite and ilmenite
  Group 2: Phlogopite-dominated groundmass, olivine, diopside, spinel, perovskite, apatite and melilite
Indicator minerals
  Group 1: Garnet, ilmenite, chromite, zircon & rutile
  Group 2: Garnet & chromite (restricted chemistry)
Megacrysts
  Group 1: Common
  Group 2: Rare
Geochemistry
  Group 1: Depleted in SiO2, K2O, Pb, Rb, Ba and LREE; enriched in Cr and Nb
  Group 2: Enriched in SiO2, K2O, Pb, Rb, Ba and LREE; depleted in Cr and Nb
Diamond
  Group 1: Diamondiferous to barren; on-craton kimberlite 1/10
  Group 2: Diamondiferous to barren; on-craton kimberlite 1/1

Unfortunately there are no modern analogues that can be used to facilitate our
understanding of the genesis and emplacement of kimberlites, and as such these topics
are still largely under debate in the literature. As reported by Field et al. (2008)
kimberlite-hosted diamond deposits, “are the end-products of an extremely complex
set of geological processes that have permitted the growth and preservation of
diamonds in the Earth’s interior in the first place and then its subsequent extraction
from its host environment and transport to the Earth’s surface where as a consequence

38
of different geological processes it may be concentrated sufficiently to permit its
economic extraction.” This paper is not particularly concerned with the origin of the
diamondiferous kimberlites; what is relevant to the nugget effect are primarily the
geometries of the different kimberlite pipes and the diamond distributions therein. For an
overview on kimberlites and diamond growth the reader is referred to Kirkley et al.
(1991).

2.3.1 Continuity
Establishing the continuity of kimberlite bodies, in terms of both geology and grade,
usually requires subsurface exploration techniques to resolve the question of geometry. This
is done primarily through core drilling to establish the size and shape of the kimberlite
and its geology, which ultimately leads to the creation of a 3D geological model of the
kimberlite (Harder et al., 2009). Grade distributions are more easily characterised using
the following guidelines:
 Crater zone: Often complex post-emplacement processes redistribute the
material in this zone. Interpreting the primary from secondary processes here
will help define grade distributions.
 Diatreme zone: Developed exclusively in Class 1 kimberlites, these are
massive, fragmental and poorly sorted clastic textured rocks. In general
diamond distribution is homogeneous.
 Root zone: Kimberlite is present as dykes, sills and plugs and very rarely as
lavas. Typically their diamond distribution is clustered. Sorting has been
known to occur in these dykes, such as at Star East mine in South Africa.

However, even with these guidelines, interpreting geometry is not very
straightforward in reality. This was illustrated at the DO-27 kimberlite in the Northwest
Territories in Canada (Harder et al., 2009). The original geological models that were
developed for this deposit were based on a small amount of drilling and geophysical
information, and constructed with the classic Southern African kimberlite models in
mind. The kimberlite was originally interpreted to be composed of ‘diatreme facies’
infill material and hence a homogeneous diamond distribution was predicted for
evaluation purposes (Harder et al., 2009). The grades of 1.3 to 36 carats per hundred
tonnes (cpht) recovered by sampling, from a proposed 50 million tonnes, made
the project an unattractive prospect and further exploration was terminated. A later re-
interpretation of the core used to create the first model was undertaken, which
indicated that the kimberlite was not in fact ‘diatreme facies’, but rather a ‘coherent
kimberlite.’ The dominant infill in this model was interpreted to be pyroclastic in
nature with the kimberlite body interpreted as a “low-volume sub-surface intrusive
coherent sheet” of only 25 million tonnes. However, it was felt at the time that
adequate bulk sampling had not been undertaken, and that the project was still too
high risk. The initial geological interpretations were carried out in 1994, and it was
only in 2005-2007 that the project was again re-visited to carry out the bulk sampling
proposed, as well as additional drilling, and to further refine the geological model. Higher
grades of 90 cpht were then defined for a volume of 30 million tonnes which made the
project economically viable.

Resolving the geometry is a difficult task. Even when the money is available for
further drilling, complex internal geology and an irregular external shape can
complicate the modelling process considerably. The modelling carried out at the Ekati
Diamond Mine in Canada recognised that there was often a significant amount of
uncertainty associated with the geometry of the kimberlite body, its internal geology
and hence its predicted volume (Dyck et al., 2004). Considerable interpolation and
geological interpretation are required to produce “best-fit” pipe models for evaluation
purposes which will inevitably be smoothed simplifications of the original (Figure
14) (Dyck et al., 2004).

Figure 14: Five solid rendered wire-frame models illustrating both the variability in the sizes
and shapes of the Ekati kimberlite pipes, and the level of smoothing carried out when
modelling these pipes. Figure taken from Dyck et al. (2004).

At Ekati the internal geology of the pipes was modelled using the available drill and
sample information. Four generalised internal geological models have been
recognised in the Ekati Diamond Mine™ area which characterise the inherent
variability of these kimberlite pipes (Figure 15).

Figure 15: The four generalised internal geological models of the Ekati kimberlite pipes are
illustrated above as vertical sections. Figure taken from Dyck et al. (2004).

In the first model the internal geology is assumed to be homogeneous or uniform
across the kimberlite pipe. This is considered to be the simplest case scenario.
However, the variography for the Koala North pipe indicates that at sample
separations of less than 115 m, the nugget effect strongly influences the data and a
largely random variation in grade is seen (Dyck et al., 2004). In the subsequent
models 2, 3 and 4, additional lithological boundaries and domains are created in order
to attempt to reduce the nugget effect by identifying smaller and smaller areas of
homogeneity. This can only be successful if sufficient drill/sampling information is
available to accurately identify changes in lithology and grade, as is the case with the
Koala pipe. In the Panda pipe (model 2), however, the complex lithology or
geological variability produces highly variable local grades which essentially
“preclude reliable interpolation/modelling based on drill data” (Dyck et al., 2004).

The issue of resolving the geometry and thereby the continuity of a kimberlite body
was recently illustrated in the literature when Berryman et al. (2004) attempted to
propose a geological model for the 140/141 kimberlite at Fort à la Corne (FALC) and
explain its diamond distribution. The pipe is one of the largest in the area with a
complex internal geology attributed to a prolonged emplacement of multiple batches
of kimberlitic material (Berryman et al., 2004). The main body was interpreted as an
irregular champagne-glass geometry dominated by a mega-graded bed with a younger
nested crater forming a second phase of kimberlite within the mega-graded bed. This
interpretation was used to explain the diamond distributions within the body, but the
diamond distribution was also used to try and define the features of the body itself
(Berryman et al., 2004). The internal geology and pipe geometry were explained by a
two-stage process of initial phreatomagmatic crater excavation followed by a phase of
crater in-fill by pyroclastic rocks to produce the complex structures seen within the
pipe (Berryman et al., 2004). Kjarsgaard et al. (2007) disputed this model claiming
insufficient data had been presented to support their postulation, and that the model
was an over-simplification of the body (Figure 16).

Figure 16: The contrasting models of the 169 kimberlite are presented. In model A by
Berryman et al. (2004) the main kimberlite body is presented as an in-fill in a reverse
champagne-glass geometry, while in model B the kimberlite has a more irregular shape and is
dominated by reworked kimberlite (Kjarsgaard, Leckie and Zonneveld, modified after
Kjarsgaard et al., 1997; reported in: Kjarsgaard et al., 2007). In model B the Westgate
Formation Shales form an overlying cap which is thought to preserve the domal-shaped
kimberlite tephra cone. Figure taken from Kjarsgaard et al. (2007).

Kjarsgaard et al. (2007) draw on data from other kimberlite pipes in the FALC area,
particularly the 169 body, and maintain that the FALC kimberlites are better
described as “polygenetic tephra or tuff cones with associated feeder vents of variable
geometry” (Kjarsgaard et al., 2007). The mega-graded bed postulated by Berryman et
al. (2004) is disputed based on evidence that the FALC kimberlites formed over a
period of 1-5 Ma through multiple intrusive events, which they believe negates the
formation of a single in-fill event (Kjarsgaard et al., 2007). Berryman and Scott Smith
(2007) responded to this criticism by pointing out that there are very limited
occurrences of Cretaceous sedimentary xenoliths within the main FALC kimberlite
bodies, which they believe provide the key evidence in support of an excavation
model. They also dispute the volcanic cone model based on the presence of nested
craters, which cannot occur in a volcanic cone (Berryman & Scott Smith, 2007). This
example of disputed geometries serves to highlight the variability both in kimberlite
bodies and in the interpretations of individual geologists. In the debate above, all
the authors are drawing from examples in the same area in the hopes of justifying a
hypothesis. However, they also reiterate that each kimberlite is unique. In geology the
tendency is to draw from the available information in order to generate a conceptual
model of the ore body, but as the DO-27 kimberlite models from Canada illustrate,
this model needs to be treated with a certain amount of scepticism due to the
unavoidable over-simplifications inherent in conceptual models.

Other examples of the complexities inherent in kimberlite bodies include the Orapa
AK1 pipe where efficient sorting has been reported from modified grain-flow
deposits, which generally contain small diamonds sorted by hydraulic processes
(Field et al., 2008). In the case of the Marsfontein Mine in South Africa, the diamonds
have been concentrated into a ‘lag’ deposit as a consequence of weathering and post-
emplacement alteration (Field et al., 2008). These secondary modifying processes can
considerably complicate the evaluation process.

3.0 Sampling component of the Nugget Effect


The nugget effect is composed of two factors which produce variability in an ore
body. One component of the nugget effect is ‘inherent variability’. This is the natural
geological variability within the ore body due to its geometry, lithology and grade
distributions (Dominy et al., 2001; 2002; 2003). The second component of the nugget
effect is related to sampling practices, and Dominy et al. (2001) refer to this as the
‘sampling nugget effect.’ The sampling nugget effect (SNE) can be introduced
through the use of an incorrect sample size, or sample preparation procedure, or error
can be introduced during the analysis of the sample(s) (Dominy et al., 2001). These
factors can produce high variability in the sample population, and in some instances
can contribute significantly to the total nugget effect (Dominy et al., 2001). This
section deals with the contribution of the SNE to the overall nugget effect and the
different ways in which error can be introduced. While the geological portion of the
nugget effect (inherent variability of a deposit) is a fixed feature, the contribution by
the SNE is not fixed, and can therefore be substantially reduced, particularly if the
inherent variability of the deposit is properly understood (Dominy et al., 2003). High-
nugget deposits are very sensitive to error because they already contain intrinsic
variability. The tendency towards over- or under-estimation of such deposits is high,
and therefore understanding the sources of error which affect them is integral to
creating a representative estimate (Dominy et al., 2002).

In this dissertation the process of sampling is discussed, with particular reference to
the sources of error associated with the processes involved. The emphasis is on the
sampling that would occur during the exploration stages of a project, as later in project
development complexities are introduced which are beyond the scope of this paper. A
discussion on sampling-based error and variance would not be complete without
reference to Gy’s Theory of Sampling of Particulate Materials which attempts to
characterise the extent of error introduced and also assists in calculating how and
where error can be reduced (Gy, 1976; 1979). The field of sampling theory is
necessarily large and complex. What is dealt with here is an introduction to this field
as pertains to the SNE, and for further information the reader is directed to the authors
Gy (1976; 1979; 1982; etc.); Esbensen (2004; 2007; etc.); François-Bongarçon (2002;
2004; etc.) and Minkkinen (1987; 2004; etc.). Once the problems associated with
sampling have been introduced and illustrated using examples, the errors associated
with data integrity are addressed, with specific reference to databases and data quality.

3.1 Sampling Error


Olssen (2009) defines sampling as “the act of collecting a small volume (the sample)
from a large volume of material (the lot).” Sampling should be guided by the desire to
be representative of the lot and, as stated by Pierre Gy (1979), should be “known to be
representative of the object to be valued (orebody, shipment of ores or concentrates,
etc…) within the limits of a certain confidence interval that can be estimated and
relied upon.” He also states that it is critical to be able to determine how
representative a sample is, based on the manner and conditions prevalent during the
extraction of the sample (Gy, 1979). Sampling should aim to account for the inherent
heterogeneity of the lot and the sampling procedure should ideally aim to produce
results that are repeatable for a given lot (Dominy et al., 2002; Olssen, 2009). In
reality, an exhaustive set of samples for a given area is not available, as this is both
impractical and usually impossible to achieve; by the same token, the entire lot
cannot be sampled, and therefore only a small portion of the lot can be used to unravel
the properties of the whole (Isaaks & Srivastava, 1989). Sampling campaigns are
often limited by the finances available for a given project, and in turn, the results of
sampling will determine the financial viability of the project. Therefore ‘getting it
right’ is very important, as stated by Esbensen (2004), “non-representative sampling
always is a very costly affair, economically as well as scientifically.” Often sampling
is regarded as a simple task, and during the estimation of reserves/resources, the
errors associated with sampling are largely ignored (Dominy et al., 2002).

Sampling is a complex process from the extraction of the primary sample itself to the
analysis of a measured portion of that sample in a laboratory (aliquot), and it involves
several phases where error can be introduced. As stated by Minkkinen (1987), “an
error in sampling cannot be compensated for later, even if the most sophisticated
methods and instruments are used for the actual analysis.” What is critical is
maintaining a high level of representivity, which, if not achieved, will lead to
“erroneous deduction and conclusions” (Esbensen et al., 2007).

Within the confines of a geological framework, sampling aims to characterise a given
area in terms of its mineral potential and variability. The type of deposit will
determine the type of sampling carried out, and, with the exception of diamond deposits,
sampling will usually be centred around geochemical analyses to characterise the area
in question (Garrett, 1983). The choice of the sampling technique should be dictated
by a clear conceptual model or hypothesis of what needs to be sampled. In the case of,
for example, a quartz-gold reef, the conceptual model should indicate which
lithologies are favourable to gold mineralization. These lithologies should then be
targeted for the sampling campaign in the hope of identifying geochemical patterns
related to mineralization (Garrett, 1983).

In designing a sampling campaign the expected size of the ore body should be known
to some degree, as defined by the conceptual model (usually from an orientation
survey) and perhaps some remote technique such as a geophysical survey (magnetic,
radiometric, gravity, etc.). Once this has been established, the scale of the campaign
should also be apparent, i.e. whether it should be regional or more localised. The
sampling campaign should aim to define the dispersion halo of the mineralization
under scrutiny; whether it is defined from the indicator minerals of a kimberlite
deposit or from the geochemical halo around an epithermal gold system, defining
dispersion is critical in locating mineralization (Garrett, 1983). In defining the
dispersion of the ore body it is hoped that ‘anomalous’ and ‘background’ values (or
thresholds) will be determined, which in turn will lead to identification of the ore
body proper.

Sampling in exploration geology is usually limited to soils (usually from a particular
horizon), rocks (chips from outcrop, or from drilling, which includes core, rotary and
percussive drilling) and stream sediments, and the choice of technique depends on
which medium is being sampled (Minkkinen, 2004).

The choice of the search technique used for sampling, particularly of soils, is often
grid-based and systematic (Figure 17). Although numerous methods have been
proposed in setting up this grid, regular and evenly spaced grids are still effective in
both soil and rock sampling campaigns (Garrett, 1983). Where the target population is
not as easily accessible a ‘random stratified design’ is often used. In this method the
area is divided into cells of equal size and a single sample is drawn from each cell in a
‘random’ location within the cell (Garrett, 1983). This is particularly effective for
stream-sediment sampling and, because there is an element of randomness in the
technique, bias is not readily introduced. In a non-random ‘stratified’ sampling
procedure the sampler divides the area to be sampled into ‘strata,’ or sub-populations
based on apparent homogeneity. These ‘strata’ are then sampled independently in a
systematic fashion which is particularly effective for areas of high variability (Garrett,
1983; Minkkinen, 2004).

Figure 17: The main search techniques employed in a sampling campaign are a systematic
grid pattern (A); random-stratified (B); and non-random stratified procedure (C). Images
taken from Google image search (2009).
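
To make the distinction between these search techniques concrete, the short Python
sketch below generates sample locations for a systematic grid and for a random
stratified design. It is an illustration only: the survey dimensions, spacing and
function names are this author's assumptions and are not drawn from any of the
studies cited above.

import random

def systematic_grid(x_max, y_max, spacing):
    # Regular, evenly spaced grid (search technique A in Figure 17).
    return [(x, y)
            for x in range(0, x_max, spacing)
            for y in range(0, y_max, spacing)]

def random_stratified(x_max, y_max, cell):
    # One randomly placed sample per equal-sized cell (search technique B).
    points = []
    for x0 in range(0, x_max, cell):
        for y0 in range(0, y_max, cell):
            points.append((x0 + random.uniform(0, cell),
                           y0 + random.uniform(0, cell)))
    return points

# Hypothetical 1000 m x 1000 m survey area with a 100 m spacing/cell size.
grid_sites = systematic_grid(1000, 1000, 100)
stratified_sites = random_stratified(1000, 1000, 100)
print(len(grid_sites), len(stratified_sites))  # both designs yield 100 sample sites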

Once a sample has been collected, it undergoes a number of mass reduction phases
before finally being submitted to the lab technician who will perform the analysis of
the sample (Figure 18). The smaller samples produced from the primary sample are
known as sub-samples, and as with all sampling, these aim to be representative of the
lot in the same way that the initial sample does (Dominy et al., 2002).

Once a sample has been taken, it needs to be homogenised to ensure that when sub-
samples are taken, they are not unduly biased, and that every particle in the sample
has an equal chance of being selected for the sub-sample. In the laboratory the
sample size is reduced in association with processes such as crushing and pulping,
also referred to as comminution (Dominy et al., 2002). Contamination and poor
homogenisation are common errors that introduce variability into the sub-samples and
can significantly reduce their representivity of the lot (Dominy et al., 2002). This is
summarised by what Esbensen et al. (2007) refer to as the fundamental sampling
principal which cannot be violated. Esbensen et al. (2007) state that it is essential
that:
“All elements in a batch…have the same probability of being
selected, and that the elements selected are not altered in any way
after the sample has been taken. All elements that do not belong
to the batch or sample container must have a zero probability of
being selected – meaning e.g. cross contamination between
samples has been eliminated.”

Figure 18: Schematic illustration of the sample preparation chain and sample size reduction
phases. On the left is a sample reduction diagram showing the relationship between sample
size and weight. On the right is a summary of the typical processes a sample will go through
in the preparation chain. Figure (left) is taken from Gy (1979) and the figure on the right is
modified after Olssen (2009).

Sample preparation can be viewed as a series of mass and size reductions, and
therefore the error associated with sample preparation is directly related to the
protocol or procedure chosen (Grigorieff, 2004). Gy (1979) uses the example of drill
core to illustrate the splitting processes or sample size reductions that occur during
sampling. The campaign begins with a series of drill cores referred to as the primary
samples. These samples are then divided into sections of equal or unequal length and
the sub-samples are halved longitudinally. This core splitting stage is usually carried
out using a diamond saw or a chisel (Gy, 1979). Only one half of the ‘core sample’ is
sent for analysis, while the other is retained as reference material. During the halving
stage of sample reduction, although a mechanical device is usually used for the
splitting process, random error and bias are easily introduced. The two halves of the
core may not be the same size and core may be lost accidentally if the core is chipped.
Bias can be introduced when it comes to ‘choosing’ which half of the core is retained
and which is sent for analysis, and error can also be introduced if the equipment used
to split the core is not properly cleaned and maintained. Gy (1979) also states that the
dust generated from the splitting process should not be ignored, but evenly distributed
between the two halves to ensure that the fine fraction does not produce bias in the
final results if, for example, the critical components (the component(s) being
estimated, e.g. gold) are present in the fine fraction. Once the core has been split the
analyst needs to ensure that the sample is sufficiently dry before it is pulverised.
During this stage of sample preparation, error can be introduced if, for example, the
heating device is too hot. This may decompose some of the minerals in the sample
which will introduce bias into the results.

Once the sample has been sufficiently dried the next set of processes involves size
reduction. This is achieved by crushing, grinding and pulverising the sample in stages
from a coarse crush through to a fine grain size. Jaw crushers (crush to grain sizes of
10-5 mm), cone or roll crushers (reduce to grain sizes of 2-3 mm), disk pulverisers
(reduce to grain sizes of minus 100-150 microns) and batch vibrating mills (from
2-3 mm to below 100 microns) are usually used in combination for this process.

Weight reduction is then carried out using equipment such as riffle splitters, and
mixing is undertaken in order to attempt to reduce the grouping and segregation error.
The most commonly used mixing devices are twin-shell or Vee-type mixers which
blend for 10-60 minutes. This is followed by a final sample size and weight reduction
to produce the assay sample (Gy, 1979). Error can be introduced in a number of ways
during this process.

The main errors related to sampling are referred to as ‘correct’ sampling errors, which
cannot be removed, and ‘incorrect’ sampling errors, which can be controlled by
applying the correct design and procedures to the sampling campaign (Olssen, 2009).
Correct sampling errors are summarised as (Minkkinen, 1987; Vann, 2005; Olssen,
2009):
 Fundamental sampling error (FSE): This is essentially due to the
heterogeneity of any given lot for which sampling can never be fully
representative. The FSE can also be viewed as the smallest residual average
error achievable, which can never be totally removed. It is the theoretical
minimum error for a given sampling situation.
 Grouping or segregation error (GSE): This is due to the heterogeneous
distribution of the components of the lot and their natural tendency to group
together. When analysing a sample it is the desire of the analyst to homogenise
the sample, usually through a process of mixing. However, this is virtually
impossible to achieve and often produces the opposite effect. For example,
sticky particles such as clay will tend to coalesce and the mixing process
brings particles into contact where before they may have been spatially
separate.
 Point selection error (PSE): This refers to the heterogeneity of a flowing
sample stream for which sampling can never be fully representative.

Incorrect sampling errors include (Olssen, 2009):


 Incorrect delimitation error (IDE): This error is introduced from the
incorrect sample design process. Also incorporated in this is the increment
delimitation error, which refers to the selection of the increment used to make
up the sample.
 Incorrect extraction error (IEE): This is error produced by the incorrect
extraction of the sample from the lot and usually involves either too much or
too little material being extracted.
 Incorrect preparation error (IPE): This error is produced after extraction of
the sample, and can include contamination of the sample, loss of a portion of
the sample, oxidation or loss of water from the sample prior to analysis, or
involuntary faults due to human error.

In the core-splitting example from Gy (1979) the main sources of error would be IPE
(incorrect preparation errors), which can be subdivided into a further six categories
described in Table 7 (Gy, 1979). Recognising these sources and accepting that they
cannot all be effectively reduced is critical because, “sampling error is avoided only if
the material to be sampled is perfectly homogenous, or the whole lot is taken as the
sample, which are conditions never met in practice” (Minkkinen, 1987). However,
incorrect sampling errors can be mitigated to some degree. As described in Table 7,
human error is a critical factor during all stages of sample preparation. Human error
comes into play during the labelling of samples, identifying the correct sample sites,
transcribing of data, sample handling, transport, etc. Human error can, however, be
introduced at any stage during the sampling process, and in addition to the preparation
errors presented in Table 7, Dominy et al. (2002) identify several other sources of
human error, including:
 Unsuitable drill method or mixed drilling techniques;
 Inappropriate drill hole inclination relative to ore body dip;
 Variable diameter/volume of the core or sample;
 Poor core/sample recovery and quality;
 Contamination through physical sampling process, e.g. slumping of hole; and
 Selection criteria for sample length.
It is important to note that sampling errors are cumulative, or additive, over every
stage of the process from extraction to analysis and effectively produce a total
sampling error (Figure 19) (Olssen, 2009).
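
As a minimal illustration of this additivity, the short Python sketch below combines
hypothetical relative errors for each stage of a sampling campaign. The stage names
and values are invented for illustration, and the calculation assumes the stage errors
are independent, in which case it is the relative variances, not the standard
deviations, that add.

# Hypothetical relative standard deviations (as fractions) for each stage.
stage_rsd = {"primary sampling": 0.10, "splitting": 0.05,
             "pulverising": 0.03, "assaying": 0.02}

# Assuming independent errors, the relative variances are additive.
total_variance = sum(rsd ** 2 for rsd in stage_rsd.values())
total_rsd = total_variance ** 0.5
print(f"total sampling error = {total_rsd:.1%}")  # ~11.7%, dominated by the primary stage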

Figure 19: Based on the origin of the error, Gy has classified the main sampling errors into
two main groups: ‘incorrect’ sampling errors which are based on poor design or operation of
the sampling equipment, and ‘correct’ sampling errors which are statistical. Figure taken from
Minkkinen (2004).

Table 7: Summary of the main sources of error introduced during the sample preparation stages of an analysis. Tabulated from Gy (1979).
Contamination
  Dust: Materials are usually dry and contain fine particles. Contamination in this context can occur between samples or from an external source.
  Material present in the sampling circuit and equipment: Poor cleaning of equipment between samples, especially between samples of different grades, can introduce bias.
  Abrasion: Minute particles of material abraded from the equipment can be introduced into the sample.
  Corrosion: Certain aggressive materials can cause corrosion of both the sample and the sample reduction equipment. Examples: acid flotation pulps, potash and hydrometallurgical pulps or solutions.
  Salting: Refer to ‘fraud or sabotage’.
Loss of material
  Fines as dust: Especially when a sample is subject to free fall, this loss of fines can be significant if the mineralization is also fine.
  Material remaining in the preparation or sampling circuit: If the sample preparation equipment is not cleaned and the recovered material added back to the sample to which it belongs, then loss of material occurs.
  Loss of a certain fraction: This can happen when oversize is discarded during the screening process which precedes assaying, particularly if it is done manually.
  Deliberate loss of a fraction: Refer to ‘fraud or sabotage’.
Alteration of the chemical composition of the sample
  Addition or fixation: Error can be introduced through the oxidation of sulphides or the fixation of water or carbon dioxide by calcined minerals.
  Subtraction or elimination: This can occur through the elimination of carbon dioxide or combined water by overdrying.
Alteration of the physical component of the sample
  Addition or creation of the critical component: This refers to sampling for moisture or size analysis. For moisture analysis this can happen if the sample is not protected from accidental water additions in a damp atmosphere. In relation to size analysis, non-critical components can be transformed into critical components, especially if the latter are fine and the former coarse; coarse particles can be reduced through breakage by any process which involves free fall or grinding of the ore, especially during sample transfers and loading. This problem is virtually impossible to eradicate totally.
  Subtraction or destruction of the critical component: Moisture samples kept near a heat source prior to analysis will introduce error; breakage of the critical component when it is the oversize to a given mesh, and the drying of sulphur ores and concentrates in an oven, can also destroy the critical component.
Unintentional mistakes
  Sampling operators: Mistakes due to ignorance, carelessness, awkwardness, lack of experience, etc. This includes dropping samples, mixing sub-samples from different samples and various other mistakes, which can be controlled by protocol but never fully eliminated.
Fraud or sabotage
  Commercial sampling: This includes salting and tampering with equipment, which can never fully be dealt with by automating processes.

From the perspective of the analyst, the objectives of sampling can be viewed
statistically as: precision, accuracy and non-bias (Figure 20). When reviewing the
sampling data there should be a narrow spread of values with good repeatability
(precision), the average of the samples and the estimated value of the lot should be
close together (accuracy), and the difference between these values should be low (non-bias).

Figure 20: Relationship between precision, accuracy and bias using the analogy of the target.
In this example the bulls-eye or centre of the target represents the ideal situation where there
is no bias and the result is accurate and precise. Figure taken from internet source [2].

In a more practical sense, the objective of effective and representative sampling is to
ensure that sampling is systematic in approach and reproducible in terms of results.
Bias can be introduced through convenience on the part of the sampler, whereby
certain samples are omitted and others included based on, for example, ease of
extraction; or bias can be introduced through what are referred to as ‘judgement
samples’, where samples are deliberately selected because they are judged to be
representative of the lot (Lohr, 1999). Vann (2005) defines the objectives of sampling as ensuring
that:
 All sample material is delivered to the surface
 The sampled material is non-biased. This can be done statistically by making
sure that the average grade of any sub-sample is the same as the grade of the
sampled lot.
 ‘Maximum precision’ is achieved by minimising the squared difference
between grades of duplicate samples. This can, however, never be reduced to
zero unless the whole lot is sampled.

In achieving the objective of non-bias set above, the sampling design needs to be
applicable and appropriate for the lot being sampled, and the removal of this bias is
usually done through the implementation of rigorous QA/QC. The term QA/QC
stands for quality assurance (QA) and quality control (QC) and refers specifically to
the laboratory or analytical techniques employed to evaluate samples (Dominy et al.,
2002). Dominy et al. (2002) state that, “quality assurance consists of the overall
policy established to achieve the orientation and objectives of an organisation
regarding quality. Quality control designates the operational methods, and aims used
to meet the quality objectives. The four key steps of QC are: setting standards,
appraising conformance, acting when necessary, and planning for improvements”
(Figure 21).

Figure 21: ISO/IEC 17025 Requirements for Analytical Laboratories. Taken from internet
source [3].

The purpose of QA/QC is to ensure that the analytical results obtained from sampling
are precise, accurate and unbiased (Olssen, 2009). This is achieved through the use of
duplicates, standards and blanks which are measured against the results obtained from
the laboratory. Duplicates are good measures of precision, and should be taken at
every weight-reduction stage of the sampling process, while blanks are used to
measure contamination in the laboratory (Olssen, 2009). Standards can be viewed as
reference materials which have known values and variability, and which can be used
to assess analytical accuracy and bias (Olssen, 2009). In general this is enforced by
the industry with the Canadian National Instrument 43-101 requiring mandatory
QA/QC programmes, and the JORC Code (1999) which implies the need for QA/QC
for sampling/assaying data in its Table 1. However, it is in the auditing processes of
evaluating the data used to generate resource/reserve estimates that this really
becomes important (Dominy et al., 2002).
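
A minimal sketch of how such QA/QC data might be screened is given below (in
Python). All of the assay values, the certified reference value and the acceptance
thresholds are hypothetical, and real programmes apply more elaborate acceptance
criteria than these two simple checks.

# Duplicate pairs (original, duplicate) in g/t -- hypothetical values.
duplicate_pairs = [(2.10, 2.30), (0.85, 0.80), (5.60, 4.90), (1.20, 1.25)]

# Precision check: absolute relative paired difference for each duplicate pair.
for original, duplicate in duplicate_pairs:
    arpd = abs(original - duplicate) / ((original + duplicate) / 2.0)
    print(f"duplicate pair ({original}, {duplicate}): {arpd:.1%} relative difference")

# Accuracy/bias check: compare standard (certified reference material) assays
# against the accepted value, here using a hypothetical +/- 2 standard deviation window.
certified_value, certified_sd = 3.05, 0.08
for assay in [3.10, 2.95, 3.30]:
    status = "fail" if abs(assay - certified_value) > 2 * certified_sd else "pass"
    print(f"standard assay {assay}: {status}")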

In practical terms what is usually adhered to is referred to as ‘best practice’ principles,
a term which is often vague and unclear. “Firstly, ‘best practice’ is not always best;
secondly, with evolving technology, the ‘best practice’ of today is not that of
tomorrow; and finally, to potentially hold up during legal proceedings, the ‘best
practice’ needs to be published (e.g., in a formal handbook). We should perhaps be
striving for ‘adequate professional practice’ as a minimum requirement” (Vallée,
2000 & Abbott, 2003, reported in: Dominy et al., 2002). Part of the process of
ensuring that a given sample is representative is accepting that a certain amount of
error will always be present in the sample results, and that, “it is better to be
approximately right, than being precisely wrong” (John Tukey, reported in: Esbensen
et al., 2007), when dealing with the balance between the precision of the analysis and
the accuracy of the sampling process. Assessing the errors associated with sampling is
best done through the application of Gy’s sampling theory.

The purpose of sampling theory is to determine when a sample is representative of a
given lot, and this is done by establishing a set of criteria or rules that govern the
location from which a sample is extracted, how it is extracted, and how error can be
reduced by the criteria established (Gy, 1979). Referring back to the objectives of
sampling, it is evident that the squared difference between grades of duplicate
samples will never be zero, but can be minimized to reduce the sampling variance.
Gy’s (1979) sampling theory essentially provides a calculation that can be used in
order to determine what the minimum achievable error (FSE) will be in terms of the
squared error for any particulate sampling design, but can also be used in reverse,
where a given acceptable error is applied to the formula to determine the best
sampling design in terms of sample mass and grain-sizes (Vann, 2005). Gy’s Theory
of Sampling (TOS) is a heavily mathematical model, and understanding it has proved
to be a difficult task, as stated by Assibey-Bonsu, “Gy’s theory has been found to
have some limitations in its implementation, which are mainly due to the
misapplication of the model” (Assibey-Bonsu, 1996, reported in: Pitard, 2004). In
recent years the model has been refined to suit its more practical applications in
industry, particularly by François-Bongarçon (1991-1998, reported in: François-
Bongarçon & Gy, 2002), who has contributed to the theory gaining acceptance in
recent years. However, as the experts are struggling with this concept which has been
developing over the last fifty years, this author cannot hope to do more than touch on
some of its more salient points as they relate to the SNE. It should be noted that Gy’s
Sampling Theory is necessarily linked to geostatistics and the nugget effect, based on
the contribution of sampling error (through design or procedure) to the total nugget
effect (François-Bongarçon, 2004).

TOS allows the sampler to estimate the minimum sampling masses required to
achieve an acceptable amount of sampling error/variance (expressed as a standard
deviation, usually around 10%, where S_FSE^2 = 0.01). The formula is commonly
expressed as (François-Bongarçon & Gy, 2002):

S_FSE^2 = (1/M_S - 1/M_L) f g c l d^3     [1]

Where,
S_FSE^2: sampling relative variance
M_S: mass of the sample
M_L: mass of the lot
f: particle shape factor/constant, usually set to 0.5
g: size range factor/constant, usually set to 0.25
c: mineralogical factor, ~ratio of metal density to the dimensionless grade of the lot
(in ‘per unit’)
d: nominal size of the rock fragments (mesh size of screen which rejects 5% of
material)
l: liberation factor

When the mass of the sample is substantially less than the mass of the lot (M_S << M_L),
equation [1] can be further simplified to (François-Bongarçon & Gy, 2002):

S_FSE^2 = f g c l d^3 / M_S     [11]
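
Written out as code, equation [11] and its rearrangement for sample mass are
straightforward. The Python sketch below is this author's illustration, using the
default factor values quoted above; the function and argument names do not come
from the cited literature. Here c is the mineralogical factor and l the liberation factor,
masses are in grams and the nominal fragment size d is in centimetres.

def fundamental_sampling_error(sample_mass_g, d_cm, c, l, f=0.5, g=0.25):
    # Relative variance of the FSE, S_FSE^2 = f g c l d^3 / M_S (equation [11]).
    return f * g * c * l * d_cm ** 3 / sample_mass_g

def minimum_sample_mass(target_rsd, d_cm, c, l, f=0.5, g=0.25):
    # Equation [11] rearranged for the sample mass (in grams) that achieves a
    # target relative standard deviation, e.g. 0.10 for 10%.
    return f * g * c * l * d_cm ** 3 / target_rsd ** 2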

The particle shape factor, f, can be viewed as the coefficient of cubicity (Figure 22).
Minkkinen (1987) defines this as, “the ratio of the volume of a particle passing
through a certain sieve to the volume of a cube passing the same sieve. For a
particular material, it can be regarded as a constant irrespective of the size class: for a
cube f = 1, and for a sphere f = 0.524.” Experiments have shown that a default value
of 0.5 can be used for most spheroidal particles, while flaky particles such as alluvial
gold have a value of f = 0.2 (Minkkinen, 1987; 2004).

Figure 22: Estimation of the particle shape factor f is carried out through comparing the
particle’s critical dimension d with a cube of the same side length. Figure taken from
Minkkinen (1987).

The size range factor, g, is calculated based on a particle size analysis of the material.
For particles with a wide size distribution, e.g. for unsieved crushed ore, this value
will be close to g = 0.25 (often used as the default value), while for uniform particle
size distributions this value would be g = 1 (Grigorieff et al., 2004; Minkkinen, 2004).
“If the size range is defined as a ratio, d/d1, of the upper size limit, d (about 5%
oversize), to the lower size limit, d1 (about 5% undersize), then the value of g can be
estimated from Table (8)” (Minkkinen, 1987).

Table 8: Estimation of the particle size distribution factor g based on the relationship between the
upper size limit (d) and the lower size limit (d1). In sieving, these correspond to the opening of sieves
which retain 5% and 95% of the material tested. Table taken from Minkkinen (1987).

The main difficulties associated with this formula arise from the liberation factor, l,
which can be linked directly back to the nugget effect. The liberation factor, or
liberation size, “is the nominal size at which the fragments of the lot must be
comminuted so that the mineral grains become fully liberated from their gangue, i.e.
each fragment in the lot is either pure metal or pure gangue” (François-Bongarçon &
Gy, 2002). In reality it is often not possible to liberate all the particles of interest from
a sample, due to the inherent heterogeneity or geological nugget effect of many
deposits, such as hydrothermal gold deposits, where gold is intimately associated with
many gangue minerals. The liberation factor is a function of particle location, shape
and size, rather than the number of particles present (François-Bongarçon & Gy,
2002; Dominy et al., 2003) (Figure 23).

Figure 23: Schematic illustration of some of the textural relationships between gold and other
(gangue) minerals. The examples are taken from
Getchell Mine in Nevada, USA, which is a
Carlin-type deposit from an epithermal system.
Figure modified after Bowell et al. (1999).

The liberation-factor calculation has caused much of the misapplication of Gy’s
model seen in industry today, and it was therefore refined by François-Bongarçon and
Gy (2001) from:

l = (d_l/d)^0.5 = SQRT(d_l/d)     [2]

to:

l = (d_l/d)^b     [3]

Where,
d_l: the liberation size of the critical component, as described above
b: an additional parameter adjusted on the basis of experimental results. For gold a
default value of 1.5 can be used.

The equations [2] and [3] are compatible, and [2] has been tested over a number of
years in the precious metals industry to ensure its validity (François-Bongarçon & Gy,
2002). These equations have, however, been misinterpreted by industry and have
contributed to the general disregard awarded to the TOS and added to the prevalence
of error incurred through sampling, which ultimately produces higher than necessary
nugget effects (Lyman, 1998; Dominy et al., 2002; François-Bongarçon & Gy, 2002).
In the paper by François-Bongarçon and Gy (2001), the erroneous calculations for l
are explored in the following example:

“If we take the case of blast-hole sampling in a gold mine, with
2" fragments (d = 1.27 cm), a grade of 1 ppm Au (= 10^-6), a
density of 19.3 g/cm^3 for pure gold, standard values of 0.5 and
0.25 for factors f and g, and a liberation size of 10 microns for
gold (= 10^-3 cm, i.e. very fine gold), formula [11] and model [2]
can be combined to derive the sample mass required to achieve a
standard deviation of 10% or better (i.e. S_FSE^2 = 0.01) as follows:
With: S_FSE^2 = f g c l d^3 / M_S
And: l = (d_l/d)^0.5
M_S = f g c (d_l/d)^0.5 d^3 / S_FSE^2
= f g c d_l^0.5 d^2.5 / S_FSE^2
= 0.5 x 0.25 x (19.3/10^-6) x (10^-3)^0.5 x 1.27^2.5 / 0.01
= 13.9 x 10^6 grams or 13.9 tonnes
This result is absurd as a minimum sample weight, and probably
exceeds the total mass of the blast hole pile which is probably
around 400kg.”

The same formula, but with equation [3] and a default value of 1.5 for b used to
calculate l, gives a mass of 11 kg and a liberation size of 16 microns respectively,
which are much more acceptable and agree well with industry standards for gold
mining practices (François-Bongarçon & Gy, 2002). Pitard (2004) points out that
although sampling theory has resolved some of the aspects of the liberation factor, the
occurrences of gold (as illustrated in Figure 23) may be sub-microscopic, in which
case the liberation factor will never be reached.
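
The contrast between liberation-factor models [2] and [3] can be checked
numerically. The self-contained Python sketch below repeats the blast-hole
calculation quoted above and the corrected result; the variable names are this
author's.

# Blast-hole example: d = 1.27 cm fragments, 1 ppm Au, gold density 19.3 g/cm^3,
# liberation size 10 microns (0.001 cm), target relative standard deviation 10%.
f, g = 0.5, 0.25
c = 19.3 / 1e-6            # mineralogical factor ~ density / dimensionless grade
d, d_l = 1.27, 1e-3        # nominal fragment size and liberation size, in cm
target_variance = 0.10 ** 2

# Old model [2]: l = (d_l/d)^0.5 gives the absurd minimum sample mass.
mass_old = f * g * c * (d_l / d) ** 0.5 * d ** 3 / target_variance
print(f"model [2]: {mass_old / 1e6:.1f} tonnes")   # ~13.9 tonnes

# Revised model [3] with b = 1.5 gives a workable minimum sample mass.
mass_new = f * g * c * (d_l / d) ** 1.5 * d ** 3 / target_variance
print(f"model [3]: {mass_new / 1e3:.0f} kg")       # ~11 kg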

Another use of TOS, its application in evaluating current practices using formula
[1], is illustrated by the example in François-Bongarçon and Gy (2001):

“Let us take the case of an ore in which the metal of interest
comes from a known sulphide of density of 5 g/cm^3 and metal
content 25%, which liberates almost entirely at 120 microns. We
are asked to decide whether a 100g split is enough to reasonably
sample material crushed at 5 mm when the grade of interest is
0.1% metal. Since we know nothing more, and have no
calibration available, we will calculate the sampling variance
using equations [11] and [3] assuming the extreme and least
favourable value of 1 for b in [3]:

S_FSE^2 = f g c l d^3 / M_S
With l = d_l/d this becomes:
S_FSE^2 = f g c d_l d^2 / M_S
As before f = 0.5 and g = 0.25. A metal of grade 0.1%
corresponds to a mineral grade of 0.4% or 0.004. The variance
follows:

S_FSE^2 = f g c d_l d^2 / M_S = 0.5 x 0.25 x (5/0.004) x 0.012 x
0.5^2 / 100
= (6.85%)^2

We have conservatively assumed a value of 1 for b. With a
higher value, the calculated variance will improve. Since a
relative standard deviation of less than 7% is within reason, the
proposed sampling is perfectly acceptable.”
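
This check is equally easy to script; the short sketch below reproduces the 6.85%
figure, with the variable names again being this author's.

# Sulphide-ore example: 100 g split of material crushed to 5 mm (0.5 cm),
# mineral density 5 g/cm^3 carrying 25% metal, liberation size 120 microns,
# grade of interest 0.1% metal.
f, g = 0.5, 0.25
mineral_grade = 0.001 / 0.25              # 0.1% metal -> 0.4% mineral (0.004)
c = 5 / mineral_grade                     # mineralogical factor
d, d_l, sample_mass = 0.5, 0.012, 100.0   # cm, cm, grams

l = d_l / d                               # least favourable case, b = 1 in model [3]
variance = f * g * c * l * d ** 3 / sample_mass
print(f"relative standard deviation = {variance ** 0.5:.2%}")  # ~6.85%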

In this way sampling theory either determines the variance/error associated with a
given stage in the sampling chain or, if the desired precision is known, determines
the optimal sample mass and grain size, and thereby informs the procedure
selected for sampling (Grigorieff et al., 2004). Pitard (2004), however, warns that,
“tests claiming that all these residual errors have been eliminated or made negligible
should raise suspicion.” In terms of the IDE error is largely increased due to practical
considerations, therefore making it very difficult to reduce, even through using the
correct equipment and appropriate protocol. Examples of this include grab or spear
sampling from blast holes or scooping pulp directly from the bag when collecting the
analytical sub-sample (Pitard, 2004). In terms of the IEE there will always be some
incremental loss or gain of material which produces a non-constant bias into the
results. This change in the material mass may be due to the design or use of sampling
equipment, but can also be attributed to factors such as gravity. The GSE suffers from
similar difficulties as it is not a constant and is largely discounted on the basis of
mixing (homogenisation) which is thought to negate it.

Analytical error is also transient in nature and, “may greatly change from one day to
another, as it depends on the physical and mental state of the analyst, the ‘state of
grace’ of all the analytical equipment and chemical reagents, and the changing matrix
of samples” (Pitard, 2004). Managing and mitigating these types of error becomes a
tedious process of rigorous protocols which may limit this variance, but the resources
needed to implement the necessary changes must be weighed against the overall
improvement in the sampling variance.

The correct use of the TOS can help to identify the stages where error is introduced and
where it can be reduced, and can also ensure that valuable time and money are not
wasted on poor sampling campaigns. This topic is further discussed in the last section of
this report. The actual percentage contribution of the sampling component to the overall
nugget effect has not been determined, only the minimum sampling variance for a given
deposit type. In the literature reviewed by this author, the sampling component of the
nugget effect is usually considered only as far as the assay process. However, data
variance can also be introduced during database construction and management, which
are carried out after the analytical results have been received.

3.2 Databases and data quality


Once sampling has been carried out and the results from any analytical procedures are
available, this information is usually entered into a database where it can be further
analysed and interpreted. However, as stated by Isaaks and Srivastava (1989),
“information extracted from a data set or any inference made about the population
from which the data originated can only be as good as the original data.” In this case
the ‘original data’ refers to the data entered into the database which would be used in
both a classical statistical and geostatistical analysis of the lot. Dominy et al. (2002)
refer to these as database construction errors which traditionally fall under the broad
umbrella of ‘estimation uncertainty.’ Both the original data and the
‘validated/accepted’ data can introduce error through (Dominy et al., 2002):
 Data transcription
 Database compilation
 Magnetic versus true north
 Co-ordinate transformations
 Downhole survey errors
 Missing intervals and/or inconsistent downhole stratigraphy, inconsistent
lithology coding
 Treatment of absent and below-detection-limit values
 Significant reporting figures and scales (e.g. percent and ppm)
 Selection of ‘acceptable’ values from repeat assays
 Merging of lithological and assay intervals
 Drill hole unrolling and de-surveying
 Data sub-division
 Grade compositing
 Bulk density determinations
 Grade weighting
 Data correction
 Inclusion of incompatible sample data sets

In the same way that samples should be representative of the lot, the database should
be an accurate representation of the data that has been collected (Olssen, 2009). This
is not necessarily a type of error that can be quantified. However, if regular checks are
not made to ensure data integrity, accuracy and quality are preserved, then any
subsequent analysis of the ore body becomes erroneous.
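As an illustration only, some of these checks can be automated. The sketch below assumes a simple drill-hole assay table with hypothetical column names (hole, from_m, to_m, au_ppm) and flags a few of the issues listed above.

    import pandas as pd

    # Sketch: routine database checks for missing or overlapping sample intervals and
    # for values below the detection limit (column names are hypothetical).
    def check_assay_table(assays: pd.DataFrame, detection_limit: float) -> pd.DataFrame:
        issues = []
        for hole_id, hole in assays.sort_values(["hole", "from_m"]).groupby("hole"):
            gaps = hole["from_m"].values[1:] - hole["to_m"].values[:-1]
            if (gaps > 0).any():
                issues.append((hole_id, "missing interval(s)"))
            if (gaps < 0).any():
                issues.append((hole_id, "overlapping interval(s)"))
            if (hole["au_ppm"] < detection_limit).any():
                issues.append((hole_id, "value(s) below detection limit"))
        return pd.DataFrame(issues, columns=["hole", "issue"])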

Once the data have been evaluated to determine the contribution of the contained SNE
and any possible error associated with database management, the data need to be
analysed in order to begin characterising and describing the deposit. The first step
toward achieving this is a thorough understanding of the geology of the area, followed
by both a classical statistical analysis of the sample population(s) and a spatial
statistical analysis of the body. How to recognise a high nugget effect during each of
these phases is described below; this can assist greatly in achieving the required level
of confidence in the estimation of these deposit types.

4.0 Data analysis


In the previous sections the components of the nugget effect were discussed, namely
the spatial variability introduced into a deposit through either its inherent geological
features, or through inappropriate sampling techniques and design. In this section the
manifestations of this variability are introduced. As stated by Rendu (1981), “to
determine the characteristics of a mineral deposit, the usual practice is to take
samples, analyse the properties of those samples and infer the characteristics of the
deposit from these properties. This analysis can be done using statistical
methods…classical statistics and spatial statistics (geostatistics).” The main difference
between these two approaches is that classical statistical methods assume that all the
sample values are essentially realisations of a random variable and each sample value
has an equal probability of being selected regardless of the relative position of the
sample value (Rendu, 1981). In a spatial statistical approach the sample values are
considered to be the realisations of random functions. “On this hypothesis, the value
of a sample is a function of its position in the mineralization of the deposit, and the
relative position of the sample is taken into consideration. The similarity between
sample values is quantified as a function of the distance between samples and this
relationship represents the foundation of spatial statistics” (Rendu, 1981).

In this section the classical and spatial statistical analyses of an ore body are
introduced with emphasis on the identification of a deposit subject to a high nugget
effect. A complete geostatistical analysis of a given deposit cannot be conducted
without the information contained within a classical statistical analysis, and both
should always refer back to the geological hypotheses postulated for the area in
question (Journel & Huijbregts, 1978). Both approaches are purely descriptive and
use the sample data set to characterise an ore body using robust mathematical
methods (Isaaks & Srivastava, 1989).

4.1 Classical Statistical Analysis


In the previous section the geological component of deposits characterised by a high
nugget effect was reviewed, namely for quartz-gold reefs, placer deposits and
kimberlites. In this section the identification of the nugget effect during statistical
analyses of the ore body is addressed. The classical statistical analysis of an ore body here refers to
the basic descriptions of the data set, which do not include any spatial data or
variables (Isaaks & Srivastava, 1989). In a classical statistical analysis of an ore body,
a high nugget effect is best described as the presence of ‘erratic high grades/values’
(Dominy et al., 2001).

In the resource estimation process, once the geology of the area in question has been
interpreted and its processes of formation and mineral distributions constrained, the
process of domaining is undertaken. Controls on mineralization are established, as stated
in the previous section, and areas thought to represent single statistical populations are
defined. In order to ensure that the domains do represent single populations, statistical
analyses of the domains are carried out (Journel & Huijbregts, 1978; Isaaks &
Srivastava, 1989; Olssen, 2009).

There are two main methods for describing statistical populations which are
conducted prior to considering the spatial distributions of datasets (geostatistics).
These are (Journel & Huijbregts, 1978; Isaaks & Srivastava, 1989; Olssen, 2009):
 Measures of central tendency
 Measures of spread

The measures of central tendency are (Journel & Huijbregts, 1978; Isaaks & Srivastava,
1989; Olssen, 2009):
 The mean: the arithmetic average sample value (sum of sample values /
number of samples)
 The median: the middle value, or 50th percentile (data is sorted in ascending
order to find the middle value)
 The mode: most common or frequently occurring sample value (highest peak
in the histogram)

In data sets where a high nugget effect is suspected, an analysis of the measures above
can help confirm its presence. The reliability of the mean can be
determined by comparing it to the inter-quartile range, as the latter does not use the
mean as the centre of its distribution. If the mean shows a preference for the higher
end of the inter-quartile range then the population is most likely subject to a high
nugget effect. This is because the mean is sensitive to erratic high values which will
strongly influence this value and bias the mean towards the higher grades (Isaaks &
Srivastava, 1989; Olssen, 2009). Although the mean is sensitive to erratic high values,
the median is not, though both values are a measure of the centre of the distribution. A
high nugget effect is not easily identified from the median, as it measures only the
mid-point in the data set (Isaaks & Srivastava, 1989). The mode is a slightly more
complicated measure of central tendency as it depends largely on the precision of the
data values. If the data is being presented to one decimal place, the mode will be
different than if the data were presented to three decimal places (Isaaks & Srivastava,
1989). A high nugget effect is not readily seen in terms of the mode.

The measures of spread are (Isaaks & Srivastava, 1989; Olssen, 2009):
 The range: the difference between the highest and lowest values (maximum
value - minimum value)
 The inter-quartile range: difference between the 75th and 25th percentiles
(like the median, the data is sorted into ascending order to identify the 25th and
75th percentiles)
 The variance: the average squared difference between the sample values and
their mean (sum of [sample value - mean value]² / [number of samples - 1])
 The standard deviation: square root of variance
 The coefficient of variation: also known as the relative standard deviation, it
compares the variability of datasets and describes the shape of the distribution
(standard deviation / mean)

The inter-quartile range has already been discussed as allowing the analyst to assess
the mean. However, the range can also be used to assess the impact of the nugget
effect. Purely by assessing the minimum and maximum values in a dataset, the analyst
can immediately gain insight into whether erratic high values are going to present a
problem during the evaluation process (Isaaks & Srivastava, 1989). Variance is
particularly sensitive to erratic high values as it involves squared differences and the
mean. For deposits with high nugget effects the variance is generally quite high, and
as the standard deviation is the square root of variance, this value is also generally
high for these types of deposits. The coefficient of variation (CV) is usually the best
indicator of a highly erratic deposit, as defined by a CV of 1 or more (Isaaks &
Srivastava, 1989; Kerry & Oliver, 2007). If the CV is less than 1 then the deposit is
trending towards a more normal distribution or spread of data (Isaaks & Srivastava,
1989). All of the statistical methods described above are essentially mathematical,
whereby a set of figures are produced and analysed to unravel the characteristics of
the deposit. However, these concepts can also be represented graphically (Isaaks &
Srivastava, 1989; Olssen, 2009).
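As a minimal illustration, the measures above can be computed together and a warning raised when the CV reaches 1, the threshold cited above; the structure of the sketch is otherwise arbitrary.

    import numpy as np

    # Sketch: summary statistics used to screen a domain for erratic, high-nugget behaviour.
    def summarise(grades: np.ndarray) -> dict:
        mean, median = grades.mean(), np.median(grades)
        std = grades.std(ddof=1)                      # sample standard deviation
        q25, q75 = np.percentile(grades, [25, 75])
        return {
            "mean": mean,
            "median": median,
            "range": grades.max() - grades.min(),
            "inter_quartile_range": q75 - q25,
            "cv": std / mean,                         # coefficient of variation
            "positively_skewed": mean > median,       # mean dragged up by erratic high values
            "likely_high_nugget": std / mean >= 1.0,  # CV of 1 or more
        }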

Graphical representations of the data are used to determine the population
characteristics for a given data set (Journel & Huijbregts, 1978; Isaaks & Srivastava,
1989; Olssen, 2009). In both domaining and most estimation methods, the objective is
to ensure that the deposit is divided into single statistical populations of geological
homogeneity (Olssen, 2009). Histograms, cumulative distribution functions and
probability plots are the main graphs used for this process (Figure 24) (Journel &
Huijbregts, 1978; Isaaks & Srivastava, 1989; Olssen, 2009).

In a frequency table, the data set is divided into a series of classes or intervals, and
how often observed values fall within these classes is recorded. The histogram is a
graphical representation of this information in the form of a bar graph where, “the
height of each bar is proportionate to the number of values within that class” (Isaaks
& Srivastava, 1989). A histogram can be used to effectively determine whether or not
mixed populations are present, and whether these populations show a normal or
skewed distribution. In a standard histogram, mixed populations are identified as
multiple peaks on the histogram (Figure 25), while single populations should show
only one peak in the data (Olssen, 2009).

The distribution of the data is essentially an indication of the relationship between the
mean, median and mode where the population is described as (Olssen, 2009):
 Normal if: mean = median = mode
 Positively skewed if: mean > median > mode
 Negatively skewed if: mean < median < mode

In earth sciences the common practice is to rank data in ascending order to produce
cumulative frequency tables above a lower limit (Isaaks & Srivastava, 1989). As
described by Olssen (2009),
“A cumulative distribution function (CDF) is an accumulated
histogram where proportions of samples below each grade
threshold (cumulative probability) are plotted against that grade.
CDF’s can be generated by sorting the data in ascending order,
calculating the percentile values for each sample and plotting the
percentiles against sample grades. The percentile is simply the
relative position of the grade, for example, the 10th percentile has
10% of the samples being lower grade and 90% being higher
grade. CDF’s are ‘S’ shaped when the data is not skewed. CDF’s
for negatively skewed data are steep at the high grade end while
CDF’s for positively skewed data are steep at the low grade end.”

Figure 24: Graphs showing the main styles of population distribution, namely: normal,
negatively skewed and positively skewed population distributions. The populations are
illustrated using histograms, CDF’s (cumulative distribution functions) and probability plots.
Figure modified after Olssen (2009).

Figure 25: An example of multiple sample populations taken from the Panda kimberlite pipe
on the Ekati Property, Lac de Gras kimberlite field in the east-central portion of the Slave
Province in the Northwest Territories of Canada. Figure modified after Dyck et al. (2004).

Probability plots are a form of cumulative frequency plots (same as CDF’s) which
attempt to define how close the population distribution is to a normal, or Gaussian,
distribution (Isaaks & Srivastava, 1989). This is achieved by adjusting the CDF’s
probability scale so that the cumulative frequencies will plot as a straight line (1:1) if
the population is Gaussian or normally distributed (Isaaks & Srivastava, 1989; Olssen,
2009). Populations with a high nugget effect usually fall into the ‘positively skewed’
population distribution category as they are generally low grade with a small
number of very high values (Isaaks & Srivastava, 1989).

In the examples discussed above a univariate description of the dataset was presented
where each sample is considered individually. In a bivariate analysis, the relationship
between two variables is described and, by the same token, in a multivariate statistical
analysis, numerous variables are considered (Isaaks & Srivastava, 1989; Olssen,
2009). As with univariate statistics, graphical presentations of the data are the most
useful in identifying relationships between variables in a multivariate analysis
(variables are also referred to as attributes for which several may be present in the
domain being analysed) (Isaaks & Srivastava, 1989).

Scatter-plots are generally the preferred method for displaying bivariate data as these
compare directly the values between two variables. Such plots can be used to validate
the original dataset, and can be used to identify potential outliers (or erratic high
values). In the example seen in Figure 26 a total of 100 data pairs have been plotted.

Figure 26: Scatter-plots of 100 U versus V values. The actual 100 data pairs are plotted in (a)
while in (b) the V value indicated by the arrow has been plotted as 14ppm instead of 143ppm.
The purpose of graph (b) is to illustrate the usefulness of scatter-plots in detecting errors in
the data. Figure taken from Isaaks and Srivastava (1989).

In the first image (a) there is a general correlation between the data pairs with only
one pair plotting away from the rest. The general spread of the data, or the data trend,
suggests that this point (V = 143ppm; U = 55ppm) is not due to error as it follows the
trend. Graph (b) plots the same sample pairs; however, for the
isolated pair in the top right-hand corner of graph (a) the value for V was entered as V
= 14ppm instead of 143ppm (Isaaks & Srivastava, 1989). The recognition of an
‘unusual data pair’ is very useful when checking a dataset for error. These errors are
usually linked to the collection or recording of data, but outliers can also suggest a
potentially high nugget effect (Isaaks & Srivastava, 1989). Another way of
determining the extent of the relationships between variables is to consider the
correlation coefficient (r) which measures the linear correlation between attributes or
variables. If r is 1 then a perfect positive linear correlation exists, while if r is -1 a
perfect negative correlation exists. As r trends towards 0 correlation declines, as is the
case for non-linearly correlated variables, or variables which have outliers (Isaaks &
Srivastava, 1989; Olssen, 2009). The latter would be the case for high nugget effect
deposits where r ~ 0.
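The sensitivity of r to a single erratic value can be demonstrated with synthetic data; the sketch below is purely illustrative and does not reproduce the published example.

    import numpy as np

    # Sketch: a single erratic value off the trend pulls the correlation coefficient towards 0.
    rng = np.random.default_rng(0)
    u = rng.uniform(0, 50, 100)
    v = 2.0 * u + rng.normal(0, 5, 100)            # broadly linear relationship between U and V

    r_clean = np.corrcoef(u, v)[0, 1]              # close to +1
    v_outlier = v.copy()
    v_outlier[np.argmin(u)] = 100 * v.max()        # one extreme value placed well off the trend
    r_outlier = np.corrcoef(u, v_outlier)[0, 1]    # collapses towards 0
    print(round(r_clean, 2), round(r_outlier, 2))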

Using classical statistical methods skewed populations can be identified, and the
presence of a high nugget effect detected. The problems associated with skewed
populations are most readily seen when small datasets are used to represent the
population during the estimation process (Olssen, 2009). “If the data is positively
skewed, then it is likely most of the samples will be relatively low grade and a small
number of samples will have relatively extreme grades. The estimated grade will be
biased by the extreme grade, which may not be a true reflection of the underlying
block grade. This means it becomes harder to produce reasonable estimates of the
population characteristics” (Olssen, 2009).

In the field of geostatistics, methods such as indicator kriging can be used to reduce
the effects of both highly skewed datasets and those which contain multiple statistical
populations that cannot be separated by domaining (this is dealt with in the last
section of this work). In the following section the nugget effect is defined within its
geostatistical framework and the concept of variography is introduced as a tool for the
spatial analysis of an ore body.

4.2 Spatial Analysis


The spatial analysis of data is a unique requirement of the earth sciences. Spatial
analysis acknowledges that the location of sample values is vitally important to
unravelling the characteristics of a given area, but should complement the results of a
classical statistical analysis (Journel & Huijbregts, 1978; Isaaks & Srivastava, 1989).
In areas where there is a high nugget effect, being able to identify where the high-
grade sample values occur, or identifying an overall trend in the distribution of the
data, is necessarily linked to the spatial distributions of the data (Isaaks & Srivastava,
Spatial analysis is concerned with identifying the distribution patterns of
sample values, but mainly with the relationships between samples
(Olssen, 2009). The spatial analysis of an ore body is also referred to as an analysis of
spatial continuity, or a (spatial) structural analysis (Journel & Huijbregts, 1978; Isaaks
& Srivastava, 1989).

Isaaks and Srivastava (1989) suggest using contour maps or indicator maps to help
with gaining an initial understanding of the spatial relationships between samples, and
also to check for any potentially erroneous data points such as a high value
surrounded by low ones (Figure 27). These types of visual data representations often
reveal patterns otherwise ‘hidden’ in the data and can usually be easily constructed
using computer software (Isaaks & Srivastava, 1989).

Figure 27: Two ways to display the spatial relationships between sample values are to
construct contour maps (left) and indicator maps (right). Both were created using 100 data for
the variable/attribute V. The contour lines are 10 ppm apart and range from 0 to 140ppm. In
the indicator maps the indicators are defined using the cut-off values as defined above for the
variable V. Figures taken from Isaaks and Srivastava (1989).

The underlying assumption in the spatial analysis of an ore body is that samples taken
closer together will most likely be more similar than samples taken further apart
(Rendu, 1981; Isaaks & Srivastava, 1989). This suggests two things, the first is that
there is a certain amount of correlation between samples, and secondly, that this
correlation is a function of the separation distance between the samples (Rendu,
1981). The modelling of this correlation as a function of separation distance is
encompassed in the semi-variogram (variogram). “In these models the fact that two
samples taken next to each other will most probably not have the same value must
also be considered; even for very short distances the correlations are usually not
perfect and a purely random component is present in the value distribution. The
mathematical models will therefore assume the presence of two sources of variability
in the values: a correlated component and a random component” (Rendu, 1981).

To understand how a variogram is constructed it is first necessary to understand the
basics of constructing an h-scatterplot. An h-scatterplot begins with a sampling grid
where each data point is plotted. The objective of this scatterplot is to pair the data
points according to a specific separation distance in a specific direction (Figure 28)
(Isaaks & Srivastava, 1989).

Figure 28: Examples of how data can be paired to obtain an h-scatterplot. Samples are paired
in a given direction and based on a specific separation distance (Isaaks & Srivastava, 1989).

In the example above the grid is spaced over a 10 x 10 m area and in (a) 90 sample
pairs have been identified at a separation distance (h) of 1 m in a northerly direction,
while in (b) 81 sample pairs have been identified at a separation distance of √2 m in a
northeasterly direction for the same sample grid (Isaaks & Srivastava, 1989). This
data pairing is used to construct the h-scatterplot for various values of h, the
separation distance (Figure 29).

The more similar the data pairs, the closer they will plot to the line x = y on an h-
scatterplot, and by the same token, the less similar the data points, the further from
this line they will plot (Isaaks & Srivastava, 1989). From these plots the separation
distances at which the sample pairs begin to lose their correlation (spatial continuity)
are evident for a given direction (Isaaks & Srivastava, 1989).

To generate h-scatterplots in an exhaustive manner that encompasses all directions
and all separation distances for a given deposit would be cumbersome, and therefore a
method for summarising these data is preferable. There are three main methods used for
summarising the data encompassed in h-scatterplots: the correlation function
ρ(h); the covariance function C(h); and the variogram γ(h) (Isaaks &
Srivastava, 1989). Only the latter will be discussed herein as it is the most commonly
used.

Figure 29: Examples (a) through (d) are h-scatterplots for four different separation distances,
but for the same direction, in this instance the direction is northeasterly. With increasing
separation distances the sample pairs become more diffuse and the similarity between pairs
declines. In (a) the separation distance is 1m, in (b) 2m, in (c) 3m and in (d) 4m (Isaaks &
Srivastava, 1989).

Each point on a variogram represents the average spread of the sample pairs in one h-
scatterplot and as a variogram is made up of multiple data points, the variogram is
able to summarise the data from numerous h-scatterplots as illustrated in Figure 30
(Isaaks & Srivastava, 1989; Olssen, 2009). A variogram is constructed by plotting the
average variability γ(h) of all sample pairs which are separated by a specific distance
(h), also referred to as the lag. The average variability is plotted against the lag for a
series of lags to produce the variogram (Olssen, 2009). The average variability γ(h) is
also referred to as the variogram value (Isaaks & Srivastava, 1989; Olssen, 2009).
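For illustration, an omni-directional experimental variogram can be coded directly from this definition as half the average squared difference of all sample pairs whose separation falls within a tolerance of each lag; the lag tolerance and the brute-force pairing are assumptions of the sketch.

    import numpy as np

    # Sketch: omni-directional experimental variogram,
    # gamma(h) = (1 / 2N(h)) * sum of (v_i - v_j)^2 over the N(h) pairs separated by about h.
    def experimental_variogram(coords, values, lags, tol):
        gamma = np.zeros(len(lags))
        counts = np.zeros(len(lags), dtype=int)
        n = len(values)
        for i in range(n):
            for j in range(i + 1, n):
                h = np.linalg.norm(coords[i] - coords[j])   # separation distance between the pair
                sq_diff = (values[i] - values[j]) ** 2
                for k, lag in enumerate(lags):
                    if abs(h - lag) <= tol:                 # assign the pair to its lag bin
                        gamma[k] += sq_diff
                        counts[k] += 1
                        break
        return np.where(counts > 0, gamma / (2 * np.maximum(counts, 1)), np.nan)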

In the example illustrated in Figure 30 it is evident that there is a general increase in
the variogram value γ(h) with increasing separation distance (h) between sample pairs.
However, at a certain point the variogram value does not change even though the
sample separation distance continues to grow (Isaaks & Srivastava, 1989; Olssen,
2009). At this point, the variogram is said to have reached a plateau. The sample
separation distance at which this occurs is called the range, while the corresponding
variogram value is referred to as the sill (Journel & Huijbregts, 1978; Isaaks &
Srivastava, 1989; Olssen, 2009). Another way of looking at this stage is that it
represents the point beyond which the sample pairs show no correlation, and that the
average variability has become equal to the total variance inherent in the data (also
referred to as the ‘total sill’).

Figure 30: The points on a variogram are compiled from h-scatterplots of the sample pairs for
a given sample separation or lag. The sample separation at which little/no correlation between
sample pairs is evident is referred to as the range and the variogram value at which this occurs
is referred to as the sill. Figure modified after Olssen (2009).

If a trend line were projected through all the points on the variogram, this line would
not pass through the origin of the variogram; in other words, at h = 0 the variogram
value would not equal 0. “The vertical jump from the value of 0 at the origin to the value of
the variogram at extremely small separation distances is called the nugget effect. The
ratio of the nugget effect to the sill is often referred to as the relative nugget effect and
is usually quoted in percentages” (Isaaks & Srivastava, 1989).

The line that is projected through the points on a variogram is modelled
mathematically using computer software such as Snowden’s Supervisor program
(Olssen, 2009). The user essentially applies a ‘best fit’ model through the data points
to determine the nugget effect, the sill and the range. The most commonly used model
is a spherical model which, “is linear for short separation distances and then curves
into the sill near the range of influence” (Figure 31) (Olssen, 2009). Variograms are
either modelled for each orthogonal direction if the data show directional continuity,
or the variogram is omni-directional, in which case direction is not taken into account
and all sample pairs at a specific h are used to calculate the variogram values γ(h)
(Isaaks & Srivastava, 1989; Olssen, 2009). Deposits with a high nugget effect may
show some broad-scale directional continuity, but most commonly omni-directional
variograms are preferred for these highly erratic deposit types (Isaaks & Srivastava,
1989).
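A sketch of the spherical model with a nugget component, written out from its standard definition (the parameter names are illustrative), is given below.

    import numpy as np

    # Sketch: spherical variogram model. For h <= a (the range),
    # gamma(h) = nugget + (sill - nugget) * (1.5*(h/a) - 0.5*(h/a)^3); beyond the range it equals the sill.
    # (Strictly gamma(0) = 0; the nugget is the jump at vanishingly small separation distances.)
    def spherical_model(h, nugget, sill, a):
        h = np.asarray(h, dtype=float)
        scaled = np.minimum(h / a, 1.0)
        return nugget + (sill - nugget) * (1.5 * scaled - 0.5 * scaled ** 3)

    def relative_nugget(nugget, sill):
        """Nugget effect expressed as a percentage of the total sill."""
        return 100.0 * nugget / sill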

Figure 31: The variogram has been modelled using a spherical model. The nugget is the
discontinuity at the origin for very small separation distances and the total sill represents the
total inherent variability in the data. Figure modified after Olssen (2009).

When constructing a variogram such as the one in Figure 31, the nugget effect must
be modelled first. This is done using the closest-spaced data available, which are usually
adjacent samples taken from drill holes, as these generally provide a clear
indication of the data variability at short distances for the domain being modelled
(Olssen, 2009). It should be noted, however, that for deposits such as placers, this
information is often not available as the domains are usually narrow and only limited
data are available in a downhole direction (Olssen, 2009). The nugget effect is
modelled using the first few data points on the variogram, which unfortunately are
often supported by only a few sample pairs, lowering the confidence levels
associated with modelling the nugget effect. For deposits where limited data are
available in a downhole direction, Olssen (2009) recommends using an omni-
directional variogram.

Once the nugget effect has been modelled, it is common practice to standardise the
variogram so that the ‘total sill,’ or total data variance, is 1. In this way the nugget
effect can be read as a percentage of the total data variance (Olssen, 2009). Dominy et
al. (2003) have stated that deposits with a nugget value of 50% and above present a
serious challenge to the evaluation process, stating that, “the higher the nugget effect,
the greater the potential error during the estimation and, hence, project risk.” Dominy
et al. (2001) report on a statistical analysis undertaken for the quartz-gold reef deposit
of the Empire Ranch project in the U.S.A., which showed the grade population to be
highly skewed with a CV in excess of 2.5. Variography conducted on the available
data in the direction of strike was unstable and the consequent nugget effect was
calculated at over 80%, with ranges of less than 5 m (Dominy et al., 2001). The
practical implications for the confidence levels associated with the results of this
variography are obvious. Attracting investors to support such a highly erratic deposit
where the total data variance is reached within <5m would be a difficult task.
Snowden (2001) highlights that although the nugget effect for most gold deposits is
between 30 and 50%, some coarse gold and alluvial deposits can have such randomly
distributed mineralization that the nugget effect approaches 100%.

There are cases where the nugget effect and the total data variance (sill) are the same,
i.e., “when the variability consists only of microstructures…with ranges much smaller
than the support of the data, the experimental semi-variogram will appear flat, or
more exactly, fluctuating around their sill values” (Journel & Huijbregts, 1978). This
is referred to as a pure nugget effect, and ironically, in practice this would correspond
to mineralization with extreme homogeneity (Journel & Huijbregts, 1978; Isaaks &
Srivastava, 1989). Journel and Huijbregts (1978) caution that, “in mining practice, an
actual pure nugget effect is extremely rare; on the other hand, insufficient or
inadequate data will quite often result in experimental variograms having the
appearance of a pure nugget effect.” Methods of dealing with this are discussed in the
final section of this work.

5.0 Recommendations for reducing the Nugget Effect

5.1 Geological Nugget Effect


The geological component of the nugget effect is a fixed feature that cannot be
reduced, however, “if its nature and origins can be understood it is possible to: (i)
modify nature of sampling procedure; and (ii) provide a reasonable statement about
the reliability of the resource/reserve estimate” (Dominy et al., 2003). As stated by
King et al. (1982, reported in: Dominy et al., 2002), “it is the geological factor that
has impressed itself on us more and more as being the key deficiency where serious
weaknesses in Ore Reserve estimations have appeared.”

Although the geological component of the nugget effect cannot be reduced, there are
ways of managing its effects to lower the overall uncertainty of the resource.
Snowden (2001) summarises the geological fields where this can be achieved as
during:
 Drill hole logging
 Lithological interpretations
 Structural interpretations
 Establishing the relevant geological controls which affect mineralization
 Quantifying spatial continuity within geologically meaningful domains

Establishing continuity requires a framework which can be used at different stages of
project development to define the levels of continuity required. Continuity in this
context refers to both, “1. geological continuity - the geometric continuity of the
geological structure(s) or zone(s) hosting mineralization; and 2. grade continuity – the
continuity of grade that exists within a specific geological zone, sometimes called the
value continuity” (Dominy et al., 2002). Both of these are scale-specific, and in the
case of geological continuity, should be considered in three dimensions. Dominy et al.
(2002) have attempted to define the continuity required for different levels of resource
confidence (Table 9).

Table 9: General guidelines on the continuity required for the different Mineral Resource and Ore
Reserve categories. Table taken from Dominy et al. (2002).

Delineation drilling greatly assists in resolving some of the issues surrounding
continuity, but each deposit is unique and the effectiveness of any given technique
should be deposit specific and guided by the style of mineralization present
(Snowden, 2001; Dominy et al., 2002).

Placer diamond deposits are particularly difficult to describe in terms of continuity, as
these are, “complex, highly variable sedimentary systems (which) interact in time and
space to release, transport and concentrate diamonds” (Corbett & Burrell, 2001). The
geological models produced for such deposits, “not only provide the basis for
exploration target selection, but also high-resolution orebody characterisation, a
prerequisite for high confidence geostatistical evaluation, mining system design and
mine planning” (Corbett & Burrell, 2001). De Beers Marine was faced with a unique
challenge when attempting to define geological models for the offshore marine
diamond placers on the West Coast of Namibia (Corbett & Burrell, 2001). Due to the
depths (>90m below sea level) at which these deposits are located geological models
were created using, “very high resolution seabed mapping technologies combined
with direct visual observation from the ‘Jago’ submersible” (Corbett & Burrell, 2001).
The objective was to develop a predictive model of the Orange River system which
had a sound foundation in sedimentological studies augmented by mining activities
both present and past. Laboratory studies designed to simulate diamond transport in
fluvial systems were also used to supplement field studies when generating geological
models for this area (Corbett & Burrell, 2001). In delineating the Orange River
offshore fan-delta (Figure 32) some of the seabed mapping technologies employed
included, “high resolution airgun seismic and ultra high-resolution Chirp seismic
coverage over the entire fan-delta plus the surrounding area, covering some 400 km2.
These datasets provided an excellent window on the internal geometry of the fan-delta
(Fig. 11a) and a very high-resolution sonar surface texture map (Fig. 11b)
supplemented by airgun seismic and high-resolution Chirp seismics to define gully
morphology (Fig. 11c).”

Figure 32: Figures 11a through to 11c are presented with figure captions from Corbett and
Burrell (2001).

The previous example illustrates the diverse methods available for ore body
delineation, and emphasises the importance of selecting techniques which are
applicable to the environment in question. This type of detailed study of an essentially
remote and highly variable ore body can effectively contribute to unravelling the
geological contribution to the nugget effect.

Scott Smith and Smith (2009) have developed a summary diagram of the main
geological features of the different kimberlite bodies in Canada in the hope of
providing a predictive geological model for future discoveries (Figure 33). As stated
by Scott Smith and Smith (2009), “such summaries can be used to improve evaluation
strategies and lead to more reliable results because they (i) provide a norm for
comparison or indicate new geological situations, and (ii) act as a guide for the
successful extrapolation between data points and application of predictive geology
during the development of new geological models.”

Figure 33: Schematic summary of the macroscopic components of the different emplacement
products infilling the three main types of kimberlite pipes in Canada. Such models can be
used to guide extrapolations and interpretations of sample data when modelling a new
kimberlite body. Figure taken from Scott Smith and Smith (2009). For further details on the
abbreviations used in this diagram the reader is referred to the source article.

Defining the external shape of these unique ore bodies can often be complicated by
unconsolidated country rock-kimberlite contacts which are not always clear from
drilling; however, defining the internal geology can assist in predicting the shape of
the kimberlite body (Scott Smith & Smith, 2009). Such a diagram can also aid in the
identification of the different internal units of the kimberlite, which in turn can be
effectively used to predict the geometry and internal contacts within the kimberlite
body (Scott Smith & Smith, 2009). The authors were also able to develop a system
whereby the macroscopic petrographic features of each kimberlite phase could be
used to, “predict macro-diamond contents and distributions relative to the pre-
emplacement magma” (Figure 34) (Scott Smith & Smith, 2009).

Figure 34: Summary of the qualitative macroscopic petrography of the main textural types of
kimberlite in Figure 33. Figure taken from Scott Smith and Smith (2009). For further details
on the abbreviations used in this diagram the reader is referred to the source article.

As stated by Scott Smith and Smith (2009), any geological models developed will be
based on an interpretation of the available geological data, but also, on the
extrapolation of the geology between the sampled points. The more data available, the
easier it will be to extrapolate with confidence and resolve issues surrounding
continuity.

5.2 Sampling Nugget Effect
The contribution of sampling based error to the total nugget effect can often be
significant, and as emphasised by Dominy et al. (2003), it is not a fixed feature, which
means that it represents a great opportunity for reducing the nugget effect. Snowden
(2001) however, reiterates that although resolving continuity issues is vital to
reducing resource uncertainty, the choice of how many holes to drill will always be
guided by cost. The best defence against sampling based errors, which can maximise
the efficacy of sampling campaigns and results, is Gy’s Theory of Sampling (TOS)
(Gy, 1976; 1979). Gy (1976) makes the following statements about the importance of
sampling:
“The first thing that should be emphasized is that sampling is not
a simple mechanical technique like crushing, for instance: it is a
random process liable to introduce errors such as in chemical
analysis. But whereas this latter is always carried out in
laboratory conditions (I was tempted to say in ‘aseptic’
conditions) by a well-trained specialized staff conscious of the
necessity of accuracy and precision, sampling is usually carried
out under field or plant conditions by unspecialized labour
perfectly unaware of the importance of their work and
unconscious of the mistakes to be avoided at any cost. Sampling
and analysis (chemical, size or moisture analysis) are the two
complementary links of the quality estimation chain with the
consequence that the total estimation error is the sum of the
sampling error and of the analysis error. The optimization of the
‘accuracy-cost’ characteristics of an estimation demands that the
same care be taken in sampling and in analysis. Sampling should
always be placed under the responsibility of the head of ‘quality
control’, not of the head of ‘production’. It should be carried out
by a specialized staff conscious of the numerous errors that may
take place and knowing how to suppress or reduce these.”

In Minkkinen (2004) the classification of sampling errors is proposed as a framework
in itself for auditing and designing sampling procedures to minimise error or
variance due to sampling. Minkkinen (2004) states that, “sampling correctness must
be preventatively implemented,” for which a three step process is presented:
 Step 1 Check that all sampling equipment and procedures
obey the rules of correct sampling. Replace incorrect
equipment and procedures with correct ones. Correct
sampling largely eliminates the materialization and
preparation errors. Weighting error is made if the lot consists
of sublots of different sizes or if the flow rate varies during
the sampling periods in process streams, and simple average
is calculated to estimate the lot mean. This error is eliminated
if proportional cross-stream sampling can be carried out, and
the average is calculated as the weighted mean weighted by
the sample sizes.
 Step 2 Estimate the remaining errors (fundamental sampling
error, grouping and segregation error, and point selection
error) and what is their dependence on increment size and
sampling frequency. If the necessary data are not available,
design pilot studies to obtain the data.
 Step 3 Define the acceptable overall uncertainty level or cost
of the investigation and optimize the method, i.e., the
increment sizes, selection strategy (systematic or stratified),
and the sampling frequency so that the required uncertainty
or cost level is achieved.

He also states that calculating the standard deviation for each step will reveal
where to modify the procedure in order to reduce the variance (Minkkinen, 2004).
Once the target stage (where error is being introduced) has been identified, the
solution will be either to increase the mass of the sample, or to decrease the grain size
of the sample. As stated by Grigorieff et al. (2004):

1. Large grain-sizes in the extract can produce large errors


2. Small sample masses extracted can produce large errors

Which method is selected depends entirely on cost and the precision required by the
project. The cheaper approach should be pursued, because both will produce an
improvement in the standard deviation (or acceptable minimum sampling error)
(Grigorieff et al., 2004; Minkkinen, 2004).
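As a simple illustration of such a stage-by-stage audit (the stage names and values below are hypothetical), the relative standard deviations of the individual stages combine in quadrature, which shows immediately where an increase in sample mass or a reduction in grain size would pay off.

    import math

    # Sketch: stage-by-stage error audit. Stage variances are additive, so the total
    # sampling error is dominated by the worst stage. All values are hypothetical.
    stages = {
        "primary sampling": 0.12,            # relative standard deviation of each stage
        "splitting": 0.05,
        "pulverising and sub-sampling": 0.08,
        "assay": 0.03,
    }

    total_rsd = math.sqrt(sum(rsd ** 2 for rsd in stages.values()))
    print(f"total relative standard deviation: {total_rsd:.3f}")
    print(f"stage to improve first: {max(stages, key=stages.get)}")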

When the primary (first) sample is taken, one of the most effective ways to reduce
variability is to take larger samples. This is due to the inverse relationship between
sample weight and variance: if the sample weight increases, the variance should
decrease. Indeed, the results from several high nugget deposits analysed using sampling
theory have shown that using bulk sampling methods can reduce variance (Dominy et
al., 2001; Dominy & Platten, 2007).

Esbensen et al. (2007) have proposed the use of the simulation software VARIO
which is available as freeware from the webpage www.acabs.dk for selecting the most
appropriate sampling scheme when attempting to reduce the TSE (total sampling
error) to an acceptable level. The software requires an initial sample set of at least 60
samples from which, “it is possible – at no additional sampling or analytical cost – to
simulate all conceivable sampling schemes that might be contemplated for
improvement of a current sampling procedure” (Esbensen et al., 2007). The software
is able to import a variety of data types, perform basic statistical analyses of the data
set and also perform a variographic analysis of the data. The software cannot,
however, be used for static (0-D) sampling, but only for process (1-D)
sampling. For further details on this software the reader is referred to the ACABS
webpage (Esbensen et al., 2007).

In 1987 Minkkinen presented a paper on a piece of software known as SAMPEX,
which is based entirely on Gy’s TOS and which is able to solve the following
problems:
 Minimum sample size for a tolerated relative standard deviation of the
fundamental sampling error
 Relative standard deviation for a given sample size
 Maximum particle size for the material for a specific standard deviation and
sample size
 Balanced design of a multi-stage sampling and sample-reduction process; and
 Sampling for particle size determination

A Google internet search revealed detailed descriptions of a further eight software
packages that can be used to assess sampling errors, namely: CENVAR, CLUSTERS,
Epi Info, PC CARP, Stata, SUDAAN, VPLX, and WesVarPC (internet source 4). The
authors of the page admit that there likely exist numerous other such packages and
stress that the list is by no means exhaustive (internet source 4).

The difficulties associated with a high nugget effect in quartz-gold reef deposits have
been well documented in the literature (Dominy et al., 2001; 2003; Dominy & Platten,
2007). The general conclusion in this context is that while drilling can be used to
establish geological continuity, grade continuity can only be established with any
certainty if underground development and bulk sampling methods are used (Dominy
et al., 2001; 2003; Dominy & Platten, 2007). For some excellent examples on how
TOS has been used to successfully reduce the nugget effect in gold deposits the reader
is referred to Dominy et al. (2001; 2003); and Dominy and Platten (2007).

Nichol et al. (1994) describe the use of an approach pioneered by Clifton et al. (1969)
in the sampling of gold. The assumption made in this approach is that all the gold
particles in a given sample are:
 the same size
 randomly distributed
 represent less than 0.1% of the particles in the sample

This method also assumes that the sample contains more than 1000 particles, and that
analytical errors are neglected (Nichol et al., 1994). This theory was able to show that
the precision of the sample decreases as the number of gold particles in the sample
decreases. Based on this relationship Clifton et al. (1969) were able to show that a
minimum of 20 particles of gold need to be in each sample to achieve a precision of
50% at the 95% confidence level (Figure 35). The measure of precision is used to
assess the representivity of the sample of the lot, at the levels stipulated the sample is
‘adequately representative’ (Nichol et al., 1994).

Figure 35: The variation of precision with the number of gold particles contained in a
sample. Figure taken from Nichol et al. (1994).

Clifton et al. (1969) were also able to show that the weight of a sample containing 20
particles of gold has a distinct relationship to the particle size and gold concentration
in the sample (Figure 36).

Figure 36: The variations of sample weights containing 20 particles of gold according to
particle size and gold concentration. Gold particles are flakes in which the diameter is five
times the thickness. Figure taken from Nichol et al. (1994).

“For example, in a sample containing 1 ppm gold, in which the gold occurs as discs
with diameter five times the thickness (250 μm in diameter and 50 μm thick), a
sample of about 1kg would contain the necessary 20 particles. In a sample containing
the same 1 ppm Au content, but as discs of 32 μm diameter, 20 particles would be
contained in a sample weight of approximately 2g” (Nichol et al., 1994). The
information in Figure 36 has been further summarised in Table 10.
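The arithmetic behind these figures can be sketched directly from the stated assumptions (disc-shaped particles with diameter five times the thickness and a gold density of 19.3 g/cm³); the function name is illustrative.

    import math

    # Sketch: sample mass (g) expected to contain 20 gold particles, after Clifton et al. (1969).
    # Assumes disc-shaped particles with diameter = 5 x thickness and a gold density of 19.3 g/cm^3.
    def mass_for_20_particles(grade_ppm, particle_diameter_um):
        d_cm = particle_diameter_um * 1e-4                   # microns to centimetres
        thickness_cm = d_cm / 5.0
        particle_volume = math.pi * (d_cm / 2.0) ** 2 * thickness_cm
        gold_mass_needed = 20 * particle_volume * 19.3       # grams of gold in 20 particles
        return gold_mass_needed / (grade_ppm * 1e-6)         # sample mass at the given grade

    print(round(mass_for_20_particles(1.0, 250)))    # ~950 g, i.e. about 1 kg
    print(round(mass_for_20_particles(1.0, 32), 1))  # ~2 g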

Table 10: The weight of a sample containing 20 particles of gold (diameter is five times the thickness)
according to the concentration of gold and the gold particle sizes determined by the methods of Clifton
et al. (1969). Table taken from Nichol et al. (1994).

From the Table above it is clear that the size of the sample required (to fulfil the
objective of 50% precision at the 95% confidence level) increases dramatically with
increasing particle size (common in deposits with high nugget effects) for a given
gold concentration. If the particle size is held constant, then the required sample size
increases as the concentration of gold decreases (Nichol et al., 1994). In
Table 10 any figure under the stepped line is considered to be adequately
representative, while anything above the line is considered to be a poor representation
of the lot.

For the method proposed by Clifton et al. (1969), all the gold particles must be of
equal size, a feature rarely seen in nature. Simply averaging the particle sizes in a real
sample cannot be used to estimate the appropriate sample size; what can be used
instead is the effective grain size (Nichol et al., 1994). The effective grain size can be
determined through (Nichol et al., 1994):
 the measured gold particle size distribution;
 the variance of replicate analyses of unsized splits of the sample; and
 the maximum size of gold particles that make a significant contribution to the
gold content of the sample

Methods have been developed which can be used to estimate the effective size (or
diameter) of gold particles in a sample based on the equations developed by Prigonine
(1961), Gy (1967), Clifton et al. (1969), and Nichol et al. (1989). The effective
diameter of the gold particles in a sample can be estimated from Table 11 and the
sample weights necessary to achieve representivity can be read from Table 12.

Table 11: Effective diameter of gold particles of varying size range in the minus 63 μm fraction of ten
samples determined by the method of Clifton et al. (1969). Table taken from Nichol et al. (1994).

Table 12: Sample weights (g) necessary to achieve ~50% precision, at the 95% confidence limits,
according to the effective diameter and concentration, determined by the method of Clifton et al.
(1969). Table taken from Nichol et al. (1994).

In areas with a high nugget effect Nichol et al. (1994) propose estimating the sample
size by using the maximum significant size, the size below which 95% of the total gold
is contained, as the effective size. “This approach
assumes that gold particles are all the same size and equivalent to the 95th percentile
size and therefore this procedure provides an additional safety margin relative to the
procedures involving consideration of the size distribution of the gold and the
variability of replicate analyses” (Nichol et al., 1994).

The methods outlined by Nichol et al. (1994) can effectively increase the precision of
the samples being taken by estimating the appropriate sample sizes, but should be
used in conjunction with TOS to ensure that the overall standard deviation is also
within acceptable limits. Reducing the sampling error must take the considerations
outlined in the previous section into account, such as the level of information required
for the project at hand, as this will guide the sampling campaign itself.

Dyck et al. (2004) have tried to classify the resources for the kimberlite pipes on the
Ekati property based on their complexity, and have then used this to come up with a
method of determining the amount of information that is still required to mitigate the
risks associated with the deposit (Figure 37).

Figure 37: Schematic qualitative relationship between resource variability (complexity) and
sampling (drilling) density. Figure modified from Dyck et al. (2004).

In this way deposits with high complexity will require much greater levels of
information, and therefore much more sampling, than deposits with low complexity
(Dyck et al., 2004). As stated by Snowden (2001) however, regardless of the
sampling density, the effective control of the sampling related error must necessarily
be linked to the control of the quality of the data being used.

5.3 Data Quality
François-Bongarçon (2004) includes “measurement, location and data entry errors” in
the variance attributed to the SNE, which Snowden (2001) further unpacks into the
reliability and accuracy of sample co-ordinates, and the measurement and
interpretation of bulk densities. Snowden (2001) suggests that these can be reduced to
some extent by maintaining a “high standard of surveying of both drillhole collars and
downhole samples,” but also highlights the need for ensuring data quality.

The process of conducting rigorous QA/QC checks at all stages of exploration,
evaluation and exploitation of a given deposit has already been introduced in a
previous section. As stated by Dominy et al. (2002), “the consequences of poor data
quality are obvious, and could include, for example, investor dissatisfaction,
inappropriate planned engineering and financial requirements, poorly supported
decisions, upward revision of budgets, cost overruns, late completions, lower mining
production, and project failure and bankruptcy. The important role of QA/QC
systems throughout all facets of the life of a mining project cannot be over-stressed.”
Olssen (2009) suggests documentation of the following:

 Details on the QA/QC procedures used


 Assessment of the representivity of QA/QC data
 Analysis of duplicates and assessment of precision
 Analysis of standard sample results
 Blanks analysis and assessment of contamination during sample preparation
 Any QA/QC issues and documentation of any correction of data
 Discussion of the risks associated with quality of the assay data

Through this process the analyst is constantly aware of the quality of the data being
used to describe, or even estimate, the ore body in question. Although many errors
can be corrected, such as those identified during a random audit of the database
(where 5-10% of the database is cross-checked against hard copy), there are some
which cannot. Being aware of these potential errors and how they may impact the
reliability and confidence associated with characterising the ore body can contribute
significantly to controlling the contribution of the SNE to the total nugget effect. As
stated by Snowden (2001), “good data management, data flow and database integrity
are critical aspects of data quality.” Dominy et al. (2002) believe that the use of
QA/QC programs can significantly reduce the sampling component of the nugget
effect.

5.4 Classical statistical analysis


In a classical statistical analysis the objective is to describe the sample populations in
as much detail as possible. At this stage of data analysis recognising the potential
contribution of the nugget effect is important; although there is no way of removing
the nugget effect, identifying outliers can significantly assist when interpreting the
results of estimation (Isaaks & Srivastava, 1989). There are a number of methods that
can be used to simulate the effects of removing or changing erratic high values in
order to get a better sense of the underlying population characteristics, and at this
stage statistics, “can be used to help both in the validation of the initial data and in the
understanding of later results” (Isaaks & Srivastava, 1989).

A high nugget effect is usually seen in statistical populations as a positively
skewed histogram. By transforming the data to a log scale when the graphs are
plotted, the range of the high grade values is compressed while the range of the
low grade values is expanded. The log histogram, log CDF and the log probability
graphs of such transformed data will look like those of a normal population
distribution as presented in Figure 24 if the population has a log normal distribution
(Rendu, 1981; Kerry & Oliver, 2004; Olssen, 2009). These can then be used to assess
the underlying population characteristics.

The most effective management of skewness in a statistical population will be during
the actual estimation process. This is done through domaining, top cuts, indicator
kriging or simulation (Olssen, 2009). These processes will not be dealt with herein as
this paper is focussed on the recognition of the nugget effect and not actual estimation
techniques or methods.

In dealing with skewness in the population, a data transformation may be appropriate
and may remove the skewness from the dataset. This can be performed on the raw
data, and the transformed data can then be used to construct the experimental
variogram. The most common transformations are to square roots and logarithms
(Kerry & Oliver, 2007). However, as the size of the data set decreases, asymmetry (or
skewness) has an increasing effect on both the form of the variogram and the results
of cross-validation, particularly when fewer than 100 data points are used. Kerry and
Oliver (2007) were able to show that 100 data points is the minimum number that can
be used to create a reliable variogram, the same figure that was obtained by Webster
and Oliver (1992, reported in: Kerry & Oliver, 2007). The effects of transformation
versus the raw data should be assessed, and where there is little difference, the raw
data should preferentially be used when creating the variogram. Dealing with skewed
populations is summarised in Figure 38.

Figure 38: Simplified flow chart summarising the ‘standard best practice’ for dealing with
skewness in geostatistical analysis. Figure taken from Kerry and Oliver (2007).

There is some overlap between the methods that can be used prior to a classical
statistical analysis, and when preparing the data for variography. For ease of reference
these sections have been broken up, however, they should be considered together
when dealing with skewed populations.
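
A rough coding of such a decision rule is sketched below in Python. The skewness threshold, the 100-sample cut-off as a hard stop and the function name are illustrative assumptions loosely following the flow of Figure 38, not values prescribed by Kerry and Oliver (2007):

import numpy as np
from scipy import stats

def choose_variogram_data(values, skew_threshold=1.0, min_n=100):
    # Illustrative decision rule: too few data -> warn; acceptable skewness ->
    # use the raw data; otherwise pick whichever of the square root or log
    # transform reduces the absolute skewness the most.
    values = np.asarray(values, dtype=float)
    if values.size < min_n:
        return "too few data for a reliable variogram", values
    best_name, best_data = "raw", values
    best_skew = abs(stats.skew(values))
    if best_skew < skew_threshold:
        return best_name, best_data
    for name, func in (("square root", np.sqrt), ("log", np.log)):
        candidate = func(values)
        cand_skew = abs(stats.skew(candidate))
        if cand_skew < best_skew:
            best_name, best_data, best_skew = name, candidate, cand_skew
    return best_name, best_data

rng = np.random.default_rng(1)
grades = rng.lognormal(0.0, 1.5, size=250)
choice, data = choose_variogram_data(grades)
print("Use the %s data when constructing the experimental variogram" % choice)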

5.5 Spatial statistical analysis and variography
One of the difficulties associated with the interpretation of the variogram is
establishing the true contribution of the nugget effect. This section deals with the
various methods that can be used to identify and remove the factors which artificially
inflate the nugget effect.

Isaaks and Srivastava (1989) suggest using an omni-directional variogram as the
starting point for any spatial analysis as this approach does not take direction into
account and will contain more sample pairs than any given directional variogram. If,
however, no clear structure can be obtained from such a variogram, the belief is that
using directional variograms will not meet with much success. In reviewing an omni-
directional variogram it is still possible to isolate the sources of erratic behaviour
through examination of the h-scatterplots upon which the variogram is based (Isaaks
& Srivastava, 1989).
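
An omni-directional experimental variogram can be computed directly from the sample coordinates and values, since direction is ignored and only the separation distance matters. In the Python sketch below the data, lag bins and function name are illustrative assumptions; the calculation itself is the standard experimental semivariogram, averaging 0.5*(z_i - z_j)^2 over all pairs in each distance bin:

import numpy as np

def omnidirectional_variogram(coords, values, lag_edges):
    # Experimental omni-directional semivariogram: average 0.5*(z_i - z_j)^2
    # over every sample pair whose separation falls in each lag bin,
    # ignoring direction entirely.
    coords = np.asarray(coords, dtype=float)
    values = np.asarray(values, dtype=float)
    dist = np.sqrt(((coords[:, None, :] - coords[None, :, :]) ** 2).sum(axis=-1))
    semiv = 0.5 * (values[:, None] - values[None, :]) ** 2
    i_upper = np.triu_indices(len(values), k=1)      # count each pair once
    dist, semiv = dist[i_upper], semiv[i_upper]
    gamma, n_pairs = [], []
    for lo, hi in zip(lag_edges[:-1], lag_edges[1:]):
        in_bin = (dist >= lo) & (dist < hi)
        gamma.append(semiv[in_bin].mean() if in_bin.any() else np.nan)
        n_pairs.append(int(in_bin.sum()))
    return np.array(gamma), np.array(n_pairs)

# Hypothetical 2D sample layout with short-range structure plus noise.
rng = np.random.default_rng(2)
xy = rng.uniform(0, 100, size=(200, 2))
z = np.sin(xy[:, 0] / 15.0) + 0.5 * rng.normal(size=200)
gamma, n = omnidirectional_variogram(xy, z, lag_edges=np.arange(0, 65, 5))
print(np.round(gamma, 3))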

In the case of deposits with a large nugget effect, single sample values may have a
significant impact on the variogram, and this needs to be accounted for. “This may
involve entirely removing certain samples from the data set, or removing particular
pairs of samples only from particular h-scatterplots” (Isaaks & Srivastava, 1989). This
process may produce an improvement in the overall appearance of the variogram, but
removing data is never a good idea, and there are often other factors, which are merely
artefacts, that bias the variogram towards a high nugget effect (Isaaks & Srivastava,
1989).
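
The pair-level inspection described by Isaaks and Srivastava (1989) can be mimicked as follows: collect the sample pairs that would populate the h-scatterplot at a chosen lag and compare the mean semivariance with and without the pairs involving a suspect sample. The data, the outlier index and the lag tolerance in this Python sketch are illustrative assumptions:

import numpy as np

def h_scatter_pairs(coords, values, lag, tol):
    # Index and value pairs that would populate the h-scatterplot for a
    # separation of lag +/- tol (direction ignored).
    coords = np.asarray(coords, dtype=float)
    values = np.asarray(values, dtype=float)
    dist = np.sqrt(((coords[:, None, :] - coords[None, :, :]) ** 2).sum(axis=-1))
    i, j = np.where((dist >= lag - tol) & (dist <= lag + tol))
    keep = i < j                                     # count each pair once
    return i[keep], j[keep], values[i[keep]], values[j[keep]]

rng = np.random.default_rng(3)
xy = rng.uniform(0, 100, size=(150, 2))
z = rng.lognormal(0.0, 1.0, size=150)
z[10] = 50.0                                         # hypothetical erratic high value

i, j, head, tail = h_scatter_pairs(xy, z, lag=10.0, tol=2.5)
involves_outlier = (i == 10) | (j == 10)
semiv = 0.5 * (head - tail) ** 2
print("Pairs at this lag: %d (of which %d involve sample 10)"
      % (len(i), involves_outlier.sum()))
print("Mean semivariance with the outlier:  %.2f" % semiv.mean())
print("Mean semivariance without its pairs: %.2f" % semiv[~involves_outlier].mean())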

Determining the actual contribution of the nugget effect requires a detailed


understanding of the deposit itself. One of the contributing factors in this situation
may be sample clustering. The sample data used for the construction of a variogram
may not be from a single sampling campaign, but from numerous ones, each with a
different spacing, and each with a different target (Olssen, 2009). The latter stages of
sampling, including infill sampling, usually focus on delineating high grade areas,
which will bias the results of any statistical analysis (Isaaks & Srivastava, 1989;
Olssen, 2009). There are several methods that can be used to decluster the sample data
before conducting a statistical analysis or creating the variogram; these include
(Olssen, 2009):

 Interactive filtering: This is the process by which certain samples are
removed when classical statistical analyses are carried out, but which are
retained when the actual variography and estimations are done.
 Polygonal declustering: “Involves the formation of polygons around each
sample using the vertices equidistant between each surrounding sample point.
The area defined by each polygon is then used to weight the samples. Bad
edge effects can occur using this method if there are large unsampled areas
on the edges. The unsampled edges result in large polygons and hence large
weighting being applied to these samples. The reverse effects can occur if the
edge blocks are too small” (Olssen, 2009).
 Nearest neighbour declustering: In this method a regular grid of cells is
placed over the data. The sample closest to the centre of each cell is retained,
and the remaining samples are excluded during the statistical analysis.
 Cell weighting declustering: “Involves placing a grid of cells over the data.
Each cell that contains at least one sample is assigned a weight of one. That
weight of one is distributed evenly between the samples within each cell”
(Olssen, 2009).

Cell weighting declustering is generally the preferred method because all the samples
are considered when averaging out the data. Choosing the cell size for the grid can be
difficult, but the drill hole spacing is a good place to start, after which a number of
cell sizes can be tested to determine an optimum (Olssen, 2009). However, if the
sampling campaigns were carried out on a regular grid, or if the grades are similar
between the various sampling campaigns, then declustering of the data is not
necessary (Olssen, 2009).
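
A minimal Python sketch of cell weighting declustering on synthetic, deliberately clustered data is given below. The cell size, grades and drilling pattern are illustrative assumptions; the weights are computed exactly as described above, with one unit of weight per occupied cell shared equally among its samples:

import numpy as np

def cell_declustering_weights(coords, cell_size):
    # Overlay a regular grid of cells; each occupied cell gets a total weight
    # of one, shared equally between the samples that fall inside it.
    coords = np.asarray(coords, dtype=float)
    cell_index = np.floor(coords / cell_size).astype(int)
    _, inverse, counts = np.unique(cell_index, axis=0,
                                   return_inverse=True, return_counts=True)
    weights = 1.0 / counts[inverse.ravel()]
    return weights / weights.sum()                   # normalise to sum to one

# Hypothetical clustered drilling: a broad grid plus infill in a high grade zone.
rng = np.random.default_rng(4)
broad = rng.uniform(0, 1000, size=(50, 2))
infill = rng.uniform(400, 500, size=(50, 2))
xy = np.vstack([broad, infill])
grades = np.where(xy[:, 0] > 400, 2.5, 0.8) + rng.normal(0, 0.2, size=len(xy))

w = cell_declustering_weights(xy, cell_size=100.0)
print("Naive mean grade:       %.2f" % grades.mean())
print("Declustered mean grade: %.2f" % np.average(grades, weights=w))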

6.0 Final remarks
The nugget effect can have a significant impact on the viability of a project due to the
uncertainty associated with it. However, a high nugget effect is a largely
predictable feature which should be evident during an orientation study, or during the
early phases of geological exploration. This means that from an early stage the
geologist is aware of the potential problems associated with a given ore body before
any sample results are even available. As stated by Dyck et al. (2004), “a pragmatic
approach, based on a sound understanding of the geological characteristics of the ore
bodies, an appreciation of the sampling limitations, careful assessment of available
data and application of the appropriate estimation methods, can lead to highly
effective evaluation of resources as well as identification of key risks to be addressed
by forward work.”

It is recommended that future research focus on identifying the nugget effect during
resource estimation, and that potential methods for reducing its effects be explored
within that context.

Acknowledgments
I would like to thank De Beers Marine Namibia for making it possible for me to
attend the M.Sc. course in Exploration Geology, and for the support and guidance of
Godfrey Ngaisiue.

I wish to thank the Geology Department at Rhodes University for the outstanding
quality of the courses given during 2009, and the staff in general for their excellent
attitude. In particular I wish to thank Prof. John Moore for his contributions to both
my formal and informal education this year, and for always making himself available
to the inquiring mind. It was an absolute privilege to have him as a mentor this year. I
would also like to thank Ashley Goddard, without whom I may never have attended
this course, for all her invaluable assistance this year, and her willingness to share coffee
breaks with a frazzled student.

I would also like to thank Lynn Olssen of the Snowden Group (Australia) for her
advice and input during the construction of this thesis. John Vann is also thanked for
making the Quantitative Group’s (Australia) applied geostatistics manual available to
me.

I wish to thank my class mates for their collaborative attitude towards work and the
integrity they have shown throughout the year. They contributed substantially to
making this year memorable.

Finally, I would also like to thank my friends and family for supporting and
encouraging me this year.

References
Berryman, A., Scott Smith, B.H., and Jellicoe, B. (2004). Geology and diamond
distribution of the 140/141 kimberlite, Fort à la Corne, central Saskatchewan, Canada.
Lithos, 76, 99-114.

Bisso, C.R., Cuandra, W.A., Dunkerley, P.M., and Aguirre, E. (1991). The epithermal
gold-silver deposits of Choquelimpie, Northern Chile. Economic Geology, 86, 1206-
1221.

Bowell, R.J., Baumann, M., Gingrich, M., Tretbar, D., Perkins, W.F., and Fisher,
P.C. (1999). The occurrence of gold at the Getchell mine, Nevada. Journal of
Geochemical Exploration, 67, 127–143.

Corbett, A. (2002a). Diamond beaches: A history of Oranjemund. 2nd Ed. Cape Town:
Namdeb Diamond Corporation (Pty) Ltd. 144pp.

Corbett, G. (2002b). Epithermal gold for explorationists. AIG Journal – Applied
geoscientific practice and research in Australia, paper 2002-01, 1-26.

Corbett, I., and Burrell, B. (2001). The earliest Pleistocene(?) Orange River fan-delta:
an example of successful exploration delivery aided by applied Quaternary research in
diamond placer sedimentology and palaeontology. Quaternary International, 82, 63-
73.

Dominy, S.C., Johansen, G.F., and Annels, A.E. (2001). Bulk sampling as a tool for
the grade evaluation of gold-quartz reefs. Applied Earth Science: Transactions of the
Institution of Mining & Metallurgy, Section B. 110, B176-B191.

Dominy, S.C., Noppé, M.A., and Annels, A. (2002). Errors and uncertainty in Mineral
Resource and Ore Reserve Estimation: The importance of getting it right. Exploration
and Mining Geology, 11, 77-98.

Dominy, S.C., and Platten, I.M. (2007). Gold particle clustering: a new consideration
in sampling applications. Applied Earth Science: Transactions of the Institution of
Mining & Metallurgy, Section B. 116, 130-142.

Dominy, S.C., Platten, I.M., and Raine, M.D. (2003). Grade and geological continuity
in high-nugget effect gold-quartz reefs: implications for resource estimation and
reporting. Applied Earth Science: Transactions of the Institution of Mining &
Metallurgy, Section B. 112, B239-B259.

Dyck, D.R., Oshust, P.A., Carlson, J.A., Nowicki, T.E., and Mullins, M.P. (2004).
Effective resource estimates for primary diamond deposits from the EKATI
Diamonds Mine, Canada. Lithos, 76, 317-335.

Esbensen, K.H. (2004). 50 years of Pierre Gy’s “Theory of Sampling” – WCSB1: A
Tribute. Chemometrics and Intelligent Laboratory System, 74, 3-6.

Esbensen, K.H., Friis-Petersen, H.H., Petersen, L., Holm-Nielsen, J.B. and
Mortensen, P.P. (2007). Representative process sampling – in practice: Variographic
analysis and estimation of total sampling errors (TSE). Chemometrics and Intelligent
Laboratory System, 88, 41-59.

Field, M., Stiefenhofer, J., Robey, J., and Kurszlaukis, S. (2008). Kimberlite-hosted
diamond deposits of southern Africa: A review. Ore Geology Reviews, 34, 33-75.

François-Bongarçon, D. (2004). Theory of sampling and geostatistics: an intimate
link. Chemometrics and Intelligent Laboratory System, 74, 143-148.

François-Bongarçon, D. and Gy, P. M. (2002). The most common error in applying
‘Gy’s Formula’ in the theory of mineral sampling and the history of the liberation
factor. The Journal of the South African Institute of Mining and Metallurgy. 475-479.

Garnett, R.H.T., and Bassett, N.C. (2005). Placer Deposits. In: Hedenquist, et al.
(Eds.). Economic geology, one hundredth anniversary volume, 1905-2005. Littleton:
Society of Economic Geologists Inc. 1136pp.

Garrett, R.G. (1983). Chapter 4: Sampling methodology. In: Govett, G.J.S (Ed.),
Handbook of Exploration Geochemistry. Amsterdam: Elsevier Scientific Publishing
Company. 437 pp.

Grigorieff, A., Costa, J.F. and Koppe, J. (2004). Quantifying the influence of grain top
size and mass on a sample preparation protocol. Chemometrics and Intelligent
Laboratory System, 74, 201-207.

Groves, D.I., Goldfarb, R.J., Gebre-Mariam, M., Hagemann, S.G., and Robert, F.
(1998). Orogenic gold deposits: A proposed classification in the context of their
crustal distribution and relationship to other gold deposit types. Ore Geology Reviews,
13, 7–27.

Gy, P.M. (1976). The sampling of particulate materials – A general theory.
International Journal of Mineral Processing, 3, 289-312.

Gy, P.M. (1979). Developments in Geomathematics 4: Sampling of particulate
materials: Theory and practice. Amsterdam: Elsevier Scientific Publishing Company.
431pp.

Hammond, N.Q. and Moore, J.M. (2006). Archaean lode gold mineralization in
banded iron formation at the Kalahari Goldridge deposit, Kraaipan Greenstone Belt,
South Africa. Mineralium Deposita, 41, 483-503.

Harder, M., Scott Smith, B.H., Hetman, C.M., and Pell, J. (2009). The evolution of
geological models for the DO-27 kimberlite, NWT, Canada: implications for
evaluation. Lithos, Supplement 1, 112, 61-72.

Hedenquist, J. W., Arribas, A., Jr., and Gonzalez-Urien, E. (2000). Exploration for
epithermal gold deposits. Reviews in Economic Geology, 13, 245-277.

Isaaks, E.H., and Srivastava, R.M. (1989). An Introduction to Applied Geostatistics.
New York: Oxford University Press. 561 pp.

Journel, A.G. and Huijbregts, C.J. (1978). Mining Geostatistics. Bury St Edmunds: St
Edmundsbury Press Ltd. 600 pp.

Kerry, R., and Oliver, M.A. (2007). Determining the effects of asymmetric data on the
variogram. I. Underlying asymmetry. Computers and Geosciences, 33, 1212-1232.

Kirkley, M.B., Gurney, J.J. and Levinson, A.A. (1991). Age, origin and emplacement
of diamonds: Scientific advances in the last decade. Gems and Gemology, 27 (1), 2-
25.

Kjarsgaard, B.A., Leckie, D.A., and Zonneveld, J.P. (2007). Discussion of “Geology
and diamond distribution of the 140/141 kimberlite, Fort à la Corne, central
Saskatchewan, Canada” by A.K. Berryman, B.H Scott Smith and B.C. Jellicoe (Lithos
76, 99-114). Lithos, 97, 422-428.

Lyman, G.J. (1998). The influence of segregation of particulates in sampling
variance – the question of distributional heterogeneity. International Journal of
Mineral Processing, 55, 95-112.

Macdonald, E.H. (1983). Alluvial Mining: The geology, technology and economics of
placers. London: Chapman and Hall Ltd. 508pp.

Minkkinen, P. (1987). Evaluation of the fundamental sampling error in the sampling
of particulate solids. Analytica Chimica Acta, 196, 237-245.

Minkkinen, P. (2004). Practical applications of sampling theory. Chemometrics and
Intelligent Laboratory System, 74, 85-94.

Nichol, I., Hale, M., and Fletcher, W.K. (1994). Drainage geochemistry in gold
exploration. In: Hale, M., and Plant, J.A. (Eds.), Govett, G.J.S. (Ed.), Handbook of
Exploration Geochemistry, Volume 6. Amsterdam: Elsevier. 499-557.

Olssen, L. (Ed.) (2009). Snowden Professional Development Courses: Resource
Estimation. Short Course Manual. Perth: Snowden Mining Industry Consultants.
184pp.

Pitard, F.F. (2004). Effects of residual variance on the estimation of the variance of
the Fundamental Error. Chemometrics and Intelligent Laboratory System, 74, 149-
164.

Rendu, J.-M. (1981). An introduction to Geostatistical methods of mineral evaluation.
South African Institute of Mining and Metallurgy Monograph Series. Johannesburg:
South African Institute of Mining and Metallurgy. 84pp.

Robb, L. (2005). Introduction to ore-forming processes. Malden: Blackwell
Publishing. 373pp.

Scott Smith, B.H., and Berryman, A.K. (2007). Reply to Discussion of “Geology and
diamond distribution of the 140/141 kimberlite, Fort à la Corne, central
Saskatchewan, Canada” by A.K. Berryman, B.H Scott Smith and B.C. Jellicoe (Lithos
76, 99-114) by B.A. Kjarsgaard, D.A. Leckie and J.P. Zonneveld. Lithos, 97, 429-434.

Scott Smith, B.H., Smith, S.C.S. (2009). The economic implications of kimberlite
emplacement. Lithos, Supplement 1, 112, 10-22.

Simmons, S.F., White, N.C. & John, D.A. (2005). Geological characteristics of
epithermal precious and base metal deposits. In: Hedenquist, et al. (Eds.). Economic
geology, one hundredth anniversary volume, 1905-2005. Society of Economic
Geologists Inc. 523-560.

Skinner, E.M.W. (1989). Contrasting Group 1 and Group 2 kimberlite petrology:
Towards a genetic model for kimberlites. In: Ross, J., Jacques, A.L., Ferguson, J.,
Green, D.H., O'Reilly, S.Y., Danchin, R.V., Janse, A.J.A. (Eds.), Kimberlites and
Related Rocks, Volume 1. Proceedings of the Fourth International Kimberlite
Conference. Geological Society of Australia Special Publication 14, pp. 528–544.

Skinner, E.M.W. (2009). Short course on kimberlites. [course notes] (Personal
communication: February 2009).

Skinner, E.M.W. and Marsh, J.S. (2004). Distinct kimberlite pipe classes with
contrasting eruption processes. Lithos, 76, 183-200.

Snowden, D.V. (2001). Practical Interpretation of Mineral Resource and Ore Reserve
Classification Guidelines. In: Mineral Resource and Ore Reserve Estimation, The
AusIMM Guide to Good Practice. AusIMM Monograph, 23, 643-652.

Peters, S.G. (1993). Formation of oreshoots in mesothermal gold-
quartz vein deposits: examples from Queensland, Australia. Ore Geology Reviews, 8,
277-301.

Vann, J. (2005). Applied Geostatistics for Geologists and Mining Engineers. Short
Course Manual. 3rd Ed. Fremantle: Quantitative Group (Pty) Ltd. 228 pp.

White, N.C., and Hedenquist, J.W. (1990). Epithermal environments and styles of
mineralization: variations and their causes, and guidelines for exploration. Journal of
Geochemical Exploration, 36, 445-474.

White, N.C., Leake, M.J., McCaughey, S.N., and Parris, B.W. (1995). Epithermal
Gold deposits of the southwest Pacific. Journal of Geochemical Exploration, 54, 87–
136.

Witt, W.K. (1993). Lithological and structural control on gold mineralization in the
Archaean Menzies-Kambalda Region, Western Australia. Australian Journal of Earth
Science, 40, 65-86.

Internet Sources:

1. Oxford English Dictionary. 2009. Definition of ‘nugget, n.’ [Online]. (Updated
September, 2009). Available at:

http://dictionary.oed.com/cgi/entry/00328266?query_type=word&queryword=
nugget&first=1&max_to_show=10&sort_type=alpha&result_place=1&search
_id=YHd6-ONmf3X-88&hilite=00328266 [ Accessed: 05/12/09].
2. US Department of Health and Human Services. 2009. FDA US Food and
Drug administration: Medical Devices: Equipment and Calibration. [Online].
(Updated: 18 June 2009). Available at:
http://www.fda.gov/ucm/groups/fdagov-
public/documents/image/ucm122472.gif [Accessed: 22/11/09].
3. LabCompliance. 2009. ISO/IEC 17025 Requirements for Analytical
Laboratories. [Online]. (Updated: February 2008). Available at:
www.labcompliance.com/tutorial/iso17025/ [Accessed: 18/11/09].
4. The Survey Statistician newsletter. 1996. Lepkowski and Bowles: Sampling
Error software for personal computers. [Online]. (Updated: December 1996).
Available at: http://www.hcp.med.harvard.edu/statistics/survey-
soft/docs/iass.html [Accessed: 9/12/09].
