
MineSight for Modelers - Geostatistics

SOFTWARE SOLUTIONS FROM MINTEC, INC.


© 2011, 2010, 2009 by Mintec, Inc. All rights reserved. No part of this document shall be reproduced, stored
in a retrieval system, or transmitted by any means, electronic, photocopying, recording, or otherwise, without
written permission from Mintec, Inc. All terms mentioned in this document that are known to be trademarks
or registered trademarks of their respective companies have been appropriately identified. MineSight® is a
registered trademark of Mintec, Inc.
MINESIGHT PRODUCT VIEW

Categories: Geomodeling | Design | Long Term Planning | Short Term Planning | Production | Utilities

MineSight 3D
MineSight DART
MineSight Compass
MineSight Grail
MineSight Torque
MineSight Interactive Planner
MineSight Visualizer
MineSight Data Analyst
MineSight Planning Database
MineSight DSS\Synch
MineSight Economic Planner
MineSight Tools
MineSight Strategic Planner
MineSight Haulage
MineSight Schedule Optimizer
MineSight Axis

Mintec, Inc. | Tucson, AZ USA | 520.795.3891 | www.minesight.com


MINESIGHT FUNCTION VIEW

GEOLOGY: Drillhole Management; Compositing; Interpreting; Data Analysis; Geologic Mapping
MODELING: Block/Stratigraphic/Surface Modeling; Solid & Surface Operations; Drillhole & Model Coding; Geotech Display; Ore Percentages; Interpolation; Validation & Analysis; Geologic Resources
DESIGN: Economic Pit Design; Geometric Pit Design; Dump/Spoil/Dyke Design; Ramp & Road Design; Underground Layout; Stope Design; Mineable Reserves
PLANNING/SCHEDULING: Long Term Planning; Medium Term Planning; Interactive Planning; Schedule Optimization; Equipment Scheduling; Centralized Planning Database
PRODUCTION: Blast & Ring Design; Surveying; Short Term Planning; Grade Control; Drill & Blast; Stockpile Handling; Reconciliation
COMMON: Data Reformatting; 3D CAD; Visualization/Presentation; Data Security & Data Hub; Plotting/Display; Scripting/Calculation; Customizable Reporting

Mintec, Inc. | Tucson, AZ USA | 520.795.3891 | marketing@mintec.com | www.minesight.com



MineSight for Modelers—Geostatistics


Training Workbook
Table of Contents
Geostatistics PowerPoint Handouts...................................... 9
Using this MineSight Workbook.......................................... 71
MineSight Overview.............................................................. 73
MineSight Data Analyst (MSDA)........................................... 77
MineSight Data Analyst (MSDA) Basic Concepts............... 79
Geostatistics for Modelers Exercises
Calculating Variograms and Modeling............................. 83
Declustering................................................................................ 91
Model Interpolation.................................................................... 93
Debugging Interpolation Runs.................................................. 97
Point Validation/Cross Validation of Estimation Methods and/or Search Parameters.............................. 101
Model Statistics/Geologic Reserves....................................... 105
Model Calculations................................................................... 107
Quantifying Uncertainty........................................................... 109
Change of Support................................................................... 113
Outlier Restricted Kriging........................................................ 121
Indicator Kriging to Define Geologic Boundary Above a Cutoff........................................ 123
Multiple Indicator Kriging (M.I.K.)........................................... 129
Other Non-Kriging Interpolation Methods.............................. 131
Practical Geostatistics for Earth Sciences
Introduction............................................................................... 133
Basic Statistics......................................................................... 137
Data Analysis and Display....................................................... 151
Analysis of Spatial Continuity................................................. 163
Random Processes and Variances......................................... 185
Declustering.............................................................................. 193
Ordinary Kriging....................................................................... 197
Other Kriging Techniques........................................................ 207
Multiple Indicator Kriging........................................................ 215
Change of Support................................................................... 225


Conditional Simulation............................................................ 231


References................................................................................ 243
Geostatistics Technical Papers...................................... 245
1. Arik, Abdullah. (1990) “Effects of Search Parameters on
Kriging Reserve Estimates,” International Journal of Mining
and Geological Engineering, (8) pp. 305-18.
2. Arik, Abdullah. “Outlier Restricted Kriging: A New
Kriging Algorithm for Handling of Outlier High Grade data
in Ore Reserve Estimation.” Paper presented at the 23rd
APCOM Proceeding.
3. Arik, Abdullah. “Nearest Neighbor Kriging: A Solution to
Control the Smoothing of Kriged Estimates.” Paper
presented at the SME Annual Meeting, Orlando, Florida,
March 1998.
4. Journel, A.G. and Arik, A. (1988) “Dealing with Outlier High
Grade Data in Precious Metals Deposits,” Computer
Applications in the Mining Industry, Balkema, Rotterdam,
pp. 161-71.
5. Arik, Abdullah. “Application of Cokriging to Integrate Drill-
hole and Blasthole data in Ore Reserve estimation.” Paper
presented at the 26th APCOM Proceedings, University Park,
PA., September 1996.
6. Arik, Abdullah. “Uncertainty, Confidence Intervals and
Resource Categorization: A Combined Variance
Approach.” Paper presented at the International
Symposium on Geostatistical Simulations in Mining, Perth,
Australia, October 1999.
7. Papanikolaou, Lefteris. “Modeling Zonal Anisotropy”.
Mintec’s Newsletter, June, 2001
8. Papanikolaou, Lefteris. “Ellipsoidal Searches in MED-
SYSTEM®.” Mintec’s Newsletter, December 1998.
9. Arik, Abdullah. (2001), “Performance Analysis of Different
Estimation Methods on Conditionally Simulated Deposits,”
SME Transactions, Vol. 310, pp. 36-40.
10. Arik, Abdullah. “Comparison of Resource Classification
Methodologies with a New Approach.” Paper presented at
the 30th APCOM Proceedings, Phoenix, AZ., February 2002
11. Arik, Abdullah. (2002), “Area Influence Kriging.
Mathematical Geology, Vol. 34, No. 7 pp. 783-796.
12. Landis, Yelena. “Relative Elevation in Interpolation”.
Mintec’s Newsletter, March 2003.


13. Papanikolaou, Lefteris. “Variograms in MineSight Data
Analyst (MSDA).” Mintec’s Newsletter, November 2006.
14. Foote, Jim. “MSDA Custom Reporting and 3-D Variogram
Modeling”. Mintec’s 22nd Annual Seminar, April 2005.
15. Foote, Jim. “MineSight Data Analyst (MSDA) Practical
Examples”. Mintec’s 23rd Annual Seminar, March 2006.
16. Arik, Abdullah. “Grade Variability Index for the Estimates.”
Paper presented at the 33rd APCOM International
Symposium, Santiago, Chile, April 2007.
17. Arik, Abdullah. “Conditional Simulation, Overview and
Applications”. Mintec’s 25th Annual Seminar, May 2008.


MineSight® for Modelers - Introduction to Geostatistics

Objective: To make you familiar with the basic concepts of statistics, and the geostatistical tools available to solve problems in geology and mining of an ore deposit

Date:
Instructor:

Topics

• Basic Statistics
• Data Analysis and Display
• Analysis of Spatial Continuity (variogram)
• Model interpolation with inverse distance and ordinary kriging
• Model statistics / resource reporting by geology and bench etc.

Classical Statistics

• Sample values are realizations of a random variable
• Samples are considered independent
• Relative positions of the samples are ignored
• Does not make use of the spatial correlation of samples

Geostatistics

• Sample values are realizations of random functions
• Samples are considered spatially correlated
• Value of a sample is a function of its position in the mineralization of the deposit
• Relative position of the samples is taken under consideration.

Basic Statistics - Definitions

• Statistics
• Geostatistics
• Universe
• Sampling Unit
• Support
• Population
• Random Variable


Statistics

• The body of principles and methods for dealing with numerical data
• Encompasses all operations from collection and analysis of the data to the interpretation of the results

Geostatistics

Throughout this training, geostatistics will refer only to the statistical methods and tools used in ore reserve analysis.

Universe

The source of all possible data (for example, an ore deposit can be defined as the universe; sometimes a universe may not have well-defined boundaries)

Sampling Unit

Part of the universe on which a measurement is made (can be core samples, channel samples, grab samples etc.). One must specify the sampling unit when making statements about a universe.

Support

• Characteristics of the sampling unit
• Refers to the size, shape and orientation of the sample (for example, drillhole core samples will not have the same support as blasthole samples)

Population

• Like universe, population refers to the total category under consideration
• It is possible to have different populations within the same universe (for example, population of drillhole grades versus population of blasthole grades; sampling unit and support must be specified)


Random Variable

A variable whose values are randomly generated according to a probabilistic mechanism (for example, the outcome of a coin toss, or the grade of a core sample in a diamond drill hole)

Frequency Distribution

Probability Density Function (pdf)
• Discrete:
1. f(xi) ≥ 0 for xi ∈ R (R is the domain)
2. Σ f(xi) = 1
• Continuous:
1. f(x) ≥ 0
2. ∫ f(x) dx = 1

Cumulative Density Function (cdf)
Proportion of the population below a certain value:
F(x) = P(X ≤ x)
1. 0 ≤ F(x) ≤ 1 for all x
2. F(x) is non-decreasing
3. F(−∞) = 0 and F(∞) = 1

Example
Assume the following population of measurements:
1, 7, 1, 3, 2, 3, 11, 1, 7, 5

[Figures: pdf and cdf of the example population]


Descriptive Measures

Measures of location:
• Mean
• Median
• Mode
• Min, Max
• Quartiles
• Percentiles

Mean

m = (1/n) Σ xi, i = 1,...,n

Arithmetic average of the data values

Mean

What is the arithmetic mean of the example population:
1, 7, 1, 3, 2, 3, 11, 1, 7, 5
m = (1 + 7 + 1 + 3 + 2 + 3 + 11 + 1 + 7 + 5)/10 = 41/10 = 4.1

Weighted Mean

Assume the weight (for example, the length of sample) is 1 for all values except 11, which has a weight of 0.5
Sum of weights = 9.5
m = {1·(1 + 7 + 1 + 3 + 2 + 3 + 1 + 7 + 5) + 0.5·11}/9.5 = 35.5/9.5 = 3.74

Mean

What is the mean if we remove the highest value?
m = (1 + 7 + 1 + 3 + 2 + 3 + 1 + 7 + 5)/9 = 30/9 = 3.33

Median

Midpoint of the data values if they are sorted in increasing order
M = x[(n+1)/2] if n is odd
M = (x[n/2] + x[n/2+1])/2 if n is even


Median

What is the median of the example population?
Sort data in increasing order:
1, 1, 1, 2, 3, 3, 5, 7, 7, 11
M = 3

Other

• Mode
• Minimum
• Maximum
• Quartiles
• Deciles
• Percentiles
• Quantiles

Mode

The value that occurs most frequently
In our example:
1, 1, 1, 2, 3, 3, 5, 7, 7, 11
Mode = 1

Quartiles

Split data in quarters
Q1 = 1st quartile
Q3 = 3rd quartile
In example:
1, 1, 1, 2, 3, 3, 5, 7, 7, 11
Q1 = 1
Q3 = 7

Deciles, Percentiles, Quantiles

1, 1, 1, 2, 3, 3, 5, 7, 7, 11
D1 = 1
D3 = 1
D9 = 7
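The location measures above are easy to check in code. Below is a minimal Python/NumPy sketch (ours, not MineSight code) for the example population; note that quantile conventions vary for small samples, and method='nearest' happens to match the simple pick-a-data-value convention used in these slides:

import numpy as np
from statistics import mode

data = np.array([1, 7, 1, 3, 2, 3, 11, 1, 7, 5])

print(data.mean())                        # arithmetic mean: 41/10 = 4.1

# weighted mean: weight 0.5 for the value 11, weight 1 elsewhere
weights = np.where(data == 11, 0.5, 1.0)
print(np.average(data, weights=weights))  # 35.5/9.5 = 3.74

print(np.median(data))                    # median of the sorted data: 3.0
print(mode(data.tolist()))                # most frequent value: 1

# quartiles; method='nearest' picks actual data values (Q1 = 1, Q3 = 7)
print(np.percentile(data, [25, 75], method='nearest'))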


Mode on the PDF / Median on the CDF

[Figures: the mode (also min and max) marked on the pdf; the median marked on the cdf]

Descriptive Measures

Measures of spread:
• Variance
• Standard Deviation
• Interquartile Range

Variance

S² = 1/(n−1) Σ (xi − m)², i = 1,...,n

• Sensitive to outlier high values
• Never negative
• The n−1 denominator ensures that the estimator S² is unbiased. The use of the term n−1 is called Bessel's correction.

Variance

Example:
1, 1, 1, 2, 3, 3, 5, 7, 7, 11
M = 4.1
S² = 1/9 {(1−4.1)² + (1−4.1)² + (1−4.1)² + (2−4.1)² + (3−4.1)² + (3−4.1)² + (5−4.1)² + (7−4.1)² + (7−4.1)² + (11−4.1)²}
   = 1/9 (9.61 + 9.61 + 9.61 + 4.41 + 1.21 + 1.21 + 0.81 + 8.41 + 8.41 + 47.61)
   = 100.9/9 = 11.21

Remove high value:
1, 1, 1, 2, 3, 3, 5, 7, 7
M = 3.33
S² = 1/8 {(1−3.33)² + (1−3.33)² + (1−3.33)² + (2−3.33)² + (3−3.33)² + (3−3.33)² + (5−3.33)² + (7−3.33)² + (7−3.33)²}
   = 1/8 (5.43 + 5.43 + 5.43 + 1.769 + 0.109 + 0.109 + 2.789 + 13.469 + 13.469)
   = 48/8 = 6


Standard Deviation

s = √s²

• Has the same units as the variable
• Never negative

Example:
S² = 11.21 → S = 3.348
S² = 6 → S = 2.449

Interquartile Range

IQR = Q3 − Q1

Not used in mining very often

Descriptive Measures

Measures of shape:
• Skewness
• Peakedness (kurtosis)
• Coefficient of Variation

Skewness

Skewness = [1/n Σ (xi − m)³] / s³

• Third moment about the mean divided by the cube of the std. dev.
• Positive - tail to the right
• Negative - tail to the left

[Figure: positively skewed distribution]


Skewness

Example:
1, 1, 1, 2, 3, 3, 5, 7, 7, 11
M = 4.1
Sk = [1/10 {(1−4.1)³ + (1−4.1)³ + (1−4.1)³ + (2−4.1)³ + (3−4.1)³ + (3−4.1)³ + (5−4.1)³ + (7−4.1)³ + (7−4.1)³ + (11−4.1)³}] / 3.348³
   = {1/10 (−29.79 − 29.79 − 29.79 − 9.26 − 1.33 − 1.33 + 0.73 + 24.39 + 24.39 + 328.51)} / 37.52
   = 27.67/37.52 = 0.738

Remove high value:
1, 1, 1, 2, 3, 3, 5, 7, 7
M = 3.3
Sk = [1/9 {(1−3.3)³ + (1−3.3)³ + (1−3.3)³ + (2−3.3)³ + (3−3.3)³ + (3−3.3)³ + (5−3.3)³ + (7−3.3)³ + (7−3.3)³}] / 2.449³
   = {1/9 (−12.17 − 12.17 − 12.17 − 2.20 − 0.03 − 0.03 + 4.91 + 50.65 + 50.65)} / 14.69
   = 7.49/14.69 ≈ 0.51

Peakedness

Peakedness = [1/n Σ (xi − m)⁴] / s⁴

• Fourth moment about the mean divided by the fourth power of the std. dev.
• Describes the degree to which the curve tends to be pointed or peaked
• Higher values when the curve is peaked
• Usefulness is limited

Coefficient of Variation

CV = s/m

• No units
• Can be used to compare relative dispersion of values among different distributions
• CV > 1 indicates high variability

In our example:
CV = 3.348/4.1 = 0.817
Remove high value:
CV = 2.449/3.33 = 0.735

Normal Distribution

f(x) = 1/(s√(2π)) exp[−½ ((x − m)/s)²]

• symmetric, bell-shaped
• 68% of the values are within one std. dev.
• 95% of the values are within two std. dev.
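A short Python sketch (ours) reproduces the spread and shape measures above, using the slides' conventions: n−1 in the variance, and a 1/n third moment over s³ for skewness. With the unrounded mean 3.333 the skewness of the reduced data comes out near 0.47 rather than the ≈0.51 obtained above with the rounded mean:

import numpy as np

def spread_and_shape(x):
    x = np.asarray(x, dtype=float)
    n, m = len(x), x.mean()
    s2 = x.var(ddof=1)                        # 1/(n-1), Bessel's correction
    s = np.sqrt(s2)
    skew = np.sum((x - m) ** 3) / n / s ** 3  # slide convention
    peak = np.sum((x - m) ** 4) / n / s ** 4
    return m, s2, s, skew, peak, s / m        # last entry is CV

print(spread_and_shape([1, 7, 1, 3, 2, 3, 11, 1, 7, 5]))
# mean 4.1, variance 11.21, std 3.348, skewness ~0.74, CV ~0.82
print(spread_and_shape([1, 7, 1, 3, 2, 3, 1, 7, 5]))
# mean 3.33, variance 6.0, std 2.449, skewness ~0.47, CV ~0.73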


Normal Distribution Curve

[Figure: the normal distribution curve]

Std. Normal Distribution

• m = 0 and s = 1
• standardize any variable using: z = (x − m)/s

Normal Distribution Tables

• The cumulative distribution function F(x) is not easily computed for the normal distribution.
• Extensive tables have been prepared to simplify calculation.
• Most statistics books include tables for the std. normal distribution.

Example of cdf (normal)

Find the proportion of sample values above a 0.5 cutoff in a normal population that has m = 0.3 and s = 0.2
Solution:
• First, transform the cutoff, x0, to unit normal:
z = (x0 − m)/s = (0.5 − 0.3)/0.2 = 1
• Next, find the value of F(z) for z = 1. The value of F(1) = 0.8413 from the table.
• Calculate the proportion of sample values above the 0.5 cutoff, P(x > 0.5), as follows:
P(x > 0.5) = 1 − P(x ≤ 0.5) = 1 − F(1) = 1 − 0.8413 = 0.1587 ≈ 0.16
• Therefore, 16% of the samples in the population are > 0.5
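For reference, the same cutoff calculation can be done with scipy.stats instead of a printed table (a sketch; any standard normal table gives the same numbers):

from scipy.stats import norm

m, s, cutoff = 0.3, 0.2, 0.5
z = (cutoff - m) / s                     # standardize the cutoff: z = 1.0
print(1 - norm.cdf(z))                   # 1 - F(1) = 0.1587, ~16% above cutoff
print(norm.sf(cutoff, loc=m, scale=s))   # same result without standardizing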

Lognormal Distribution

Logarithm of a random variable has a normal distribution:
f(x) = 1/(xβ√(2π)) e^(−u) for x > 0, β > 0
where
u = (ln x − α)²/(2β²)
α = mean of logarithms
β² = variance of logarithms

Conversion Formulas

Conversion formulas between the normal and lognormal distributions:
Lognormal to normal:
• µ = exp(α + β²/2)
• σ² = µ² [exp(β²) − 1]
Normal to lognormal:
• α = ln µ − β²/2
• β² = ln[1 + (σ²/µ²)]


Lognormal Distribution Curve

[Figure: lognormal distribution curve]

Three-Parameter LN Distribution

Logarithm of a random variable plus a constant, ln(x + c), is normally distributed.

Constant c can be estimated by:
c = (M² − q1·q2) / (q1 + q2 − 2M)
where M is the median and q1, q2 are two symmetric quantiles.

Bivariate Distribution

• Joint distribution of outcomes from two random variables X and Y:
F(x,y) = Prob {X ≤ x and Y ≤ y}
• In practice, it is estimated by the proportion of pairs of data values jointly below the respective threshold values x, y.

Statistical Analysis

• To organize, understand, and/or describe data
• To check for errors
• To condense information
• To uniformly exchange information

Error Checking

• Avoid zero for defining missing values
• Check for typographical errors
• Sort data; examine extreme values
• Plot sections and plan maps for coordinate errors
• Locate extreme values on a map: Isolated? Trend?

Data Analysis and Display Tools

• Frequency Distributions
• Histograms
• Cumulative Frequency Tables
• Probability plots
• Box Plots
• Scatter Plots
• Q-Q plots


Data Analysis and Display Tools

• Correlation
• Correlation Coefficient
• Linear Regression
• Data Location Maps
• Contour Maps
• Symbol Maps
• Moving Window Statistics
• Proportional Effect

Histograms

• Visual picture of data and how they are distributed
• Bimodal distributions show up easily
• Outlier high grades
• Variability

Histogram Plot

[Figure: example histogram]

Histograms with skewed data

• Data values may not give a single informative histogram
• One histogram may show the entire spread of data, but another one may be required to show details of small values.

[Figures: histograms of skewed data at two scales; cumulative frequency table example]


Probability Plots

• Shows if a distribution is normal or lognormal
• Presence of multiple populations
• Proportion of outlier high grades

[Figure: probability plot]

Box Plots

Graphically depicting groups of numerical data through their five-number summaries:
- the smallest observation
- lower quartile (Q1)
- median
- upper quartile (Q3)
- largest observation

It also indicates which observations, if any, might be considered outliers.

Scatter Plots

• Simply an x-y graph of the data
• It shows how well two variables are related
• Unusual data pairs show up
• For skewed distributions, two scatter plots may be required to show both the details near the origin and the overall relationship.

[Figure: scatter plot]


Linear Regression

• y = ax + b
a = slope, b = constant of the line
• a = r (σy/σx)
• b = my − a·mx

Different ranges of data may be described adequately by different regressions.

[Figure: scatter plot for Cu<5, Mo<0.5; ρ = 0.8215; y = 0.109x + 0.0029]

Linear Regression

[Figure: scatter plot for Cu<0.2, Mo<0.2; ρ = 0.8361; y = 0.102x + 0.0033]

Conditional Expectation Line

The Conditional Expectation line is a moving-window average. The value of the CE curve is the average Y-value of all "nearby" points, which depends on the CE Window setting. It is expressed in percent-of-domain. E.g., if the CE window is 10, then the width of the moving window is 10% of the entire domain of the data (domain means xmax − xmin).

Q-Q Plots

• Quantile-Quantile plots
• A straight line indicates the two distributions have the same shape
• A 45-degree line indicates that mean and variance are the same.

[Figure: Q-Q plot]


Covariance

Covxy = 1/n Σ (xi − mx)(yi − my), i = 1,...,n
where
mx = mean of x values and
my = mean of y values

[Figures: scatter plots illustrating high positive covariance, covariance near zero, and large negative covariance]

Covariance

It is affected by the magnitude of the data values: multiply the x and y values by C, and the covariance increases by C².

[Figures: the same scatter pattern with covariance 2097.5 and, after rescaling, 20.975]


Correlation

Three scenarios between two variables:
• Positively correlated
• Negatively correlated
• Uncorrelated

Correlation Coefficient

r = Covxy / (σx·σy)

• r = 1: straight line, positive slope
• r = −1: straight line, negative slope
• r = 0: no correlation
• May be affected by a few outliers
• It removes the dependence on the magnitude of the data values.
• It measures linear dependence.

[Figures: scatter plots with ρ = 0.99, ρ = −0.03, ρ = −0.08, ρ = −0.97]


Data Location Map / Contour Maps

[Figures: data location map; contour map]

Symbol Maps

• Each grid location is represented by a symbol that denotes the class to which the value belongs
• Designed for the line printer
• Usually not to scale

Moving Window Statistics

• Divide the area into several local areas of the same size
• Calculate statistics for each smaller area
• Useful to investigate anomalies in mean and variance

Proportional Effect

• Mean and variability are both constant
• Mean is constant, variability changes
• Mean changes, variability is constant
• Both mean and variability change

[Figure: proportional effect plot, used to predict rescaling of relative variance]


Spatial Continuity

H-scatter plots

Plot the value at each sample location versus the value at a nearby location.

A series of h-scatter plots for several separation distances can show how the spatial continuity decays with increasing distance.

You can further summarize spatial continuity by calculating some index of the strength of the relationship seen in each h-scatter plot.

Moment of Inertia

For a scatter plot that is roughly symmetric about the line x = y, the moment of inertia about this line can serve as a useful index of the strength of the relationship.

γ = moment of inertia about x = y
  = average squared distance from x = y
  = 1/n Σ [(xi − yi)/√2]²
  = 1/(2n) Σ (xi − yi)²

[Figure: point (X,Y) at perpendicular distance (X−Y)/√2 from the line x = y]


Variogram

• Measures spatial correlation between samples
• Function of distance; a vector - depends on distance and direction
• Semi-variogram will be referred to as variogram for convenience

γ(h) = 1/(2n) Σ [Z(xi) − Z(xi+h)]²

Variogram Parameters / Elements

• Range
• Sill
• Nugget Effect

[Figure: variogram elements - range, sill, and nugget]

Data for Computation

Five samples spaced 15 units apart along a line: x1 = .14, x2 = .28, x3 = .19, x4 = .10, x5 = .09

Computation 1

For the first step (h=15), there are 4 pairs:
1. x1 and x2, or .14 and .28
2. x2 and x3, or .28 and .19
3. x3 and x4, or .19 and .10
4. x4 and x5, or .10 and .09
Therefore, for h=15, we get
γ(15) = 1/(2·4) [(x1−x2)² + (x2−x3)² + (x3−x4)² + (x4−x5)²]
      = 1/8 [(.14−.28)² + (.28−.19)² + (.19−.10)² + (.10−.09)²]
      = 0.125 [(−.14)² + (.09)² + (.09)² + (.01)²]
      = 0.125 (.0196 + .0081 + .0081 + .0001)
      = 0.125 (.0359) = 0.00449


Computation 2

For the second step (h=30), there are 3 pairs:
1. x1 and x3, or .14 and .19
2. x2 and x4, or .28 and .10
3. x3 and x5, or .19 and .09
Therefore, for h=30, we get
γ(30) = 1/(2·3) [(x1−x3)² + (x2−x4)² + (x3−x5)²]
      = 1/6 [(.14−.19)² + (.28−.10)² + (.19−.09)²]
      = 0.16667 [(−.05)² + (.18)² + (.10)²]
      = 0.16667 (.0025 + .0324 + .0100)
      = 0.16667 (.0449) = 0.00748

Computation 3

For the third step (h=45), there are 2 pairs:
1. x1 and x4, or .14 and .10
2. x2 and x5, or .28 and .09
Therefore, for h=45, we get
γ(45) = 1/(2·2) [(x1−x4)² + (x2−x5)²]
      = 1/4 [(.14−.10)² + (.28−.09)²]
      = 0.25 [(.04)² + (.19)²]
      = 0.25 (.0016 + .0361)
      = 0.25 (.0377) = 0.00942

Computation 4

For the fourth step (h=60), there is only one pair: x1 and x5. The values for this pair are .14 and .09, respectively. Therefore, for h=60, we get
γ(60) = 1/(2·1) (x1 − x5)²
      = ½ (.14−.09)²
      = 0.5 (.05)²
      = 0.5 (.0025) = 0.00125
If we take another step (h=75), we see that there are no more pairs. Therefore, the variogram calculation stops at h=60.

MSDA Variogram Parameters Panel

[Figure: MSDA variogram parameters panel]
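The four hand computations can be reproduced with a few lines of Python (a sketch for regularly spaced samples on a line, not MSDA code):

import numpy as np

# the five samples from the worked example, 15 units apart on a line
z = np.array([0.14, 0.28, 0.19, 0.10, 0.09])
spacing = 15.0

# experimental (semi-)variogram: gamma(h) = sum of squared differences / 2n
for step in range(1, len(z)):
    diffs = z[step:] - z[:-step]       # all pairs separated by 'step' lags
    gamma = np.sum(diffs**2) / (2 * len(diffs))
    print(f"h = {step * spacing:>4.0f}  pairs = {len(diffs)}  gamma = {gamma:.5f}")
# matches the hand computations above (within rounding):
# 0.00449, 0.00748, 0.00943, 0.00125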

Class Size

Possible options:
• Lag distance = 50:
0-50, 50-100, 100-150 etc.
• Lag = 50, tolerance = 25:
0-75, 75-125, 125-175 etc.
• Lag = 50, tolerance = 10:
0-60, 60-110, 110-160 etc.
• Lag = 50, strict tolerance = 10:
0-10, 40-60, 90-110 etc.

Variogram Lag Statistics

[Figure: variogram lag statistics table]


Lag, Window and Band Width - Multiple Directions / Windows and Band Widths

[Figures: lag, window and band width settings for multiple directions]

Rotation Systems

GSLIB = (ZXY, LRR)
GSLIB-MS (the one that m624v1 uses) = (ZXY, LRL)
SAGE 2001 = (ZYX, RRR)
MEDS:
The first two rotations are the same as GSLIB-MS.
The last rotation does not correspond to a pure rotation about an axis.
However, there is a relationship between MEDS and GSLIB-MS, as follows:
tan(r3_MEDS) = tan(r3_GSLIB-MS) / cos(r2_GSLIB-MS)
SAGE MEDS output equals M624V1 GSLIB or MSDA GSLIB-MS.
SAGE GSLIB output equals MSDA GSLIB.

Vertical Direction Option

Minimum absolute angle to consider whether a pair is vertical.
For example, if this angle is set to 10, then all holes with -80 to -90 dip are treated as vertical.

Drillhole Distance Analysis

[Figures: drillhole distance analysis]

Existence of Drift

A general decrease or increase in data values with distance for the direction specified.
It is the average difference between the samples separated by a distance h, and can be positive or negative.
The existence of high drift values with the same sign in each lag can be an indication of a drift in the direction the variogram is computed.

Variogram Models

• Spherical
• Linear
• Exponential
• Gaussian
• Hole-Effect

Variogram Models

[Figure: spherical, exponential, Gaussian, and linear model curves]

Sample Variogram Plot

[Figure: sample variogram plot]

Fitting a Theoretical Model

• Draw the variance as the sill (c + c0).
• Project the first few points to the y-axis. This is an estimate of the nugget (c0).
• Project the same line until it intercepts the sill. This distance is two-thirds of the range for the spherical model.
• Using the estimates of range, sill, nugget and the equation of the mathematical model under consideration, calculate a few points and see if the curve fits the sample variogram (see the sketch below).
• If necessary, modify the parameters and repeat the previous step to obtain a better fit.
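As a concrete aid for the fitting steps above, here is a sketch of the spherical model in Python (the standard formula; the nugget, sill and range values below are illustrative, not from a real dataset):

import numpy as np

def spherical(h, nugget, sill, a):
    """Spherical variogram model: rises from the nugget and levels off
    at the sill once the separation distance h reaches the range a."""
    h = np.asarray(h, dtype=float)
    c = sill - nugget                                   # structured component
    g = nugget + c * (1.5 * h / a - 0.5 * (h / a) ** 3)
    return np.where(h < a, g, sill)

# evaluate a few points and compare them against the sample variogram
for h in [0, 10, 20, 30, 40, 60]:
    print(h, spherical(h, nugget=0.001, sill=0.009, a=40.0))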


Anisotropy

• Structural character of the mineralization may be different for various directions.
• The mineralization may be more continuous along one direction than the other.
• Can be determined by comparing the variograms calculated in different directions within the deposit.

Types of Anisotropy

• Geometric: same sill and nugget, different ranges
• Zonal: same nugget and range, different sills

Modeling Anisotropy

[Figures: geometric and zonal anisotropy variograms]

Modeling Geometric Anisotropy: a recipe

• Calculate variograms in different directions
• Keeping nugget and sill the same, fit one-dimensional models to the sample variograms in all directions
• Make a rose diagram of ranges and find the direction of the longest range
• If the diagram looks like a circle, there is no anisotropy. If the diagram looks like an ellipse, there is anisotropy; use the ellipse pattern in search parameters.

Modeling Zonal Anisotropy: a recipe

• Calculate variograms in different directions
• Keeping the nugget the same, fit one-dimensional models to the sample variograms in different directions, and save these as overlays
• Determine the direction of major, minor, and vertical axes.
• Use nested structures, trying to match visually the initial fitted models while keeping the sills the same (see the June 2001 newsletter).


Model Variograms in MSDA

[Figures: model one direction at a time; model all directions at once; the Variogram 3D Manager]

Rose Diagram

[Figure: rose diagram at 0°, 45°, 90° and 135°; the length of each axis corresponds to the variogram range in that direction]

Variogram Contours / Nested Structures

[Figures: variogram contours; nested structures]


Variogram Types

• Normal
• Relative
• Logarithmic
• Covariance Function
• Correlograms
• Indicator Variograms
• Cross Variograms

Relative Variogram

γR(h) = γ(h) / [m(h) + c]²
c is a constant parameter used in the case of a three-parameter lognormal distribution.

• Pairwise Relative Variogram:
γPR(h) = 1/(2n) Σ [(vi − vj)² / ((vi + vj)/2)²]
vi and vj are the values of a pair of samples at locations i and j, respectively.

Logarithmic Variogram

• Variogram using the logarithms of the data instead of the raw data:
y = ln x, or
y = ln(x + c) for the 3-parameter lognormal
• Reduces or eliminates the impact of extreme data values on the variogram structure

Transformation from Logs

To transform log parameters back to normal values:
1. Ranges stay the same.
2. Estimate the logarithmic mean (α) and variance (β²). Use the sill of the logarithmic variogram as the estimate of β².
3. Calculate the mean (µ) and the variance (σ²) of the normal data:
µ = exp(α + β²/2)
σ² = µ² [exp(β²) − 1]
4. Set the sill of the normal variogram = the variance (σ²).
5. Compute c (sill − nugget) and c0 (nugget) of the normal variogram:
c = µ² [exp(clog) − 1]
c0 = sill − c

Covariance Function Variograms

C(h) = 1/N Σ [vi·vj] − m(−h)·m(+h)

• v1,...,vn are the data values
• m(−h) is the mean of all the data values whose locations are −h away from some other data location.
• m(+h) is the mean of all the data values whose locations are +h away from some other data location.
γ(h) = C(0) − C(h)

Correlogram

ρ(h) = C(h) / (σ(−h)·σ(+h))
σ(−h) is the standard deviation of all the data values whose locations are −h away from some other data location:
σ²(−h) = 1/N Σ vi² − m²(−h)
σ(+h) is defined likewise for the data values whose locations are +h away from some other data location:
σ²(+h) = 1/N Σ vj² − m²(+h)


Indicator Variogram

i(x; zc) = 1 if z(x) < zc, 0 otherwise
where:
x is location,
zc is a specified cutoff value,
z(x) is the value at location x.

Cross Variogram

γCR(h) = 1/(2n) Σ [u(xi) − u(xi+h)]·[v(xi) − v(xi+h)]

• Used to describe the cross-continuity between two variables
• Necessary for co-kriging and probability kriging

Cross Validation

• Predicts a known data point using an interpolation plan
• Only the surrounding data points are used to estimate this point, while leaving the data point itself out.
• Other names: point validation, jack-knifing
• Used to check how well the estimation procedure can be expected to perform.

Cross Validation Report

Variable: CU

              ACTUAL    KRIGING    DIFF
Mean       =  0.6991    0.7037   -0.0045
Std. Dev.  =  0.5043    0.3870    0.2869
Minimum    =  0.0000    0.0200   -0.9400
Maximum    =  3.7000    2.1000    2.2100
Skewness   =  1.0641    0.5634    1.3559
Peakedness =  2.0532   -0.0214    7.0010

Ave. kriging variance = 0.3890
Weighted square error = 0.0815


Cross Validation

What to look for:
• The least amount of average estimation error
• Either the variance of the errors or the weighted square error (WSE) is closest to the average kriging variance.
Check:
• Histogram of errors
• Scatter plots of actual versus estimate

The weighted square error (WSE) is given by the following equation:
WSE = Σ [(1/σi²)(ei)²] / Σ (1/σi²)
Weighting the error (e) by the inverse of the kriging variance gives more weight to those points that should be closely estimated.

• It may suggest improvements
• It compares; it does not determine parameters
• It reveals weaknesses/shortcomings

Remember:
• All conclusions are based on observations of errors at locations where we do not need estimates.
• We remove values that, after all, we are going to use.

The Necessity of Modeling

Suppose we have the data set below. It provides virtually no information about the entire profile.

[Figure: a few sparse samples along a profile]

Deterministic Models

Depend on:
• Context of data
• Outside information (not contained in data)

Probabilistic Models

The variables of interest in earth science data are typically the end result of a vast number of processes whose complex interactions cannot be described quantitatively.
Probabilistic random function models recognize this uncertainty and provide tools for estimating values at unknown locations once some assumptions about the statistical characteristics of the phenomenon are made.

Probabilistic Models

In a probabilistic model, the available sample data are viewed as the result of a random process.
Data are not generated by a random process; rather, their complexity appears as random behavior.

Random Variables

A random variable is a variable whose values are randomly generated according to some probabilistic mechanism.
The result of throwing a die is a random variable. There are 6 equally probable values of this random variable: 1, 2, 3, 4, 5, 6.

Functions of Random Variables

The set of outcomes and their corresponding probabilities is sometimes referred to as the "probability distribution" of a random variable.
To denote the values of a probability distribution, symbols such as f(x), P(x) etc. are used.
For example: f(x) = 1/6 for x = 1,2,3,4,5,6 gives the probability distribution for the number of points we roll with a fair die.

Conditions of Probability

If the probabilities are represented by pi for the n possible events, then
1. 0 ≤ pi ≤ 1
2. Σ pi = 1


Parameters of a Random Variable

These "probability distributions" have parameters that can be summarized.
Example: Min, Max etc.
It's often helpful to visualize probability distributions by means of graphs, such as a histogram.

The complete distribution cannot be determined from the knowledge of only a few parameters.
Two random variables may have the same mean and variance, but their distributions may be different.

The parameters cannot be calculated by observing the outcomes of a random variable. From a sequence of observed outcomes all we can calculate is sample statistics based on that set of data.
Different sets of data will produce different statistics. As the number of outcomes increases, the sample statistics become more similar to their model parameters. In practice, we assume that the parameters of our random variable are the same as the sample statistics.

The two parameters most commonly used in probabilistic approaches to estimation are the mean or expected value of the random variable and its variance.

Expected Value

Expected value of a random variable is its mean or average outcome:
µ = E(x)
E(x) refers to expectation:
E(x) = ∫ x f(x) dx (integrated from −∞ to ∞)
where f(x) is the probability density function of the random variable x.

Variance of a Random Variable

The variance of a random variable is the expected squared difference from the mean of the random variable:
σ² = E[(x − µ)²] = ∫ (x − µ)² f(x) dx
Std. dev. is σ.


Expected Value

Example:
Define the random variable L = the outcome of throwing two dice and taking the larger of the two values.
What is the expected value of L?
E(L) = 1/36(1) + 3/36(2) + 5/36(3) + 7/36(4) + 9/36(5) + 11/36(6) = 4.47

Joint Random Variables

Random variables may also be generated in pairs according to some probabilistic mechanism; the outcome of one of the variables may influence the outcome of the other.
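The dice expectation is easy to verify by enumerating all 36 outcomes (plain Python, exact fractions):

from itertools import product
from fractions import Fraction

# all 36 equally likely outcomes of two dice, taking the larger value;
# the counts per value are 1, 3, 5, 7, 9, 11 as on the slide
outcomes = [max(a, b) for a, b in product(range(1, 7), repeat=2)]

e_l = sum(Fraction(v, 36) for v in outcomes)
print(e_l, float(e_l))    # 161/36 ~ 4.47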

Covariance

The dependence between two random variables is described by covariance:
Cov(x1, x2) = E{[x1 − E(x1)][x2 − E(x2)]}
            = E(x1·x2) − [E(x1)][E(x2)]

Independence

Random variables are considered independent if the joint probability density function satisfies:
p(x1, x2, ..., xn) = p(x1) p(x2) ... p(xn)
i.e., the probability of the events happening together is the product of each event's probability.

Independence

Example:
What is the probability of getting 6,6 when we roll two fair dice?
p(x1) = 1/6 and p(x2) = 1/6
p(x1, x2) = p(x1) · p(x2) = 1/36

Conditional Probability

Probability of A given some sample space B is the conditional probability of A relative to B, and is denoted by P(A|B).
P(A|B) = P(A,B) / P(B)
P(B|A) = P(A,B) / P(A)


Conditional Probability

Example: In a factory, there are 60% male and 40% female workers. 20% of the females commute; the rest stay in the factory housing. What percentage of the total are female residents?
P(M) = 0.6 and P(F) = 0.4
P(R|F) = 1 − 0.2 = 0.8
P(F,R) = P(F) · P(R|F) = 0.4 · 0.8 = 0.32
So, 32% of the total are female residents.

Expectation and Variance

Properties:
• If C is a constant, then E(Cx) = C E(x)
• If x1, x2, ..., xn have finite expectation, then E(x1 + x2 + ... + xn) = E(x1) + E(x2) + ... + E(xn)
• If C is a constant, then Var(Cx) = C² Var(x)
• If x1, x2, ..., xn are independent, then Var(x1 + x2 + ... + xn) = Var(x1) + Var(x2) + ... + Var(xn)
• Var(x+y) = Var(x) + Var(y) + 2 Cov(x,y)

Central Limit Theorem

Sample means of a group of independent variables tend toward a normal distribution regardless of the distribution of the samples.
Factors affecting the dispersion of the distribution of sample means:
1. The dispersion of the parent population.
2. The size of the sample.
Standard Error of the Mean = σx̄ = σ/√n

Confidence Limits

Can be expressed by applying error bounds around the estimate.
For example, at the 95% confidence level:
Lower limit = m − 2(s/√n)
Upper limit = m + 2(s/√n)

Random Functions

A random function (R.F.) is a set of random variables that have some spatial locations and whose dependence on each other is specified by some probabilistic mechanism.

Parameters commonly used to summarize the behavior of the random function:
• Expected value
• Variance
• Covariance
• Correlogram
• Variogram


Reality vs Model

Reality:
• sample values
• summary statistics
Model:
• possible outcomes with corresponding probabilities of occurrence
• parameters
It is important to recognize the distinction between a model and the reality.

Linear Estimators

• All estimation methods involve weighted linear combinations:
estimate = z* = Σ wi z(xi), i = 1,...,n
The questions:
• What are the weights, wi?
• What are the values, z(xi)?

Desirable Properties of an Estimator

• Average error = 0 (unbiased):
E(Z − Z*) = 0
where Z* is the estimate and Z is the true value of the random variable
• Error variance (spread of errors) is small:
Var(Z − Z*) = E(Z − Z*)² = small
• Robust
Question:
• How to calculate the weights so that they satisfy the required properties?

Random Process Assumptions - Stationarity

The independence of the univariate and bivariate probability laws from the location x is referred to as stationarity.
It addresses the question "Are the nearby samples relevant?"

Intrinsic Hypothesis

The intrinsic hypothesis of order zero:
E[Z(x)] = m, m finite and independent of x
E[Z(x+h) − Z(x)]² = 2γ(h), finite and independent of x (variogram function)
We assume no drift, and the existence and stationarity of the variogram only.
If the condition of no drift in a deposit cannot be satisfied, the intrinsic hypothesis of order one is invoked.

Notes on Stationarity

• The stationarity assumption does not apply to the entire data set, but only to the search neighborhood.
• Local stationarity is assumed by all estimation methods. It's often a viable assumption even in data sets for which global stationarity is clearly inappropriate.
• If there is sufficient information to show that the stationarity assumption is not valid, then subdivide the data into smaller zones (populations) within which the stationarity assumption is more appropriate.


Ensuring Unbiasedness

• Sum of weights: Σ wi = 1
Two limitations:
• The average error is not guaranteed to be zero, only its expected value
• The result is valid only if the linear combination belongs to the same statistical population

Estimation Methods

Traditional:
• Polygonal
• Triangulation
• Inverse distance
Geostatistical:
• Kriging

Polygonal

Assigns all weight to the nearest sample.
Advantages:
• Easy to understand
• Easy to calculate manually
• Fast
• Declustered global histogram
Disadvantages:
• Discontinuous local estimates
• Edge effect
• No anisotropy
• No error estimation

Triangulation

The weight at each triangle vertex is proportional to the area of the opposite sub-triangle.
Advantages:
• Easy to understand and calculate manually
• Fast
Disadvantages:
• Not a unique solution
• Only three samples receive weights
• Extrapolation?
• 3D?
• No anisotropy
• No error control


Inverse Distance

Each sample weight is inversely proportional to the distance between the sample and the point being estimated:

z* = [Σ (1/di^p) z(xi)] / Σ (1/di^p), i = 1,...,n

where z* is the estimate of the grade of a block or a point, z(xi) refers to sample grade, p is an arbitrary exponent, and n is the number of samples.

Example (p = 2): two samples of grade 1.1 and 0.5, each 100 units from the point being estimated:
Est = [(1/100²)·1.1 + (1/100²)·0.5] / (1/100² + 1/100²) = 0.8
Weights: 1.1 → 0.5; 0.5 → 0.5
Only based on distances.

Example: the 0.5 sample moved to 50 units away:
Est = [(1/100²)·1.1 + (1/50²)·0.5] / (1/100² + 1/50²) = 0.62
Weights: 0.5 → 0.8; 1.1 → 0.2
Only based on distances.

Example: the same two distances as the first example, but with a different sample arrangement:
Est = [(1/100²)·1.1 + (1/100²)·0.5] / (1/100² + 1/100²) = 0.8
Weights: 1.1 → 0.5; 0.5 → 0.5
Only based on distances; relative location is irrelevant.
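The examples above follow from one small function (a sketch of the IDW formula, not the MineSight implementation):

import numpy as np

def idw(grades, distances, p=2.0):
    """Inverse-distance estimate: z* = sum(z_i/d_i^p) / sum(1/d_i^p)."""
    grades = np.asarray(grades, dtype=float)
    w = 1.0 / np.asarray(distances, dtype=float) ** p
    return np.sum(w * grades) / np.sum(w)

print(idw([1.1, 0.5], [100, 100]))   # 0.80 (equal distances -> equal weights)
print(idw([1.1, 0.5], [100, 50]))    # 0.62 (closer sample gets weight 0.8)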

Inverse Distance

If p tends to zero => local sample mean
If p tends to ∞ => nearest neighbor method (polygonal)
Traditionally, p = 2

Advantages:
• Easy to understand
• Easy to implement
• Flexible in adapting weights to different estimation problems
• Can be customized

Inverse Distance

Disadvantages:
• Susceptible to data clustering
• How to choose p?
• Anisotropy possible only with adjusted distances
• No error control

Ordinary Kriging

Definition:
Ordinary Kriging (OK) is an estimator designed primarily for the estimation of block grades as a linear combination of available data in or near the block, such that the estimate is unbiased and has minimum variance.

Ordinary Kriging

B.L.U.E. - best linear unbiased estimator:
Linear because its estimates are weighted linear combinations of the available data.
Unbiased since the sum of the weights adds up to 1.
Best because it aims at minimizing the variance of errors.

Kriging Estimator

z* = Σ wi z(xi), i = 1,...,n
where z* is the estimate of the grade of a block or a point, z(xi) refers to sample grade, wi is the corresponding weight assigned to z(xi), and n is the number of samples.

Kriging Estimator

Desirable properties:
• Minimize σ² = F(w1, w2, w3, ..., wn)
• r = average error = 0 (unbiased)
• Σ wi = 1

Error Variance

Using the R.F. model, you can express the error variance as a function of R.F. parameters:
σ²R = σ²z + ΣiΣj (wi·wj·Ci,j) − 2 Σi wi·Ci,0
where
σ²z is the sample variance,
Ci,j is the covariance between samples,
Ci,0 is the covariance between the samples and the location of estimation.
See Isaaks and Srivastava, pp. 281-284.

Error Variance

σ²R = σ²z + ΣiΣj (wi·wj·Ci,j) − 2 Σi wi·Ci,0
• Error variance increases as the variance of the data increases
• Error variance increases as the data become more redundant
• Error variance decreases as the data are closer to the location of estimation

Ordinary Kriging

• Minimize the error variance
σ²R = σ²z + ΣiΣj (wi·wj·Ci,j) − 2 Σi wi·Ci,0
subject to Σi wi = 1
• Use the Lagrange method (Isaaks and Srivastava, pp. 284-285).
Result:
Σj (wj·Ci,j) + µ = Ci,0 for each i
Σi wi = 1

Kriging System (point)

The previous equations in matrix form: C·w = D

• Matrix C consists of the covariance values Cij between the random variables Vi and Vj at the sample locations.
• Vector D consists of the covariance values Ci0 between the random variables Vi at the sample locations and the random variable V0 at the location where an estimate is needed.
• Vector w consists of the kriging weights and the Lagrange multiplier.

Kriging

General point kriging system:

C11 C12 1     W1     C1B
C21 C22 1  x  W2  =  C2B
 1   1  0     µ       1

Variogram: spherical, C0 = 0.2, C1 = 0.8
Ranges: RY = 500 m, RX = 150 m, RZ = 150 m, rotation R1/R2/R3 = 90/0/0 (the Y axis is rotated 90° to the east)

With these parameters the system becomes:

1.0 0.1 1     W1     0.56
0.1 1.0 1  x  W2  =  0.12
 1   1  0     µ      1

Grades and resulting weights:
1.1 → 0.744
0.5 → 0.256
Est = 0.95

Relative location is important: grade 1.1 gets most of the weight because of RY = 500 m at the 90° rotation.
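The numeric system above can be solved with any linear algebra routine. A sketch using NumPy, with the covariance values taken from the slide:

import numpy as np

# sample-to-sample covariances (0.1 off-diagonal, 1.0 on the diagonal),
# sample-to-block covariances 0.56 and 0.12; the last row/column
# enforces the unbiasedness condition sum of weights = 1
A = np.array([[1.0, 0.1, 1.0],
              [0.1, 1.0, 1.0],
              [1.0, 1.0, 0.0]])
b = np.array([0.56, 0.12, 1.0])

w1, w2, mu = np.linalg.solve(A, b)
print(w1, w2, mu)          # weights ~0.744 and ~0.256, plus the Lagrange multiplier

grades = np.array([1.1, 0.5])
print(w1 * grades[0] + w2 * grades[1])   # estimate ~0.95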


Kriging System (block)

• In point kriging, the covariance vector D consists of point-to-point covariances. In block kriging, it consists of block-to-point covariances.
• The covariance value CiA is no longer a point-to-point covariance like Ci0, but the average covariance between a particular sample and all of the points within the block A:
CiA = 1/A Σ Cij
In practice, the block A is discretized using a number of points in the x, y and z directions to approximate CiA.

Kriging Variance

σ²ok = CAA − [Σ (wi·CiA) + µ]

• Data independent
• Calculates the kriging variance for each block

Advantages of Kriging

• Takes into account spatial continuity characteristics
• Built-in declustering capability
• Exact estimator
• Robust

Disadvantages of Kriging

• Prior variography required
• More time consuming
• Smoothing effect

Assumptions

• No drift is present in the data (stationarity hypothesis)
• Both variance and covariance exist and are finite.
• The mean grade of the deposit is unknown.


Effect of Scale / Effect of Shape / Nugget Effect / Effect of Range / Effect of Anisotropy

[Figures: effect of variogram scale, shape, nugget, range and anisotropy on kriging weights]

Search Strategy

• Define a search neighborhood within which a specified number of samples is used
• If there is anisotropy, use an ellipsoidal search
• Orientation of this ellipse is important
• If there is no anisotropy, the search ellipse becomes a circle and the question of orientation is no longer relevant


Search Strategy

• Include at least a ring of drillholes with enough samples around the blocks to be estimated
• Don't extend the grades of the peripheral holes too far into the undrilled areas
• Increasing the vertical search distance has more impact on the number of samples available for a given block than increasing the horizontal search distance (in vertically oriented drillholes)
• Limit the number of samples used from each individual drillhole

Search Volume (2D)

[Figure: 2D search volume]

IDW and Kriging in MineSight®

For IDW use P62001.DAT
For Kriging use P62401.DAT
Use the above procedures in a multi run.
For more advanced options see procedure PINTRP.DAT.

[Figure: PINTRP.DAT panel]

Interpolation Options

[Figures: interpolation options panels]

Interpolation Options

Default weighting:

[grd1 × (wt1/d1)² + grd2 × (wt2/d2)²] / [(wt1/d1)² + (wt2/d2)²]

where grd = grade, d = distance, wt = weighting item

Apply the weighting factor from a weight item after taking the inverse distance:

[grd1 × wt1/(d1)² + grd2 × wt2/(d2)²] / [wt1/(d1)² + wt2/(d2)²]

Interpolation Options

The composite length weighting is done after the kriging weights are computed:

Estimate = [Σ (grd × Kwt × wt)] / [Σ (Kwt × wt)]

where
grd = grade
wt = weighting item
Kwt = kriging weights

[Figure: composites at the toe, composites at mid-bench, and fixed length compositing]


Interpolation Options

[Figures: plan view with north-south and east-west section views; yes/no option toggles]

Interpolation Options

Block Discretization - to be considered:
• Range of influence of the variogram used in kriging.
• Size of the blocks with respect to this range.
• Horizontal and vertical anisotropy ratios.

Composite table (value, distance to the block, and rock type per composite):

#   Value   Distance   Rock type
1   0.50       50          1
2   1.50       60          2
3   0.05       70          2
4   0.55       80          1
5   2.50       90          1
6   0.75      100          2
Interpolation Options

Negative weights option:
0 = The negative weights are allowed (default).

Composite variogram limit controlling item:
The values entered should be the 10 maximum variogram values allowed for composites with controlling item values of 1,2,3,4,5,6,7,8,9 and 10 respectively. If the computed variogram value between a block and the sample exceeds the user-specified value corresponding to the controlling item, then the sample is rejected.

VALUE1   DH    KRIG WEIGHT
0.0100   52    0.60119E-01
0.0300   52    0.57898E-01
0.0300   52    0.59559E-01
.............
0.2000   104   0.57146E-02
0.2100   35    0.36001E-01
0.1700   35    0.36404E-01
0.0700   104  -0.29174E-02
0.1300   35    0.43567E-01
0.1200   35    0.44737E-01
0.1300   104  -0.55253E-02
0.0400   104  -0.27076E-01
..............


Interpolation Options

1 = If the estimated block value is below the minimum value initialized for the model, then the negative weights are set to 0, the sum of the remaining weights is normalized to 1, and the block value is recalculated.
2 = Negative weights are always set to 0, and the sum of the remaining weights is normalized to 1.

VALUE1   DH    KRIG WEIGHT
0.0100   52    0.58057E-01
0.0300   52    0.55912E-01
0.0300   52    0.57516E-01
.......
0.2000   104   0.55186E-02
0.2100   35    0.34767E-01
0.1700   35    0.35156E-01
0.0700   104   0.00000E+00
0.1300   35    0.42072E-01
0.1200   35    0.43202E-01
0.1300   104   0.00000E+00
0.0400   104   0.00000E+00
.......

Combined Variance:

σ²cv = √(σ²w × σ²k)

where the local variance of the weighted average (σ²w) is:
σ²w = Σ wi² (Z0 − zi)², i = 1,...,n (n > 1)
and σ²k is the kriging variance.

Relative Variability Index:
the square root of the combined variance divided by the kriged grade, or √(σ²cv)/Z0

Search Volume (3D)

[Figure: 3D search volume]

Importance of the Kriging Plan

An easily overlooked assumption in every estimate is the fact that the sample values used in the weighted linear combination are somehow relevant, and that they belong to the same group or population as the point being estimated.
Deciding which samples are relevant for the estimation of a particular point or block may be more important than the choice of an estimation method.

Declustering

Clustering in a high grade area:
Naïve mean = (0+1+3+1+7+6+5+6+2+4+0+1)/12 = 3
Declustered mean = [(0+1+3+1+2+4+0+1) + (7+6+5+6)/4]/9 = 2

Clustering in a mean grade area:
Naïve mean = (7+1+3+1+0+6+5+1+2+4+0+6)/12 = 3
Declustered mean = [(7+1+3+1+2+4+0+6) + (0+6+5+1)/4]/9 = 3


Declustering

Clustering in a low grade area:
Naïve mean = (7+1+6+1+0+3+4+1+2+5+0+6)/12 = 3
Declustered mean = [(7+1+6+1+2+5+0+6) + (0+3+4+1)/4]/9 = 3.33

• Data with no correlation do not need declustering (pure nugget effect model)
• If the variogram model has a long range and low nugget, you may need to decluster.
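The naïve versus declustered means above can be reproduced with a tiny sketch in which the four clustered samples share a single polygon weight (our simplification of the slide's geometry):

import numpy as np

def clustered_mean(scattered, cluster):
    """Naive mean of all samples vs a declustered mean where the
    clustered samples together count as one sample."""
    naive = np.mean(scattered + cluster)
    declustered = (sum(scattered) + sum(cluster) / len(cluster)) / (len(scattered) + 1)
    return naive, declustered

# cluster drilled into the high grade zone: naive 3.0, declustered 2.0
print(clustered_mean([0, 1, 3, 1, 2, 4, 0, 1], [7, 6, 5, 6]))
# cluster in the low grade zone: naive 3.0, declustered 3.33
print(clustered_mean([7, 1, 6, 1, 2, 5, 0, 6], [0, 3, 4, 1]))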

Declustering

• Cell declustering
• Polygonal

Cell Declustering

Each datum is weighted by the inverse of the number of data in the cell.

[Figures: cell declustering; polygonal declustering; statistics of declustered data]

Quantifying Uncertainty

One approach:
• Assume that the distribution of errors is normal
• Assume that the ordinary kriging estimate provides the mean of the normal distribution
• Build 95 percent confidence intervals by taking 2 standard deviations on either side of the OK estimate

Quantifying Uncertainty

Kriging Variance
σ²ok = CAA − [Σ (wi·CiA) + µ]
Advantages:
• Does not depend on the data; it can be calculated before sample data are available (from previous/known variography).
Disadvantages:
• Does not depend on the data.
• If a proportional effect exists, the previous assumptions are not true.

[Figure: two different data configurations with the same kriging variance]

Quantifying Uncertainty

Other approach: incorporate the grade in the error variance calculation.
Relative Variance = kriging variance / square of the kriged grade

Combined Variance = √(local variance × kriging variance)
where the local variance of the weighted average (σ²w) is:
σ²w = Σ wi² (Z0 − zi)², i = 1,...,n (n > 1)
n is the number of data used,
wi are the weights corresponding to each datum,
Z0 is the block estimate,
and zi are the data values.

Quantifying Uncertainty

Relative Variability Index (RVI) = √(combined variance) / kriged grade

Note: This is similar to the Coefficient of Variation, C.V. = σ/m

Regression Slope

Regression slope = (block var − kriging var + |LGM|) / (block var − kriging var + 2·|LGM|)
where LGM = Lagrange multiplier

Change of Support

[Figures: example grids - N = 4, M = 8.825; N = 16, M = 8.825; values > 10: N = 2 = 50%, M = 11.15]


Change of Support

[Figure: values > 10: N = 5 = 31%, M = 18.6]

• The mean above a 0.0 cutoff does not change with a change in support
• The variance of the block distribution decreases with larger support
• The shape of the distribution tends to become symmetrical as the support increases
• Recovered quantities depend on block size

Krige’s Relation

Determine the variance of 2x2 blocks:

Variance of point values: 36 BHs
Variance of block values: nine 2x2 blocks
Variance of points within blocks: four BHs in a block

Krige’s Relation

Variance of point values:
σ²p = 1/n Σ(zi − m)² = 1/36 Σ(zi − 8.03)² = 33.53

Variance of block values:
σ²b = 1/n Σ(z̄b − m)² = 1/9 Σ(z̄b − 8.03)² = 3.20


Krige’s Relation

σ²p = σ²b + σ²pb

σ²p = dispersion variance of composites in the deposit (sill)
σ²b = dispersion variance of blocks in the deposit
σ²pb = dispersion variance of points in blocks

Variance of point values within blocks:
σ²pb = 1/n Σ [1/nb Σ (zi,b − mb)²] = 1/9 Σ [1/4 Σ (zi,b − mb)²] = 30.33

This is the spatial complement to the partitioning of variances, which simply says that the variance of point values is equal to the variance of block values plus the variance of points within blocks.

Krige’s Relation (cont’d)

Total σ² = between-block σ² + within-block σ²

σ²p = calculated directly from the composite or blasthole data
σ²pb = calculated by integrating the variogram over the block b:
σ²pb = γ̄(block) = 1/n² Σ Σ γ(hi,j)
σ²b = calculated using Krige’s relation: σ²b = σ²p − σ²pb

Integrating the variogram over a block provides the variance of points within the block.

Affine Correction

Assumptions:
• The distribution of block or SMU grades has the same shape as the distribution of point or composite samples.
• The ratio of the variances, i.e., the variance of block grades (or SMU grades) over that of point grades, is non-conditional to the surrounding data used for estimation.

Calculation of Affine Correction

K² = σ²b / σ²p ≤ 1

From the variogram averaging:
σ²p = γ̄(D,D)  and  σ²b = γ̄(D,D) − γ̄(smu,smu)

K² = [γ̄(D,D) − γ̄(smu,smu)] / γ̄(D,D) = 1 − [γ̄(smu,smu) / γ̄(D,D)] ≤ 1

Affine correction factor: K = √K² ≤ 1

Use affine correction if: (σ²p − σ²b) / σ²p ≤ 30%


Volume Variance Correction: Indirect Lognormal Method

Assumption: all distributions are lognormal; the shape of the distribution changes with changes in variance.

Transform:
znew = a * zold^b
a = function of (m, σnew, σold, CV)
b = function of (σnew, σold, CV), see the notes
CV: coefficient of variation = σold / mold

Indirect Lognormal Method / Volume Variance Correction in MineSight®

Disadvantage: if the original distribution departs from lognormality, the new mean may require rescaling:
znew = (mold / mnew) * zold
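A sketch of the transform plus the rescaling step; the coefficients a and b are taken as given inputs here (their derivation from m, the old and new variances, and CV is in the notes):

    def indirect_lognormal(grades, a, b, m_old):
        z_new = [a * z**b for z in grades]
        # If the original distribution departs from lognormality, rescale
        # so the corrected values reproduce the original mean.
        m_new = sum(z_new) / len(z_new)
        return [(m_old / m_new) * z for z in z_new]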

Change of Support (Hermite Polynomials)

Hermite Polynomials:
• Declustered composites are transformed into a Gaussian distribution
• Volume-variance correction is done on the Gaussian distribution
• Then this distribution is back-transformed using inverse Hermite polynomials


Change of Support (other)

Conditional Simulation:
• Simulate a realization of the composite (or blasthole) grades on a very closely spaced grid (for example, 1x1)
• Average simulated grades to obtain simulated block grades

Change of Support (applications)

Design a search strategy:
• Decluster composites/variogram
• Define SMU units
• Apply change of support from composites to SMU
• Calculate the SMU G-T (grade tonnage) curves
• “Guess” at a search scenario
• Krige blocks => create G-T curves
• Compare G-T curves of block estimates to G-T curves of SMUs (see next slide)
• Adjust search scenario, etc.

Change of Support (G-T Curves) Change of Support (applications)

Reconciliation between BH model and Exploration model:
• Calculate G-T curves of the exploration model
• Apply change of support from the BH model to the exploration model
• Calculate the adjusted BH model G-T curves
• Compare G-T curves of block estimates to G-T curves of the adjusted BH model estimates

C. of S. for Ore Grade/Tonnage Estimation Change of Support (applications)

Other:
• needed in MIK (Multiple Indicator Kriging)
• needed in UC (Uniform Conditioning)


Simple Kriging

Z*sk = Σ λi [Z(xi) − m] + m,  i = 1,...,n

• Z*sk - estimate of the grade of a block or a point
• Z(xi) - sample grade
• λi - corresponding simple kriging weights assigned to Z(xi)
• n - number of samples
• m = E{Z(x)} - location dependent expected value of Z(x)
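A minimal simple-kriging sketch with numpy; the covariance model, data and target location are placeholders, not MineSight internals:

    import numpy as np

    def simple_kriging(coords, values, target, mean, cov):
        # Solve C w = c0 for the SK weights; no unbiasedness constraint is
        # needed because the known mean m absorbs the remaining weight.
        n = len(coords)
        C = np.array([[cov(coords[i], coords[j]) for j in range(n)] for i in range(n)])
        c0 = np.array([cov(ci, target) for ci in coords])
        w = np.linalg.solve(C, c0)
        return float(w @ (values - mean) + mean)   # Z* = sum(w_i (Z_i - m)) + m

    # Illustrative exponential covariance (sill 1.0, practical range 100):
    cov = lambda a, b: np.exp(-3.0 * np.linalg.norm(np.subtract(a, b)) / 100.0)
    est = simple_kriging([(0, 0), (50, 0), (0, 40)], np.array([0.5, 0.8, 0.3]),
                         (20, 20), 0.6, cov)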

Simple Kriging Cokriging

Cokriging:
• Suitable when the primary variable has not been sampled sufficiently.
• Precision of the estimation may be improved by considering the spatial correlations between the primary variable and a better-sampled variable.
• Example: extensive data from blastholes as the secondary variable; widely spaced exploration data as the primary variable.

Cokriging

Cokriging system:

| [Cov{didi}]  [Cov{dibj}]  [1]  [0] |   | [λi] |   | [Cov{x0di}] |
| [Cov{dibj}]  [Cov{bjbj}]  [0]  [1] | x | [δj] | = | [Cov{x0bj}] |
| [1]          [0]           0    0  |   |  μd  |   |      1      |
| [0]          [1]           0    0  |   |  μb  |   |      0      |

[Cov{didi}] = drillhole data (dhs) covariance matrix, i=1,n
[Cov{bjbj}] = blasthole data (bhs) covariance matrix, j=1,m
[Cov{dibj}] = cross-covariance matrix for dhs and bhs
[Cov{x0di}] = drillhole data to block covariances
[Cov{x0bj}] = blasthole data to block covariances
[λi] = weights for drillhole data
[δj] = weights for blasthole data
μd and μb = Lagrange multipliers

Cokriging - steps for Drill and Blasthole data:
• Regularize blasthole data into a specified block size. Block size could be the same as the size of the model blocks to be valued, or a discrete sub-division of such blocks. A new database of average blasthole block values is thus established.
• Variogram analysis of drillhole data.
• Variogram analysis of blasthole data.
• Cross-variogram analysis between drill and blasthole data. Pair each drillhole value with all blasthole values.
• Selection of search and interpolation parameters.
• Cokriging.


Co-Kriging Universal Kriging

Outlier Restricted Kriging (ORK)
• Determine the outlier cutoff grade
• Assign indicators to the composites based on the cutoff grade:
  0 if the grade is below the cutoff
  1 otherwise
• Use OK with the indicator variogram, or simply use IDW or any other method, to assign the probability of a block having grade above the outlier cutoff.
• Modify the Kriging matrix.

ORK Nearest Neighbor Kriging

Nearest Neighbor Kriging: utilize the nearest samples (assign more weight)

wnnk = wok + (1 − wok) * f   (nearest sample)
wnnk = wok * (1 − f)         (all other samples)

f = factor between 0 and 1


Area Influence Kriging

AIK is a modified version of OK where the nearest composite to the block can be given as much influence as specified by the user. The sum of the other composite weights adds up to a remainder sum to satisfy the unbiasedness condition.

Non-Linear Kriging Methods

Parametric (assumptions about distributions) or non-parametric (distribution-free):
• Indicator kriging
• Probability kriging
• Lognormal kriging
• Multi-Gaussian kriging
• Lognormal short-cut
• Disjunctive kriging

Why Non-Linear?

• To provide “better” estimates than those provided by linear methods
• To take advantage of the properties of non-normal distributions of data and thereby provide more optimal estimates
• To provide answers to non-linear problems
• To provide estimates of distributions on a scale different from that of the data (the “change of support” problem)

Why use non-linear methods?

• To overcome problems encountered with outliers
• To estimate the distribution rather than simply an expected value for the blocks/panels
• When dealing with a strongly skewed distribution, simply estimating the mean by a linear estimator might be too smooth
• Non-linear estimation provides the solution to the “small block” problem. We cannot precisely estimate small (SMU-sized) blocks by direct linear estimation. However, we can estimate the proportion of SMU-sized blocks above a specified cut-off within a panel. Thus, the concept of change of support is critical in most practical applications of non-linear estimation.

Lognormal Kriging

• The ordinary kriging of the logarithms of the grades is back-transformed to give the desired block grades.
• Extremely sensitive to the assumption of lognormality of the grades. Therefore, it is not as robust as ordinary kriging.
• Never use it without checking the results carefully.

Lognormal Short Cut

• In addition to calculating the block grades using OK, the grade and percent of the blocks above a specified cutoff can be calculated.
• The theoretical distribution of the grades within each block can be assumed either normal or lognormal.


Lognormal Short Cut Indicator Kriging

The basis of indicator kriging is the indicator function. At each point x in the deposit, consider the following indicator function of zc defined as:

i(x;zc) = 1, if z(x) < zc
          0, otherwise

where:
x is location,
zc is a specified cutoff value,
z(x) is the value at location x.

Indicator Kriging

Examples:
Separate continuous variables into categories:
I(x) = 1 if k(x) ≤ 30, 0 if k(x) > 30

Characterize categorical variables and differentiate types:
I(x) = 1 if Rock = 1, 0 if not Rock = 1
I(x) = 1 for oxide, 0 for sulfide

Indicator Kriging (coding uncertainty)

Some drill holes have encountered a particular horizon, some were not drilled deep enough, and some penetrated the horizon but the core or the log is missing.

Use I(x) = 1 for drill hole assays above the horizon and I(x) = 0 for assays below the horizon. Use indicator kriging and calculate the probability of the missing assays being 1 or 0.

Indicator data ('-' = missing):
I1:  1  1    1    1  1
I2:  1  1    1    1  1
I3:  1  1    1    1  1
I4:  0  1    -    -  0
I5:  0  0    -    -  0
I6:  0  0    -    0  0

Kriged probabilities:
P1:  1  1    1    1  1
P2:  1  1    1    1  1
P3:  1  1    1    1  1
P4:  0  1  0.8  0.5  0
P5:  0  0  0.2  0.3  0
P6:  0  0    0    0  0


Indicator Kriging (spatially mixed populations)

Some data may represent a spatial mixture of two or more statistical populations (for example, clay and sand):
• I(x) = 1 for clay, 0 for sand
• Then calculate the probability of an unsampled location being clay or sand
• Krige (local estimates) unsampled locations using only data belonging to that population
• The final estimate can be a weighted (by probabilities) average of the local estimates

Indicator Kriging (applications)

Extreme values:
Separate the population into 1 and 0 based on an outlier cutoff. Proceed then as though you are dealing with two spatially mixed populations.

Multiple Indicator Kriging (MIK)

Same as indicator kriging, but instead of one cutoff, we use a series of cutoffs to calculate the cdf for each block at these cutoffs.

THE INDICATOR FUNCTION:
At each point x in the deposit, consider the following indicator function of zc defined as:

i(x;zc) = 1, if z(x) < zc
          0, otherwise

where:
x is location,
zc is a specified cutoff value,
z(x) is the value at location x.

Indicator Function at point x The (A;zc) function

(A;zc ) = 1/AA i(x;zc ) dx  [0,1]

Proportion of grades z(x) below


cutoff zc within panel or block A


Proportion of Values z(x) zc within area A Indicator Variography

I(h;zc ) = 1/2 E [ I(x+h);zc ) - I(x;zc ) ]2

Median Indicator Variogram MIK

Indicator variogram where the cutoff corresponds to the median of the data:

γm(h;zm) = 1/2n Σ [ i(xj+h;zm) − i(xj;zm) ]²,  j = 1,...,n

Indicator Coding

Sample value:   0   37   42   171   265   391   521   732

Cutoff  65:     1    1    1     0     0     0     0     0    i(x;65)
Cutoff 225:     1    1    1     1     0     0     0     0    i(x;225)
Cutoff 430:     1    1    1     1     1     1     0     0    i(x;430)
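The coding in the table above can be reproduced in a few lines of Python; the samples and cutoffs are taken straight from the table:

    samples = [0, 37, 42, 171, 265, 391, 521, 732]
    cutoffs = [65, 225, 430]

    # i(x;zc) = 1 if z(x) < zc, else 0
    for zc in cutoffs:
        indicators = [1 if z < zc else 0 for z in samples]
        print("i(x;%d):" % zc, indicators)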

MIK

Cutoff = 65
Indicator variogram: Co = 0.1, Spherical, C1 = 0.9
Range = 300; Azim = 50
Range = 150; Azim = 140

Probability <65 = ip(x;65) = 0.14

Sample Value   i(x;65)   Kriging Weight   Cumulative Weights
0              1         -0.02            -0.02
37             1         -0.01            -0.03
42             1          0.17             0.14
171            0          0.21             0.35
265            0          0.22             0.57
391            0          0.00             0.57
521            0          0.20             0.77
732            0          0.23             1.00

Cutoff = 225
Indicator variogram: Co = 0.0, Spherical, C1 = 1.0
Range = 400; Azim = 50
Range = 175; Azim = 140

Probability <225 = ip(x;225) = 0.36

Sample Value   i(x;225)   Kriging Weight   Cumulative Weights
0              1          -0.02            -0.02
37             1          -0.03            -0.05
42             1           0.12             0.07
171            1           0.29             0.36
265            0           0.26             0.62
391            0          -0.05             0.57
521            0           0.16             0.73
732            0           0.27             1.00


MIK

Cutoff = 430
Indicator variogram: Co = 0.15, Spherical, C1 = 0.85
Range = 250; Azim = 50
Range = 80; Azim = 140

Probability <430 = ip(x;430) = 0.60

Sample Value   i(x;430)   Kriging Weight   Cumulative Weights
0              1           0.00             0.00
37             1          -0.05            -0.05
42             1           0.10             0.05
171            1           0.24             0.29
265            1           0.26             0.55
391            1           0.05             0.60
521            0           0.10             0.70
732            0           0.30             1.00

Probability intervals:
Prob{z(xo) ∈ (65,430)} = 0.60 − 0.14 = 0.46

Probability of exceedance:
Prob{z(xo) > 430} = 1 − 0.60 = 0.40

The mean (E-type estimate):
Z(xo) = Σ class frequency * class mean

Class       <65    65-225   225-430   >430
Frequency   14.0   22.0     24.0      40.0

Order Relations (potential problem) MIK (change of support)

• The local probability distributions from MIK are based on the same support as the indicator data
• If estimates of block grades are required, a change of support must be applied to each local MIK distribution
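One common correction for order-relation violations is to run an upward and a downward pass over the kriged cdf values and average the two; the sketch below shows that idea and is not necessarily the MineSight implementation:

    def correct_order_relations(cdf_values):
        # cdf_values: kriged probabilities by increasing cutoff; a valid cdf
        # must lie in [0, 1] and be non-decreasing.
        clipped = [min(1.0, max(0.0, p)) for p in cdf_values]
        up, down = clipped[:], clipped[:]
        for i in range(1, len(up)):              # forward pass: p_i >= p_(i-1)
            up[i] = max(up[i], up[i - 1])
        for i in range(len(down) - 2, -1, -1):   # backward pass: p_i <= p_(i+1)
            down[i] = min(down[i], down[i + 1])
        return [(a + b) / 2.0 for a, b in zip(up, down)]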

Advantages of MIK

• It estimates the local recoverable reserves within each panel or block.
• It provides an unbiased estimate of the recovered tonnage at any cutoff of interest.
• It is non-parametric, i.e., no assumption is required concerning the distribution of grades.
• It can handle highly variable data.
• It takes into account the influence of neighboring data and the continuity of mineralization.

Disadvantages of MIK

• It may be necessary to compute and fit a variogram for each cutoff.
• Estimators for various cutoff values may not show the expected order relations.
• Mine planning and pit design using MIK results can be more complicated than conventional methods.
• Correlations between indicator functions of various cutoff values are not utilized. More information becomes available through the indicator cross variograms and subsequent cokriging. These form the basis of the Probability Kriging technique.


Uniform Conditioning (UC)

• UC is a non-linear estimation technique, like MIK, where one determines the grade distribution of each block to calculate the grade and proportion of ore above any given cutoff.
• The mean grade of the block is obtained from the OK to ensure the estimation is locally well constrained. The proportions of ore within the block are conditional to the kriged value.

UC or MIK?

• MIK calculates the proportion of samples above the cut-off within the panels
• UC requires the Gaussian anamorphosis of data and calculation of polynomials at each data point
• Recoverable resources are calculated on the Gaussian model, with the dispersion variances accommodating the change of support for the SMU

Change of Support

The function φ*(A;zc) and the grade-tonnage relationship for each block are based on the distribution of point samples (composites). The selective mining unit (SMU) volume is much larger than the sample volume; therefore, one must perform a volume-variance correction to the initial grade-tonnage curve of each block.

Conditional Simulation (CS)

• A CS model, like any model, represents the real deposit for some purpose
• A CS model has:
  – the same grade distribution
  – the same spatial correlation as the deposit
  – model values conditioned to the data

Why CS?

• CS models reproduce the actual variability (histogram) and spatial continuity (variogram) of the attributes of interest.
• CS can be used to address the problem of measuring the uncertainty associated with an estimate.

Simulation or Estimation

• Estimation and simulation are complementary tools
• They don’t have the same objectives
• Estimation is for determining averages
• Simulation is for determining variability


Estimation Simulation

Comparison to Reality / Simulation or Estimation

• CS does not produce a best estimate of reality; rather, it yields equi-probable models with characteristics similar to those observed in reality.
• When multiple simulations are made, their average values will approximate the smoothed, best-fit curve.

Typical Uses

• Application in grade control
• Comparative studies
• Sensitivity analysis
• Risk analysis
• Drilling and sampling level necessary
• Quantify the variability of contaminants
• Prediction of recoverable reserves

Advanced Geostatistics in MineSight®

Sequential Gaussian Simulation - SGS
Sequential Indicator Simulation - SIS
Uniform Conditioning - UC
Multiple Indicator Kriging - MIK


Grade Zoning

• Grade zoning is usually applied to control the extrapolation of grades into statistically different populations
• Often grade zones or mineralization envelopes correspond to different geologic units

Determine how the grade populations are separated spatially:
• Is there a reasonably sharp discontinuity between the grades of the different populations?
• Or is there a larger transition zone between the grades of the different populations?

Grade Zoning

[Figures: discontinuity between grade populations; transition zone between grade populations]

• A discontinuity between the grade populations is best modeled using a deterministic model, i.e., digitized outlines
• A transition zone between the grade populations is best modeled using a probabilistic model, i.e., indicator kriging

Characterizing the contact between different spatial populations: calculate the difference between the average grades within each population as a function of distance from the contact:

Dzi = zi − z(−i)


Grade Zoning

• If the average difference in grade Dzi vs distance from the contact is more or less constant, then there is probably a discontinuity between the different populations.
• If the average difference in grade Dzi vs distance from the contact is small for small distances but increases with increasing distance, then there is likely a transition zone between the different populations.

Contact Analysis in MineSight® / Grade Zone Bias Check

Often mineralization envelopes lead to biased ore reserve models. To check:
• Interpolate using the nearest neighbor (polygonal) method
• Use the search parameters corresponding to the model of spatial continuity
• Disregard all grade zoning
• Compare, at 0.0 cutoff grade, the tons and grade of the polygonal model to those of the mineralization envelope model.

Grade Control / Misclassification

Grade control: to predict the tons and grade that will be delivered to the mill:
• The G-T curves must be based on SMU support
• The impact of misclassification at the time of mining should be taken into account

What is misclassification?
• Waste mined as ore - dilution of ore grade
• Ore mined as waste - ore sent to the dump
One type of misclassification may be more prevalent than the other.


Impact of Misclassification

Waste is mined as ore:
• ore grade is diluted, not concentrated
• may or may not increase the tonnage, depending on the grade control procedure
• actual $ loss: waste material does not pay for processing cost
• net revenue decreases with increased dilution
• you lose the processing capacity (that you could have used to process “true” ore)

Ore is mined as waste:
• increases grade of waste
• may or may not increase the waste tonnage (stripping ratio), depending on the grade control procedure
• actual $ loss: potential revenue of ore material is lost
• net revenue decreases with increased grade of waste

Control of Misclassification

Ideally, the impact of misclassification should be quantified:
• This would enable a more accurate prediction of mining reserves
• This would also enable evaluating the efficiency of a grade control procedure and thereby testing options which may reduce misclassification

The impact of misclassification can be quantified or measured through Conditional Simulation.


Using this MineSight Workbook Notes


The objective of this workbook is to provide hands-on training
and experience with MineSight Data Analyst. This workbook does
not cover all the capabilities of MineSight, but concentrates on typical
mine geologist’s duties using a given set of data.
Introduction to the Course
To begin, thank you for taking the opportunity to enrich your
understanding of MineSight through taking this training course
offered by Mintec Technical Support. Please start out by reviewing
this material on workbook conventions prior to proceeding with the
training course documentation.
This workbook is designed to present concepts clearly and then
give the user practice through exercises to perform the stated tasks
and achieve the required results. All sections of this workbook contain
a basic step, or series of steps, for using MineSight with a project.
MineSight provides a large number of programs with wide ranges
of options within each program. This may seem overwhelming
at times, but once you feel comfortable with the system, the large
number of programs becomes an asset because of the flexibility it
affords. If you are unable to achieve these key tasks or understand
the concepts, notify your instructor before moving on to the next
section in the workbook.
What You Need to Know
This section explains the mouse actions, keyboard functions,
terms, and conventions used in the MineSight workbooks. Please
review this section carefully to benefit fully from the training
material and this training course.
Using the Mouse
The following terms are used to describe actions you perform
with the mouse:
Click - press and release the left mouse button
Double-click - click the left mouse button twice in rapid succession
Right-click - press and release the right mouse button
Drag - move the mouse while holding down the left mouse button
Highlight - drag the mouse pointer across data, causing the image
to reverse in color
Point - position the mouse pointer on the indicated item.
Select - highlight a menu list item; move the mouse over the
menu item and click the mouse.
Questions or Comments?
Note: if you have any questions or comments regarding this
training documentation, please contact the Mintec Technical Support
at (520) 795-3891 or via e-mail at ts@mintec.com.


MineSight Overview Notes


Learning Outcomes
When you have completed this section, you will know:
A. The basic structure and organization of MineSight.
B. The capabilities of each MineSight module.
C. Ways to run MineSight programs.
What Is MineSight?
MineSight is a comprehensive software package for the mining
industry containing tools used for resource evaluation and analysis,
mine modeling, mine planning and design, and reserves estimation
and reporting. MineSight has been designed to take raw data from
a standard source (drillholes, underground samples, blastholes,
etc.) and extend the information to the point where a production
schedule is derived. The data and operations on the data can be
broken down into the following logical groups.
Digitized Data Operations
Digitized data is utilized in the evaluation of a project in many
ways. It can be used to define geologic information in section
or plan, to define topography contours, to define structural
information, mine designs and other information that is important to
evaluate the ore body. Digitized data is used or derived in virtually
every phase of a project from drillhole data through production
scheduling. Any digitized data can be triangulated and viewed as a
3D surface in MineSight.
Drillhole Data Operations
A variety of drillhole data can be stored in MineSight, including
assays, lithology and geology codes, quality parameters for coal,
collar information (coordinates and hole orientation), and down-the-
hole survey data. Value and consistency checks can be performed
on the data before it is loaded into MineSight. After the data has
been stored in the system, it can be listed, updated, geostatistically
and statistically analyzed, plotted in plan or section and viewed in
3D. Assay data can then be passed on to the next logical section of
MineSight which is compositing.
Compositing Operations
Composites are calculated by benches (for most base metal mines)
or mineral seams (for coal mines) to show the commodity of interest
on a mining basis. Composites can be either generated in MineSight
or generated outside the system and imported. Composite data can
be listed, updated, geostatistically and statistically analyzed, plotted
in plan or section and viewed in 3D. Composite data is passed on to
the next phase of MineSight, ore body modeling.
Modeling Operations
Within MineSight, deposits can be represented by a computer
model of one of two types. A 3D block model (3DBM) is generally
used to model base metal deposits, such as porphyry copper or
other non-layered deposits. A gridded seam model (GSM) is used


for layered deposits, such as coal or oil sands. In both models, the
horizontal components of a deposit are divided into blocks that are
usually related to a production unit. In a 3DBM, the deposit is also
divided horizontally into benches, whereas in a GSM the vertical
dimensions are a function of the seam and interburden thicknesses.
For each block in the model, a variety of items may be stored.
Typically, a block in a 3DBM will contain grade items, geological
codes, and a topography percent. Many other items may also be
present. For a GSM, the seam top elevation and seam thickness are
required. Other items, such as quality parameters, seam bottom,
partings, etc. can also be stored. A variety of methods can be used
to enter data into the model. Geologic and topographic data can
be digitized and converted into codes for the model, or they can
be entered directly as block codes. Solids can also be created in
the MineSight 3D graphical interface for use in coding the model
directly. Grade data is usually entered through interpolation
techniques, such as Kriging or inverse distance weighting. Once the
model is constructed, it can be updated, summarized statistically,
plotted in plan or section, contoured in plan or section, and viewed
in 3D. The model is a necessary prerequisite in any pit design or pit
evaluation process.
Economic Pit Limits and Pit Optimization
This set of routines works on whole blocks from the 3D block
model, and uses either the floating cone or Lerchs-Grossmann
technique to find economic pit limits for different sets of economic
assumptions. Usually one grade or equivalent grade item is used
as the economic material. The user enters costs, net value of the
product, cutoff grades, and pit wall slope. Original topography is
used as the starting surface for the design, and new surfaces are
generated which reflect the economic designs. The designs can
be plotted in plan or section, viewed in 3D, and reserves can be
calculated for the grade item that was used for the design. Simple
production scheduling can also be run on these reserves.
Pit Design
The Pit Design routines are used to geometrically design
pits that include ramps, pushbacks, and variable wall slopes to
more accurately portray a realistic open pit geometry. Manually
designed pits can also be entered into the system and evaluated. Pit
designs can be displayed in plan or section, can be clipped against
topography if desired, and can be viewed in 3D. Reserves for these
pit designs are evaluated on a partial block basis, and are used in the
calculation of production schedules.
Production Scheduling
This group of programs is used to compute schedules for long-
range planning based upon pushback designs (or phases), and
reserves computed by the mine planning programs. The basic input
parameters for each production period include mill capacity, mine
capacity, and cutoff grades. Functions provided by the scheduling
programs include:


• Calculation and reporting of production for each period, including mill production by ore type, mill head grades and waste
• Preparation of end-of-production period maps
• Calculation and storage of yearly mining schedules for economic analysis
• Evaluation of alternate production rates and required mining capacity
Ways to Run MineSight® Programs
MineSight® consists of a large group of procedures and programs
designed to handle the tasks of mineral deposit evaluation and
mine planning. Each procedure allows you to have a great amount
of control over your data and the modeling process. You decide
on the values for all the options available in each procedure. When
you enter these values into a procedure to create a run file, you
have a record of exactly how each program was run. You can easily
modify your choices to rerun the program. To allow for easier use,
the MineSight® Compass™ menu system has been developed. Just
select the procedure you need from the menu. Input screens will
guide you through the entire operation. The menu system builds
run files behind the scenes and runs the programs for you. If you
need more flexibility in certain parts of the operations, the menus
can be modified according to your needs, or you can use the run
files directly. The MineSight® 3-D graphical interface provides a
Windows-style environment with a large number of easy-to-use,
intuitive functions for CAD design, data presentation, area and
volume calculations and modeling.
Basic Flow of MineSight®
The following diagram shows the flow of tasks for a standard
mine evaluation project. These tasks load the drillhole assays,
calculate composites, develop a mine model, design a pit, and
prepare long-range schedules for financial analysis. There are many
other MineSight programs which can be used for geology, statistics,
geostatistics, displays, and reserves.


Notes
Flow of tasks for a standard Mine Evaluation Project


MineSight Data Analyst (MSDA) Notes


Basic Concepts
MSDA is a package of statistical and geostatistical programs
that is roughly a superset of the older MineSight m300 and m400
series programs. It includes histograms, scatterplots, cumulative
probability plots, variograms, variogram 3D modeling and custom
(user-defined) reports. MSDA reads all MineSight drillhole,
blasthole and block model files, as well as ODBC compliant
databases, spreadsheets, and text files.
Topics discussed in this MSDA documentation are:
• MSDA Manager
• Data Access
• Projects
• Import Directory
• Data Explorer
• Standard Applications
• Utility Applications
• Printing
• Charts
• History
• Favorites
• Job Queues
• Application Filtering and Multi-Runs
• Custom Reports
• Variogram 3-D Modeling

MSDA Manager
The MSDA Basic Concepts help doc can be accessed through
online help. In MSDA applications, click on Help, then select
Concepts.


MSDA Tutorial Notes

This tutorial walks you through the principal MSDA tools and
functions. The MineSight Open Pit (MSOP) demonstration project,
included with MSDA, is used throughout for illustrative purposes.
Topics reviewed in this tutorial are:
• Getting Started
• MSDA Manager
• Connecting to a Data Source
• Data Explorer
• Box Plots
• Cumulative Probability Plots (CPP)
• Histograms
• Q-Q Plots
• Scatterplots
• Variograms
• Auto-Fit Variograms
• Variogram 3D Modeling
• Create Var Parameters File for MIK
• Custom Reports
• Multiple Chart Viewer
• Downhole Variograms
• Cutoff Analysis (PIKCUT)
• Declustering
• Topics common to many applications
 
To access MSDA Tutorial, on MSDA applications, click on Help,
then select Contents. When the Help window opens, select MSDA
Tutorial.

Geostatistics for Modelers
Exercises

Calculating Variograms and Modeling Notes


Learning Outcomes
In this section you will use MSDA to calculate experimental
variograms in different directions and the MSDA Variogram
modeling tools to model the variograms.

Variogram Calculation
Open MSDA. Connect to the composites (Data Source). From the
MSDA Tools menu, select Build Variogram.
General Tab
Select correlogram as the type to calculate. Pick a lag distance of
50m. Use 10 lags. Pick Cu to analyze and the appropriate coordinate
items. Set the name of the output files (remember that suffixes will
be added automatically based on directions and filter names).

Filter Tab
Set up 2 separate filters. One for Rock Type 1 and one for Rock
Type 2. MSDA will automatically calculate variograms for both
codes.


Notes

Direction Tab
Set up the horizontal and vertical directions. Use 12 horizontal
directions at a 30-degree step with a window angle of 15 degrees.
Use 4 vertical directions with a step size of 30 and a window angle
of 15.
Use 100m band widths for both directions.


Notes

Titles and Labels


Add some titles (remember that suffixes will be added
automatically based on filter names).

Variogram Modeling
You can model the variograms one at a time, auto-fit all of them
at once, or use the Variogram 3D Manager to build a 3D
variogram model.
Modeling Individual Variograms
Open one of the variograms you calculated above.
Click on Auto-Fit to fit a model, then adjust as needed (move the
points with the mouse, or type in new parameters, model type, etc.).


Notes

Global Auto-fit
Go to the MSDA Utilities menu and click on Auto-Fit Variograms.
Select all the variograms you want to auto-fit. On the next panel
that comes up click on Auto-Fit to model them all at once (you can
hardwire some parameters if you wish).

Variogram 3D Manager
To create a 3D variogram model, go to the MSDA utilities menu
and click on Open Variogram 3D Manager.
Click on Add to add the variograms you want to model.


Notes

Pick the type of model you want to use and then click on 3D Model
| Auto-Fit.
You can check the results by using the Standard and Rose
Diagram tabs. Adjust the display parameters as needed.

The 3D model parameters are listed under the Model tab to the
right.


Notes

You can see a map of the model on the three main planes (non-rotated
YX, YZ and XZ planes) by going to 3D Model | Display
Structures on planes.

You can save the model for future viewing by using File | Save all.
You can view the model in the shape of an ellipse by going to File
| Export Variogram file. Then in MS3D you can use the option Import
| Variogram ASCII file.


Notes

Downhole Variograms
To calculate down-the-hole variograms, use the MSDA Utilities
menu, Build Downhole Variogram.
General Tab
Use type correlogram as well. Use a lag distance of 15m. Use 20
lags. Pick #REF as item to distinguish drillholes. Pick coordinate
items (if you use assays you can pick the FROM and TO items).
Set the name. MSDA will create a folder based on the name. Only
a combined variogram is needed at this point.


Notes

Filter Tab


Set up two separate filters. One for Rock Type 1 and one for Rock
Type 2. MSDA will automatically calculate variograms for both
codes.
Direction Tab
Use an azimuth of 0 with 90 degrees window and a dip of -90 with
45 degrees window.

Titles and Labels


Add some titles (remember that suffixes will be added
automatically based on filter names).
Model Downhole Variograms
Since you only calculate one combined variogram per rock type,
open graph and fit a model.


Declustering Notes
Prior to this section you calculated the composites and sorted
statistics. In this section you will use the cell declustering technique to
decluster the composite data. This is not required for later work.
Learning Outcomes
In this section you will learn:
• How to decluster composite values
• How to produce a histogram of declustered composite values
Declustering
There are two declustering methods that are generally applicable
to any sample data set. These methods are the polygonal method
and the cell declustering method. In both methods, a weighted linear
combination of all available sample values is used to estimate the
global mean. By assigning different weights to the available samples,
one can effectively decluster the data set.
In this section you will be using the cell declustering method
which divides the entire area into rectangular regions called cells.
Each sample receives a weight inversely proportional to the number
of samples that fall within the same cell. Clustered samples will tend
to receive lower weights with this method because the cells in which
they are located will also contain several other samples.
The estimate one gets from the cell declustering method will depend
on the size of the cells specified. If the cells are very small, then most
samples will fall into cells of their own and will therefore receive equal
weights of 1. If the cells are too large, many samples will fall into the
same cell, thereby causing artificial declustering of samples.
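The cell-size sensitivity described above can be scanned with a short script; a sketch assuming 2D sample coordinates (cell sizes are illustrative):

    from collections import Counter

    def declustered_mean(xs, ys, grades, cell):
        # Weight each sample by the inverse of the number of samples in its cell
        cells = [(int(x // cell), int(y // cell)) for x, y in zip(xs, ys)]
        counts = Counter(cells)
        weights = [1.0 / counts[c] for c in cells]
        return sum(w * g for w, g in zip(weights, grades)) / sum(weights)

    # Scan several cell sizes; for clustered high-grade data, the size giving
    # the lowest declustered mean is usually the best choice (see Exercise 2):
    # for cell in (25, 50, 100, 200):
    #     print(cell, declustered_mean(xs, ys, grades, cell))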
Declustering Composite Data
On MSDA Manager, click on Tools and select Build Decluster File.


Notes

Specify different Cell Sizes for comparison of the results.

Results
Check the output file by clicking on it in MSDA Files.

Exercise 1
Try declustering using different rock types.
Exercise 2
Create a graph of the cell sizes vs mean values. The cell size that
gives the lowest value should be the best choice.


Model Interpolation Notes


Prior to this section you calculated and sorted the composites.
You initialized the mine model and added any necessary geology.
In this section you can use inverse distance weighting to add grades
to the mine model. This is required before displaying the model,
calculating reserves or creating pit designs.
Learning Outcomes
In this section you will learn:
• The types of interpolations available
• The use of controls on the interpolation
• How to interpolate grades with MineSight®
Types of Interpolations
There are several methods of interpolation provided to you.
• Polygonal assignment
• Inverse distance weighting
• Relative elevations
• Trend plane
• Gradients
• Kriging
Interpolation Controls
There is a large range of methods for controlling the interpolation
available (a conceptual sketch of the IDW calculation follows this list).
• Search distance N-S, E-W, and by elevation
• Minimum and maximum number of composites to use for a
block
• Maximum distance to the nearest composite
• Use or omit geologic control
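These controls feed an inverse distance calculation of the following form; a conceptual Python sketch with an anisotropic (search-ellipse scaled) distance, not the M620 implementation (axes and power are illustrative):

    import math

    def idw(samples, block, power=2.0, axes=(210.0, 120.0, 120.0)):
        # samples: list of ((x, y, z), grade); distances are scaled by the
        # search-ellipse axes so samples along the major axis reach farther.
        num = den = 0.0
        for (x, y, z), grade in samples:
            d = math.sqrt(((x - block[0]) / axes[0])**2 +
                          ((y - block[1]) / axes[1])**2 +
                          ((z - block[2]) / axes[2])**2)
            if d == 0.0:
                return grade          # sample coincides with the block centroid
            w = 1.0 / d**power
            num += w * grade
            den += w
        return num / den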
IDW Interpolation
On the MineSight Compass™ Menu tab, <select the Group 3-D
Deposit Modeling, and the Operation Calculation; from the procedure list,
select the procedure pintrp.dat - Model Interpolation>. Select option 11 for
IDW Interpolation. Fill out the following panels.
Panel 2 - M620V1/V2 IDW Search Parameters
This panel provides input for the composite files to use, the area
to interpolate, and optional filename extensions. For this example,
use model 15, composite file 9, and specify “idl” as the filename
extension for both the run and report files.
Panel 3 - M620V1/V2 IDW Search Parameters
<Specify a 3-D search to use all composites within 210m horizontally
(based on variograms) and 50m vertically of a block. Use an
inverse distance power of 2, specify that the closest composite must be within
100 meters, and allow a minimum of 3 composites and a maximum of 16>.

MineSight for Modelers—Geostatistics Page - 93


Model Interpolation Proprietary Information of Mintec, Inc.

Notes

Panel 6 - Interpolation Control Items


This panel lets you specify the items and methods for
interpolation. <Interpolate the CU and MO composites using inverse
distance weighting to the model items CUID and MOID respectively. Also,
interpolate the CU composites to the model item CUPLY using polygonal
assignment. Store the distance to the nearest composite in item DISTP, and
the maximum number of data in item NCMPI>.
Panel 8 - Optional Search Parameters
Ellipsoidal Search and use of anistropic distances are optional.
<Use an ellipsoidal search of 210 by 120 by 120. Enter MEDS for
the rotation angle specification and check the box to Use anistropic
distances?>.
Panel 9 - Optional Data Selection
This panel will only come up if you choose the anistropic
distatncces option on the previous panel. The angles for this example
are ROT = 10, DIPN = 0 and DIPE = 0.
Panel 11 - Optional Geologic Codes
This panel provides options for up to three block limiting items
and two code matching items. Use only Rock Type 1 by specifying
Rock as a block limiting item and entering the value 1 as the
corresponding integer code. Also use ROCK as a code matching item.
Panel 12 - Optional Data Selection
Use the Reset option.
Results
The results of the interpolation are saved to the model file 15; to
view the results, create a model view in MineSight 3D.
Exercise
Rerun for Rock Type 2. Change search distances and use option
omit. Change the following panels:
Panel 2 - M620V1/V2 IDW Search Parameters
<Change the filename extensions for both the run and report files to
“id2”>.
Panel 3 - M620V1/V2 IDW Search Parameters
<Specify a 3-D search to use all composites within 240m (based on
variograms) horizontally and 50m vertically of a block>.
Panel 8 - Optional Search Parameters
<For this run, change the major, minor, and vertical search distances to
240, 180, and 180 respectively>.
Panel 9 - Optional Data Selection
The angles for this example are ROT = 45, DIPN = 0 and DIPE = 0.
Panel 11 - Optional Geologic Codes
Use only Rock Type 2 by specifying ROCK as a block limiting item
and entering the value 2 as the corresponding integer code.
Panel 12 - Optional Data Selection
Use the OMIT option for this second interpolation pass.


Notes

Again, the results for the interpolation of Rock type 2 can be
checked by creating a Model View in MineSight 3D.
Ordinary Kriging
Ordinary kriging is an estimator designed primarily for the local
estimation of block grades as a linear combination of the available
data in or near the block, such that the estimate is unbiased and has
minimum variance. It is a method that is often associated with the
acronym B.L.U.E. for best linear unbiased estimator. Ordinary kriging
is linear because its estimates are weighted linear combinations of the
available data, unbiased since the sum of the weights is 1, and best
because it aims at minimizing the variance of errors.
The conventional estimation methods, such as inverse distance
weighting method, are also linear and theoretically unbiased.
Therefore, the distinguishing feature of ordinary kriging from the
conventional linear estimation methods is its aim of minimizing the
error variance.
Kriging with MineSight
Before producing an interpolation using kriging, you developed a
variogram. Three types of variograms are allowed:
• Spherical
• Linear
• Exponential
On the MineSight Compass™ Menu tab, <select the Group 3-D
Deposit Modeling, and the Operation Calculation; from the procedure list,
select procedure pintrp.dat - Model Interpolation>. Select option 1 for
Ordinary Kriging. Fill out the following panels.
Panel 2 - M624V1: Kriging Search Parameters
This panel provides input for the model and composite files to
use, the area to interpolate, and optional filename extensions. For
this example, use model file 15, composite file 9, and specify ‘kr1’ as
the filename extension for both the run and report files.
Panel 3 - M624V1: Kriging Search Parameters
<Specify a 3-D search to use all composites within 210m (based on
variograms) horizontally and 50m vertically of a block. Specify that the
closest composite must be within 100 meters, and allow a minimum of 3
composites and a maximum of 16>.
Panel 5 - Interpolation Control Items
This panel allows you to specify the items and method for
interpolation. Interpolate the CU composites using kriging to the
model item CUKRG.
Panel 7 - Optional Search Parameters
Ellipsoidal Search and use of anisotropic distances are optional.
For this run, <enter major, minor, and vertical search distances of 210,
120, and 120, respectively. Enter MEDS for the rotation angle specification
and check the box to Use anisotropic distances?>


Notes

Panel 8 - Optional Data Selection


This panel will only come up if you choose the anisotropic
distances option on the previous panel. The angles for this example
are ROT = 10, DIPN = 0 and DIPE = 0.
Panel 10 - Optional Input Parameters/Composite Type
Use this panel to specify the variogram parameter file (if used),
and optional block discretization parameters.
Panel 12 - Variogram Parameters
This panel provides entry for the variogram parameters: model
type, nugget, sill, range, and direction of major axis. <Enter the values
for Rock type 1 from the table in the Variograms section of the workbook,>
recalling that the sill in the table includes the nugget effect.
Panel 13 - Optional Input Parameters
Specify the storage of the Kriging variance in the item CUKVR.
Panel 14 - Optional Block Limiting and Geologic Matching
This panel provides options for up to three block limiting items
and two code matching items. Use only Rock Type 1 by specifying
ROCK as a block limiting item and entering the value 1 as the
corresponding integer code. Also use ROCK as a code matching item.
Panel 15 - Optional Data Selection
Use the Reset option.
The results of the interpolation are saved to model file 15; you can
check the results visually by creating a Model View in MineSight 3D.
Exercise
Repeat calculations for Rock Type 2. Use variograms calculated in
Section 2. Change search distances as you did for IDW.


Debugging Interpolation Runs Notes


Learning Outcomes
In this section you will learn how to debug your interpolation runs.
You will learn how to:
• make a list of the composites used for interpolating a block,
• make a visual representation of your search parameters, and
• find out how small changes in search parameters affect your
interpolation.
Kriging Debug Procedure
Exercise 1
<Run procedure pintrp.dat with debug option on.> Fill panels as
described in the following:
Panel 2 – M624V1: Kriging Search Parameters
This panel allows you to select the desired model file (file 15,
msop15.dat), the desired composite file (file 9, msop09.dat) and
filename extensions (we’ll use ‘dbg’ for this example).
Panel 3 – Debug Parameters
This panel allows you to select the block to debug (for this example,
bench 35, row 75, column 85) and the output files and objects.
Panel 4 – M624V1: Kriging Search Parameters
This panel accepts input for the search distances and parameters
– for this and subsequent examples, we’ll use the search parameters
and variogram parameters used for rock type 2. <Enter a search
distance of 240m in the x and y directions. Use 240m as well for the
maximum distance, 50m in the z direction and a maximum 3D distance
for closest composite of 100m. Specify a minimum of 3 composites and a
maximum of 16 composites to interpolate a block.> Leave the rest of the
panel entries blank.
Panel 6 – Interpolation Control Items
This panel provides input for the items to be interpolated. For this
example, we’ll interpolate the composite CU values into the model
item CUKRG, using the kriging option (calc type 0).
Panel 8 – Optional Limiting and Search Parameters
This panel accepts a number of optional search parameters
relating to anisotropic distances and search ellipses. <Enter an
ellipsoidal search of 240m by 180m by 180m. Use MEDS rotation angles
and turn on the anisotropic distances option.>
Panel 9 – Optional Data Selection
<Define the ellipsoidal orientation. Enter 45 degrees for first rotation
and 0 degrees for the other two.>
Panel 13 – Variogram Parameters
This panel accepts the variogram parameters if no variogram
parameter file has been previously entered. Use the following
parameters:

MineSight for Modelers—Geostatistics Page - 97


Debugging Interpolation Runs Proprietary Information of Mintec, Inc.

Notes

model   nugget   Sill (without nugget)   Ranges     Directions (MEDS)
EXP     0.007    0.078                   80/60/60   45/0/0

Panel 15 – Optional Block Limiting
This panel accepts a number of optional search parameters
relating to block limiting. <Enter rock type 2, and use geologic matching
for rock type>.
Examine Results
<Open file rpt624.dbg.> This is a list of the composites used:

Block (75, 75) Calculated = 0.3906


Note all of the composites are well inside the 240m maximum
search distance. Distances reported are adjusted by anisotropy. If
you want to see the real distances, rerun procedure without the
anisotropic option on. You should now notice that the order of the
composites has changed.
Exercise 2
<Rerun procedure. This time apply a max distance of 120m. Change the
ellipsoidal search distances to 120x90x90.> What do you notice?
Exercise 3
<Rerun procedure. This time apply a max distance to the closest
composite equal to 50m.> What do you notice?
Exercise 4
<Rerun procedure. Change max distance to the closest composite back to
100m. This time apply max number of composites to be used equal to 7.>
Check report and study results:


Notes

Exercise 5
<Rerun procedure using the outlier options from the last panel. Use
an outlier value of 0.70. Use max distance for outliers equal to 75.>
Open report:

It seems like there are no more composites available. <To add one
more (7 total), increase max distance to 240m, as well as the ellipsoidal
searches to 240x180x180.>

Exercise 6


Notes

<Rerun procedure using the octant/quadrant search options.>


Ellipsoidal Search
Exercise
<Run procedure pintrp.dat to build some ellipses and view them in
MineSight 3D.> Do not use octant/quadrant options.
In MineSight 3D, you can import the ellipse as a MineSight object
(import file ellips.msr). Adjust the properties of the object if needed.


Point Validation/Cross Validation of Estimation Methods and/or Search Parameters Notes
In this section you will use inverse distance weighting and
Kriging methods to determine the error between the estimated and
the actual known value of composite data at selected locations.
Then, you will decide which method is more appropriate. You will
also validate search parameters.
Learning Outcomes
In this section you will learn:
• The types of interpolations available in point validation
• The use of controls on the interpolation
• How to interpolate point grades with MineSight
Types of Point Interpolations
Each composite is interpolated using different powers of inverse
distance weighting method and Kriging. The results are then
summarized showing the differences between the estimated and
actual known data values. The following interpolations are done by
default by the program.
• Inverse distance weighting (IDW) of power 1.0
• IDW of power 1.5
• IDW of power 2.0
• IDW of power 2.5
• IDW of power 3.0
• Kriging
Interpolation Controls
There is a large range of parameters for controlling the point
interpolation.
• Search distance N-S, E-W, and by elevation
• 3D ellipsoidal search
• Minimum and maximum number of composites to use
• Maximum distance to the nearest composite
• Use or omit geologic control
• Inverse distance powers and variogram parameters
Point interpolation program M524V1 outputs the results for each
composite used to an ASCII file. These results are evaluated using
program M525TS and the statistical summaries are output to the
report file.
Point Validation
On the MineSight Compass™ Menu tab, <select the Group Statistics,
and the Operation Calculation; from the procedure list, select procedure
p52401.dat - Point Validation>. Fill out the panels as described.


Notes

Panel 1 - File and Area Selection


This panel provides input for the composite file to use, and for the
area of the model to validate. <Select the composite file 9 and leave the
rest of the panel blank>.
Panel 2 - Point Interpolation
This panel provides input for the validation item and search
parameters. <Enter the Rock type 1 search parameter values for the CU
item (210m horizontal, 50m vertical, 210m 3D). Use 3 as the minimum
number of composites and 16 as the maximum.>
Panel 3 - Optional Ellipsoidal Search Parameters
Ellipsoidal Search and use of anisotropic distances are optional.
For this run, <enter major, minor, and vertical search distances of 210,
120, and 120, respectively. Enter MEDS for the rotation angle specification
and check the box to Use anisotropic distances? >
Panel 4 - Optional Data Selection
This panel will only come up if you choose the anisotropic
distances option on the previous panel. The angles for this example
are ROT = 10, DIPN = 0 and DIPE = 0.
Panel 5 - Optional Parameters
<Enter the name of the previously prepared variogram parameter file
(vario.rk1)>. If the variogram parameter file is not entered in this
panel, you will be prompted to enter the variogram parameters on
subsequent panels.
Panel 6 - Optional Data Selection for Point Interpolation
This panel allows you to define portions of the data to include
or exclude from the analysis based on item values. There is also an
optional selection item and geologic matching item available for
further data limiting. <Select only ROCK item value 1 for plotting by using
the RANGE command, and specify ROCK as the geologic matching item>.
Panel 8 - Optional Parameters
Use the default IDW powers for each case; generate a detailed
report for case 3, using a 0.1 frequency interval and 40 intervals.
Results
This report (on the next page) shows summary statistics for actual
composite grades versus the results from different interpolations.

This section of the report shows the statistics of the differences between the actual and kriging values. The histogram shown is the histogram of the errors.
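These error statistics can be sketched from the error array of the earlier leave-one-out example; a hedged illustration:

    import numpy as np

    def error_summary(errors):
        """Summary statistics of cross-validation errors
        (errors = estimated minus actual)."""
        return {
            "n": errors.size,
            "mean error": errors.mean(),          # near 0 for an unbiased method
            "std of errors": errors.std(),
            "mean absolute error": np.abs(errors).mean(),
            "mean squared error": (errors ** 2).mean(),
        }

The method (or search setup) with a mean error near zero and the smallest error spread is generally the one to prefer.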

This section of the report file shows correlation statistics between the actual and Kriging values.

This section of the report file shows correlation statistics between the actual and inverse distance values.

Exercise
Change some of the search parameters and rerun the above
procedure. What do you observe?

Model Statistics/Geologic Reserves

Prior to this section you added the grades, topography, and necessary geology into the mine model. In this section you will summarize the mine model data with frequency distributions and calculate the geologic resources.
Learning Outcomes
In this section you will learn:
• How to calculate grade and tonnages above different cutoffs
• How to calculate grade and tonnages between cutoffs
• How to produce a histogram plot of model values
• How to generate reserves by bench or geological resources
• How to generate probability plots from the model
Model Statistics
Use MSDA to create model histograms, grade-tonnage (GT) curves, probability plots, and scatterplots.

Model Calculations

In this section you will calculate block values for item EQCU and
store them in the model.
Learning Outcomes
In this section you will learn how to perform calculations using
information stored in the model.
Model Calculations
<From the MineSight Compass Menu tab, select the Group Statistics, and the Operation Calculation; from the procedure list, select procedure p61201.dat - User-Calcs (Model)>. Fill out the panels as described.
Panel 1 - Mine Model/Surface File Data Items to be Used
This panel provides input for the model file type, area
specification and filename extensions. <Specify file 15, and leave the
rest of the panel blank.>
Panel 2 - Mine Model/Surface File Data Items to be Used
This panel provides input for the model items to use in the
calculation. Since we will calculate EQCU values from values stored
for CUID and MOID, we need to enter these three items. <Enter zeros
for optional values to substitute for undefined values.>
Panel 3 - Optional Data Selection for M612RP Calculations
This panel allows you to define portions of the data to include or
exclude from the analysis based on item values. You can also specify
an optional boundary file in this panel. <Use the RANGE command to
specify ROCK Types 1 and 2, and CUID values between 0 and 99.>
Panel 4 - Define Special Project Calculations for M612RP
In this panel the model calculation(s) are defined; if more than ten
calculations are required, check the box at the bottom of the panel.
For this exercise, we will <define EQCU = CUID + MOID * 5 and leave
the box unchecked.>
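For illustration only, the equivalent calculation over plain NumPy arrays (cuid and moid are hypothetical stand-ins for the CUID and MOID model items, not the M612RP implementation):

    import numpy as np

    def eqcu(cuid, moid, mo_factor=5.0):
        """EQCU = CUID + MOID * 5, substituting 0 for undefined values
        (NaN plays the role of 'undefined' here)."""
        cuid = np.nan_to_num(np.asarray(cuid, dtype=float), nan=0.0)
        moid = np.nan_to_num(np.asarray(moid, dtype=float), nan=0.0)
        return cuid + mo_factor * moid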
Panel 5 - Optional Storage of Items Back to the Model
This panel provides input for specifying the item(s) in the model
where the result is to be stored. For this exercise, we’ll <store only the
EQCU item.>
The results are stored directly to the specified item in the model
file. To check the results visually, simply create a Model View in
MineSight 3D.

Quantifying Uncertainty

Prior to this section you calculated the distance to the closest
composite and the Kriging variance.
Learning Outcomes
In this section, you will learn how to quantify your confidence in
the results of the block model calculations.
We will use different approaches:
• distance to the closest composite,
• Kriging variance,
• combined Kriging variance,
• and Relative Variability Index.
Distance to the Closest Composite Calculations
<Assign the value of 1 to model item ZONE, when DISTP = 0 to 39>
(since we used the same search distances for IDW and Kriging,
item DISTP represents the distance to the closest composite for
both methods).
A distance of 39m corresponds roughly to 25% of the model; fifty percent of the model was assigned distances up to 57m, and 75% up to 77m. Note that these are not true distances (they are anisotropic distances).
On the MineSight Compass™ Menu tab, <select the Group 3-D
Deposit Modeling, and the Operation Calculation; from the procedure list,
select procedure p61201.dat - User-Calcs (Model)>. Fill out the panels
as described.
Panel 1 - Mine Model/Surface File Data Items to be Used
This panel provides input for the model file type, area
specification and filename extensions. Specify file 15 and leave the
rest of the panel blank.
Panel 2 - Mine Model/Surface File Data Items to be Used
This panel provides input for the model items to use in the
calculation. Since we are not actually doing any calculation on this run, leave this panel blank.
Panel 3 - Optional Data Selection for M612RP Calculations
This panel allows you to define portions of the data to include
or exclude from the analysis based on item values. You can also
specify an optional boundary file in this panel. <Use the RANGE
command to specify DISTP values between 0 and 39, and CUID values
between 0 and 99>.
Panel 4 - Define Special Project Calculations for M612RP
In this panel the model calculation(s) are defined; if more than ten
calculations are required, check the box at the bottom of the panel. For
this exercise, we will <define ZONE = 1 and leave the box unchecked>.
Panel 5 - Optional Storage of Items Back to the Model
This panel provides input for specifying the item(s) in the model where the result is to be stored. For this exercise, we'll store only the ZONE item. Repeat the procedure for the remaining zones (a compact sketch of this binning follows the list):
ZONE = 2 when DISTP = 40 to 57
ZONE = 3 when DISTP = 57 to 77
ZONE = 4 when DISTP > 77
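The four assignments amount to binning DISTP at the quartile breaks; a compact sketch, with distp standing in for the model item:

    import numpy as np

    def distance_zone(distp):
        """ZONE codes from distance to the closest composite:
        1: 0-39m, 2: 40-57m, 3: 57-77m, 4: > 77m."""
        breaks = [39.0, 57.0, 77.0]                   # quartile breaks used above
        return np.digitize(distp, breaks, right=True) + 1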

These values for ZONE will be used to define proven ore (ZONE = 1 or 2), probable ore (ZONE = 3), and possible ore (ZONE = 4).
Exercise
Make a model view of item ZONE in MineSight 3D to check the
results of the code assignment.

Kriging Variance
Make a model view of the item CUKVR as it was calculated by
running procedure P62401.DAT. Use cutoffs of 0.039, 0.055 and
0.087 (quartiles).

Combined Kriging Variance


Rerun the Kriging procedure for each rock type. Calculate
combined variance instead of Kriging variance. Store in item
CUKCV. Make a model view. Use cutoffs of 0.005, 0.010, and 0.021
(quartiles).

Relative Variability Index


Rerun the Kriging procedure for each rock type. Calculate RVI
instead of Combined variance. Store in item RVI. Make a model view.
Use cutoffs of 0.22, 0.34, and 0.65 (quartiles). What do you notice?

Change of Support

Prior to this section you had the composite and 3D model files
initialized and loaded. You also calculated the classical statistics and
the grade variograms of the composites.
Learning Outcomes
In this section you will learn:
• What change of support means
• How to determine indicator cutoffs
• How to calculate block variance for different size blocks
• What is Krige’s relationship of variance
• How to determine change of support correction factor
• How to do global change of support correction
• Change of support methods
Change of Support
The term support at the sampling stage refers to the characteristics
of the sampling unit, such as the size, shape and orientation of
the sample. For example, channel samples and diamond drillcore
samples have different supports. At the modeling and mine
planning stage, the term support refers to the volume of the blocks
used for estimation and production.
It is important to account for the effect of the support in our
estimation procedures, since increasing the support has the effect
of reducing the spread of data values. As the support increases,
the distribution of data gradually becomes more symmetrical. The
only parameter that is not affected by the support of the data is
the mean. The mean of the data should stay the same even if we
change the support.
Global Correction
There are several methods available for adjusting an estimated distribution to account for the support effect. The most popular ones are the affine correction and the indirect lognormal correction. These methods have two features in common (a sketch of the affine case follows the list):
1. They leave the mean of the distribution unchanged.
2. They change the variance of the distribution by some
“adjustment” factor.
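As a sketch of what such an adjustment does, the affine correction shrinks each value toward the mean by the square root of the variance adjustment factor f, leaving the mean untouched (a minimal illustration, assuming f is the block-to-point variance ratio):

    import numpy as np

    def affine_correction(values, f):
        """Affine change-of-support correction: reduces the variance of the
        distribution by factor f while keeping the mean unchanged."""
        m = values.mean()
        return m + np.sqrt(f) * (values - m)

With f < 1 the corrected values keep the same mean but their variance drops to f times the original, mimicking the move from point to block support.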
Krige’s Relationship of Variance
This is the special complement to the partitioning of variances, which simply says that the variance of point values is equal to the variance of block values plus the variance of points within blocks:
s²(points in deposit) = s²(blocks in deposit) + s²(points within blocks)
For example, with the composite (point) variance of 0.257 and the 20 x 20 x 15 block variance of 0.177 used in this project, the variance of points within blocks is 0.257 − 0.177 = 0.080.

Calculation of Block Variance


<On the MineSight Compass Menu tab, select the Group Statistics,
and the Operation Calculation; from the procedure list, select procedure
psblkv.dat - Block Variance>. Fill out the panels as described.
Panel 1 Block Variance Calculation
This panel provides input for the variogram parameter file, block
size and discretization factors. Use the existing variogram parameter
file vario.rk1 and discretize by a factor of four in both horizontal
directions, and a factor of three vertically.
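Behind this panel, the block variance is the point (sill) variance minus the average variogram value within a block, estimated by discretizing the block into a grid of points. A hedged sketch for a single-structure spherical model (the parameter values shown are illustrative, not read from vario.rk1):

    import numpy as np
    from itertools import product

    def spherical(h, nugget, sill, rng):
        """Spherical variogram model, gamma(h)."""
        hr = np.minimum(h / rng, 1.0)
        return nugget + (sill - nugget) * (1.5 * hr - 0.5 * hr ** 3)

    def block_variance(block=(20.0, 20.0, 15.0), disc=(4, 4, 3),
                       nugget=0.08, sill=0.257, rng=210.0):
        """Block variance = sill minus the average gamma within the block."""
        axes = [(np.arange(n) + 0.5) * (s / n) for s, n in zip(block, disc)]
        pts = np.array(list(product(*axes)))          # discretization points
        d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
        g = spherical(d, nugget, sill, rng)
        g[d == 0.0] = 0.0                             # gamma(0) = 0 by definition
        return sill - g.mean()                        # sill - gamma-bar(v,v)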
Results
The output report file summarizes the calculated block variance for the specified block size and discretization.

Exercise 1
Change block size to 10x10 and re-run the procedure. What
change do you see in the block variance?
Exercise 2
Change block discretization to 10x10x5 and see the effect on the
block variances of 20x20 blocks.
Exercise 3
If you have another variogram parameter file, try running the
procedure with it. What do you observe?
Change of Support on Composite Values
On the MineSight Compass Menu tab, <select the Group Statistics,
and the Operation Calculation; from the procedure list, select procedure
p40201.dat - Statistics (Composites)>. Fill out the panels as described.
Panel 1 - 3D Composite Data Statistical Analysis
This panel provides input for the composite file type and item to
analyze. Enter composite file 9 and specify CU as the base assay for
analysis and histogram generation.

Panel 2 - 3D Composite Data Statistical Analysis


This panel provides input for the histogram frequency parameters
and filename extensions. Use a minimum value of zero, and report
40 intervals with an interval of 0.1. Enter a filename extension of ‘cu’
for the run and report files, and check the change of support option.
Panel 3 - Change of Support Parameters
This panel provides input for the output data file and the block
variance. Send the results of this run to file block.dat, and use the
block variance from our earlier kriging run (0.177).
Panel 4 - Optional Data Selection for Composite Statistics
This panel allows you to define portions of the data to include
or exclude from the analysis based on item values. Titling options
are also provided on this panel. Select only ROCK item value 1 by
using the RANGE command, and enter a title such as ‘15m bench
composites - rock type 1’.
Panel 5 - 3D Coordinate Limits for Data Selection
This panel allows you to limit the data further, either by
specifying a portion of the project area or a boundary file. Leave this
panel blank.
Panel 6 - Histogram Plot Attributes
This panel provides options for setting up your histogram display.
Set up the Histogram Plot Attributes as desired.
Results
Look into the file BLOCK.DAT using Notepad or another text
editor. The first column in this file displays the theoretical block
grades after the change of support correction is made. The second
column contains the original data as input.

Distribution of Theoretical Blocks


Use MSDA to create histograms of the theoretical grades.
Compare to the original.
Volume-Variance Correction on Composite Data
On the MineSight Compass Menu tab, <select the Group Statistics,
and the Operation Calculation; from the procedure list, select procedure
pcmpvc.dat - Volume-Variance (Composites)>. Fill out the panels as
described.
Panel 1 - Volume-Variance Correction on Composite Grades
This panel provides input for the source composite file, interval
specification, and filename extension naming options. Accept the
default file 9, and check the box to bypass file 12.
Panel 2 - Volume-Variance Correction on Composite Grades
This panel provides input for the analysis item, destination item,
and an optional selection item. Use the CU item as the source, and
store the results in item CUBLK. Check the box to store results, and
select the affine correction option (default).
Panel 3 - Optional Data Selection for Composites
This panel allows you to define portions of the data to include
or exclude from the analysis based on item values. Titling and
boundary file options are also provided on this panel. Select only
ROCK item value 1 by using the RANGE command.
Panel 4 - Volume-Variance Correction Parameters
The volume-variance correction factor is the block-to-point variance ratio: 0.17699/0.257 = 0.689. Also enter the average grade of the composites (0.7053).
Results
The report file (rpt508.lvc) contains a summary of the Volume-
variance correction results.

Exercise 1
Run stats on item CUBLK to look at the new distribution.

Exercise 2
Run the Volume-Variance Correction using the indirect lognormal
method. Use 0.72 for the coefficient of variation. (Store back to
CUBLK, run statistics.)
Volume Variance Correction on Model Data
<From the MineSight Compass menu set Select Group Name =
STATISTICS, Operations Type = Calculation, and Procedure Desc. =
Volume-variance - pmodvc.dat>
Panel 1 Select the File to Use
Leave this panel blank.
Panel 2 Select Items to be Used
Select indirect lognormal correction option. Use item CUKRG.
Store back to item CUKGG.
Panel 3 Optional Data Selection
Select Rock Type 1.
Panel 4 Volume-Variance Correction Parameters
Use a correction factor of 1.08. Assume that your SMU is 10 x
10 x 15. If you run procedure psblkv.dat for this size of a block, you
should get a block variance of 0.19157; therefore the correction factor
should be 0.19157/0.17699 = 1.08. The average model grade for Rock
Type 1 is 0.6917 and the c.v. is 0.5025.
Results

Exercise 1
Run stats on item CUKGG to look at the new distribution.
Exercise 2
Plot grade tonnage curves of CUKGG and CUKRG items to
compare the original and adjusted grade distribution.

Outlier Restricted Kriging

Learning Outcomes
In this section you will learn about outlier restricted kriging and
how to set it up in MineSight.
Outlier Restricted Kriging (ORK)
ORK requires two major steps. The first step consists of assigning
to each block the probability P(x) or the proportion of the outlier
data within the block. This value is between 0 and 1. The second step
involves the assignment of the weights to each data point within
the search volume of the block. These weights are assigned in such
a way that the sum of the weights for the outlier data is equal to the
probability or proportion which is determined from the first step.
The weights for the other data add up to 1 − P(x). Thus, the sum of all
the weights is 1, as it should be for an unbiased estimator.
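A hedged sketch of that weight adjustment (not the actual MineSight implementation): starting from ordinary kriging weights, each group is rescaled so the outlier weights sum to P(x) and the remaining weights to 1 − P(x):

    import numpy as np

    def ork_weights(ok_w, is_outlier, p_block):
        """Rescale kriging weights so that outlier composites carry a total
        weight of p_block and the other composites carry 1 - p_block."""
        w = np.array(ok_w, dtype=float)
        out = np.asarray(is_outlier, dtype=bool)
        for mask, target in ((out, p_block), (~out, 1.0 - p_block)):
            s = w[mask].sum()
            if s > 0.0:
                w[mask] *= target / s
            # if a group has no composites, its target weight is not assigned
        return w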
Determine the outlier cutoff grade
Use MSDA Probability Plot.
Make a composite CU probability plot and determine the outlier
value. Use only benches 2540 to 2600. For the next steps we are going to assume an outlier cutoff grade of 1.2.

Calculate indicators
In order to calculate the probability of each block being above grade 1.2, we are going to assign indicator values to the composites:
0 if the grade is below 1.2,
1 otherwise.
This step requires that you have an additional item in file 9 to
store the indicators with min =0, max =1 (or more) and precision = 1.
<Select Group Composites, Operations Calculations. Run procedure p50801.dat - User-Calcs (Composites).
Use item ROCKX. For benches 2540 to 2600 assign value zero to item ROCKX (use rock types 1 and 2). Then assign value ROCKX = 1 for CU > 1.2.>
Calculate probabilities
Assign the probability of occurrence of outlier grades to the
blocks. This step requires that you have an additional item in file 15
to store the probabilities. The item should be initialized with min =0,
max =1, precision = 0.01 or 0.001. Use Inverse Distance Weighting to
assign the probabilities.
<Select Group 3D-Modeling, Operations Calculations. Run procedure pintrp.dat - IDW interpolation.
Use the same search parameters used in previous runs. Add one more item to interpolate (CUIND using ROCKX). Interpolate only benches 24 to 28. Use OR1 and OR2 as extensions for run files and reports.>
Perform ORK
<Select Group 3D-Modeling, Operations Calculations. Run

procedure pintrp.dat.> Select option 3 for Outlier Restricted Kriging.
<Use the same search parameters used in previous runs. Change the model item to interpolate to CUPLY. Interpolate only benches 24 to 28. Use OR1 and OR2 as extensions for run files and reports. Use item ROCKX from the composite file and item CUIND from the model in the ORK panel.>
Compare results with regular kriging
Compare results from ordinary kriging and outlier restricted
kriging. Run model statistics and create grade-tonnage curves.

Indicator Kriging to Define Geologic Boundary above a Cutoff

Learning Outcomes
In this section you will learn:
• How to calculate the indicator function (0 or 1) based on a
grade cutoff
• How to calculate the probability of a block having a grade
value above the cutoff
• How to view the probabilities (from block model) in
MineSight
Indicator Kriging
The basis of the technique is transforming the composite
grades to a (0 or 1) function. All composite grades above cutoff
can be assigned a code of 1 whereas all the composites below can
be assigned a code of 0. Then a variogram can be formed from
the indicators which can be used for Kriging the indicators. The
resulting Kriging estimate represents the probability of each block
having a grade value above the cutoff.
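A minimal sketch of the indicator transform, assuming a cutoff of 0.3 as in the exercise below (names are illustrative):

    import numpy as np

    def indicator(grades, cutoff=0.3):
        """0/1 indicator: 1 where the grade is above the cutoff, else 0.
        Kriging these values estimates, block by block, the probability
        of the grade being above the cutoff."""
        return (np.asarray(grades, dtype=float) > cutoff).astype(int)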
Assign Indicators
On the MineSight Compass Menu tab, <select Group Composites,
Operation Type Calculations. Run procedure p50801.dat -User-Calcs
(comps) to assign indicators to item altrx.>
Panel 1 Labels of Composite Items to use
<Enter item altrx as the item to store the indicators.>
Panel 2 Optional Data Selection
<Enter a RANGE for the calculation on rock type 1 and 2 and cu grade
0 to 99.>
Panel 3 Limits for Data Selection
Leave this panel blank.
Panel 4 Special Project Calculations
<In this panel you need to assign an initial code of 0 to item altrx. Repeat the procedure, but this time enter a range for cu from 0.3 to 99, and use altrx = 1.>
Variogram of Indicators
Use MSDA to calculate and model an indicator variogram
(normal variogram of item ALTRX).
<Compute 4 normal variograms (4x1), starting at horizontal angle 0.0
with 45 degree increments and at a vertical angle of 0.0. Use 10 intervals
with 50 m lag distance. Use 22.5 degree horizontal windowing angle and
10 degree vertical windowing angle.>

Model the Indicator Variogram


<Make a new model (nugget = .068, sill = .234, range = 370).>
Krige Indicators
On the MineSight Compass Menu tab, <select Group 3D Deposit Modeling, Operations Type
Calculations. Run procedure pintrp.dat - Ordinary Kriging.>
Panel 2 Files and Model Specification Area
<Use extension alt.>
Panel 3 Krige Search Parameters
<Enter a search distance of 370m in the x and y directions as well as a maximum
distance. Enter 50m in the z direction and a maximum 3D distance for closest composite
of 100m. Specify a minimum of 3 composites and a maximum of 16 composites to
interpolate a block.>
Panel 5 Interpolation Control Items
<Use composite item ALTRX to interpolate item CUIND.>
Panel 12 Variogram Parameters
<Use figures from previous exercise.>
View Results in MineSight
<Create a view of item CUIND. Set up intervals of 0 to 1 with an increment of 0.1.
Black out all cutoffs except 0.5. What you will see is a probabilistic boundary of model
values above 0.3.>

Multiple Indicator Kriging (M.I.K.)

Prior to this section, you must have initialized and loaded the
composite and 3D model files. You must also have calculated the
classical statistics and the grade variograms of the composites.
Learning Outcomes
In this section you will learn:
• How to determine indicator cutoffs
• How to calculate indicator variograms
• How to model indicator variograms
• How to determine indicator class means
• How to assign indicators to composite data
• How to set up indicator variogram parameter files
• How to calculate affine correction
• How to do multiple indicator Kriging run
• How to calculate indicator Kriging reserves
Overview
Multiple Indicator Kriging (M.I.K.) is a technique developed
to overcome the problems with estimating local recoverable
reserves. The basis of the technique is the indicator function which
transforms the grades at each sampled location into a [0,1] random
variable. The indicator variograms of these variables are estimated
at various cutoff grades. The technique consists of estimating the
distribution of composite data. The distribution is then corrected
to account for the actual selective mining unit (SMU) size. This
yields the distribution of SMU grades within each block. From that
distribution, recoverable reserves within the block can be retrieved.
Accumulation of recoverable reserves for these blocks over a volume
gives the global recoverable reserves for that volume.
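To fix ideas, here is a hedged sketch of the final step, reading recoverable reserves off the estimated block distribution. Here cdf[k] is the kriged probability of the block grade being below cutoffs[k], class_means holds the mean grade of each class (class 0 below the first cutoff, the last class above the last cutoff), and the mining cutoff zc is assumed to coincide with one of the indicator cutoffs:

    import numpy as np

    def recoverable(cdf, cutoffs, class_means, zc):
        """Fraction of a block above mining cutoff zc, and its mean grade,
        from an indicator-kriged cdf and per-class mean grades."""
        class_means = np.asarray(class_means, dtype=float)
        p = np.diff(np.concatenate(([0.0], cdf, [1.0])))  # class probabilities
        k = np.searchsorted(cutoffs, zc, side="right")    # first class above zc
        frac = p[k:].sum()
        grade = (p[k:] * class_means[k:]).sum() / frac if frac > 0.0 else 0.0
        return frac, grade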
Uses of Indicators
Indicators can be used to:
• deal with outliers
• model multiple populations
• estimate categories (descriptive or qualitative variables)
• estimate distributions
• estimate confidence intervals
Cutoff Analysis
<On MSDA Manager, click on Utilities, and select Cutoff Analysis (PIKCUT). Run the analysis on item CU, using 10 cutoffs. Use a filter on ROCKX types 1 and 2.> The output report file can be displayed by clicking on the file name in MSDA Files. The report file has a *.pik extension.

Results:
The PIKCUT report can be viewed by clicking on the file in MSDA Files.

Class   Cutoff >=   Cutoff <   Samples   Average   Metal (units)   %Total
1       0           0.31       687       0.165     113.15          9.10
2       0.31        0.423      311       0.365     113.35          9.11
3       0.423       0.537      237       0.478     113.16          9.10
4       0.537       0.657      191       0.593     113.30          9.11
5       0.657       0.757      161       0.705     113.51          9.13
6       0.757       0.867      140       0.808     113.06          9.09
7       0.867       0.999      123       0.926     113.85          9.16
8       0.999       1.166      106       1.071     113.52          9.13
9       1.166       1.399      88        1.293     113.78          9.15
10      1.399       1.68       75        1.525     114.34          9.19
11      1.68        3.697      54        2.012     108.62          8.73

Total                          2173      0.572     1243.60         100.00

Calculating Indicator Variograms


<In MSDA Manager, click on Tools and select Build Variogram. Select Indicator as the variogram type. Run the variograms on item CU.
Click on the Indicators tab to specify the indicator cutoffs. Enter the cutoffs using the PIKCUT table.>

Modeling Indicator Variograms


Indicator variograms can be modeled in MSDA using the Variogram 3D Manager under Utilities if directional variograms for each cutoff were calculated. They can also be modeled using the Auto-Fit Variograms option, and the results can then be displayed using the Multiple Chart Viewer under Utilities.


Modeling the MIK Variogram


Proceed with your variogram modeling as if they were normal
variograms.
Model the variograms using the Variogram 3D Manager tool.
Save your model. You need to do so for all cutoffs (one cutoff set of
the variogram at a time). The File Chooser in the Variogram 3D
Manager allows you to filter which variograms to pick based on the
cutoff.

After the variograms are modeled, use the Save option to save the
model.


Variogram Parameter File


First Method
After all of the models are created and saved, they can be
combined into one file for use with the MineSight MIK program. Use
the option under the Utilities menu.

Pick all the saved model files and select whether you want to make a variogram file for M624IK or M624MIK (GSLIB). Fill out the panel and save the file at the end.

Second Method


<On the MineSight Menu tab, select Group MIK , Operations Type
Edit. Run Procedure pvgmik.dat - MIK Variogram Parameter File.>
Panel 1 Output and Description File
Variogram parameters will be written to the output file specified (vario.mik). Use 10 cutoffs. The mean beyond the last cutoff is 2.012, whereas the maximum value is 3.7. The affine correction factor must be equal to or less than 1.
Panel 2 Variogram Parameters
<Enter the parameters for the first cutoff. Continue to set up the
variogram parameters for the other indicator cutoffs.> You can use the
figures from the Pik cut table.

Multiple Indicator Kriging


<Select Group Name MIK, Operations Type Calculations. Run procedure pintrp.dat - Select option 10 for MIK.>
Panel 2 Select Files/Area
<Use MIK as file extensions.>
Panel 3 M624MIK Search Parameters
<Specify a 3D search to find all composites within 200m horizontally
and 50m vertically of a block. Use 150m as maximum distance to the closest
composite. Use a minimum of 3 and a maximum of 16 composites.>
Panel 5 MIK Interpolation Control Items
The program computes the grade and the percent of ore above
the specified cutoff for each block and stores them into the 3D block
model. <Use 0.2 as cutoff grade. Store grade to item CUPCT. Store block%
to item PCT. Specify that the block% item is a percent item. Use composite
item CU.> <Use 5 as min number of composites in a class interval for the
program to use local means.>
Panel 9 MIK Input Parameters
The MIK variogram parameter file must be specified (vario.mik).
Panel 10 Optional Geologic Codes
Include Rock Type 1 and 2 data only. Use geologic matching on
item Rock.
MIK Reserves
Use MSDA to calculate reserves based on CUPCT. Weight by topo and PCT. Multiply the blocks by 16.2 Kt/block.

Other Non-Kriging Interpolation Methods

Learning Outcomes
In this section you will learn:
• How to use the Trend Planes method of interpolation
• How to use the Gradient interpolation technique
Trend Plane Search Interpolation
M621V1 uses a similar interpolation scheme to M620V1. However,
the search is along the dip and strike of a trend plane specified by a
mine model code associated with a certain plane strike azimuth and
plane dip angle as well as with certain distances along the strike, dip
and off the plane.
The program can be run from procedure pintrp.dat. The procedure can be found in the MineSight Compass Menu, under Group 3-D Modeling, Operations Calculations.
Exercise
<Run procedure pintrp.dat - option 12.> Try to use the same
interpolation parameters as you did with Inverse distance and
Kriging methods.
Gradient Interpolation Technique
M625V1 uses gradients to neighboring points for weighting
the sorted composites during interpolation of mine model values.
M625V1 uses the tangent plane or gradient method to interpolate
block values in a mine model. Tangent planes are calculated
between all composites and a specified number of neighboring
composites. Each plane must satisfy the following conditions:
• the plane must pass through the function value at the point
in question (i.e., through the Z (grade) value),
• and the angles the plane makes with vectors or lines to all of
the various points in the neighborhood must be minimized.
The angles are weighted by a function of how far or near the
various neighboring points are from the point of interest. After
the tangent planes are generated, block values are calculated from
neighboring composites (now with gradients). The user needs to specify
how many neighbors are used in this calculation. The gradient
information for each composite is evaluated at the block location and
the calculation of the block value is weighted by the distance from
the block to each composite.
Exercise
<Run procedure pintrp.dat - option 13 under Group 3D Modeling, Operation Calculations.> Try to use the same search parameters as in
previous methods.

Practical Geostatistics for Earth Sciences

Introduction
Geostatistics refers to a collection of numerical techniques that
deal with the characterization of spatial attributes, employing
primarily random models. The Geostatistical techniques are applied
to study the distribution in space of essential values for geologists
and mining engineers, such as mineral content or thickness of
an ore body although their application is in no way limited to
problems in geology and mining. As a matter of fact, geostatistics
is applied today in any earth and related sciences imaginable
from environmental, agricultural, ecological applications to
meteorological weather forecasting and so on.
Historically, geostatistics can be considered as old as mining itself. When miners were picking and analyzing samples, and computing average grades weighted by the corresponding thickness or area of influence, one may consider that they were applying geostatistics without knowing it. However, it was Georges Matheron of France who coined the word “Geostatistics.” After the publication of the Theory of Regionalized Variables by Matheron in the early 1960s, the term “Geostatistics” became popular.
The classical statistical methods are based on the assumption that
the sample values are realizations of a random variable. The samples
are considered independent. Their relative positions are ignored,
and it is assumed that all sample values have an equal probability of
being selected. Thus, one does not make use of the spatial correlation
of samples although this information is very relevant and essential
in certain data sets such as the ones obtained from an ore deposit.
In contrast, in geostatistics one considers that the sample values
are realizations of random functions. On this hypothesis, the value
of a sample is a function of its position in the mineralization of
the deposit, and the relative position of the samples is taken into
consideration. Therefore, geostatistics is concerned with spatial data.
That is, each data value is associated with a location in space and
there is at least an implied connection between the location and the
data value. Geostatistics offers many tools describing the spatial
continuity that is an essential feature of many natural phenomena
and provides adaptations of classical statistical techniques to take
advantage of this continuity.
The objective of Geostatistical techniques can be defined as:
1. to characterize and interpret the behavior of the existing
sample data;
2. to use that interpretation to predict likely values at locations
that have not yet been sampled.
In summary, geostatistics involves the analysis and prediction of
any spatial or temporal phenomena, such as mineral grades, quality
parameters, impurities, porosity, contaminant concentrations, and
so forth. The prefix geo- is usually associated with geology since
geostatistics has its origins in mining. Nowadays, geostatistics
is basically a name associated with a class of techniques used to
analyze and predict values of a variable distributed in space or

time. Such values are implicitly assumed to be associated with each
other, and the study of such a correlation is usually expressed as
the “spatial analysis of continuity” or “variogram modeling.” After
spatial analysis, predictions at unsampled locations are made using
“kriging” or they can be simulated using “conditional simulations.”
In summary, the main steps in a geostatistical study include:
(a) exploratory data analysis,
(b) spatial analysis of continuity―calculation and modeling of
variograms,
(c) making predictions―kriging or simulations.
The intent of this book is to make the reader familiar with the
main concepts of statistics, and the geostatistical tools available to
solve problems in geology and mining of an ore deposit. Therefore,
the emphasis will be the use of these tools through MineSight and
their practical application to resource and reserve estimations. The
majority of the material in this book is introductory and exclusive of
mathematical formalism. It is based on a modified, but real data set.
The solutions proposed in this book are, therefore, for the particular
data set used, and may not be used as general recipes. It is hoped
that this book will provide the readers with practical aspects of
various statistical and geostatistical tools, and help prepare them to
tackle their problems at hand.
Organization of Sections
This book is organized to take the reader from the elementary
statistics to more advanced geostatistical topics in a natural
progression. Following this introductory section, Section 2 reviews
the basic statistical concepts and definitions. This section also covers
the theoretical model of distributions, random variables and Central
Limit Theorem.
Organization and presentation of geostatistical results are
vital steps in communicating the essential features of a large data
set. Therefore, Section 3 covers univariate, bivariate and spatial
descriptions for data analysis and display.
Section 4 is the first section that gets into spatial statistics. This
section provides various ways of describing the spatial features of a
data set, including variogram analysis through the analysis of spatial
continuity. It also includes the variogram types, modeling and cross
validation of the results.
Geostatistics deals with the characterization of spatial attributes
also known as regionalized variables for which deterministic
models are not possible because of the complexity of the natural
processes. Therefore, Section 5 introduces the random processes and
variances. It covers the theory of regionalized variables, random
function models and the necessity of modeling. The question of why
probabilistic models are necessary to describe earth science processes
is discussed in this section.

In earth sciences, it is not uncommon to have samples
collected on an irregular grid. It is also very likely to have clustered
samples in some area of interest. Therefore, Section 6 reviews
different declustering methods that deal with estimating an
unbiased average value over a large area.
Section 7 gets into the most frequently used geostatistical
estimation method, the ordinary kriging. The section explains both
point and the block kriging concepts. It also discusses the theory
and assumptions of the algorithm, and the effects of variogram
parameters on the kriged estimates. Similarly, Section 8 discusses
other relevant kriging methods such as simple kriging, cokriging,
outlier restricted kriging, and non-linear kriging methods. However,
the detailed explanations of these techniques are skipped purposely
to spare the reader from mathematical formalism.
One of the most popular non-linear geostatistical estimation
methods is the multiple indicator kriging. Section 9 discusses
both the theory and the adaptation of this method to handle the
estimation problems with highly skewed data. This section also
discusses the topics of recoverable reserves and volume-variance
relationships.
Section 10 discusses volume-variance relationship under the
change of support. The section also deals with the impact of
smoothing on reserves, and how to deal with it. The correction
factors available to adjust distributions and their applications are
included in this section.
In the modeling of random functions, any sample is regarded as
one possible partial realization of a model. Stochastic simulation
generalizes the concept by allowing the generation of as many
equally likely realizations per random function as necessary. Section
11 provides a review of conditional simulation, discusses the
merits of both estimation and simulation, and their limitations. The
simulation algorithms available and typical applications are also
discussed in this section.
Finally, Section 12 lists the references used. Some of these
references are textbooks; others are technical papers on certain
applications. The readers are encouraged to refer to them if they
want more details on the specific subjects mentioned.

Basic Statistics
This chapter is designed to provide a review of basic statistics
for those readers who have little or no background in statistics. It
is intended to familiarize the readers with the statistical concepts
providing them the necessary foundation for the subjects in the
remainder of the book.
Definitions
Statistics
Statistics is the body of principles and methods for dealing with
numerical data. It encompasses all operations from collection and
analysis of the data to the interpretation and presentation of the results.
Statistical analysis may involve the use of probability concepts
and statistical inference. This body of knowledge comes into play
when it is necessary or desirable to form conclusions based on
incomplete data. The statistics make it possible to acquire useful,
accurate and operational knowledge from available information.
Geostatistics
Geostatistics is the application of statistical methods or the
collection and analysis of statistical data for use in the earth sciences.
Throughout this book, geostatistics will refer only to the statistical
methods and tools used in spatial data for resource analysis.
Universe
The universe is the source of all possible data. For our purposes,
an ore deposit can be defined as the universe. But often problems
arise when a universe does not have well defined boundaries. In this
case the universe is not clearly located in space until other concepts
are defined.
Sampling Unit
The sampling unit or simply sample is the set of all those
observations actually recorded. A sample is a subset of a parent
population or a part of the universe on which a measurement is
made. This can be a core sample, a channel sample, a grab sample, etc.
When one makes statements about characteristics of a universe, he
or she must specify what the sampling unit is.
Support
Support is the characteristics of the sampling unit, which refers
to the size, shape and orientation of the sample. The support can
be as small as a point or as large as the entire field. A change in any
characteristics of the support defines a new regionalized variable.
For example, the channel samples gathered from anywhere in a drift
will not have the same support as the channel samples cut across the
ore vein in the same drift.
Population
The word population is synonymous with universe in that it refers
to the total category under consideration. It is the set of all samples
of a predetermined universe within which statistical inference is to

be performed. It is possible to have different populations within the


same universe based on the support of the samples. For example,
population of blast hole grades versus population of exploration
hole grades. Therefore, the sampling unit and its support must be
specified in reference to any population.
Random Variable
A random variable is a variable whose values are randomly
generated according to a probabilistic mechanism. It may be thought
of as a function defined over the elements of a sample space. For
example, the outcome of a coin toss and the grade of a core sample in
a diamond drill hole are both random variables.
Frequency Distribution
A frequency distribution shows how the units of a population are
distributed over the range of their possible values. There are two
types of frequency distribution, probability density function (pdf)
and cumulative density function (cdf).
a) Probability Density Function
The possible outcome of a random selection of one sample
is described by the probability distribution of its grade. This
distribution, usually referred to as the probability density function
or pdf, may or may not be known. For example, the possible
outcomes from throwing a die are 1, 2, 3, 4, 5, or 6. Each outcome
has an equal probability of 1/6. On the other hand, in a mineral
deposit, the probability distribution of the grade will never be
known. In that case, an experimental probability distribution
is computed to infer which theoretical distribution may have
produced such sample values.
The probability distributions can be either discrete or continuous
functions. In the case of discrete functions, the distributions assign
a probability f(x) to each event x. For example, the distribution for a
toss of a coin will assign a probability of 0.5 that the coin lands heads
up, and a probability of 0.5 that it lands tails up. The summation of
all the possible f(x), in this case 0.5 + 0.5, is equal to one.
Thus, the following must hold true if f(x) is discrete:
1. f(xᵢ) ≥ 0 for xᵢ ∈ R (R is the domain)
2. Σ f(xᵢ) = 1
In the case of a continuous distribution, a density of probability
f(x) will be assigned to each x so that the probability of one value
falling between x and x+dx will be f(x) dx, where dx is infinitesimal.
The f(x) is a function such that the area under the plotted curve
between two limits a and b gives the probability of an observation
within this range. This area is also the integral between a and b in the
usual notation of calculus:
P(a < X < b) = ∫_a^b f(x) dx (2.1.1)
The following must hold true if f(x) is continuous:
1. f(x) ≥ 0
2. ∫ f(x) dx = 1

b) Cumulative Density Function


The cumulative probability distribution F(X), usually referred to
as the cumulative density function or cdf, describes the proportion of
the population below a certain value. If x is a random variable, then
the cumulative density function F(x) is:
F(x) = P(X ≤ x) (2.1.2)
The following holds true for F(x).
1. 0 ≤ F(x) ≤ 1 for all x
2. F(x) is nondecreasing
3. F(-∞) = 0, and F(∞) = 1
Descriptive Measures
The general procedure used for describing a set of numerical
data is referred to as “reduction of the data.” This process involves
summarizing the data by computing a small number of measures
that characterize the data and describing adequately for the
immediate purposes of the analyst. We use several statistics to
mathematically describe a distribution. These statistics fall into three
categories: measures of location, measures of variation or spread,
and measures of shape.
Measures of Location
Mean
The mean, m, is the most important measure of central tendency.
It measures that point about which the data tend to cluster. The
mean is the arithmetic average of the data values. It is calculated by
the formula:
m = (1/n) Σ xᵢ, i = 1,...,n (2.2.1)
where n is the number of data, and x₁,...,xₙ are the data values. The
summation sign, Σ, is used as a shorthand notational substitute for
the instruction “take the sum” or “add.”
A mean can be simple or weighted. A weighted mean is
one where weights representing the importance of the various
observations are applied to the observations in the process
of computing the mean. These weights represent additional
information that is introduced into the computation. A weighted
mean is calculated by the formula:
Weighted m = Σ wᵢxᵢ / Σ wᵢ, i = 1,...,n (2.2.2)
where wᵢ are the weights. Note that, in this formula, the
denominator is the sum of the weights and not the number of
observations.
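A quick illustration with made-up numbers, weighting assay grades by sample length as is commonly done when compositing:

    import numpy as np

    grades = np.array([0.40, 0.80, 0.55])    # assay grades (hypothetical)
    lengths = np.array([2.0, 1.0, 3.0])      # sample lengths, used as weights

    simple_mean = grades.mean()                               # 0.583
    weighted_mean = (lengths * grades).sum() / lengths.sum()  # 0.542

The long samples pull the weighted mean (0.542) away from the simple mean (0.583), which is exactly the intent of the weighting.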
Median
The median, M, is the midpoint of the observed values if they
are arranged in increasing (or decreasing) order. Therefore, half of
the values of the distribution are below the median and half of the
values are above the median. When the number of observations is
even, the median is midway between the two central values.

The median can be calculated easily once the data is ordered so that x₁ < x₂ < ... < xₙ. The calculation is slightly different depending on whether the number of data values, n, is odd or even:
M = x_[(n+1)/2] if n is odd
M = ( x_[n/2] + x_[n/2+1] ) / 2 if n is even (2.2.3)
The median can easily be read from a probability plot as we will
see in the next chapter. Since the y-axis records the cumulative
frequency, the median is the value on the x-axis that corresponds to
50% on the y-axis.
Mode
The mode is the value that occurs most frequently. The mode is
easily located on a graph of a frequency distribution. It is at the peak
of the curve, the point of maximum frequency. On a histogram, the
class with the tallest bar can give a quick idea where the mode is.
However, when the histogram is especially irregular, or when there
are two or more classes with equal frequencies, the location of the
mode becomes more difficult.
One of the drawbacks of the mode is that it may change with
the precision of the data values. For this reason, the mode is not
particularly useful for data sets in which the measurements have
several significant digits. In such cases, one chooses an approximate
value by finding the tallest bar on a histogram.
Minimum
The minimum is the smallest value in the data set. In many
practical situations, the smallest values are recorded simply as being
below some detection limit. In such situations, it does not make
much difference whether the minimum value assigned is 0 or some
arbitrary small value as long as it is done consistently. However, it is
extremely important, especially in drillhole sampling, not to assign
zero values to the missing data. They should be separated from the
actual data by either assigning negative values, or by alphanumeric
indicators. Once it is decided how to handle the missing assays, then
they can be used accordingly in the compositing stage.
Maximum
The maximum is the largest value in the data set. It is especially
important in ore reserve analysis to double check the maximum
value as well as any suspiciously high values, for accuracy.
This should be done to make sure that these values are real, not
typographical errors.
Both the minimum and the maximum values of a set of data are
valuable information about the data that give us the limits within
which the data values are distributed.
Quartiles
The quartiles split the data into quarters in the same way the
median splits the data into halves. Quartiles are usually denoted by
the letter Q. For example, Q1 is the lower or first quartile, Q3 is the
upper or third quartile, etc.

As with the median, quartiles can be read from a probability plot.
The value on the x-axis, which corresponds to 25% on the y-axis,
is the lower quartile and the value that corresponds to 75% is the
upper quartile.
Deciles, Percentiles, and Quantiles
The idea of splitting the data into halves or into quarters can
be extended to any fraction. Deciles split the data into tenths. One
tenth of the data fall below the first or lowest decile. The fifth decile
corresponds to the median. In a similar way percentiles split the data
into hundredths. The 25th percentile is the same as the first quartile,
the 50th percentile is the same as the median, and the 75th percentile
is the same as the third quartile.
Quantiles, on the other hand, can split the data into any
fraction. They are usually denoted by q, such as q.25 and q.75, which correspond to the lower and upper quartiles, Q1 and Q3, respectively.
Measures of Variation
The other measurable characteristic of a set of data is the variation.
Measures of variation or spread are frequently essential to give
meaning to averages. There are several such measures. The most
important ones are the variance and standard deviation.
Variance
The sample variance, s2, describes the variability of the data
values. It is the average squared difference of the data values from
their mean. Mathematically it is the second moment about the mean.
It is calculated by the following formula:
s² = (1/n) Σ (xᵢ − m)², i = 1,...,n (2.2.4)
The following formula can also be used and is more suitable for
programming:
s² = (1/n) { Σ xᵢ² − ( Σ xᵢ )² / n }, i = 1,...,n (2.2.5)
Since the variance involves the squared differences, it is sensitive
to outlier values. Therefore, the values with skewed distributions
often have higher variances than those that are normally distributed.
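The two formulas are algebraically identical; a short check with hypothetical values:

    import numpy as np

    x = np.array([0.31, 0.42, 0.58, 1.20, 0.75])   # hypothetical grades
    m = x.mean()
    var_def = ((x - m) ** 2).mean()                              # eq. (2.2.4)
    var_alt = ((x ** 2).sum() - x.sum() ** 2 / x.size) / x.size  # eq. (2.2.5)
    assert np.isclose(var_def, var_alt)   # same value; x.var() agrees (ddof=0)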
Standard Deviation
The standard deviation, s, is simply the square root of the variance
(√s2 ). It is often used instead of the variance since its units are the
same as the units of the variable described.
Interquartile Range
The interquartile range, IQR, is the difference between the upper
and the lower quartiles and is given by:
IQR = Q3 - Q1 (2.2.6)
Unlike the variance and the standard deviation, the interquartile
range does not use the mean as the center of the distribution.
Therefore, it is sometimes preferred if a few erratically high values
strongly influence the mean. However, IQR is not a well known
parameter in earth science applications.

Measures of Shape


In addition to measures of central tendency and variation, there
are other measurable characteristics of a set of data that may describe
the shape of the distribution. There are several such measures.
However, some of them, such as skewness or peakedness are rarely
used in earth science applications.
Skewness
The skewness is often calculated to determine if a distribution is symmetric. The direction of skewness is the direction of the longer tail of the distribution. If the distribution tails to the left, it is called negatively skewed; if it tails to the right, it is called positively skewed. The skewness is calculated using the following formula:
Skewness = [(1/n) Σ (xᵢ − m)³] / s³, i = 1,...,n (2.2.7)
Skewness is the third moment about the mean. Therefore it is
even more sensitive to erratic high values than the mean and the
variance. A single large value can heavily influence the skewness
since the difference between each data value and the mean is
cubed. Therefore, the calculated value of skewness is not nearly so
important as its sign.
Peakedness
The peakedness or kurtosis is often calculated to determine the
degree to which the distribution curve tends to be pointed or peaked.
It is calculated using the following formula:
Peakedness = [(1/n) Σ (xᵢ − m)⁴] / s⁴, i = 1,...,n (2.2.8)
Peakedness is the fourth moment about the mean, therefore it is
even more sensitive to erratic high values than the skewness. It gives
higher values when the curve is peaked. However, as in skewness,
a single large value can heavily influence the peakedness value.
Therefore, its usefulness is limited, and rarely used in geostatistics.
Coefficient of Variation
The coefficient of variation, CV, is the measure of relative variation.
It is simply the standard deviation (s) divided by the mean (m):
CV = s / m (2.2.9)
The coefficient of variation does not have a unit. Therefore, it
can be used to compare the relative dispersion of values around
the mean, among distributions described in different units. It may
also be used to compare distributions that, although measured in
the same units, are of such difference in absolute magnitude that
comparison of the variability by the use of measures of absolute
variation is not meaningful.
If the estimation is the final goal of a study, the coefficient of
variation can provide some warning of upcoming problems. A
coefficient of variation greater than one indicates the presence of
some erratic high sample values that may have significant impact on
the final estimates.

Theoretical Model of Distributions


The normal and the lognormal distribution are commonly used to
represent frequency distributions of sample data values.
Normal Distribution
The normal distribution is the most common theoretical
probability distribution used in statistics. It is also referred as the
Gaussian distribution. The normal distribution curve is bell-shaped.
Its equation is a function of two parameters, the mean (m) and the
standard deviation (s) as follows:
f(x) = 1 / (s√(2π)) exp[−½ ((x − m) / s)²] (2.3.1)
In a population defined by a normal distribution, 68% of the
values fall within one standard deviation of the mean, and 95% of
the values fall within two standard deviations of the mean. Figure
2.3.1 shows an example of the normal distribution curve.
Standard Normal Distribution
A normal distribution with mean 0 and standard deviation 1
is called the standard normal distribution. Any normal variable
can be converted to a standard normal deviate, z, or standardized
by subtracting its arithmetic mean and dividing by its standard
deviation as follows:
z = (x - m) / s (2.3.2)

Figure 2.3.1 Normal Distribution Curve: (a) pdf, (b) cdf


The cumulative distribution function, denoted F(x), is not
easily computed for the normal distribution. Therefore, extensive
tables have been prepared to simplify calculation with the normal
distribution. Most statistics texts include tables for the standard
normal distribution with mean 0 and variance 1. Table 2.3.1 is an
example of such tables. It gives the cumulative normal distribution
function. An example of the use of this table is given below:
Example using the cumulative normal distribution table:
Find the proportion of sample values above 0.5 cutoff in a normal
population that has a mean m = 0.3, and a standard deviation s = 0.2.
Solution:
First, transform the cutoff, x0, to unit normal.
z = (x0 - m) / s = (0.5 - 0.3) / 0.2 = 1
Next, find the value of F(z) for z = 1. The value of F(1) = 0.8413
from Table 2.3.1. Then, calculate the proportion of sample values
above 0.5 cutoff, P(x > 0.5), as follows:
P(x > 0.5) = 1 - P(x ≤ 0.5) = 1 - F(1) = 1 - 0.8413 = 0.1587 ≈ 0.16
Hence, 16% of the samples in the population are greater than 0.5.
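
The table lookup can also be checked numerically: F(z) for the standard
normal can be written in terms of the error function as
F(z) = ½[1 + erf(z/√2)]. A short Python sketch of the example above
(an illustration, not a MineSight routine):

import math

def normal_cdf(z):
    # Cumulative standard normal distribution function F(z)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

m, s, cutoff = 0.3, 0.2, 0.5
z = (cutoff - m) / s              # Eq. 2.3.2: standardize the cutoff
print(1.0 - normal_cdf(z))        # P(x > 0.5), about 0.1587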
Table 2.3.1 Cumulative Normal Distribution Function

F(z) = 1/√(2π) ∫_{-∞}^{z} e^u dt,  u = -½ t²


Lognormal Distribution


The lognormal distribution occurs when the logarithm of a
random variable has a normal distribution. This distribution is
positively skewed. Its probability density is given by
f(x) = 1 / (xβ√(2π)) e⁻ᵘ for x > 0, β > 0 (2.3.3)
where
u = (ln x - α)² / 2β²
α = mean of the logarithms, and
β = standard deviation of the logarithms.
Many variables encountered in ore reserve analysis have
positively skewed distributions. However, it should be noted that
not all positively skewed distributions are lognormal. In fact,
erroneous estimates may result if an assumption of lognormality
is used for data that are not from a lognormal distribution.
Figure 2.3.2 shows an example of a lognormal density curve. The
corresponding formulas of mean and variance between the normal
and lognormal distributions are given below:
μ = exp(α + β²/2) (2.3.4)
σ² = μ² [exp(β²) - 1] (2.3.5)
α = ln μ - β²/2 (2.3.6)
β² = ln [1 + (σ²/μ²)] (2.3.7)
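
These four conversions can be verified by round-tripping a pair of
values through log space. A minimal sketch (the function names are
illustrative):

import math

def to_log_params(mu, sigma):
    # Equations 2.3.6 and 2.3.7: mean and std deviation -> log mean, log variance
    beta2 = math.log(1.0 + sigma ** 2 / mu ** 2)
    alpha = math.log(mu) - beta2 / 2.0
    return alpha, beta2

def from_log_params(alpha, beta2):
    # Equations 2.3.4 and 2.3.5: log parameters -> mean and variance
    mu = math.exp(alpha + beta2 / 2.0)
    var = mu ** 2 * (math.exp(beta2) - 1.0)
    return mu, var

alpha, beta2 = to_log_params(0.5, 0.6)
print(from_log_params(alpha, beta2))   # recovers mean 0.5 and variance 0.36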
Three-Parameter Lognormal Distribution
There are instances where the logarithm of a random variable plus
a constant, ln (x+c), is normally distributed. This type of distribution
is called three-parameter lognormal distribution. If a variable is three-
parameter lognormal, the cumulative curve will show an excess of
low values. The additive constant, c, can be estimated by using
c = (M² - q1q2) / (q1 + q2 - 2M) (2.3.8)
where M is the median, q1 and q2 are the quantiles corresponding
to p and 1-p cumulative frequencies respectively. In theory, any
value of p can be used but a value between 10% and 25% will give
the best results.
Some researchers investigated the possible problems in using the
three-parameter lognormal distribution. They found that bias could
result if the additive constant is too small, and that it would be better
to make the constant too large than too small.
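
A minimal sketch of Equation 2.3.8, using p = 0.25 and crude index-based
quantiles (a production version would interpolate between order statistics):

def additive_constant(values, p=0.25):
    # Estimate c from the median M and the p and 1-p quantiles (Eq. 2.3.8)
    v = sorted(values)
    n = len(v)
    M = v[n // 2]                     # crude median (odd n assumed)
    q1 = v[int(p * (n - 1))]          # lower quantile
    q2 = v[int((1 - p) * (n - 1))]    # upper quantile
    return (M * M - q1 * q2) / (q1 + q2 - 2.0 * M)

# Hypothetical skewed sample; the estimate is about 0.91
print(additive_constant([0.1, 0.25, 0.44, 0.7, 1.01, 1.4, 1.82, 2.5, 3.8]))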



Figure 2.3.2 Lognormal Distribution Curve Examples


Random Variables
A random variable is a variable whose values are randomly
generated according to a probabilistic mechanism. In other words,
a random variable denotes a numerical quantity defined in terms of
the outcome of an experiment. For example, the outcome of a coin toss
is a random variable. The grade of ore from a drill hole sample is a
random variable.
Properties of a Random Variable
The parameters of a random variable cannot be calculated exactly
by observing a few outcomes of the random variable; rather they are
parameters of a conceptual or theoretical model. From a sequence
of observed outcomes, such as drill hole assay values, all one can do
is simply to calculate sample statistics based on that particular data
set. However, a different set of assay values may produce a different
set of statistics. It is true that as the number of samples increases,
the sample statistics tend to approach their corresponding
model parameters. This leads practitioners to assume that the
parameters of the random variable are the same as the sample
statistics they can calculate.
The two model parameters most commonly used in probabilistic
approaches to estimation are the mean or expected value of the
random variable and its variance.
Expected Value — The expected value of a random variable is its
mean or average outcome. It is often denoted by μ, and is defined as
μ = E(x) (2.4.1)
E(x) refers to expectation which is defined by the following
formula:
E(x) = ∫_{-∞}^{∞} x f(x) dx (2.4.2)

where f(x) is the probability density function of the random variable x.
Variance — The variance of a random variable is the expected
squared difference from the mean of the random variable. It is often
denoted by σ², and is defined by
σ² = E[(x - μ)²] = ∫_{-∞}^{∞} (x - μ)² f(x) dx (2.4.3)
The standard deviation of a random variable is simply the square
root of its variance. It is denoted by σ.
Independence
Random variables are considered to be independent if the joint
probability density function of n random variables satisfy the
following relationship:
p(x1,x2,...,xn) = p(x1) p(x2) ... p(xn) (2.4.4)
In this equation, p(xi) is the marginal distribution¹ of xi. The
equation simply means that the probability of two or more
events happening is the product of the individual probabilities of
each event happening. If A and B are independent events, their
probability of occurring together is given by
P(A ∩ B) = P(A) * P(B) (2.4.5)
For instance, the probability of getting two heads in two
successive flips of a coin is ½ * ½ = ¼ or 0.25. The probability of
drawing two aces in a row from a standard deck of 52 playing cards
is 4/52 * 4/52 = 1/169 provided the first card is replaced before the
second is drawn. The probability of rolling a 3, a 1, a 6, and then
some other number in four rolls of a balanced die is 1/6 * 1/6 * 1/6
* 3/6 = 1/432.
Conditional Probability
If the probability of one of the events happening influences the
probability of the other event happening, then the random variables
cannot be considered independent. In that case, the conditional
probability must be computed. If A and B are any events that are
dependent, then the conditional probability of A relative to B is
given by
P(A | B) = P(A ∩ B) / P(B) (2.4.6)
For example, if the probability that a research project will be well
planned is 0.80, and the probability that it will be well planned and
well executed is 0.72, then the probability that a well-planned research
project will also be well executed is 0.72/0.80 = 0.90.
Covariance
The dependence between two random variables is described by
covariance. Covariance is defined as follows:
Cov(x1,x2) = E {[x1 - E(x1)] [x2 - E(x2)]} (2.4.7)
= E(x1x2) - E(x1) E(x2)
If x1 and x2 are independent, then they have no covariance, or
Cov(x1,x2) = 0.


Properties of Expectation and Variance


The following are some of the properties associated with
expectation and variance (the last identity is verified numerically
in the sketch following the list):
1. If C is a constant, then E(Cx) = C E(x)
2. If x1, x2, ..., xn have finite expectation, then
E(x1 + x2 + ... + xn) = E(x1) + E(x2) + ... + E(xn)
3. If C is a constant, then Var(Cx) = C² Var(x)
4. If x1, x2, ..., xn are independent, then
Var(x1 + x2 + ... + xn) = Var(x1) + Var(x2) + ... + Var(xn)
5. Var(x+y) = Var(x) + Var(y) + 2 Cov(x,y)
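
The following quick numerical check of property 5 uses simulated data
(sample moments with 1/n; the identity holds exactly for sample moments
as well):

import random

random.seed(42)
x = [random.gauss(0, 1) for _ in range(10000)]
y = [0.5 * xi + random.gauss(0, 1) for xi in x]    # y is correlated with x

def var(a):
    m = sum(a) / len(a)
    return sum((ai - m) ** 2 for ai in a) / len(a)

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a)

s = [xi + yi for xi, yi in zip(x, y)]
print(var(s))                                  # Var(x+y)
print(var(x) + var(y) + 2 * cov(x, y))         # identical, up to rounding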
Central Limit Theorem
The Central Limit Theorem (CLT) states that the means of
independent random samples tend toward a normal distribution
regardless of the distribution of the underlying population. Thus, if samples of
size, n, are drawn from a population that is not normally distributed,
the successive sample means will nevertheless form a distribution
that is approximately normal. This distribution approaches closer
and closer to normal as the sample size, n, increases. How close the
approximation is to a normal distribution is hard to determine. In
most cases, for n > 40, the approximation is quite good.
Standard Error of the Mean
The dispersion of the distribution of sample means depends on
two factors:
1. The dispersion of the parent population. The more variable
the parent population, the more variable will be the sample
means.
2. The size of the sample. The variability of the sample means
decreases with increasing size of the sample.
Mathematically speaking, the standard deviation of the
distribution of sample means, which is also called the standard error
of the mean, varies directly with the standard deviation of the parent
population (σ), and inversely with the square root of the number of
samples (n). This is expressed in the following equation:
Standard Error of the Mean = σx̄ = σ / √n (2.5.1)
Of course, in order for the above formula to be valid, the samples
have to be independent of each other.
Confidence Intervals
The confidence in an estimate can be expressed by applying
error bounds around the estimate. These error bounds are called
confidence intervals, or confidence limits. Using the central limit
theorem, some confidence intervals (CI) can be calculated for the
sample mean (m). The most common intervals are at 95% confidence
level, and are formed in the following manner:
Lower CI = mL = m - 2 (s / √n) (2.5.2)


Upper CI = mU = m + 2 (s / √n)


where m is sample mean, s is sample standard deviation, and n is
the number of samples. The above confidence limits tell us that 95
out of 100 times this procedure is applied, the true population mean
will be within the limits specified. It is clear that, as the number of
samples increases, the standard error of the mean decreases, and the
confidence interval becomes smaller.
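
Equation 2.5.2 as a short sketch (the factor 2 is a convenient
approximation to the exact 1.96 of a 95% interval; inputs are hypothetical):

import math

def confidence_interval_95(m, s, n):
    # 95% confidence interval for the sample mean (Eq. 2.5.2)
    half_width = 2.0 * s / math.sqrt(n)
    return m - half_width, m + half_width

# Hypothetical values: mean 0.46, std dev 0.21, 50 samples
print(confidence_interval_95(0.46, 0.21, 50))   # about (0.401, 0.519)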
Lognormal Case
Many variables encountered in earth sciences are not normally
distributed. They often have positively skewed distributions. Even
though not all positively skewed distributions are lognormal,
depending on one’s objective a lognormal model can describe most
of these distributions adequately.
To compute confidence intervals when we have a lognormal
model, we first determine the log mean (α) and log variance (β²) of the
lognormal distribution using Equations 2.3.6 and 2.3.7, based on the
sample mean and standard deviation.
Then we compute the confidence intervals in log space. For
instance, at the 95% confidence level, we get:
Lower CI (log) = αL = α - 2 √(β²/n) (2.5.3)
Upper CI (log) = αU = α + 2 √(β²/n)
Finally, we compute the confidence intervals in terms of the original
variable:
Lower CI = mL = exp(αL + ½ β²) (2.5.4)
Upper CI = mU = exp(αU + ½ β²)
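
Putting Equations 2.3.6, 2.3.7, 2.5.3, and 2.5.4 together gives a compact
recipe; a minimal sketch with hypothetical inputs:

import math

def lognormal_ci_95(m, s, n):
    # 95% confidence interval for the mean under a lognormal model
    beta2 = math.log(1.0 + s ** 2 / m ** 2)       # Eq. 2.3.7: log variance
    alpha = math.log(m) - beta2 / 2.0             # Eq. 2.3.6: log mean
    half = 2.0 * math.sqrt(beta2 / n)             # Eq. 2.5.3: interval in log space
    lower = math.exp(alpha - half + beta2 / 2.0)  # Eq. 2.5.4: back-transform
    upper = math.exp(alpha + half + beta2 / 2.0)
    return lower, upper

print(lognormal_ci_95(0.46, 0.52, 50))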
Choosing Sample Sizes
One of the most important questions for the design of any
sampling program is the total sample size that is required. As a
general rule the sample size for a study should be large enough so
that important parameters are estimated with sufficient precision
to be useful. But it should not be unnecessarily large giving more
precision than is need, thus wasting time and resources.
If the sample statististics are approximately normally distributed,
there are equations that can be used to determine sample sizes in
different situations. Some examples are given below.
To estimate a population mean from a simple random sample
with a 95% confidence interval of m ± δ, the sample size should be
approximately
N = 4 σ² / δ² (2.6.1)
where σ is the population standard deviation. To use this equation,
an estimate or guess of σ must be available. For example, if the mean
of the sample population is m = 0.5, and the 95% confidence interval
is ±0.05, assuming that σ will be about the same as the mean, we get
N = 4 * (0.5)² / (0.05)² = 4 * 0.25 / 0.0025 = 400.


Suppose two random samples of size N are taken from different
distributions, and give sample means m1 and m2. Then to obtain an
approximate 95% confidence interval for the difference between the
two population means of the form (m1 - m2) ± δ requires that the
sample size should be approximately
N = 8 σ1 σ2 / δ² (2.6.2)
where σ1 and σ2 are the standard deviations for populations 1 and
2, respectively.
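
Both formulas reduce to one-line functions; a sketch reproducing the
worked example above:

def n_for_mean(sigma, delta):
    # Eq. 2.6.1: sample size for a 95% CI of m +/- delta
    return 4.0 * sigma ** 2 / delta ** 2

def n_for_difference(sigma1, sigma2, delta):
    # Eq. 2.6.2: sample size per group for (m1 - m2) +/- delta
    return 8.0 * sigma1 * sigma2 / delta ** 2

print(n_for_mean(0.5, 0.05))             # 400.0, as in the text
print(n_for_difference(0.5, 0.5, 0.05))  # 800.0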
2.7 Bivariate Distribution
In the analysis of earth science data, it is often desirable to
compare two distributions or to know the pattern of dependence of
one variable (X) to another (Y). For instance, one may want to study
the relationship between rainfall and the yield of certain crop, or
the relationship between pollutants in the air and the incidence of
certain disease. Problems like these are referred as the problems of
correlation analysis, where it is assumed that the data points (xi, yi)
for i=1,2, ..,n are values of a pair of random variables whose joint
density is given by F(x, y).
This pattern of dependence is particularly critical if one wishes to
estimate the outcome of unknown X from the outcome of known Y.
For example, in a gold deposit, if the drill hole cores were sampled
for gold, but not always for silver, then one may want to estimate
the missing silver grades from gold grades if these two grades are
correlated. A strong relationship between two variables can thus
help us predict one variable if the other is known.
Just like the distribution of a single random variable X is
characterized by a cdf F(x) = Prob {X ≤ x}, the joint distribution of
outcomes from two random variables X and Y is characterized by a
joint cdf:
F(x,y) = Prob {X ≤ x, and Y ≤ y} (2.7.1)
In practice, the joint cdf F(x,y) is estimated by the proportion of
pairs of data values jointly below the respective threshold values x, y.

(Footnote)
¹ A marginal distribution is a distribution that deals with only one
variable at a time.


Data Analysis and Display


It is essential to organize statistical data in order to understand
its characteristics and to see it clearly. Therefore, much of statistics
deals with the organization, presentation, and summary of data. The
organization of data includes its preparation and verification as well.
Data Preparation and Verification
Error Checking
One of the most tedious and time-consuming tasks in a
geostatistical study is error checking. Although one would like to
weed out all the errors at the outset, there are often a few errors that
remain hidden until the analysis is already started.
In drill hole assay data preparation, the initial drill hole logs
must be coded carefully and legibly to prevent any future errors.
One should not use zero or blank to indicate the missing data. It
is preferable to use a specific negative value, such as -1 or -999, for
such data. Otherwise, the missing data may end up being used in
estimation as part of the actual data.
After the data have been entered into the computer, they should
be listed to check for typographical errors. However, that process
alone does not guarantee the accuracy of the data. Some helpful
suggestions to verify the data for accuracy are given below:
• Sort the data and examine the extreme values. Try to establish
their authenticity by referring to the original sampling logs.
• Plot sections and plan maps for visual verification and
spotting the coordinate errors. Are they plotting within the
expected limits?
• Locate the extreme values on a map. Are they located
along trends of similar data values or are they isolated? Be
suspicious of isolated extremes. If necessary and possible, get
duplicate samples.
When trying to sort out inconsistencies in the data, it usually
helps to familiarize oneself with the data. Time spent for verification
of the data is often rewarded by a quick recognition when an error
has occurred.
Graphical Display and Summary
Univariate Description
Frequency Distributions and Histograms
One of the most common and useful presentations of data sets
is the frequency table and the corresponding graph, the histogram.
A frequency table records how often observed values fall within
certain intervals or classes. The mean and the standard deviation of
the values within these intervals are also reported.
Table 3.2.1 is an example of a frequency table using the sample
copper (CU) data. The first column in this table is the cutoff or
threshold value at the specified intervals. The second column
“weight” shows the number of intervals if the values are used
without any weighting item. If there is weighting, then the weight
refers to the unit of the weight item used, for example the length of
the samples. The column that has the title of the item (CU) gives the
mean or the average of the data within each interval. If there is no
weighting, the mean is simply the mathematical average. If there is


weighting, then the mean will be the weighted average value. The
last column in the table reports the standard deviation of the samples
within the intervals. Such frequency tables can have additional
information such as the percent or the proportion of samples,
coefficient of variation and so on.
The histogram is the graphical representation of the same
information on a frequency table. Summary statistics are customarily
included in the histogram to complete the preliminary information
needed to study the sample data. The histogram of the data can be
generated either on a printer or a plotter, and is essential for visual
presentation. Figure 3.2.1 shows an example of a histogram using the
sample data.
Frequency tables and histograms are very useful in ore reserve
analysis for many reasons:
1. They give a visual picture of the data and how they are
distributed.
2. Bimodal distributions show up easily, which usually indicates
mixing of two separate populations.
3. Outlier high grades can be easily spotted.

Table 3.2.1 Frequency Statistics of Sample Data
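
A frequency table of this kind can be sketched in a few lines of Python
(illustrative only; MSDA builds these tables, with weighting, directly).
The sketch bins unweighted values at a fixed class size and reports the
count, mean, and standard deviation per class:

import math

def frequency_table(values, class_size):
    # Group values into classes; report count, mean and std dev per class
    classes = {}
    for v in values:
        lo = math.floor(v / class_size) * class_size
        classes.setdefault(lo, []).append(v)
    for lo in sorted(classes):
        vals = classes[lo]
        n = len(vals)
        m = sum(vals) / n
        s = math.sqrt(sum((x - m) ** 2 for x in vals) / n)
        print(f"{lo:5.2f} - {lo + class_size:5.2f}   n={n}  mean={m:.3f}  sd={s:.3f}")

frequency_table([0.14, 0.28, 0.19, 0.10, 0.09, 0.45, 0.33], 0.10)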


Figure 3.2.1 Histogram Plot of Sample Data


Cumulative Frequency Tables
In ore resource analysis, the cumulative frequency above a
specified cutoff value is of great interest. Therefore, the programs
that generate frequency tables and histograms are also designed
to generate the cumulative frequency tables. These tables give the
mean and standard deviation of data above the cutoff grades that
correspond to the histogram intervals. Table 3.2.2 is a cumulative
frequency table using the same data in Table 3.2.1. Obviously,
the only difference between these two tables is that the first one
gives the statistics within the intervals; the second one gives the
cumulative results.

Table 3.2.2 Cumulative Frequency Statistics of Sample Data


Probability Plots


Probability plots are useful in determining how close the
distribution of sample data is to being normal or lognormal. On a
normal probability plot, the y-axis is scaled so that the cumulative
frequencies will plot as a straight line if the distribution is normal.
On a lognormal probability plot, the x-axis is in logarithmic scale.
Therefore, the cumulative frequencies will plot as a straight line if
the data values are lognormally distributed. Figure 3.2.2 shows a
lognormal probability plot of the sample copper data values that are
greater than 0.05% CU. It is very tempting to consider the shape of
the plot in this figure to be a straight line, and therefore to assume
that the distribution plotted is lognormal. However, a somewhat
convex shape of the plot at its upper center indicates that this may
not be the case.
When the use of a resource estimation method depends on
assumptions about the distribution, one must be aware of the
consequences of disregarding deviations of a probability plot at the
extremes. This is because such assumptions often have their greatest
impact when one is estimating extreme values. Departures of a
probability plot from approximate linearity at the extreme values are
often deceptively small and easy to overlook. However, the estimates
derived using such an assumption may be different from reality.
Probability plots are very useful for detecting the presence of
multiple populations. Although the deviations from the straight line
on the plots do not necessarily indicate multiple populations, they
represent changes in the characteristics of the cumulative frequencies
over different intervals. Therefore, it is always a good idea to find
out the reasons for such deviations.
Unless the estimation method is dependent on a particular
distribution, selecting a theoretical model for the distribution of data
values is not a necessary step prior to estimation. Therefore, one
should not read too much into a probability plot. The straightness
of a line on a probability plot is no guarantee of a good estimate and
the crookedness of a line should not condemn distribution-based
approaches to estimation. Certain methods lean more heavily toward
assumptions about the distribution than others do. Some estimation
tools built on the assumption of normality may still be useful even
when the data are not normally distributed.


Figure 3.2.2 Lognormal Probability Plot of Sample Data


Bivariate Description
Scatter Plots
One of the most common and useful presentations of bivariate
data sets is the scatter plot or scatter diagram. A scatter plot
is simply an x-y graph of the data on which the x-coordinate
corresponds to the value of one variable, and the y-coordinate to the
value of the other variable.
A scatter plot is useful for assistance in seeing how well two
variables are related. It is also useful for drawing our attention
to unusual data pairs. In the early stages of the study of spatially
continuous data set, it is necessary to check and clean the data. Even
after the data have been cleaned, a few erratic values may have a
major impact on estimation. The scatter plot can be used to help both
in the validation of the initial data and in the understanding of later
results. Figure 3.2.3 shows a scatter plot of two different assay grades.
Quantile-Quantile Plots
Two marginal distributions can be compared by plotting their
quantiles against one another. The resultant plot is called quantile-
quantile, or simply q-q plot. If the q-q plot appears as a straight line,
the two marginal distributions have the same shape. A 45-degree line
indicates further that their means and variances are also the same.
Correlation
In the very broadest sense, there are three possible scenarios
between two variables: the variables are positively correlated,
negatively correlated, or uncorrelated.
Two variables are positively correlated if the smaller values of one
variable are associated with the smaller values of the other variable,
and similarly the larger values are associated with the larger values
of the other. For example, in a gold-silver deposit, higher values of
silver can be observed with higher values of gold. Similarly, amount


of brecciation in a rock formation can be positively correlated with


the amount of gold deposited in the rock.
Two variables are negatively correlated if the smaller values
of one variable are associated with the larger values of the other
variable, or vice versa. In geologic data sets, the concentrations
of two major elements may be negatively correlated. For example, in
dolomitic limestone, an increase in the amount of calcium usually
results in a decrease in the amount of magnesium.
The final possibility is that the two variables are not related. An
increase or decrease in one variable has no apparent effect on the
other. In this case, the variables are said to be uncorrelated.

Figure 3.2.3 Scatter Plot of Two Variables


Correlation Coefficient
The correlation coefficient, r, is the statistic that is most commonly
used to summarize the relationship between two variables. It can be
calculated using the following equation:
r = CovXY / (σx σy) (3.2.1)
The numerator, CovXY, is called the covariance and can be
calculated using
CovXY = 1/n ∑(xi - mx) (yi - my) i = 1,...,n (3.2.2)
where n is the number of data; xi’s are the data values for the first
variable, mx is their mean, and σx their standard deviation. Similarly,
yi’s are the data values for the second variable, my is their mean, and
σy their standard deviation.
The correlation coefficient is actually a measure of how close to a
straight line two variables plot. If r = 1, then the scatter plot will be a
straight line with a positive slope. This is the case of a perfect positive
correlation. If r = -1, then the scatter plot will be a straight line with a
negative slope. This is the case of a perfect negative correlation. If r =
0, then there is no correlation between the two variables.
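
Equations 3.2.1 and 3.2.2 in a minimal sketch (population 1/n form;
the data are hypothetical):

import math

def correlation(x, y):
    # Correlation coefficient r (Eq. 3.2.1) via the covariance (Eq. 3.2.2)
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov_xy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / n
    sx = math.sqrt(sum((xi - mx) ** 2 for xi in x) / n)
    sy = math.sqrt(sum((yi - my) ** 2 for yi in y) / n)
    return cov_xy / (sx * sy)

print(correlation([1, 2, 3, 4], [2, 4, 6, 8]))   # 1.0: perfect positive correlation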


It is important to note that r provides a measure of the linear


relationship between two variables. If the relationship between two
variables is not linear, then the correlation coefficient may be a very
poor summary statistic.
A few outlier pairs may affect the correlation coefficient and
the covariance. Exclusion of these pairs from the statistics can
dramatically improve an otherwise poor correlation coefficient. Also
the correlations between two variables may be dependent on other
factors such as relative depth of mineralization from surface, certain
rock or alteration types and so on.
Linear Regression
If there is a strong relationship between two variables, which
can be expressed by an equation, then one variable can be used to
predict the other one if one of them is unknown. The simplest case
for this type of prediction is linear regression, in which it is assumed
that the dependence of one variable on the other can be described by
the equation of a straight line
y = ax + b (3.2.3)
where a is the slope and b is the intercept of the line. They are
given by:
a = r (σy / σx),   b = my - a mx (3.2.4)
The slope, a, is the correlation coefficient multiplied by the ratio
of the standard deviations, where σy is the standard deviation of the
variable we are trying to predict, and σx is the standard deviation of
the variable we know. Once the slope is known, the intercept,
b, can be calculated using the means of the two variables, mx and
my. Figure 3.2.3 gives the equation of the line that describes the
relationship between two different grade items.
One of the properties of the line fitted by the least squares
regression method is that the sum of the vertical deviations about
that line is zero. The size of these deviations measures the goodness
of the “fit,” and the numerical measure of the variation of the points
about the line may be computed in just the same way as the standard
deviation of a frequency distribution. The resulting statistic is
called the standard error of the estimate, s(y,x), which is given by the
following formula:
s(y,x) = √[Σ d² / n] (3.2.5)
where d is the deviations from the straight line. Clearly, it bears
the same relation to the least squares line in a scatter diagram as
the standard deviation of a frequency distribution bears to the
arithmetic mean. It may be interpreted in much the same way in
that, if the deviations about the line are normally distributed, then
it may be said that 68% of the deviations will lie within a distance of
one standard error of the estimate from the line. Thus, the standard
error of the estimate measures the variability of the observed points
about the line of average relationship. If all the points fell exactly on
the line, the standard error of the estimate would be zero.
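
Tying Equations 3.2.3 through 3.2.5 together, a sketch of the
least-squares line and its standard error (hypothetical paired grades):

import math

def linear_regression(x, y):
    # Slope and intercept (Eq. 3.2.4) and standard error of the
    # estimate s(y,x) (Eq. 3.2.5) for the line y = ax + b (Eq. 3.2.3)
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov_xy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / n
    sx2 = sum((xi - mx) ** 2 for xi in x) / n
    a = cov_xy / sx2                 # algebraically equal to r * (sy / sx)
    b = my - a * mx
    se = math.sqrt(sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(x, y)) / n)
    return a, b, se

a, b, se = linear_regression([0.1, 0.2, 0.3, 0.4], [0.15, 0.22, 0.33, 0.38])
print(a, b, se)    # slope 0.8, intercept 0.07, small standard error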
Spatial Description
One of the characteristics of earth sciences data sets is that the
data belong to some location in space. Spatial features of the data set,
such as the degree of continuity, the overall trend, or the presence
of high or low-grade zones, are often of considerable interest. None


of the univariate and bivariate descriptive tools presented in the


previous sections captures these spatial features.
Data Location Maps
One of the most common and simplest displays of spatial data is
to generate a location map. This is a map on which each data location
is plotted along with its corresponding data value. Figure 3.2.4 is
an example of such a map on which the pierce points of the sample
drillhole data set have been plotted at a specified level showing the
composite data values on this bench.
The data location maps are an important initial step in analyzing
spatial data sets. With irregularly gridded data, a location map
often gives a clue to how the data were collected. For example, the
areas with higher grade mineralization may be drilled more closely
indicating some initial interest.
The data location maps help reveal obvious errors in data
locations. They also help draw attention to unusual data values that
may be erroneous. Lone high values surrounded by low values and
vice versa are worth rechecking.
Contour Maps
The overall trends in the data values can be revealed by a contour
map. Some contouring algorithms cannot contour the data unless
they are on a regular grid. In that case, the data values need to be
interpolated to a regular grid. Interpolated values are usually less
variable than the original data values and can make the contoured
surface appear smoother. Figure 3.2.5 shows the contoured data
values that are plotted on Figure 3.2.4 using the original data
locations without gridding.

Figure 3.2.4 Sample Data Location Map


Figure 3.2.5 Sample Data Contour Map


Symbol Maps
For many very large regularly gridded data sets, plotting of all the
data values may not be feasible, and a contour map may mask many
of the interesting details. An alternative that is often used in such
situations is a symbol map. This is similar to data plotting with each
location replaced by a symbol that denotes the class to which the
data value belongs. These symbols are usually chosen so that they
convey the relative ordering of the classes by their visual density.
This type of display is often designed for a printer. Therefore, it is
convenient to use if one does not have access to a plotting device.
Unfortunately, the scale on symbol maps may get distorted since
most printers do not print the same number of characters per inch
horizontally as they do vertically.
Moving Window Statistics
For a given data set, it is quite common to find that the data values
in some regions are more variable than in others. Such anomalies
may have serious practical implications. For example, in most
mines, erratic ore grades cause problems at the mill because most
metallurgical processes benefit from low variability in the ore grade.
The calculation of summary statistics within moving windows is
frequently used to investigate anomalies both in the average value
and variability. The area is divided into several local neighborhoods
of equal size and within each local neighborhood, or window,
summary statistics are calculated.
The size of the window depends on the average spacing between
data locations and on the overall dimensions of the area being


studied. The window size should be large enough to include a


reliable amount of data within each window for summary statistics.
Needing large windows for reliable statistics and wanting small
windows for local detail may necessitate the use of overlapping the
windows, with two adjacent neighborhoods having some data in
common. If there are still too few data within a particular window, it
is often better to ignore that window in subsequent analysis than to
incorporate unreliable statistics.
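
A sketch of moving-window statistics on scattered 2-D data (all names
and parameters are illustrative; a real study would use the project grid
and overlapping windows where needed):

import math, random

def window_stats(points, window, min_data=5):
    # Group (x, y, grade) points into square windows; report the local
    # mean and std dev, skipping windows with too few data
    cells = {}
    for x, y, g in points:
        key = (int(x // window), int(y // window))
        cells.setdefault(key, []).append(g)
    for key in sorted(cells):
        vals = cells[key]
        if len(vals) < min_data:
            continue                  # unreliable window: ignore it
        n = len(vals)
        m = sum(vals) / n
        s = math.sqrt(sum((v - m) ** 2 for v in vals) / n)
        print(key, n, round(m, 3), round(s, 3))

random.seed(1)
pts = [(random.uniform(0, 200), random.uniform(0, 200),
        random.lognormvariate(-1.0, 0.5)) for _ in range(500)]
window_stats(pts, window=50.0)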
Proportional Effect
Knowing the local variability changes across the deposit is
important for estimation. Contouring of data values or moving
window statistics is very useful in displaying fluctuations in data
values in different directions. If there are such fluctuations, it is
important to know how the changes in local variability are related to
the local mean.
A proportional effect is a relationship between mean and variance.
It may be a strictly functional relationship or it may simply be
empirical. The most common form of proportional effect occurs
when the variance is directly proportional to some function of the
mean.
In a broad sense, there are four relationships one can observe
between the local mean and local variability:
1. The mean and variability are both constant.
2. The mean is constant, variability changes.
3. The mean changes, variability is constant.
4. Both the mean and variability change.
The first two cases are the most favorable ones for estimation.
The estimates in a particular area will be as good as the estimates
elsewhere if the local variability is roughly constant. In other words,
no area will suffer more than others from highly variable data
values. If the variability changes noticeably, then it is better to have a
situation where the variability is related to the mean. In initial data
analysis, it is useful to establish if a predictable relationship exists
between the local mean and variability. If it exists, such relationship
is generally referred to as a proportional effect.
Two very common forms of the proportional effect are:
1. variance vs. mean²
2. variance vs. (mean + constant)²
One of the characteristics of normally distributed values is that
there is usually no proportional effect. In fact, the local standard
deviations are roughly constant. For lognormally distributed values,
a scatter plot of local means versus local standard deviations will
show a linear relationship between the two. Figure 3.2.6 gives an
example of this relationship on the sample data where the mean
and the standard deviation of the copper values on each bench were
plotted against each other.


Figure 3.2.6 Sample Plot of Proportional Effect


Analysis of Spatial Continuity


The analysis of spatial continuity in an ore deposit is essential
for ore reserve estimation. There are several geostatistical
tools to describe spatial continuity, such as the correlation function,
the covariance function, and the variogram. All of these tools use
summary statistics to describe how spatial continuity changes as
a function of distance and direction. This chapter will cover only
the variogram, since it is more traditional than the covariance and
correlation functions, although all are equally useful.
Variogram
Variables in the earth sciences typically show some similarity (or
dissimilarity) between the value of a sample at one point and the
value of another sample some distance away. This expected variation
can be called spatial similarity or, rather, spatial correlation.
Therefore, it is apparent that we need a measure
to characterize the similarity or correlation of sample values within
a deposit, or rather within a homogeneous area of the deposit where
we can assume the geological relationships are the same or similar.
This can be achieved by means of a variogram.
Definition
In simplest terms, the variogram measures the spatial correlation
between samples. One possible way to measure this correlation
between two samples at point xi and xi+h taken h distance apart is
the function
f1(h) = 1/n ∑ [Z(xi) - Z(xi+h)] (4.1.1)
where Z(xi) refers to the assay value of the sample at point xi, h is the
distance between samples. Thus, the function measures the average
difference between samples h distance apart. Although this function
is useful, in many cases it may be equal to zero or close to it because
the differences will cancel out. A more useful function is obtained by
squaring the differences:
f2(h) = 1/n ∑ [Z(xi) - Z(xi+h)]² (4.1.2)
In this case the differences do not cancel out, and the result
of the above function will always be positive. This was the
variogram function, originally denoted 2γ(h). However,
popular usage refers to the semi-variogram γ(h) as the
variogram. Therefore, throughout this chapter, variogram will
refer to the following function:
γ(h) = 1/(2n) ∑ [Z(xi) - Z(xi+h)]²   i = 1,...,n (4.1.3)
Note that γ(h) is a vector function in three-dimensional space and
it varies with both distance and direction. The number of samples,
n, is dependent on the distance and direction selected to accept
the data. The formal definition of the variogram is given by the
following equation:
γ(h) = 1/(2v) ∫v [Z(x) - Z(x+h)]² dx (4.1.4)
v refers to the volume of the deposit. The function, Z(x), simply
defines the value of interest such as the grade at point x.


The basic terminology used to describe the features of the


variogram is given below:
Sill
The value at which the variogram reaches a plateau, or levels
off, is called the sill. A variogram computed using Equation 4.1.3
will typically have a sill approximately equal to the variance of the
data. Figure 4.1.1 shows the sill and other variogram parameters on a
sample plot.
Range
The samples that are close to each other have generally similar
values. As the separation distance between samples increases, the
difference between the sample values, and hence the corresponding
variogram value will also generally increase. Eventually,
however, an increase in the separation distance no longer causes a
corresponding increase in the variogram value. Thus, the variogram
reaches a plateau, or its sill value. The distance at which the
variogram reaches the sill is called the range.
The range is simply the traditional geologic notion of zone or range
of influence. It means that beyond the range, the samples are no
longer correlated. In other words, they are independent of each other.
Nugget Effect
Although one would expect to obtain the same value when the
samples are taken from the same location, it is not very unusual,
especially in highly variable deposits, to obtain different values.
This causes a discontinuity at the origin of the variogram. The
vertical jump from the value of zero at the origin to the value of the
variogram is called the nugget effect. The ratio of the nugget effect to
the sill is often referred to as the relative nugget effect and is usually
quoted in percentages. The nugget effect is a combination of:
• short-scale variability that occurs at a scale smaller than the
closest sample spacing
• sampling error due to the way that samples are collected,
prepared and analyzed
Figure 4.1.2 shows a sample variogram from the computer
program output.

Figure 4.1.1 Variogram Parameters


Figure 4.1.2 Sample Variogram Output (lag statistics and variogram plot)


Computation
In practice, a variogram is almost always computed using a
discrete number of points such as drill hole assays. Therefore,
Equation 4.1.3 can be used to calculate the variogram. In this
equation, it is assumed that there are n pairs of samples; each pair is
separated by distance h. In addition, all these samples are assumed
to lie on a straight line, along which the variogram computation is
being performed. To illustrate the computation of a variogram, the
following example can be given.
Example for Variogram Calculation:
Let us calculate an E-W variogram using the data in Figure 4.1.3.
In this figure, there are five samples collected in E-W direction,
each separated by a distance h where h is equal to 15 units (feet or
meters). Since the spacing of data values is 15 units, we compute the


variogram at 15-unit steps. For the first step (h=15), there are 4 pairs:
x1 and x2, or .14 and .28
x2 and x3, or .28 and .19
x3 and x4, or .19 and .10
x4 and x5, or .10 and .09

Therefore, for h=15, we get
γ(15) = 1/(2*4) [(x1-x2)² + (x2-x3)² + (x3-x4)² + (x4-x5)²]
= 1/8 [(.14-.28)² + (.28-.19)² + (.19-.10)² + (.10-.09)²]
= 0.125 [(-.14)² + (.09)² + (.09)² + (.01)²]
= 0.125 (.0196 + .0081 + .0081 + .0001)
= 0.125 (.0359)
γ(15) = 0.00448
For the second step (h=30), there are 3 pairs:

x1 and x3, or .14 and .19


x2 and x4, or .28 and .10
x3 and x5, or .19 and .09

Note: N is the sample number, + is the sample location, and h = 15.
Figure 4.1.3 Sample Data for Variogram Computation

Therefore, for h=30, we get


γ(30) = 1/(2*3) [(x1-x3)² + (x2-x4)² + (x3-x5)²]
= 1/6 [(.14-.19)² + (.28-.10)² + (.19-.09)²]
= 0.16667 [(-.05)² + (.18)² + (.10)²]
= 0.16667 (.0025 + .0324 + .0100)
= 0.16667 (.0449)
γ(30) = 0.00748

For the third step (h=45), there are 2 pairs:


x1 and x4, or .14 and .10
x2 and x5, or .28 and .09

Therefore, for h=45, we get
γ(45) = 1/(2*2) [(x1-x4)² + (x2-x5)²]
= 1/4 [(.14-.10)² + (.28-.09)²]
= 0.25 [(.04)² + (.19)²]
= 0.25 (.0016 + .0361)
= 0.25 (.0377)
γ(45) = 0.00942

For the fourth step (h=60), there is only one pair: x1 and x5. The
values for this pair are .14 and .09, respectively. Therefore, for h=60,
we get

γ(60) = 1/(2*1) (x1 - x5)²
= ½ (.14-.09)²
= 0.5 (.05)²
= 0.5 (.0025)
γ(60) = 0.00125
If we take another step (h=75), we see that there are no more pairs.
Therefore, the variogram calculation stops at h=60.
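
The hand calculation above is easily reproduced in code. This sketch
implements Equation 4.1.3 for equally spaced samples along a line:

def variogram_1d(z, spacing):
    # Experimental variogram (Eq. 4.1.3) for equally spaced samples on a line
    for step in range(1, len(z)):
        pairs = [(z[i], z[i + step]) for i in range(len(z) - step)]
        gamma = sum((a - b) ** 2 for a, b in pairs) / (2.0 * len(pairs))
        print(f"h={step * spacing}  pairs={len(pairs)}  gamma={gamma:.5f}")

variogram_1d([0.14, 0.28, 0.19, 0.10, 0.09], spacing=15)
# h=15 gives 0.00449 (the text truncates to 0.00448); h=30, 0.00748;
# h=45, 0.00942; h=60, 0.00125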
Lag Distance
In the above example, the sample pairs were on a straight line.
Furthermore, the samples were at uniform intervals with distance
between them being 15 feet. Often the sample values do not fall on
a straight line. The spacing between samples may not be uniform
either. Consequently, one uses an interval rather than a point to
pair the samples. The basic unit used for interval (h) is called the lag
distance or class size. For example, if the lag distance is 50 meters,
any pair of data points whose distance is between 0 and 50 can
be included in the computation of the first lag. Any pair of data
points whose distance is between 51 and 100 can be included in the
computation of the second lag, etc.
Another way to pair the data is to apply a tolerance distance (dh)
at each lag. Thus, if the specified lag distance is h, the actual lag
distances used become nh ± dh, where n is the number of classes.
The only exception to this is the first lag which goes from 0 to h + dh.
For example, if the lag distance is 50 meters with a tolerance of ±25,
then the first lag (h=50) may actually go from 0 to 75 meters, second
lag (h=100) goes from 75 to 125 meters, third lag (h=150) goes from
125 to 175 meters, and so on.
In some cases, a strict tolerance distance may need to be applied
around each lag distance, including the lag at h=0. For


example, if the lag distance is 50 meters with a tolerance of ±10,


then the first lag (h=0) may actually go from 0 to 10 meters, second
lag (h=50) goes from 40 to 60 meters, third lag (h=100) goes from 90
to 110 meters, and so on. When the distance tolerance is less than
half the lag distance, then some data points may not get used in the
variogram calculation.
The lag distance to use in a variogram computation in any
direction will depend on the spacing of the samples. If the samples
are at irregular locations, a lag distance equal to the average spacing
of the samples can be a good start. If there are too many pairs in the
first lag of the variogram, a shorter lag distance should be tried.
For vertical or down-hole variograms, the mining bench height or
composite length of the samples are frequently used.
Horizontal and Vertical Windows
In mining especially, drilling on an irregular grid is not unusual.
Even holes that are drilled on a regular grid seldom lie on a straight
line. Therefore, when a variogram is computed along a specified
direction, one has to tolerate this inevitable randomness and accept
any pair whose separation is close to the direction of the variogram.
This tolerance is specified in terms of an angle from the direction of
the variogram. For example, if the direction of the variogram is E-W,
and we use a 15° tolerance, then any pair along the E-W direction, as
well as those within ±15° of the E-W line, are accepted for variogram
computation. The tolerance angle is called the window. If this
tolerance is in a horizontal direction, it is referred as the horizontal
window. If the tolerance is in a vertical direction, it is referred as the
vertical window.
Horizontal and Vertical Band Widths
In some situations, the pairs accepted within a tolerance window
can be tested if they are within a specified distance from the line of
direction of the variogram. This distance is referred to as the band
width. Again, the band width can be applied to the horizontal as
well as the vertical directions. Figure 4.1.4 illustrates the window
tolerance angle and band width definitions for a single direction.
Figure 4.1.5 illustrates the lag distance, the angle tolerance and the
band width definitions for multiple direction.


Note: The above figure can be used for both horizontal and
vertical directions; plan view applies to the horizontal direction,
section view applies to the vertical direction.
Figure 4.1.4 Variogram Window and Band Width Definitions – Single Direction

Figure 4.1.5 Variogram Lag, Window and Band Width Definitions – Multiple Directions
Analysis
One typically begins the analysis of spatial continuity with an
omni-directional variogram for which the directional tolerance is
large enough to include all pairs. An omni-directional variogram
can be thought of loosely as an average of the various directional
variograms, or “the variogram in all directions.” It is not a strict
average since the sample locations may cause certain directions to
be overrepresented. For example, if there are more east-west pairs
than north-south pairs, then the omni-directional variogram will be
influenced more by east-west pairs.
The calculation of the omni-directional variogram does not imply
a belief that the spatial continuity is the same in all directions. It
merely serves as a useful starting point for establishing some of
the parameters required for sample variogram calculation. Since
direction does not play a role in omni-directional variogram
calculations, one can concentrate on finding the distance parameters
that produce the clearest structure. An appropriate class size or lag
can usually be chosen after a few trials.
Another reason for beginning with omni-directional calculations
is that they can serve as an early warning for erratic directional
variograms. Since the omni-directional variogram contains more
sample pairs than any directional variogram, it is more likely
to show a clearly interpretable structure. If the omni-directional
variogram does not produce a clear structure, it is very unlikely that
the directional variograms will show a clear structure.


Once the omni-directional variograms are well behaved, one can
proceed to explore the pattern of anisotropy with various directional
variograms. In many practical studies, there is some prior information
about the axes of the anisotropy. For example, in a mineral deposit,
there may be geologic information about the ore mineralization that
suggests directions of maximum and minimum continuity. Without
such prior information, a contour map of sample values may offer
some clues to such directions. One should be careful, however, in
relying solely on a contour map because the appearance of elongated
anomalies on a contour map may sometimes be due to the sampling grid
rather than to an underlying anisotropy.
For computing directional variograms, one needs to choose a
directional tolerance (window and/or band width) that is large
enough to allow sufficient pairs for a clear variogram, yet small
enough that the character of variograms for separate directions is
not blurred beyond recognition. For most cases, it is reasonable to
use a window of about ±15°. As a rule of thumb, one can initially
use half of the incremental angle used for computing directional
variograms. For example, if the variograms were to be computed
at 45° increments, then a ±22.5° window would be appropriate.
Both the window and class size selected for any given direction can
be adjusted after the initial trial. The best approach is to try several
tolerances and use the smallest one that still yields good results.
In cases where a three-dimensional anisotropy is present, one can
apply a coordinate transformation to the data before computing the
sample variogram. The axes of the new or transformed coordinate
system are made to align with the suspected directions of the
anisotropy. This enables a straightforward computation of the
variograms along the axes of the anisotropy.
Existence of Drift
The drift can be defined simply as a general decrease or increase in
data values with distance for the direction specified. It is the average
difference between the samples separated by a distance h. Therefore it
can be positive or negative. The existence of high drift values in each
lag with the same sign can be an indication of a drift in the direction
in which the variogram is computed. One may be able to see a drift from the
shape of the variogram. When a linear drift is present, this usually
introduces a parabolic component in the experimental variogram,
which can be a dominant feature on the shape of the variogram.
Theoretical Variogram Models
In order to make practical use of the experimental variogram, it
is necessary to describe it by a mathematical function or a model.
There are many models that can be used to describe the experimental
variograms; however, some models are more commonly used than
others. These models are explained below.
Spherical Model
This model is the most commonly used model to describe a
variogram. The definition of this model is given by


γ(h) = c0 + c [1.5 (h/a) - 0.5 (h³/a³)] if h < a
γ(h) = c0 + c if h ≥ a (4.2.1)
In this equation, c0 refers to the nugget effect, “a” refers to the
range of the variogram, h is the distance, and c0+c is the sill of the
variogram.
The spherical model has a linear behavior at small separation
distances near the origin but flattens out at larger distances, and
reaches the sill at a, the range. It should be noted that the tangent at
the origin reaches the sill at about two thirds of the range.
Linear Model
This is the simplest of the models. The equation of this model is
as follows:
γ(h) = c0 + A·h (4.2.2)
where c0 is the nugget effect, and A is the slope of the variogram.
Exponential Model
This model is defined by a parameter a (effective range 3a). The
equation of the exponential model is
γ (h) = c0 + c [1 - exp (-h / a)] h > 0 (4.2.3)
This model reaches the sill asymptotically. Like the spherical
model, the exponential model is linear at very short distances near
the origin; however, it rises more steeply and then flattens out more
gradually. It should be noted that the tangent at the origin reaches
the sill at about one third of the effective range.
Gaussian Model
This model is defined by a parameter a (effective range a√3). The
equation of the Gaussian model is given by
γ(h) = c0 + c [1 - exp(-h²/a²)] h > 0 (4.2.4)
Like the exponential model, this model reaches the sill
asymptotically. The distinguishing feature of the Gaussian model is
its parabolic behavior near the origin.
Power Model
This model is defined by a power 0 < a < 2 and a positive slope c.
The equation of the power model is given by
γ(h) = c hᵃ (4.2.5)
Hole-Effect Model
This model is used to represent fairly continuous processes that
show a periodic behavior, such as a succession of rich and poor
zones. Its equation is given by
γ(h) = c [1 - (sin(wh) / wh)] (4.2.6)
In this equation, c is synonymous with the sill value, w is a constant
related to the wavelength, and h is distance. Greater flexibility in fitting
this model is achieved by using the following modified equation:
γ(h) = c {1 - [sin(wh + p) / (wh + p)]} (4.2.7)
The addition of the constant p allows the model to be shifted left or
right along the x-axis. A nugget effect term could also be added to
the equation.
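
The common models reduce to a few lines each. A sketch of the spherical,
exponential, and Gaussian models (Equations 4.2.1, 4.2.3, and 4.2.4),
evaluated for h > 0 since the nugget is a discontinuity at the origin:

import math

def spherical(h, c0, c, a):
    # Eq. 4.2.1: linear near the origin, reaches the sill c0 + c at h = a
    if h >= a:
        return c0 + c
    return c0 + c * (1.5 * (h / a) - 0.5 * (h / a) ** 3)

def exponential(h, c0, c, a):
    # Eq. 4.2.3: reaches the sill asymptotically; effective range about 3a
    return c0 + c * (1.0 - math.exp(-h / a))

def gaussian(h, c0, c, a):
    # Eq. 4.2.4: parabolic near the origin; effective range about a*sqrt(3)
    return c0 + c * (1.0 - math.exp(-(h / a) ** 2))

for h in (10.0, 25.0, 50.0, 75.0, 100.0):
    print(h, spherical(h, 0.1, 0.9, 75.0), exponential(h, 0.1, 0.9, 25.0))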
There are other theoretical models that can be used to describe
a variogram, such as the DeWijsian model and the cubic model.
However, these models are not as commonly used in practice as
some of the models described above. Figure 4.2.1 shows how various
theoretical variogram models look. Figure 4.2.2 shows a plot of sample
variogram and a spherical theoretical model fit to this variogram.

Figure 4.2.1 Theoretical Variogram Models


Figure 4.2.2 Variogram Plot and Theoretical Model Fit


Anisotropy
Anisotropy exists if the structural character of the mineralization
of an ore deposit differs for various directions. For example, the
grade may be more continuous along the strike direction than it is
down the dip direction. This can be determined by comparing the
variograms calculated for different directions within the deposit.
Geometric anisotropy
There are two types of anisotropy: Geometric anisotropy and
zonal anisotropy. Geometric anisotropy is present if the nugget
and sill of the variograms are generally the same, but their ranges
are different in various directions. Because the nugget and sill of
the variograms are the same, a simple translation of coordinates is
sufficient to transform one variogram to another, or simply to make
them isotropic. The ratio of the major axis over the minor axis of the
ellipse that is used to perform the necessary transformation is called
the anisotropy factor.
The variograms can be anisotropic in three-dimensions, in which
case two anisotropy factors would be defined corresponding to the
ratios of the length of three axes of an ellipsoid; namely, the major
axis, the minor axis, and the vertical axis.
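As a minimal sketch of how this transformation works in practice (assuming the coordinates have already been rotated so they align with the anisotropy axes; the ranges below are illustrative), separation components along the minor and vertical axes are stretched by the anisotropy factors so that a single isotropic variogram applies:

import numpy as np

def anisotropic_distance(dx, dy, dz, major_range, minor_range, vert_range):
    # Scale each component by major_range / axis_range, then take the
    # Euclidean norm; the result can be used with an isotropic variogram
    # whose range equals major_range.
    fy = major_range / minor_range   # anisotropy factor, major/minor
    fz = major_range / vert_range    # anisotropy factor, major/vertical
    return np.sqrt(dx ** 2 + (fy * dy) ** 2 + (fz * dz) ** 2)

# A sample 100 units away along the minor axis behaves like one 200 units
# away along the major axis when the anisotropy factor is 2:
print(anisotropic_distance(0.0, 100.0, 0.0, 300.0, 150.0, 50.0))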
One way to check for geometric anisotropy is to make a contour
plot of the variogram values, γ(h), for different directions on the
plane where one thinks, for example, the major and minor axes
of anisotropy are located. If the resulting contours display an
elliptical shape, this would indicate the presence of anisotropic
mineralization on that plane. More or less circular contours would
indicate isotropic mineralization. Figure 4.3.1 gives a sample plot of
variogram contours generated to detect geometric anisotropy. Before
generating this type of contour plot, a set of directional variograms must
be generated on the plane of interest from which variogram values at
varying distances are retrieved and contoured.
Zonal anisotropy
Zonal anisotropy is present if the nugget and range of the
variograms are generally the same, but their sills are different in
various directions. This situation is encountered in deposits in which
the mineralization is layered or stratified. The variation in grade
for a particular direction is due not only to distance, but also to the
number of layers crossed.

Figure 4.3.1 Variogram Contours


Zonal anisotropy is much more difficult to handle during
estimation than geometric anisotropy. Quite often, combinations of
geometric and zonal anisotropy are encountered and can be very
difficult to interpret. One way to deal with zonal anisotropy is to
partition the data into zones, and analyze each zone separately.
Another way to handle zonal anisotropy is to use nested variogram
structures that will be discussed next. The difference in the sill values
is expressed as one nested structure applicable only along that
specific direction. Figure 4.3.2 gives examples of geometrical and
zonal anisotropies.


Figure 4.3.2 Geometrical and Zonal Anisotropy


Nested Structures
A variogram function can often be modeled by combining several
variogram functions:
γ(h) = γ1(h) + γ2(h) + ... + γn(h)
or
γ(h) = ∑ γi(h), i = 1,...,n (4.3.1)
For example, there might be two structures displayed by a
variogram. The first structure may describe the correlation on a short
scale. The second structure may describe the correlation on a much
larger scale. These two structures can be defined using a nested
variogram model.
In using nested models, one is not limited to combining models
of the same shape. Often the sample variogram will require a
combination of different basic models. For example, one may
combine spherical and exponential models to handle a slow rising
sample variogram that reaches the sill asymptotically.
To illustrate the nested model concept, the three simple structures shown
in Figure 4.3.3 are combined to give the resulting nested variogram.
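A minimal sketch of Equation 4.3.1, assuming a nugget plus two nested spherical structures (all parameter values are illustrative):

import numpy as np

def spherical(h, c, a):
    h = np.asarray(h, dtype=float)
    g = c * (1.5 * h / a - 0.5 * (h / a) ** 3)
    return np.where(h < a, g, c)

def nested(h, c0, structures):
    # c0 is the nugget; structures is a list of (c, a) pairs, one per
    # nested spherical component (short- and long-scale, for example).
    return c0 + sum(spherical(h, c, a) for c, a in structures)

# Short-scale structure (c=0.3, a=50) plus long-scale structure (c=0.6, a=250):
print(nested(np.array([25.0, 100.0, 300.0]), c0=0.1,
             structures=[(0.3, 50.0), (0.6, 250.0)]))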


Figure 4.3.3 Nested Structures


Variogram Types
The development of variograms can be very frustrating for data
with a highly skewed distribution. Because a variogram is computed
by taking the squared differences between the data pairs, a few
outlier high grades may contribute so significantly that some of the
points in the variogram become very erratic.
There are several types of variograms that are often used to produce
clearer descriptions of the spatial continuity.
Relative Variogram
A relative variogram, γR(h), is obtained from the ordinary
variogram by simply dividing each point on the variogram by the
square of the mean of all the data used to calculate the variogram
value at that lag distance:
γR(h) = γ(h) / [m(h) + c]² (4.4.1)
where c is a constant parameter used in the case of a three-parameter
lognormal distribution.


Pairwise Relative Variogram


Another type of relative variogram that often helps to produce
a clearer display of the spatial continuity is the pairwise relative
variogram. This particular relative variogram also adjusts the
variogram calculation by a squared mean. However, the adjustment
is done separately for each pair of sample values, using the average
of the two values as the local mean. This serves to reduce the
influence of very large values on the calculation of the moment of
inertia. The equation of pairwise relative variogram is given as
γPR(h) = 1/(2n) ∑ [(vi - vj)² / ((vi + vj)/2)²] (4.4.2)
where vi and vj are the values of a pair of samples at locations i and j,
respectively.
The reason behind the computation of a relative variogram is an
implicit assumption that the assay values display proportional effect.
In this situation, the relative variogram tends to be stationary, even
though the ordinary variogram (or covariance) is not stationary. If
the relationship between the local mean and the standard deviation
is something other than linear, one should consider scaling the
variograms by some function other than mi².
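A minimal sketch of Equation 4.4.2 for a single lag, assuming strictly positive sample values (the data are illustrative):

import numpy as np

def pairwise_relative_gamma(pairs):
    # pairs is an iterable of (vi, vj) sample values separated by lag h;
    # each squared difference is scaled by the squared pair mean.
    pairs = np.asarray(pairs, dtype=float)
    vi, vj = pairs[:, 0], pairs[:, 1]
    terms = (vi - vj) ** 2 / ((vi + vj) / 2.0) ** 2
    return terms.sum() / (2.0 * len(pairs))

# One very large value no longer dominates, because it is scaled by the
# local (pair) mean:
print(pairwise_relative_gamma([(0.5, 0.7), (0.6, 0.4), (5.0, 0.8)]))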
Logarithmic (Log) Variogram
A log variogram, γL(h), is obtained by calculating the ordinary
variogram using the logarithms of the data, instead of the raw
(untransformed) data.
The reason for transforming the raw data into logarithms is
to reduce or eliminate the impact of extreme data values on the
variogram structure. The data transformation accomplishes this
objective by simply reducing the range of variability of the raw data.
After computing a logarithmic variogram, one may want to
transform its parameters back to normal values. This can be done
using the following steps:
1. Use the range given by logarithmic variograms as the range
of the normal variogram.
2. Estimate the logarithmic mean (α) and variance (β²). Use
the sill of the logarithmic variogram as the estimate of β²,
particularly if the sill is not equal to the variance computed
using the formula.
3. Calculate the mean (µ) and the variance (σ²) of the normal
data using
µ = exp(α + β²/2) (4.4.3)
σ² = µ² [exp(β²) - 1] (4.4.4)
4. Set the sill of the normal variogram equal to the variance (σ²)
computed above.
5. Compute the c (sill minus nugget) and c0 (nugget) values of the
normal variogram (see the sketch following this list) using
c = µ² [exp(clog) - 1] (4.4.5)
c0 = sill - c (4.4.6)
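A minimal sketch of steps 2 through 5, assuming α, β², and clog have already been read off the logarithmic variogram (the numbers are illustrative):

import math

# Illustrative logarithmic variogram parameters (assumptions):
alpha = -0.2   # logarithmic mean
beta2 = 0.5    # logarithmic variance, taken from the sill of the log variogram
c_log = 0.4    # c (sill minus nugget) of the logarithmic variogram

mu = math.exp(alpha + beta2 / 2.0)            # Eq. 4.4.3
sigma2 = mu ** 2 * (math.exp(beta2) - 1.0)    # Eq. 4.4.4, sill of the normal variogram
c = mu ** 2 * (math.exp(c_log) - 1.0)         # Eq. 4.4.5
c0 = sigma2 - c                               # Eq. 4.4.6
print(mu, sigma2, c, c0)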


The final acceptance of these parameters should come after an
extensive cross validation of the selected variogram model.
Covariance Function Variogram
This is a relatively new framework to obtain a variogram through
a direct estimation of covariances. Isaaks and Srivastava of Stanford
University proposed the method in 1987. It is based on the premise that
the sample covariance function reflects the character of the spatial
continuity better than the sample variogram, particularly under the
condition of heteroscedasticity (proportional effect) and preferential
clustering of data.
This deterministic framework, also known as non-ergodic
framework, is more appropriate when sample information is
interpolated (as opposed to extrapolated) within the same domain.
This is most often the situation during ore reserve estimation. A non-
ergodic process, therefore, implies a situation whereby local means
are different from the population mean, as is usually the case of
heteroscedasticity.
The covariance function, C(h), can be calculated from the
following:
C(h) = 1/N ∑ (vi · vj) - m-h · m+h (4.4.7)
The data values are v1,...,vn; the summation is over only the N
pairs of data whose locations are separated by h. m-h is the mean
of all the data values whose locations are -h away from some other
data location. Similarly, m+h is the mean of all the data values whose
locations are +h away from some other data location.
The above equation is sometimes referred to as the covariogram.
The covariogram and the variogram are related by the formula:
γ (h) = C(0) - C(h) (4.4.8)
Since the value at h=0 is simply the sample variance, the value
obtained at each lag for covariogram is subtracted from the variance
of samples to give the covariance function variogram.
Because of the way it is computed, the covariance function
variogram is usually better behaved than a normal variogram for data
with a skewed distribution. However, having obtained a well-behaved
variogram does not eliminate the shortcomings of linear geostatistics
where one still has to face the typical problem of how to handle
the outlier data during ordinary kriging, to minimize the common
occurrence of overestimation in grades as well as in tonnages.
Correlogram
This is another relatively new technique to measure the spatial
continuity of data through the correlation function. By definition, the
correlation function ρ(h) is the covariance function (Equation 4.4.7)
standardized by the appropriate standard deviations.
ρ(h) = C(h) / (σ-h . σ+h) (4.4.9)
where σ-h is the standard deviation of all the data values whose
locations are -h away from some other data location:
σ²-h = 1/N ∑ (vi² - m²-h) (4.4.10)


Similarly, σ+h is the standard deviation of all the data values whose
locations are +h away from some other data location:
σ²+h = 1/N ∑ (vj² - m²+h) (4.4.11)
Like the means, the standard deviations σ-h and σ+h are usually
not equal in practice. The shape of the correlation function is similar
to covariance function. Therefore, it needs to be inverted to give
a variogram type of curve, which we call correlogram. Since the
correlation function is equal to 1 when h=0, the value obtained at each
lag for correlation function is subtracted from 1 to give the correlogram.
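A minimal sketch of Equations 4.4.7 through 4.4.11 for a single lag, assuming the tail (-h) and head (+h) values of the pairs have already been collected (the data are illustrative):

import numpy as np

def covariance_and_correlogram(tails, heads, sample_variance):
    tails = np.asarray(tails, dtype=float)
    heads = np.asarray(heads, dtype=float)
    m_minus, m_plus = tails.mean(), heads.mean()
    c_h = (tails * heads).mean() - m_minus * m_plus          # Eq. 4.4.7
    s_minus = np.sqrt((tails ** 2).mean() - m_minus ** 2)    # Eq. 4.4.10
    s_plus = np.sqrt((heads ** 2).mean() - m_plus ** 2)      # Eq. 4.4.11
    rho_h = c_h / (s_minus * s_plus)                         # Eq. 4.4.9
    # Inverted to variogram-type values:
    return sample_variance - c_h, 1.0 - rho_h

tails = [0.5, 0.7, 0.6, 1.2]
heads = [0.6, 0.5, 0.9, 1.0]
print(covariance_and_correlogram(tails, heads, sample_variance=0.08))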
Indicator Variogram
Indicator variograms are computed using indicators of 0 or 1
based on a specified cutoff. Therefore, to compute an indicator
variogram, one must transform the raw data into indicator variables.
These variables are obtained through the indicator function, which is
defined as:
i(x;zc) = 1, if z(x) ≤ zc
        = 0, otherwise (4.4.12)

where:
x is location,
zc is a specified cutoff value,
z(x) is the value at location x.

Transformation of raw data z(x) into indicator variable i(x;zc) is a
non-linear transform. All spatial distributions of indicator variables
at sampled points will have the same basic form of 0 or 1. If the
observed grade is less than or equal to the cutoff grade, it will be 1. Otherwise,
it will be zero. It is obvious that the indicator variable, i(x;zc), will
change as the cutoff grade changes.
The best-defined experimental indicator variogram for a given
set of data is usually that variogram corresponding to cutoff grades
zc close to the median grade. This is because about 50% of the
indicator data are equal to 1 and the rest are equal to 0. Therefore,
the expected sill value of the median variogram is 0.25. It is also the
maximum sill value of indicator variograms.
If one is calculating indicator variograms at many cutoff grades,
the sill values of indicator variograms increase until the median
variogram is computed. As cutoff grades continue to increase, more
and more indicator data become equal to 1, thus decreasing the sill
values but not necessarily the ranges of the variograms.
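A minimal sketch of the indicator transform of Equation 4.4.12, showing how the expected indicator sill p(1 - p) peaks at 0.25 for the median cutoff (the data values are illustrative):

import numpy as np

def indicator(z, cutoff):
    # 1 where the value is at or below the cutoff, 0 otherwise.
    return (np.asarray(z, dtype=float) <= cutoff).astype(int)

z = np.array([0.2, 0.8, 0.5, 1.4, 0.3, 0.9])
for zc in (0.3, np.median(z), 1.0):
    i = indicator(z, zc)
    p = i.mean()   # proportion of indicators equal to 1
    print(f"cutoff={zc:.2f}  indicators={i}  expected sill={p * (1 - p):.4f}")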

Cross Variogram
Like the variogram for spatial continuity of a single variable, the
cross variogram is used to describe the cross-continuity between two
variables. Cross variogram between variable u and variable v can be
calculated using the following discrete method:


γCR(h) = 1/(2n) ∑ [u(xi) - u(xi+h)] · [v(xi) - v(xi+h)] (4.4.13)


There are several unique properties associated with cross
variograms. One such property is that the variogram is always
positive, whereas cross variograms can take negative values. A
negative value of a cross variogram simply indicates that increase in
one variable corresponds to a decrease in the other variable.
Calculation of cross variograms is a necessary step in cokriging
and probability kriging.
Fitting A Theoretical Variogram Model
One needs to fit a theoretical model or mathematical function
to the experimental or sample variogram points in order to put it
into a practical use. Usually, this fitting is done interactively using a
computer program. However, if one has to resort to a manual fitting of
the model to the sample variogram, the following steps can be useful:
1. Draw the variance as the sill (c0 + c).
2. Project the first few points to the y-axis. This is an estimate of
the nugget (c0).
3. Project the same line until it intercepts the sill. This distance is
about two thirds of the range for the spherical model.
4. Now, using the estimates of range, sill, nugget and the
equation of the mathematical model under consideration
(e.g., spherical model), calculate a few points and see if the
curve fits the sample variogram.
5. If necessary, modify the parameters and repeat Step 4 to
obtain a better fit.
It should be noted that the nugget value (c0) should be estimated
more accurately than the sill and the range of the variogram. This
is because the values of the variogram near the origin are extremely
important in estimation problems.
Verification of Model Parameters
Fitting a theoretical variogram model is often quite simple as
long as the sample variogram is well behaved. Usually visual
fits are satisfactory under these conditions. Unfortunately, some
sample variograms, mostly those calculated using data with skewed
distributions, are not well behaved. These variograms may not
resemble closely any of the theoretical models studied. Under these
circumstances, model fitting becomes a challenging task. It must be
noted that in all cases both choosing the model and estimating its
parameters are quite subjective.
H-Scatter Plots
An h-scatter plot is a scatter diagram that shows all possible pairs of
data values separated spatially by a certain distance in a particular
direction. For instance, if we take all the pairs that fall within the first
lag of a variogram we compute, and plot their values on an x-y plot, we
obtain an h-scatter plot.


The location of any point can be described by a vector, as can
the separation between any two points, for instance, point i and
point j. It is important to distinguish between the vector going from
point i to point j and the vector going from point j to point i. The
notation hij is used to refer to the vector going from point i to point
j, whereas hji is used to refer to the vector going from point j to
point i. The originating point of the vector is considered the “tail,”
and the ending point is considered the “head.” This is important
because we use this information to determine which axis of the
h-scatter diagram a value belongs to.
The shape of the cloud of points on an h-scatter plot indicates
how continuous the data values are over a certain distance in a
particular direction. If the data points plot close to the 45-degree line
passing through the origin, then it is a sign of good correlation and
continuity. As the data values become less similar, the points on the
plot become more scattered and random.
Summary statistics for h-scatter plots are similar to those used for
correlation analysis. A correlation coefficient can be calculated based
on the least squares regression. The regression line can be overlaid
on the plot. Figure 4.6.1 shows a sample h-scatter plot with both the
45-degree line and the best-fit line displayed.
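A minimal sketch of assembling h-scatter pairs, assuming samples on a regular one-dimensional line so that the lag is simply an index offset (the data are illustrative):

import numpy as np

def h_scatter_pairs(values, lag):
    # Return (tail, head) arrays for all pairs separated by `lag` steps;
    # tails go on the x-axis, heads on the y-axis of the h-scatter plot.
    v = np.asarray(values, dtype=float)
    return v[:-lag], v[lag:]

values = [0.4, 0.5, 0.45, 0.7, 0.8, 0.75, 0.6, 0.5]
tails, heads = h_scatter_pairs(values, lag=1)
r = np.corrcoef(tails, heads)[0, 1]   # summary statistic for the plot
print(f"{len(tails)} pairs, correlation = {r:.3f}")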

Figure 4.6.1 Sample H-Scatter Plot


Cross Validation


There are so many interdependent subjective decisions in a
geostatistical study that it is a good practice to validate the entire
geostatistical model and kriging plan prior to any production run.
Thus one may want to check the results of a kriging plan using
different variogram parameters as well as different approaches in
search strategy. This can be done by a technique that allows us to
compare the true and the estimated values using the sample data
set. This technique is called cross validation or point validation. It is
also known as jack-knifing, although the term jack-knifing strictly
applies to resampling without replacement, i.e., when one set of data
values is re-estimated from another, non-overlapping data set. In
cross validation, actual data are dropped one at a time and
re-estimated from some of the neighboring data. Each datum is
returned to the data set once it has been re-estimated.
Thus with cross validation, using a “leave-one-out” approach, one
estimates (predicts) a known data point using a candidate variogram
model and point kriging (or any other interpolation method),
pretending that this data point is not known. In other words, only
the surrounding data points are used to krige this data point, while
leaving the data point out.
Once the estimated grade is calculated, one can determine the
error between the estimated value and the true known value for this
data point. The procedure is repeated for all known data points in
the test area, to compute the error statistics such as the mean error,
variance of errors and the average kriging variance for specified
model parameters. For comparison, the overall process is repeated
using different variogram parameters or models. The “correct”
parameters to be chosen are the ones that produce:
• The least amount of average estimation error
• A variance of the errors (or a weighted square error) that is
closest to the average kriging variance
The weighted square error (WSE) is given by the following equation:
WSE = ∑ [(1/σi²) (ei)²] / ∑ (1/σi²) (4.7.1)
where ei is the difference between the predicted value at point i
and the known value, and σi2 is the kriging variance for point i. The
weighting by the inverse of the kriging variance gives more weight
to those points that should be closely estimated and vice versa.
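A minimal sketch of the cross validation bookkeeping; the leave-one-out estimates and kriging variances are assumed to have been produced beforehand by point kriging (all numbers are illustrative):

import numpy as np

def cross_validation_stats(true_vals, estimates, kriging_vars):
    e = np.asarray(estimates) - np.asarray(true_vals)   # estimation errors
    kv = np.asarray(kriging_vars)
    w = 1.0 / kv                                        # weights 1/sigma_i^2
    return {
        "mean error": e.mean(),
        "error variance": e.var(),
        "avg kriging variance": kv.mean(),
        "WSE": (w * e ** 2).sum() / w.sum(),            # Eq. 4.7.1
    }

stats = cross_validation_stats(
    true_vals=[0.70, 0.55, 0.90, 0.40],
    estimates=[0.68, 0.60, 0.82, 0.45],
    kriging_vars=[0.10, 0.12, 0.09, 0.15])
print(stats)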
A cross validation study may help us to choose between different
weighting procedures, between different search strategies, or
between different variogram models and parameters. Unfortunately,
cross validation results are most commonly used simply to compare
the distribution of the estimation errors from different estimation
methods. Such a comparison, especially if similar techniques
are being compared, may fall short of clearly indicating which
alternative is best. However, cross validation results have important
spatial information, and a careful study of the spatial distribution
of errors, with a specific focus on the final goals of the estimation


exercise, can provide insights into where an estimation procedure


may run into trouble. Since such insights may lead to case-specific
improvements in the estimation procedure, cross validation is a
useful preliminary step before final estimates are calculated. The
exercise of cross validation is analogous to a dress rehearsal: it is
intended to detect what could go wrong, but it does not ensure that
the show will be successful.
There are limitations of cross validation that should be kept in
mind when analyzing the results of a cross validation study. For
example, it can generate pairs of true and estimated values only at
sample locations. Its results usually do not accurately reflect the
actual performance of an estimation method because estimation at
sample locations is typically not representative of estimation at all of
the unsampled locations.
In other practical situations, particularly three-dimensional
data sets where the samples are located very close to one another
vertically but not horizontally, cross validation may produce very
optimistic results. Discarding a single sample from a drill hole and
estimating the value using other samples from the same drill hole
will produce results that make any estimation procedure appear
to perform much better than it will in actual use. The idea of cross
validation is to produce sample data configurations that mimic the
conditions under which the estimation procedure will actually be
used. If very close nearby samples will not be available in the actual
estimation, it makes little sense to include them in cross validation.
In such situations, it is common to discard more than the single
sample at the location we are trying to estimate. It may, for example,
be wiser to discard all of the samples from the same drill hole. This,
however, puts us back in the situation of producing cross-validated
results that are probably a bit too pessimistic. Another alternative, in
this case, will be to discard only the data within a specified distance
to the point being estimated.
If the data has outlier high grades, it may also be a good idea not
to include them in the cross validation study. If removing a data
point is going to make it impossible for other points around this
point to give a reasonable estimate for the point, then there is no
reason to include that data point. Including the extreme data points
which do not have or rarely have any corresponding data elsewhere
in the deposit only makes the cross validation results look worse.
However, the outlier data and their impact on the estimation should
be dealt with separately. Figure 4.7.1 shows a sample output from a cross
validation program. In this output, the actual or true grades of the
composites are compared against the estimates from kriging. The
average difference is the mean error. The error statistics are given
under the column DIFF.
Cross validation does not indicate whether an observation,
estimate, parameter, or assumption is correct. All it does is generate
errors associated with different selections. The user must draw
conclusions by comparing the errors. As imperfect as this technique
may be, drawing conclusions by cross validation is in many

cases better than the alternative of making arbitrary selections or


assumptions.
Variable : CU
ACTUAL KRIGING DIFF
‑‑‑‑‑‑‑ ‑‑‑‑‑‑‑ ‑‑‑‑‑‑‑
Mean = .7347 .7417 -.0070
Std. Dev = .4968 .3791 .2920
Minimum = .0000 .0200 -.9000
Maximum = 3.7000 2.1300 2.0100
Skewness = .9851 .5129 1.0620
Peakedness= 1.6682 .0576 5.0352

Ave. kriging variance = .1006


Weighted square error = .0836

Number of samples used = 925


Correlation coefficient = .8104
Least square fit line (Y = A + B * X):
Intercept (A) = -.0531
Slope (B) = 1.0621
Standard error of estimate = .2912
95% confidence interval on A = .0413
95% confidence interval on B = .0495

Figure 4.7.1 Sample Output of Cross Validation

Random Processes and Variances


The theory of regionalized variables is developed on a
mathematical model in which the grade of ore in a deposit is
considered the result of a random process in three-dimensional
space. It is obvious that estimation requires a model of how the
phenomena behave at unsampled locations. Without a model, one
has only the sample data and no inferences can be made about the
unknown values at locations that were not sampled. One of the
important contributions of geostatistics is the emphasis it places on a
statement of the underlying model.
Theory of Regionalized Variables
The theory of regionalized variables developed by Matheron
forms the mathematical basis of geostatistics. The key point of this
theory is that the geological or mineralogical process active in the
formation of an orebody is interpreted as a random process. Thus,
the grade at any point in a deposit is considered as a particular
outcome of this random process. This probabilistic interpretation
of a natural process as a random process is necessary to solve the
practical problems of estimation of the grade of an ore deposit.
Such an interpretation is simply a conceptualization of reality and is
valid only so far as it creates a better picture of reality and permits
practical problems to be solved.
Regionalized Variable
Regionalized variables are random variables that are
correlated in space (or in time). The grade of ore, thickness of a
coal seam, elevation of the surface of a formation, are examples
of regionalized variables. In the geostatistical framework, such
variables could be called random variables but the term regionalized
is used to indicate that such variables are spatially correlated to
some degree.
A function Z(x) that assigns a value to a point x in three
dimensional space is called a regionalized variable. This function
Z(x) displays a random aspect consisting of highly irregular and
unpredictable variations, and a structured aspect reflecting the
structural characteristics of the regionalized phenomenon.
The main purpose of the theory of regionalized variables is
to express the structural properties of the regionalized variables
in adequate form, and to solve the problem of estimating these
variables from the sample data.
Random Processes
If we view the grade of ore in a deposit to be the result of a
random process, then we can represent this process by a model
using a random function Z(x) where x corresponds to any point
in three dimensional space. Since there are an infinite number of
points in a deposit, there are an infinite number of grades that
can be theoretically described using an infinite number of probability
distributions.


A random function Z(x) in its generality consists of an infinite
collection of density functions in a probability space, which consists
of a collection of possible events. Such a generalized model of an ore
forming process and also its result, the grade of the deposit, would
be impossible to manipulate. Consequently, certain assumptions are
usually made in defining the particular random process of interest.
One such assumption is the stationarity of the process.
Stationarity
The stationarity condition assumes that the values in the data
set represent the same statistical population. That is, the property
measured is stationary, or stable, over the area measured. Stationarity
is required to ensure that the spatial correlation may be modeled
with an appropriate function (that is, a positive definite function),
and it states that the expected value, noted E, which may be considered
the mean, of the data values is not dependent upon the distance
separating the data points. Mathematically, this assumption states
that the expected value of the difference between two random variables
is zero. This is denoted by the following equation:
E[Z(x + h) - Z(x)] = 0 for all x,
where,
Z(x), Z(x+h) = random variables
x = sampled location
h = distance between sampled locations
Stationarity occurs when the regionalization is repeated
throughout the orebody. This means that the infinite collection of
density functions do not vary from one location to another in their
statistical characteristics—they belong to the same family of density
functions. In other words, similar ore distributions are as likely to
be found in one part of the orebody as in another. For example, if
a grade at a given point can be described using a normal density
function, then another grade at a different location must also be
described using a normal density function. However, the parameter
values of the density functions need not be constant from one
point to another point in a deposit, unless one makes additional
assumptions about these parameter values.
The decision of stationarity is critical for the representativeness and
the reliability of the Geostatistical tools used. Pooling the data across
geological units may mask important geological differences; on the
other hand, splitting the data into too many sub-categories may lead
to unreliable statistics based on too few data per category and an
overall confusion. The rule in statistical inference is to pool the largest
amount of “relevant” information to formulate predictive statements.
Stationarity is a property of the model. Thus, the decision of
stationarity may change if the scale of the study changes or if more
data become available. If the goal of the study is global, the local
details can be averaged out. Conversely, the more data available the
more statistically significant differentiations become possible.

Knowledge of historical practices, visual indications, and other


explicit information may be used to assess whether the data set
meets this assumption of stationarity. Some of the frequently invoked
assumptions on the random process during the ore reserve
estimation are:
1. Strong stationarity
2. Second order stationarity
3. Intrinsic hypothesis (weaker second order stationarity)
Strong Stationarity
In order for a random function Z(x) to meet the strong stationarity
requirement, the following properties must be satisfied.
E[Z(x)] = m, m = finite and independent of x (5.1.1)
Var[Z(x)] = σ², σ² = finite and independent of x (5.1.2)
The first condition implies that there is no gradual increase or
decrease in grade for some specified direction. That means there is
no drift. The second condition implies a constant parameter value of
the underlying density functions.
Second Order Stationarity
A random function Z(x) is called second order stationary if the
requirement of Equation 5.1.2 for a finite σ² is replaced by Equation
5.1.4.
E[Z(x)] = m, m = finite and independent of x (5.1.3)
E[Z(x+h) · Z(x)] - m² = C(h) = finite and independent of x (5.1.4)
Equation 5.1.4 is the covariogram function C(h) defined earlier.


This condition implies that for each pair of random variables Z(x+h)
and Z(x), the covariance exists and depends only on the separation
distance h. The covariance does not depend on the particular
location x within the deposit.
The stationarity of covariance implies the stationarity of the variance
as well as the variogram. Under this assumption, the relationship
between the variogram and the covariogram is given below.
γ(h) = C(0) - C(h) = Var[Z(x)] - C(h) (5.1.5)
Note that covariogram C(h) gives the covariances as a function of
the distance h.
Intrinsic Hypothesis
The intrinsic hypothesis represents a weaker form of second order
stationarity. Furthermore, this hypothesis can be of two types: The
intrinsic hypothesis of order zero, and intrinsic hypothesis of order one.
a) For the intrinsic hypothesis of order zero, the following
conditions must be satisfied.
E[Z(x)] = m, m = finite and independent of x (5.1.6)
E[Z(x+h) - Z(x)]² = 2γ(h) = finite and independent of x (5.1.7)

Note this last equation is the definition of the variogram function.


Under this hypothesis, we assume no drift, and the existence and the
stationarity of the variogram only.


Often the condition of no drift in a deposit cannot be satisfied in
practice. Under such circumstances, the intrinsic hypothesis of order
one is invoked.
b) For the intrinsic hypothesis of order one, the following
conditions must be satisfied.
E[Z(x+h) - Z(x)] = m(h) = finite and independent of x (5.1.8)
E[Z(x+h) - Z(x)]² = 2γ(h) = finite and independent of x (5.1.9)
Under this hypothesis, instead of a finite and constant mean m,
Equation 5.1.8 simply requires that the difference in the mean must
be finite, independent of the support point x, and depend only on the
separation distance h.
In performing local estimation using ordinary kriging, the intrinsic
hypothesis of order zero is invoked, whereas the technique called
universal kriging must be employed under the first order hypothesis.
Random Function Models
Estimation requires a model of how a phenomenon behaves at
locations where it has not been sampled. This is because, without a
model, one has only the sample data and no inferences can be made
about the unknown values at locations that were not sampled. There
are two types of models, deterministic and probabilistic, which
can be applied to the random process under study depending on
how much one knows about that process.
Deterministic Models
The most desirable information that can be brought to bear on the
problem is a description of how the phenomenon was generated. In
certain situations, the physical and chemical processes that generated
the data set might be known in sufficient detail so that an accurate
description of the entire profile can be made from only a few sample
values. In such situations, a deterministic model is appropriate.
For example, the sample data set in Figure 5.2.1 consists of seven
locations and seven v values. By itself, this sample data set provides
virtually no information about the entire profile of v. All one knows
from the samples is the value of v at seven particular locations.
Estimation of the values at unknown locations demands that one
must bring in additional information or make some assumptions.
Imagine that the seven sample data were measurements of the
height of a bouncing ball. Knowledge of the physics of the problem
and the horizontal velocity of the ball would allow one to calculate
the trajectory of this ball shown in Figure 5.2.2. While this trajectory
depends on certain simplifying assumptions, and is therefore
somewhat idealized, it still captures the overall characteristics of
a bouncing ball and serves as a very good estimate of the height
at unsampled locations. In this particular example, one relies very
heavily on the deterministic model used. In fact, the same estimated
profile could have been calculated with a smaller data set. The
deterministic model allows reasonable extrapolation beyond the
available sampling.

From this example, it is clear that deterministic modeling is


possible only if the context of the data values is well understood.
The data values, by themselves, do not reveal what the appropriate
model should be.

Figure 5.2.1 Sample Points on a Profile to be Estimated

Figure 5.2.2 A Deterministic Model Curve


Probabilistic Models
Very few earth science processes are understood in sufficient
detail to permit the application of deterministic models. Though
one does not know the physics and chemistry of many fundamental
processes, the variables of interest in earth science data sets are
typically the end result of a vast number of processes whose
complex interactions cannot be described quantitatively. For the vast

majority of earth science data sets, one is forced to admit that there is
some uncertainty about how the phenomenon behaves between the
sample locations. Probabilistic random function models recognize
this fundamental uncertainty and provide tools for estimating values
at unknown locations once some assumptions about the statistical
characteristics of the phenomenon are made.
In a probabilistic model, the available sample data are viewed as
the result of some random process. From the outset, it seems like
this model conflicts with reality. The processes that create an ore
deposit are certainly extremely complicated, and our understanding
of them may be so poor that their complexity appears as random
behavior. However, this does not mean that they are random. It
simply means that one does not know enough about that particular
process. Although the word random often connotes unpredictable,
one should view the sample data as the outcome of some random
process. This will help the ore reserve practitioners with the problem
of predicting unknown values.
It is possible in practice to define a random process that might
have conceivably generated any sample data set. The application
of the most commonly used geostatistical estimation procedures,
however, does not require a complete definition of the random
process. It is sufficient to specify only certain parameters of the
random process.
With any estimation procedure, whether deterministic or
probabilistic, one inevitably wants to know how good the estimates
are. Without an exhaustive data set against which one can check the
estimates, the judgment of their goodness is largely qualitative and
depends to a large extent on the appropriateness of the underlying
model. As the conceptualization of the phenomenon that allows one
to predict what is happening at locations where there are no samples,
models are neither right nor wrong. Without additional data, no
proof of their validity is possible. They can, however, be judged
as appropriate or inappropriate. Such a judgment, which must
take into account the goals of the study and whatever qualitative
information is available, will benefit considerably from a clear
statement of the model.
Parameters of Random Functions
We can calculate several parameters that describe interesting
features of a random function. If the random function is stationary,
then the expected value and the variance can be used to summarize
the univariate behavior of the set of random variables. For
bivariate behavior of a stationary random function, the covariance,
correlogram or variogram can be used. These three parameters are
related by a few simple expressions. They provide exactly the same
information in a slightly different form. The correlogram and the
covariance have the same shape. But the correlogram is scaled so that
its maximum value is 1. The variogram also has the same shape as the
covariance function except that it is inverted. While the covariance
starts from a maximum of σ² at zero distance and decreases to 0, the
variogram starts at 0 and increases to a maximum of σ².

By summarizing the joint distribution of pairs as a function of


distance, the variogram, covariance, or correlogram provide a
measurement of the spatial continuity of the random function.
Practical Use of Random Functions
A random function is a purely conceptual model that we choose
to use because of the lack of an accurate deterministic model. Since
it is a conceptual model, we are responsible for defining the random
process that might have created the observed sample values.
In practice we usually adopt a stationary random function as
the model and specify only its covariance or variogram because
the complete definition of the probabilistic mechanism is very difficult
even in one dimension. Furthermore, for many of the problems
we typically encounter, we do not need to know it. Therefore, it is
possible to tackle many estimation problems once we determine a
few parameters.
The choice of a variogram or covariance model is an important step
in a geostatistical estimation procedure as it directly implies a certain
pattern of spatial continuity. If the mineralization is continuous, then
the estimates based on only the closest samples may be very reliable.
On the other hand, if it is erratic, then we may need to use many
more sample data beyond the closest ones to get good estimates.


Declustering
There are two declustering methods that are generally applicable
to any sample data set. These methods are the polygonal and the
cell declustering methods. In both methods, a weighted linear
combination of all available sample values is used to estimate the
global mean. By assigning different weights to the available samples,
one can effectively decluster the data set.
Polygonal Declustering
In this method, each sample in the data set receives a weight
equal to the area of its polygon of influence. Figure 6.1.1 shows the polygons of
influence of a sample data set. The perpendicular bisectors between
a sample and its neighbors form the boundaries of the polygon of
influence. However, the edges of the global area require special
treatment. A sample located near the edge of the area of interest
may not be completely surrounded by other samples and the
perpendicular bisectors with its neighbors may not form a closed
polygon. One solution is to choose a natural limit, such as a property
boundary or a geologic contact. This can then be used to close the
border polygons. An alternative, in situations where a natural
boundary is not easy to define, is to limit the distance from a sample
to any edge of its polygon of influence. This has the effect of closing
the polygon with the arc of a circle.
By using the areas of these polygons of influence as weights in
the weighted linear combination, one can accomplish the necessary
declustering. Clustered samples receive small weights corresponding
to their small polygons of influence. On the other hand, samples with
large polygons can be thought of as being representative of a larger
area and therefore entitled to a larger weight.

Figure 6.1.1 Sample Map of Polygons of Influence


Cell Declustering


In this method, the entire area is divided into rectangular regions
called cells. Each sample receives a weight inversely proportional
to the number of samples that fall within the same cell. Clustered
samples will tend to receive lower weights with this method
because the cells in which they are located will also contain several
other samples.
Figure 6.2.1 shows a grid of such cells superimposed on a number
of clustered samples. The dashed lines show the boundaries of the
cells. Each of the two uppermost cells contains only one sample,
so both of these samples receive a weight of 1. The lower left cell
contains two samples, both of which receive a weight of 1/2. The
lower right cell contains eight samples, each of which receives a
weight of 1/8.
Since all samples within a particular cell receive equal weights
and all cells receive a total weight of 1, the cell declustering method
can be viewed as a two step procedure. First, the samples are used
to calculate the mean value within moving windows, then these
moving window means are used to calculate the mean of the global area.
Guidelines For Choosing Cell Size
The estimate one gets from the cell declustering method will depend
on the size of the cells specified. If the cells are very small, then most
samples will fall into cells of their own and will therefore receive equal
weights of 1. If the cells are too large, many samples will fall into the
same cell, thereby causing artificial declustering of samples.
If there is an underlying pseudo regular grid, then the spacing of
this grid usually provides a good cell size. If the sampling pattern
does not suggest a natural cell size, a common practice is to try several
cell sizes and pick the one that gives the lowest estimate of the global
mean. This is only appropriate if clustered sampling is exclusively in
the areas with high grade values. In such cases, which are common
in practice, the clustering of the samples will tend to increase the
estimate of the mean. Therefore, choosing the cell size that produces
the lowest estimate can be a proper approach. However, if the data are
known to be clustered in low valued areas, then one should choose a
cell size that yields the highest declustered mean.
A sample output from a cell declustering program is given in
Figure 6.2.2.


Figure 6.2.1 An Example of Cell Declustering

Figure 6.2.2 Sample Output From Cell Declustering Program


Declustered Global Mean


The estimated global mean from declustering the sample data,
DGM, is given by the following equation:
DGM = ∑(wi · vi) / ∑wi, i = 1,...,n (6.3.1)
where n is the number of samples, wi are the declustering weights
assigned to each sample, and vi are the sample values. The
denominator acts as a factor to standardize the weights so that they
add up to 1.
For the polygonal approach, ∑wi is equal to the total area of
influence of all the polygons. For the cell declustering approach, ∑wi
is equal to the total number of occupied cells since the weights of the
samples in each cell add up to 1.
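A minimal sketch of cell declustering combined with Equation 6.3.1 (the cell size and the data are illustrative assumptions):

import numpy as np

def declustered_mean(x, y, v, cell_size):
    # Each sample is weighted by 1 / (number of samples in its cell),
    # and the global mean is the standardized weighted average (Eq. 6.3.1).
    x, y, v = map(np.asarray, (x, y, v))
    cells = list(zip((x // cell_size).astype(int), (y // cell_size).astype(int)))
    counts = {c: cells.count(c) for c in set(cells)}
    w = np.array([1.0 / counts[c] for c in cells])   # declustering weights
    return (w * v).sum() / w.sum()

# Clustered high-grade samples in one corner pull the naive mean up;
# the declustered mean discounts them:
x = [1, 2, 2, 3, 50, 90]
y = [1, 1, 2, 2, 60, 80]
v = [2.0, 2.2, 2.1, 2.3, 0.5, 0.4]
print(np.mean(v), declustered_mean(x, y, v, cell_size=10.0))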
The polygonal method has the advantage over the cell
declustering method of producing a unique estimate. In situations
where the sampling does not justify choosing an appropriate cell
size, the cell declustering method will not be very useful.


Ordinary Kriging


Ordinary kriging is an estimator designed primarily for the local
estimation of block grades as a linear combination of the available
data in or near the block, such that the estimate is unbiased and
has minimum variance. It is a method that is often associated
with the acronym B.L.U.E. for best linear unbiased estimator.
Ordinary kriging is linear because its estimates are weighted linear
combinations of the available data; it is unbiased since the sum of
the weights adds up to 1; it is best because it aims at minimizing the
variance of errors.
The conventional estimation methods, such as the inverse distance
weighting method, are also linear and theoretically unbiased.
Therefore, the distinguishing feature of ordinary kriging from the
conventional linear estimation methods is its aim of minimizing the
error variance.
Kriging Estimator
The kriging estimator is a linear estimator of the following form:
Z* = Σ λi Z(xi) i = 1,...,n (7.1.1)
where Z* is the estimate of the grade of a block or a point, Z(xi) refers
to sample grade, λi is the corresponding weight assigned to Z(xi),
and n is the number of samples.
The desired attribute of minimum estimation variance can be
achieved by minimizing the variance of the error of the estimator in
Equation 7.1.1, subject to the constraint that the sum of the weights
must be equal to 1. In other words, the weighting process of kriging
is equivalent to solving a constrained optimization problem where the
objective function is the estimation variance and the single constraint
is as given below:
Minimize σ² = F(λ1, λ2, ..., λn)
Subject to Σ λi = 1 (7.1.2)
This constrained optimization problem can be readily solved by the
method of Lagrange multipliers.
Kriging System
Ordinary kriging can be performed for estimation of a point or a
block. The linear system of equations for both cases is very similar.
Point Kriging
The point kriging system of equations in matrix form can be
written in the following form:
C · λ = D (7.2.1)
The matrix C consists of the covariance values Cij between the
random variables Vi and Vj at the sample locations. The vector D
consists of the covariance values Ci0 between the random variables
Vi at the sample locations and the random variable V0 at the location
where an estimate is needed. The vector λ consists of the kriging
weights and the Lagrange multiplier. It should be noted that the
random variables Vi, Vj, and V0 are the models of the phenomenon
under study, and these are parameters of a random function.
Block Kriging
The only difference between block kriging and point kriging is that the
estimated point is replaced by a block. Point-to-block correlation
is the average correlation between sampled point i and all points
within the block. In practice, a regular grid of points within the
block is used. Consequently, the matrix equation includes “point-to-
block” correlations.
The block kriging system is similar to the point kriging system
given in Equation 7.2.1 above. In point kriging, the covariance matrix
D consists of point-to-point covariances. In block kriging, it consists
of block-to-point covariances. The block kriging system can therefore
be written as follows:
C · λ = D (7.2.2)
The covariance value CiA is no longer a point-to-point covariance
like Ci0, but the average covariance between a particular sample and
all of the points within A:
CiA = 1/A ∑ Cij (7.2.3)
In practice, the block A is discretized using a number of points in x, y,
and z directions to approximate CiA.
Kriging Estimation Variance
For each block or point kriged, a kriging variance is calculated.
The block kriging variance is given by:
σ²OK = CAA - [∑(λi · CiA) + μ] (7.2.4)
The value CAA is the average covariance between pairs of locations
within A. In practice, this average block-to-block covariance is also
approximated by discretizing the area A into several points. It is
important to use the same discretization for the calculation of point-
to-block covariances in D in Equation 7.2.2. If one uses different
discretizations for the two calculations, there is a risk of getting
negative error variances from Equation 7.2.4.
For the point kriging variance, CAA in Equation 7.2.4 is replaced by
the variance of the point samples, or simply by the sill value of the
variogram, and CiA is replaced by Ci0.

Kriging variance does not depend directly on the data. It depends
on the data configuration. Since it is data value independent, the
kriging variance only represents the average reliability of the data
configuration throughout the deposit. It does not provide the
confidence interval for the mean unless one makes an assumption
that the estimation errors are normally distributed with mean zero.
However, if the data distribution is highly skewed, the errors are
definitely not normal because one makes larger errors in estimating a
higher-grade block than a low-grade block. Therefore, the reliability
should be data value dependent, rather than data value independent.
For a fixed sampling size, different sampling patterns can produce
significantly different estimation variances. In two dimensions,
regular patterns are usually at the top of the efficiency scale in terms
of achieving a given estimation variance with the minimum number
of data, while clustered sampling is the most inefficient.
Block Discretization
When using the block kriging approach, one has to decide how to
discretize the local area or block being estimated. The grid of
discretizing points should always be regular. However, spacing
between points can be made larger in one direction than the other if
the spatial continuity is anisotropic. Figure 7.2.1 shows an example
of regularly spaced points to discretize a block.
If one chooses to use fewer discretizing points, computation time
for kriging will be faster. This computational efficiency must be
weighed against the desire for accuracy, which calls for as many
points as possible.
It has been shown in practice that, in general, significant
differences may occur in the estimates using grids containing less
than 16 discretizing points. With more than 16 points, however, the
estimates are found to be very similar.
The following points should be considered when deciding on how
many points to use to discretize a block:
1. Range of influence of the variogram used in kriging.
2. Size of the blocks with respect to this range.
3. Horizontal and vertical anisotropy ratios.
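A minimal sketch of a regular block discretization and the resulting average sample-to-block covariance of Equation 7.2.3 (the block geometry, the covariance model, and the sample location are illustrative):

import numpy as np

def discretize_block(xmin, ymin, size_x, size_y, nx, ny):
    # Regularly spaced points, offset half a spacing from the block edges.
    xs = xmin + (np.arange(nx) + 0.5) * size_x / nx
    ys = ymin + (np.arange(ny) + 0.5) * size_y / ny
    gx, gy = np.meshgrid(xs, ys)
    return np.column_stack([gx.ravel(), gy.ravel()])

def cov(h, c=0.9, a=100.0):
    return c * np.exp(-h / a)

pts = discretize_block(0.0, 0.0, 20.0, 20.0, nx=4, ny=4)   # 16 points
sample = np.array([35.0, 10.0])
C_iA = cov(np.linalg.norm(pts - sample, axis=1)).mean()    # Eq. 7.2.3
print(len(pts), C_iA)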

Figure 7.2.1.
Block
discretization
to represent
the block with
points at regular
intervals.

Properties of Kriging


There are many properties associated with ordinary kriging
estimation. Some are listed below:
• Kriging has a built-in declustering capability for data during
estimation. Therefore, it is not sensitive to preferential
sampling in specific areas. This is very useful especially
when the data used to estimate are clustered and irregularly
spaced.
• Kriging is conditionally unbiased.
• Kriging is an exact estimator. In other words, kriging will
estimate all the known points exactly. There is no error.
• Kriging calculates the kriging variance for each block.
It should be noted that the kriging variance is only a ranking index
of data configurations. Since the kriging variance does not depend
on the data values, it should not be used to select a variogram
model or a kriging implementation; nor should it be used as the sole
criterion to determine the location of additional samples.
• Kriging tends to screen out the influence of some samples if
they are directly behind the nearby samples.
The practical consequence of this property is that some samples
may have negative kriging weights. On the contrary, conventional
linear estimation methods, such as inverse distance weighting, will
never produce negative weights. The disadvantage of negative
weights is that they also create the possibility of negative estimates
if a particularly high sample value is associated with a negative
weight. When ordinary kriging produces estimates that are negative,
one can be justified in setting such estimates to zero if the variable
being estimated must be positive.
It is worth noting that the use of a variogram with a parabolic
behavior near the origin will cause the screen effect to be much more
pronounced, often producing negative weights even larger than those
generated by the variogram models that are linear near the origin.
• The average grade of the small blocks from kriging is the
same as the kriged grade of the combined block (assuming
the same data is used for both cases). However, the kriging
variances of small blocks are not readily amenable to
addition.
• Kriging tends to give the estimated block grades that are less
variable than the actual grades. This smoothing effect is also
true for most estimators.
The variance of block grades is approximately related to the
variance of estimated values and the kriging variance by the
following relationship:
σ²z = σ²z* + σ²k - σ²m (7.3.1)
The last term, σ²m, is the estimation variance of the average grade
of the entire deposit and is usually negligible. This relationship


can be used to gain insight into the effects of both block size and
drillhole spacing on the quality of ore deposit model. It can also be

Page - 200 MineSight for Modelers—Geostatistics


Proprietary Information of Mintec, Inc. Practical Geostatistics for Earth Sciences—Ordinary Kriging

used in another form to assess the quality of estimation. For example, one can define the ratio σ²z*/σ²z as a smoothing factor that shows to what degree kriging reproduces reality in terms of block variability.
Assumptions Made in Kriging
The following assumptions are made in ordinary kriging.
1. No drift is present in the data.
2. Both variance and covariance exist and are finite.
3. The mean grade of the deposit is unknown.
Desirable Properties of Estimators
When comparing different estimators of a random variable,
one should check into certain properties of these estimators. An
estimator without bias is the most desirable. This condition, which is commonly referred to as unbiasedness, can be expressed as follows:
E (Z - Z*) = 0 (7.4.1)
where Z* is the estimate and Z is the true value of the random variable being estimated. This condition means that the expected value of the error (Z - Z*) is zero. In other words, on average the estimator predicts the correct value.
The second desirable property of an estimator is that the variance of
the errors should be small. This condition is expressed as follows:
Var (Z - Z*) = E (Z - Z*)2 = small (7.4.2)
Another desirable property of an estimator is its robustness. An
estimator that works well with one data set should also work well
with many different data sets. If that is the case, then the estimator is called robust. An example is the ordinary kriging estimator, which is considered to be robust.
Effect of Variogram Model Parameters
The exercise of fitting a model function to the sample variogram
involves important choices and subjectivity on the part of
the practitioner. The sample variogram does not provide any
information for distances shorter than the minimum spacing
between the sample data. Unless the sampling includes duplicates
at the same location, the nugget effect and the behavior of the
variogram near the origin cannot be determined easily from the
sample variogram. In other instances, the shape of the variogram may not be defined clearly enough to determine the range accurately.
Yet these parameters may have considerable effect on the ordinary
kriging weights and on the resulting estimate.
Effect of Scale
Two variogram models that differ only in their scale will produce exactly the same kriging weights. Therefore the ordinary kriging
estimate will not be affected. For example, a spherical model with
nugget=0.001 and sill=0.01 will give the same estimate for a block as
another spherical model with nugget=0.1 and sill=1, as long as the
range of the variogram and data set used remain the same. In both
cases, the nugget to sill ratios are kept constant.


Rescaling a variogram, however, will affect the ordinary kriging variance, which will increase by the same factor that was used to scale the variogram.
The fact that the variogram can be rescaled by any constant
without changing the estimate enables one to use the relative
variogram without fear of altering the estimate. In the case where the
local relative variograms differ one from another by only a rescaling
factor, only the kriging variance will be affected. Each one of the
local relative variograms will provide an identical kriging estimate.
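The scale invariance described above is easy to verify numerically. The minimal sketch below (Python with NumPy; the helper names are ours, not MineSight's) kriges one point twice with two spherical models that differ only by a factor of 100, and shows that the weights are identical while the kriging variance scales by that factor.

```python
import numpy as np

def spherical(h, nugget, sill, rng):
    """Spherical variogram: 0 at h = 0, nugget jump, reaches the sill at the range."""
    h = np.asarray(h, dtype=float)
    g = np.where(h < rng,
                 nugget + (sill - nugget) * (1.5 * h / rng - 0.5 * (h / rng) ** 3),
                 sill)
    return np.where(h > 0.0, g, 0.0)

def ok_point(samples_xy, target_xy, gamma):
    """Ordinary kriging of one point: solve [G 1; 1' 0][w; mu] = [g0; 1]."""
    n = len(samples_xy)
    d = np.linalg.norm(samples_xy[:, None, :] - samples_xy[None, :, :], axis=2)
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = gamma(d)
    A[:n, n] = A[n, :n] = 1.0                       # unbiasedness constraint
    g0 = gamma(np.linalg.norm(samples_xy - target_xy, axis=1))
    sol = np.linalg.solve(A, np.append(g0, 1.0))
    w, mu = sol[:n], sol[n]
    return w, float(w @ g0 + mu)                    # weights, kriging variance

xy = np.array([[0.0, 0.0], [30.0, 10.0], [15.0, 40.0], [60.0, 60.0]])
p0 = np.array([20.0, 20.0])
w1, v1 = ok_point(xy, p0, lambda h: spherical(h, 0.001, 0.01, 100.0))
w2, v2 = ok_point(xy, p0, lambda h: spherical(h, 0.1, 1.0, 100.0))
print(np.allclose(w1, w2))  # True: same weights, hence the same estimate
print(v2 / v1)              # ~100: the variance rescales with the variogram
```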
Effect of Shape
Different variogram models will produce different kriging estimates for a block even if they have identical nugget, sill and range parameters. This is because the shape of the model has an effect on the weights assigned to the data points used for the block. For example, a parabolic behavior near the origin is indicative of a very continuous phenomenon, so the estimation procedure gives more weight to the closest samples.
Nugget Effect
For the variograms that differ only in their nugget effect, the
lower the nugget effect, the higher will be the ordinary kriging
weights assigned to the samples closer to the block being estimated.
Increasing the nugget effect makes the estimation procedure more like a simple averaging of the available data. The other
noticeable result of using a higher nugget effect is that the ordinary
kriging variance is higher.
A pure nugget effect variogram model indicates the lack of
spatial correlation; the data value at any particular location bears
no similarity even to very nearby data values. In terms of statistical
distance, none of the samples is any closer to the point being
estimated than any other. The result is that for ordinary kriging with
a pure nugget effect model of spatial continuity, all weights are equal
to 1/n.
Effect of Range
The change of the range has a relatively minor effect on
the ordinary kriging weights. Even so, these relatively minor
adjustments in the weights can cause a noticeable change in the
estimate. If the range is increased without changing any other
parameter, the ordinary kriging variance will be lower since this
will make the samples appear to be closer to the block, in terms of
statistical distance, than they originally were.
If the range becomes very small, all the samples appear to be
equally far away from the block being estimated and from each
other, with the result similar to that of a pure nugget effect: the
weights all become 1/n and the estimation procedure becomes a
simple average of the available sample data.
Effect of Anisotropy
With the use of the anisotropic variogram model in ordinary
kriging, more weight will be given to the samples in the major


direction of continuity. For example, if the horizontal anisotropy


ratio (major axis to minor axis range) in a deposit is 2 to 1, then samples along the major axis will appear twice as close, in terms of statistical distance, as samples along the minor axis.
This will have an effect of increasing the weights of samples along
the major axis direction of continuity even though other samples in
the opposite direction may be closer to the block in terms of true or
geometric distance.
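A minimal sketch of this statistical distance in two dimensions follows (Python; the rotation and range conventions used here are illustrative assumptions, since different packages define the axes differently). Separations are scaled by the range along each anisotropy axis before the norm is taken.

```python
import numpy as np

def statistical_distance(vec, ranges, rotation_deg=0.0):
    """Anisotropic (statistical) distance of a 2-D separation vector:
    rotate into the major/minor axis frame, then scale each axis by its range."""
    t = np.radians(rotation_deg)
    rot = np.array([[np.cos(t), np.sin(t)], [-np.sin(t), np.cos(t)]])
    local = rot @ np.asarray(vec, dtype=float)
    return float(np.linalg.norm(local / np.asarray(ranges, dtype=float)))

# 2:1 anisotropy with the major axis east-west: a sample 100 m east is as
# "close" statistically as a sample only 50 m north.
print(statistical_distance([100.0, 0.0], ranges=(200.0, 100.0)))  # 0.5
print(statistical_distance([0.0, 50.0], ranges=(200.0, 100.0)))   # 0.5
```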
In many data sets, the direction of maximum continuity may
not be the same throughout the area of interest. There may be
considerable local fluctuations in the direction and the degree of
anisotropy. In such situations, the sample variogram may appear
isotropic only because it may be unable to reveal the undulating
character of the anisotropy. If the qualitative information offers a
way to identify the direction and degree of the anisotropy, then the
estimation procedure will benefit greatly from a decision to base the
choice of the spatial continuity model on qualitative evidence rather
than the quantitative evidence of the sample variogram.
The success of ordinary kriging is due to its use of a customized
statistical distance rather than a geometric distance and to its attempt
to decluster the available sample data. Its use of a spatial continuity
model that describes the statistical distance between points gives
it considerable flexibility and an important ability to customize the
estimation procedure to qualitative information.
Effect of Search Strategy
The quality of estimates produced by ordinary kriging depends on
the time taken to choose a representative model of spatial continuity
as well as an appropriate search strategy. Determining what criteria
to use in order to identify the data points that should contribute to
the estimation of a particular point or block is a very critical step
in grade interpolation. This is the selection of a search strategy that
is appropriate for the method used. Here, considerable divergence
exists in practice, involving the use of fixed numbers, observations
within a specified radius, quadrant and octant searches, elliptical
or ellipsoidal searches with anisotropic data, and so on. Since the
varying of the parameters may affect the outcome of the estimation
considerably, the definition of the search strategy is therefore one of
the most consequential steps of any estimation procedure.
For estimation methods that can handle any number of nearby
samples, the most common approach to choosing the samples that
contribute to the estimation is to define a search neighborhood within
which a specified number of samples is used. If there is anisotropy,
the search neighborhood is usually an ellipse that is centered on the
point or the center of the block being estimated. The orientation of
this ellipse is dictated by the pattern of spatial continuity of the data
based on the variogram analysis. Obviously, the ellipse is oriented
with its major axis parallel to the direction of maximum continuity. If
there is no evident anisotropy, the search ellipse becomes a circle and
the question of orientation is no longer relevant.


A good search strategy should include at least a ring of drill holes
with enough samples around the blocks to be estimated. However,
it should also have provisions for not extending the grades of the
peripheral holes to the areas that have no data.
Since most drilling is vertically oriented, increasing the vertical
search distance has more impact on the number of samples available
for a given block, than increasing the horizontal search distance.
If the vertical search is considerable for a given block, then there
might be a problem of having a large portion of the samples for the
block coming from the nearest hole, thus carrying the most weight.
This may cause excessive smoothing in reserve estimates. If the circumstances warrant a large vertical search, then one solution could be to limit the number of samples used from each individual drill hole.
Octant or Quadrant Search
It is common, especially in precious metal deposits, to have
denser drilling in highly mineralized areas of the deposit. When
such clustering of the holes is present, it might be necessary to
have a balanced representation of the samples in all directions in
space, rather than taking the nearest n samples for the blocks to be
estimated. This can be achieved by either declustering the samples
before the estimation or by a simple octant or quadrant search in
which the number of samples in each octant or quadrant is limited to
a specified number during the interpolation.
The use of an octant or quadrant search usually improves the results of the inverse distance weighting method more than it improves the results of ordinary kriging, which already performs internal declustering through the well-known "screening" of data. An octant or quadrant search accomplishes some declustering, and its effect is therefore more noticeable on a method that does not decluster by itself.
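One plausible implementation of an octant-limited search is sketched below (Python; the parameter names and defaults are hypothetical). A quadrant search is the same idea restricted to two dimensions.

```python
import numpy as np

def octant_search(samples, block_center, max_per_octant=2, max_total=8):
    """Select the nearest samples while limiting how many come from each
    octant. `samples` is an (n, 3) array; returns the selected row indices."""
    d = np.asarray(samples, dtype=float) - np.asarray(block_center, dtype=float)
    octant = (d[:, 0] > 0) + 2 * (d[:, 1] > 0) + 4 * (d[:, 2] > 0)
    taken, counts = [], np.zeros(8, dtype=int)
    for i in np.argsort(np.linalg.norm(d, axis=1)):   # nearest first
        if counts[octant[i]] < max_per_octant:
            taken.append(int(i))
            counts[octant[i]] += 1
        if len(taken) == max_total:
            break
    return taken
```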
Relevance of Stationary Models
A random function model is said to be first order stationary if
the mean of the probability distribution of each random variable is
the same, as explained in Section 5. Because this assumption is used to develop the unbiasedness condition, the guarantee of unbiasedness when the weights sum to one is limited to first order stationary models.
Therefore, an easily overlooked assumption in every estimate is that the sample values used in the weighted linear combination are somehow relevant, and that they belong to the same group or
population, as the point being estimated. Deciding which samples
are relevant for the estimation of a particular point or a block may be
more important than the choice of an estimation method.
The decision to view a particular sample data configuration as an
outcome of a stationary random function model is strongly linked
to the decision that these samples can be grouped together. The
cost of using an inappropriate model is that statistical properties
of the actual estimates may be very different from their model
counterparts. The use of weighted linear combinations whose weights sum to one does not guarantee that the actual bias is zero. The actual bias will depend on several factors, such as the appropriateness of viewing the sample data configuration as an outcome of a stationary random function.
All linear estimation methods implicitly assume a first order
stationary model through their use of the unbiasedness condition.
Therefore, it is not only ordinary kriging that requires first order stationarity. If estimation is performed blindly, with no thought
given to the relevance of the samples within the search strategy, the
methods that make use of more samples may produce worse results
than the methods that make use of few nearby samples.

Other Kriging Techniques


Many other estimation techniques were developed over the years
to address varying problems and challenges facing the practitioners.
Although some of these methods are better known or more practical
than others, each one provides another tool that can be suited for a
particular application (Arik, 2002b).
Simple Kriging
The simple kriging estimator is a linear estimator of the
following form:
Z*sk = Σ λi [Z(xi) - m] + m,  i = 1,...,n (8.1.1)
where Z*sk is the estimate of the grade of a block or a point, Z(xi) refers to the sample grade, λi is the corresponding simple kriging weight assigned to Z(xi), n is the number of samples, and m = E{Z(x)} is the location-dependent expected value of Z(x).
Thus the simple kriging algorithm requires prior knowledge of
the mean m. Stationary simple kriging does not adapt to local trends
in the data since it relies on the mean value m, assumed known and
constant throughout the area. Consequently, simple kriging is rarely
used for mapping z-values. Instead, it is the more robust ordinary
kriging algorithm that is used.
Another difference between simple kriging and ordinary kriging is that there is no constraint on the simple kriging weights. In other words, they do not have to add up to 1 as they do in ordinary kriging.
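A minimal simple kriging sketch follows (Python; the covariance values are assumed to have been evaluated already from a covariance model, and the helper name is ours). Note the contrast with ordinary kriging: the system C w = c0 is solved with no Lagrange constraint, and the known mean m implicitly carries the remaining weight 1 - Σλi.

```python
import numpy as np

def simple_kriging(C, c0, samples, mean):
    """Simple kriging estimate (Eq. 8.1.1): weights solve C w = c0 with no
    constraint; the known mean carries the complement of the weights."""
    w = np.linalg.solve(C, c0)
    return float(w @ (samples - mean) + mean), w

C = np.array([[1.0, 0.3], [0.3, 1.0]])   # sample-to-sample covariances
c0 = np.array([0.6, 0.4])                # sample-to-target covariances
est, w = simple_kriging(C, c0, samples=np.array([1.2, 0.8]), mean=1.0)
print(w, w.sum())  # the weights need not sum to 1
print(est)
```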
Cokriging
Cokriging is the estimation of one variable based on measured
values of two or more variables. This procedure can be regarded as a
generalization of kriging in the sense that, at every location, there is
a vector of many variables instead of a single variable. The variable
to be estimated is denoted as the target or the primary variable
while all other variables are categorized as auxiliary or secondary
variables. The secondary variable is spatially cross-correlated with
the primary variable.
The commonly used ordinary kriging utilizes only the spatial
correlation between samples of a single variable to obtain the best
linear unbiased estimate of this variable. In addition to this feature,
cokriging also utilizes the cross-correlations between several
variables to further improve the estimate. Therefore, cokriging can
be defined as a method for estimation that minimizes the variance
of the estimation error by exploiting the cross-correlation between
two or more variables. The estimates are derived using secondary
variables as well as the primary variable.
Reasons for Cokriging
Cokriging is especially advantageous in cases where the
secondary variable values are more abundant than the primary
variable. The precision of the estimation may then be improved by
considering the spatial correlations between the primary variable

and the better-sampled secondary variables. Therefore, having


extensive data from blastholes as the secondary variable with the
widely spaced exploration data as the primary variable is an ideal
case for cokriging.
Cokriging Equation
The co-estimation of the primary variable is calculated as:
Z* = Σ λi Z(xi) + Σ wj Y(xj),  i = 1,...,n,  j = 1,...,m (8.2.1)
where Z* is the estimate of the grade of a block, Z(xi) refers to the primary variable, λi is the corresponding weight assigned to Z(xi), and n is the number of primary-variable samples. Similarly, Y(xj) refers to the secondary variable, wj is the corresponding weight assigned to Y(xj), and m is the number of secondary-variable samples.
Other than tedious inference and matrix notations, cokriging is
the same as kriging. It also branches out to several flavors like the
ordinary cokriging, simple cokriging, and indicator cokriging. The
traditional ordinary cokriging system of equations for two variables,
exploration drillhole data being the primary variable and the
blasthole data being the secondary variable, is given in Table 8.2.1.
Table 8.2.1 Cokriging system of equations for two variables; primary
variable is the drillhole data, secondary variable is the
blasthole data.

[Cov{didi}]  [Cov{dibj}]  [1]  [0]       [λi]        [Cov{x0di}]
[Cov{dibj}]  [Cov{bjbj}]  [0]  [1]   x   [δj]   =    [Cov{x0bj}]
[    1    ]  [    0    ]   0    0        μd              1
[    0    ]  [    1    ]   0    0        μb              0

[Cov{didi}] = drill hole data (dhs) covariance matrix, i=1,n

[Cov{bjbj}] = blast hole data (bhs) covariance matrix, j=1,m

[Cov{dibj}] = cross-covariance matrix for dhs and bhs

[Cov{x0di}] = drill hole data to block covariances

[Cov{x0bj}] = blast hole data to block covariances

[λi] = Weights for drill hole data

[δj] = Weights for blast hole data

μd and μb = Lagrange multipliers

Steps Required for Cokriging


Since cokriging uses multiple variables, the amount of work
involved prior to cokriging itself is a function of the number of
variables used. For cokriging of drill and blasthole data of the same
item, the following steps are required.
• The regularization of blasthole data into a specified block size. This block size could be the same as the size of the model blocks to be valued, or a discrete sub-division of such blocks. One thus establishes a new database of average blasthole block values.
• Variogram analysis of drillhole data.
• Variogram analysis of blasthole data.
• Cross-variogram analysis between drill- and blasthole data.
This is done by pairing each drill hole value with all blasthole
values.
• Selection of search and interpolation parameters.
• Cokriging.
Unless the primary variable (the one being estimated) is under-sampled with respect to the secondary variable, the weights given to the secondary data tend to be small, and the reduction in estimation variance brought by cokriging is not worth the additional inference and modeling effort.
Outlier Restricted Kriging
Outlier Restricted Kriging (ORK) is a modified version of ordinary kriging. It requires two steps. The first step is the determination of the outlier cutoff. An indicator value of 1 is assigned to the data equal to or greater than this cutoff. All other data have indicators of 0. Based on these indicators, the probability of occurrence of outliers is assigned to each block. This step is very similar to indicator kriging.
The second step is handled internally by the program that solves
a modified kriging matrix to allocate appropriate weights to sample
data based on the assigned probabilities in each block (Arik, 1992).
The ORK method can be useful in deposits with skewed grade distributions. The method is almost as simple to use as ordinary kriging, with only the additional step of assigning outlier probabilities. Its main advantage is that it controls the smearing of outlier high grades into their surroundings. This control is achieved by assigning the probability of occurrence of the outliers to the blocks prior to kriging of the grades, using a simple interpolation of the indicators. When ORK is run, it accesses this probability information and solves the kriging matrix to assign the weights to the sample data accordingly.
Nearest Neighbor Kriging
Nearest Neighbor Kriging (NNK) is also a modified version of
the ordinary kriging where more emphasis is given to the nearest
sample using a variance correction factor (Arik, 1998).

The NNK is a method that combines the strengths of the nearest
neighbor, inverse distance weighting and the ordinary kriging
methods. It is a method where the value of the nearest neighbor
sample is emphasized in determining the value of the blocks. This
emphasis is directly proportional to the variability of the deposit. In
this method, the OK weight assigned to the nearest neighbor sample
is increased by a certain proportion. To compensate for this increase, the weights of the other samples in the neighborhood are lowered in the same proportion. Therefore, the sum of the resulting NNK weights is preserved at one, satisfying the unbiasedness condition.
The NNK weights are then obtained by adjusting the OK weights
(wtok) as follows:
Weight of the nearest sample = wtok + (1 - wtok) * f (8.4.1)
Weights of all other samples = wtok * (1 - f)
Here f is the smoothing correction factor and has a value between 0 and 1. Thus, the NNK estimate is equal to the OK estimate at one extreme, when f=0, and is equal to the polygonal estimate at the other extreme, when f=1.
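Equation 8.4.1 amounts to a one-line adjustment of the OK weights, as the small sketch below shows (Python; the helper name is ours).

```python
import numpy as np

def nnk_weights(ok_w, nearest_idx, f):
    """Adjust OK weights toward the nearest sample (Eq. 8.4.1). f = 0
    reproduces OK; f = 1 reproduces the polygonal (nearest neighbor) estimate."""
    w = np.asarray(ok_w, dtype=float) * (1.0 - f)   # shrink every weight ...
    w[nearest_idx] = ok_w[nearest_idx] + (1.0 - ok_w[nearest_idx]) * f
    return w                                        # ... the sum is still 1

w = nnk_weights(np.array([0.5, 0.3, 0.2]), nearest_idx=0, f=0.4)
print(w, w.sum())  # [0.7 0.18 0.12] 1.0
```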
Area Influence Kriging
The ordinary kriging (OK) is not designed for grade estimations
in highly variable deposits. But practitioners keep using this method even when it may not be suited to their cases. This is mainly because OK is practical and much easier to use and understand than some advanced methods offered as alternatives. It is also robust,
especially when the kriging neighborhood is kept small. Therefore
most people are comfortable with the use of OK and its results.
The polygonal or nearest neighbor method simply assigns the value of the nearest datum to the point or block being estimated. The Area Influence Kriging (AIK) is a method
that combines some of the aspects of the polygonal and the
ordinary kriging methods. It is a technique where a sample value
is considered to be the primary starting point for the grade of the
blocks within its area of influence. The weights assigned to the
samples for a given block outside the area of influence of the nearest
sample then control the resulting grade for the block. Basically, in its
simplest form, the AIK has the following weighting scheme:
w1 = 1
∑ wj = 0,  j = 2,...,N (8.5.1)
In this scheme, w1, the weight of the nearest sample is equal to
1, and ∑ wj, the sum of the weights of all other samples is equal to
0. N is the number of samples. Therefore, the sum of the resulting
AIK weights is preserved at one, satisfying the unbiasedness condition (Arik, 2002a). Although w1 can have a value between 0 and 1, having w1 less than 0.5 is not recommended as it defeats the purpose of the algorithm.

Non-Stationary Geostatistics


Ordinary kriging is used under the stationarity assumption.
As we may recall, a variable is stationary if it behaves in the same
way throughout the whole area of consideration. This assumption
is rarely respected in practice. We always look for something
that could be stationary in order to use classical methods, such as
reducing the area of investigation (neighborhood).
Universal Kriging
An ever-increasing variance is a characteristic of non-stationarity.
It implies there is a trend or drift in the data. In such cases, a
modified ordinary kriging method called “universal kriging” may
become a useful tool. Universal kriging is in fact a kriging with a
prior trend model. Table 8.6.1 gives the universal kriging system of
equations in the case of a linear drift.
Table 8.6.1 Universal kriging system of equations in the case of a
linear drift.

Non-Linear Kriging Methods


The ordinary kriging and conventional estimation methods, such as the inverse distance weighting method, are all linear estimators. They
are appropriate for the estimation of a mean value for the blocks.
However, if the distribution of the samples is highly skewed, then
estimating a mean value for the blocks may not give realistic grade-
tonnage curves to calculate recoverable reserves because of the
smoothing that results from such estimates.
Non-linear kriging methods are designed to estimate the
distribution of grades within blocks, in order to better estimate
recoverable tonnages within each block. Some of these methods are
listed below with brief explanations:
• Multiple Indicator kriging (MIK): Kriging of indicator
transforms of the data.
• Probability kriging (PK): An advanced form of indicator kriging which uses both indicator and uniform variables, as well as the cross-covariances between the indicator and uniform variables.
• Lognormal kriging (LNK): Kriging applied to logarithms of
the data.

• Multi-Gaussian kriging (MGK): Kriging applied to


the normal score transforms of the data. It is actually a
generalization of lognormal kriging.
• Lognormal short-cut (LSC): A method which assumes a
lognormal distribution of grades in the block with a mean
equal to the ordinary kriging estimate, and the variance equal
to the estimation variance obtained, plus the variance of the
points in the block.
• Disjunctive kriging (DK): This method is kriging of specific
polynomial transforms of the data.
• Uniform Conditioning (UC): This is essentially a modified
version of disjunctive kriging. Its basic difference from
disjunctive kriging is that the mean grade of the block is
obtained from the OK to ensure the estimation is locally well
constrained. The proportions of ore within the block are
conditional to that kriged value.
Non-linear methods of estimation may be parametric or non-
parametric. Parametric methods involve assumptions about
distributions (defined by parameters). Non-parametric methods do
not involve assumptions about distributions and are sometimes also
called distribution-free methods. All non-linear methods involve
a transformation of the data. Almost all parametric methods (or at
least those in common use) involve a transformation to normality,
i.e., a parametric transformation. Non-parametric methods involve
a non-parametric transformation usually to indicator values.
However, the change of support problem generally involves some
parametric assumption.
Both indicator and probability kriging are known as non-
parametric methods because the desired estimator of the distribution
is entirely based on sample data, and not on any assumptions
about the underlying distribution (or model) of the data. On the
other hand, lognormal short-cut, lognormal, multi-gaussian and
disjunctive kriging are parametric methods. They are also classified
as non-linear geostatistics because:
• the data transformations used for estimation purposes are non-linear transforms, for example y = log x,
• the estimates (in this case the distribution) are obtained using
non-linear combination of data.
In theory, all linear methods are non-parametric methods because
we do not make any assumptions about the underlying distribution
of the data. However, these methods do not work well if the data is
highly skewed.
Why Do We Need Non-linear Geostatistics?
When the sample data are highly skewed, the calculated
variograms are often unrecognizable, thereby making it quite
difficult to apply ordinary kriging. Furthermore, if we need
local recoverable reserves (grade distribution within a block),
linear geostatistics is not meant to handle that. Also, some of the

shortcomings of linear geostatistics, such as smoothing, can be


overcome by non-linear geostatistical methods, particularly when
the underlying data are highly skewed.
In summary, non-linear methods are used to:
• overcome problems encountered with outliers
• provide “better” estimates than those provided by linear
methods
• take advantage of the properties of non-normal distributions of data and thereby provide more optimal estimates
• provide answers to non-linear problems
• provide estimates of distributions on a scale different from
that of the data (the “change of support” problem)
A particular measure to use for deciding whether to use linear or
non-linear methods for reserve estimations can be the coefficient of
variation. As a rule of thumb, if the coefficient of variation of data
is less than 0.5, one can say that the linear methods work fine. If it
is greater than 1.5 or 2, these methods will not be suitable. If the
coefficient of variation is in between 0.5 and 1.5, definite caution is
needed in using the linear estimation methods.
One of the most popular non-linear geostatistical methods is the
multiple indicator kriging (MIK). This technique will be covered in
the next section.

Multiple Indicator Kriging


The indicator kriging technique was introduced in 1983 by Andre
Journel. It can handle highly variable grade distributions, and does
not require any assumptions concerning the distribution of grades.
It estimates the local spatial distribution of grades within a block or
panel. Local recoverable reserves can then be obtained by applying
the economic cutoff to this distribution.
The basic concept of indicator kriging is very simple. Suppose
that equal weighting of N given samples is used to estimate the
probability that the grade of ore at a specified location is below a
cutoff grade. Then, the proportion of N samples that are below this
cutoff grade can be taken as the probability that grade estimated is
below this cutoff grade. If a series of cutoff grades is applied, then
a series of probabilities can be obtained. Indicator kriging obtains a
cumulative probability distribution at a given location in a similar
manner, except that it assigns different weights to surrounding
samples using the ordinary kriging technique to minimize the
estimation variance. The basis of indicator kriging technique is the
indicator function.
The Indicator Function
At each point x in the deposit, consider the following indicator
function of zc defined as:

i(x;zc) = 1, if z(x) ≤ zc;  0, otherwise (9.1.1)

where x is the location, zc is a specified cutoff value, and z(x) is the value at location x.
The indicator function at a sampled point, i(x;zc), takes the simple form shown in Figure 9.1.1. This function follows a binomial probability law and takes only two possible values, 0 and 1. Given an observed point grade z(x) at location x, there is either a 0 or 100 percent chance that this value will be less than or equal to the cutoff zc. All sample values are flagged by an indicator function: 1 for values less than or equal to zc, and 0 otherwise.
Essentially, the indicator function transforms the grade at each
sampled location into a [0,1] random variable. Indicators are
assigned to each sampled location in the deposit for a series of cutoff
grades. As the cutoff grade increases, the percentage of points below
the cutoff grade zc increases.
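The indicator transform of Equation 9.1.1 is straightforward to compute for a series of cutoffs; a small sketch follows (Python; the helper name is ours). The column means of the resulting 0/1 matrix are exactly the equal-weight proportions below each cutoff described above.

```python
import numpy as np

def indicator_transform(grades, cutoffs):
    """Indicator data i(x;zc) of Equation 9.1.1: 1 where grade <= cutoff,
    0 otherwise, for every sample (rows) and every cutoff (columns)."""
    g = np.asarray(grades, dtype=float)
    return (g[:, None] <= np.asarray(cutoffs, dtype=float)[None, :]).astype(int)

ind = indicator_transform([0.12, 0.45, 0.03, 0.80, 0.27], [0.1, 0.3, 0.5])
print(ind)               # one 0/1 column per cutoff
print(ind.mean(axis=0))  # proportions below each cutoff: [0.2 0.6 0.8]
```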





Figure 9.1.1 Indicator Function at Point x
The φ(A;zc) Function
The φ(A;zc) is a cumulative distribution function (cdf) which
is built using the information from the indicator functions. This
function is defined as the exact proportion of grades z(x) below the
cutoff zc within any area A in the deposit:
φ(A;zc) = 1/A ∫A i(x;zc) dx ∈ [0,1] (9.2.1)
For each cutoff grade zc, one point of cumulative probability
function φ(A;zc) is obtained as shown in Figure 9.2.1.
In mining practice, local recoverable reserves must be assessed
for large panels within the deposit. Local recoverable reserves of
ore grade and tonnage can be estimated by developing the φ(A;zc)
function for each panel A. The indicator data, i(x;zc), provide the
source of information which can be used to estimate point local
recoverable reserves in the same way that point grade data are
used to estimate block grades. The similarity between these two
estimation procedures is demonstrated in the following relations:
Estimate of block V: zV = 1/V ∫x∈V z(x) dx (9.2.2)
Indicator cdf for block V: φ(V;zc) = 1/V ∫x∈V i(x;zc) dx (9.2.3)
Thus any estimator used to determine the block grades from point
grade data can also be used to determine the spatial distribution
from point indicator data.


Figure 9.2.1 Proportion of Values z(x) ≤ zc within area A

Local Recovery Functions


The cumulative distribution function φ(A;zc) gives the proportion
of grades below the cutoff grade zc. Depending on the size of the
estimated panel, tonnage and quantity of metal values can be
calculated for panel A using the recovery factors as follows:
Tonnage point recovery factor in A:
t*(A;zc) = 1 - φ(A;zc) (9.3.1)
Quantity of metal recovery factor in A:
q*(A;zc) = ∫[zc,∞) u dφ(A;u) (9.3.2)
A discrete approximation of this integral is given by
q*(A;zc) = ∑ 1/2 (zj + zj-1) [ φ*(A;zj) - φ*(A;zj-1) ] j=2,...,n (9.3.3)
This approximation sums, over each cutoff grade increment, the product of the class midpoint grade and the corresponding increment of the φ*(A;zc) proportion.
The mean ore grade at cutoff zc gives the mean block grade above
the specified cutoff value.
Mean ore grade at cutoff zc:
m*(A;zc) = q*(A;zc) / t*(A;zc) (9.3.4)
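A sketch of Equations 9.3.1 through 9.3.4 on a discretized cdf is given below (Python). For simplicity it assumes the requested cutoff coincides with one of the tabulated cutoffs and that the cdf reaches 1 at the last cutoff; the class-mean discussion later in this section refines the midpoint assumption used here.

```python
import numpy as np

def recoveries(cutoffs, phi, zc):
    """Tonnage factor t*, metal factor q* and mean ore grade m* above a
    cutoff zc, from a discrete cdf phi[k] = proportion below cutoffs[k]."""
    z = np.asarray(cutoffs, dtype=float)
    p = np.asarray(phi, dtype=float)
    k = int(np.searchsorted(z, zc))         # class boundary at the cutoff
    t = 1.0 - p[k]                          # Eq. 9.3.1
    mids = 0.5 * (z[1:] + z[:-1])           # class midpoint grades
    q = float(np.sum(mids[k:] * np.diff(p)[k:]))  # Eq. 9.3.3
    return t, q, q / t                      # Eq. 9.3.4

t, q, m = recoveries([0.1, 0.2, 0.4, 0.8, 1.6], [0.30, 0.55, 0.80, 0.95, 1.0], 0.2)
print(t, q, m)  # 0.45 0.225 0.5
```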
Estimation of φ(A;zc)
Let φ(A;zc) be the proportion of grades z(x) below cutoff zc within panel A. In general this proportion is unknown, since i(x;zc) is known at only a finite number of points. From a deterministic point of view, φ(A;zc) can be approximated by the numerical approximation:
φ(A;zc) = 1/n ∑ i(xj;zc) j=1,...,n (9.4.1)
One disadvantage of this approach is that each sample is given
the same weight regardless of its location. Also, samples outside A
are not utilized, and no estimation error is provided. Therefore, the
following estimate of φ(A;zc) should be considered:
φ(A;zc) = ∑ λj i(xj;zc),  xj ∈ D,  j=1,...,N (9.4.2)
where n is the number of samples in the panel A, N is the number of
samples in search volume D, and λj are the weights assigned to the
samples. For the unbiasedness condition ∑ λj = 1 and usually N >> n.
Because of the similarity of Equation 9.4.2 to the linear estimators,
the ordinary kriging approach is used to estimate the cumulative
distribution function φ(A;zc) from the indicator data i(xj;zc). By
analogy with the ordinary kriging estimator, we use a random
function model for i(xj;zc), which will be designated by I(xj;zc).
Indicator Variography
Variograms are used to describe the spatial correlation between
grade values or any other variable in the deposit. Indicator
variograms are estimated for each cutoff grade by using the
indicator data i(xj;zc) found for that cutoff grade. The variogram for
the random function I(x;zc) is estimated by a sample variogram in the same way as the grade variogram of Z(x):

γI(h;zc) = 1/2 E [I(x+h;zc) - I(x;zc)]² (9.5.1)


The sample indicator variograms are more robust than the grade
variograms since their estimation does not call for the data values
themselves but rather their indicator values with regard to a given
cutoff zc. Figure 9.5.1 illustrates the indicator variograms for a series
of cutoff grades.

Figure 9.5.1 Indicator Variograms at Different Cutoff Grades


Median Indicator Variogram
The best defined experimental indicator variograms correspond to
cutoffs zc close to the median zm since roughly 50% of the indicator
data will be equal to 0 and the rest will be 1’s.
The median indicator variogram in discrete form is defined as
γI(h;zm) = 1/(2n) ∑ [I(xj+h;zm) - I(xj;zm)]²,  j=1,...,n (9.5.2)
The maximum sill value that an indicator variogram can have
is 0.25, which occurs when half of the samples are 0’s and the other half are 1’s. Thus
the sill values of indicator variograms increase until the median
indicator variogram is reached.
Order Relations
The indicator kriging estimator provides an unbiased estimate of
the recovered tonnage at any cutoff of interest. One disadvantage
of this method is the possibility of order relation problems. These
problems occur when the distribution function estimated by indicator kriging is decreasing (φ*(A;zk+1) < φ*(A;zk)), has negative values or values greater than 1. In short, an estimated distribution
has order relation problems if it is not a valid distribution function.
In general, these order relation problems can occur simply because
the indicator kriging system at each cutoff is solved independently.
Each indicator kriging system provides an optimal solution by
minimizing the estimation variance at each cutoff. However, since
these solutions are arrived at independently, there is no guarantee
that they will yield a valid distribution function.
There are at least two feasible methods for resolving order relation
problems. One method involves combining the simple kriging

systems for all cutoffs into one giant system and minimizing the
estimation variances. This system would contain constraints that
would force the order relations to hold. The fundamental drawback
of this method is that the system of equations would be too large to
be solved easily.
The second method, which closely approximates the results of
the first method, takes the results given by the indicator kriging
algorithm and fits a distribution function which minimizes the
weighted sum of squared deviations from the optimal solution,
as shown in Figure 9.6.1. Since this method is a bit complex, one
frequently employed solution is to set φ*(A;zk+1) = φ*(A;zk).
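Combined with clipping into [0,1], that frequently employed correction can be written compactly, as in the sketch below (Python): a running maximum implements φ*(A;zk+1) = φ*(A;zk) wherever the estimated cdf decreases.

```python
import numpy as np

def fix_order_relations(phi):
    """Force a kriged indicator cdf to be a valid distribution function:
    clip into [0, 1], then carry the running maximum forward so the
    values never decrease with increasing cutoff."""
    return np.maximum.accumulate(np.clip(np.asarray(phi, dtype=float), 0.0, 1.0))

print(fix_order_relations([0.20, 0.15, 0.50, 0.48, 1.02]))
# [0.2  0.2  0.5  0.5  1. ]
```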

Figure 9.6.1 Solving the Order Relation Problems


Median Indicator Approach
The median indicator variogram provides an equal split of the
corresponding indicator data. Adopting only this variogram in
indicator kriging at all cutoff grades can be a practical approach
since the median indicator approximation has the advantage of
reducing the number of kriging systems from K to 1. Furthermore,
the order relation problems are reduced considerably. Use of the
median indicator variogram simplifies the estimation of the φ(A;zc)
function. It reduces the number of indicator variograms to be
computed and modeled. In a multiple indicator kriging program,
only one set of kriging weights need to be calculated. This is because
at each cutoff grade increment, the kriged weights remain the same
even though the indicators are different.
Construction of Grade-Tonnage Relationship
The estimated cumulative probability function φ*(A;zc) is defined
at K discrete points corresponding to indicator cutoffs used. If the
original data distribution is highly skewed, the estimated function φ*(A;zc) should also be highly skewed. Keeping this fact in mind, one must now complete (or interpolate) the estimated function φ*(A;zc) over all possible ranges of the data value z.
Any interpolation within an interval amounts to assuming a
particular cumulative probability distribution model within that
class. As such, this model must satisfy the monotonically increasing

property of a distribution function. Also, each such intra-class distribution has a class mean, which is not necessarily equal to the midpoint of the class.
The correct estimation of this class mean is probably the most
important task in obtaining the desired grade-tonnage relationship
from the φ*(A;zk). Since the multiple indicator kriging is mostly
used for highly skewed data, the estimation of the last class mean is
particularly critical simply because this last class mean will usually
dominate the overall estimated mean grade of the block or panel.
It is therefore suggested that one use either the lognormal or the
power model of cumulative distribution, at least for the last or last
few classes to ensure positive skewness at the high end of the data
values. Regardless of which model is used, one may always consider
a very high last cutoff value so that only a minute portion of the data
(1% or less) is above this last cutoff. This precaution will limit the
influence of this last class mean, simply because the last probability would be estimated as zero or a very small value for most blocks within
the deposit. The only practical problem of following this suggestion
is that the indicator variogram of this last cutoff will be a purely
random variogram.
Performing Affine Correction
The estimated cumulative probability function φ*(A;zc), as well as the grade-tonnage relationship for each block, is based entirely on the distribution of point samples (composite values). Since the selective mining unit (SMU) volume is generally much larger than the sample volume, one must perform a volume-variance correction to the initial grade-tonnage curve of each block. This volume-variance correction is called the affine correction. The assumptions required to perform
affine correction are:
1. The distribution of block or SMU grades has the same shape
as the distribution of point or composite samples.
2. The ratio of the variances, i.e., the variance of block grades (or
the SMU grades) over that of point grades is non-conditional
to the surrounding data used for estimation.
Krige’s Relation
An important relationship involving the dispersion variances of
samples with different support is given by Krige’s relation:
σ²p = σ²b + σ²p∈b
or (9.8.1)
D²(./D) = D²(smu/D) + D²(./smu)
where:
σ²p = D²(./D) = dispersion variance of composites in the deposit
σ²b = D²(smu/D) = dispersion variance of blocks (SMUs) in the deposit
σ²p∈b = D²(./smu) = dispersion variance of points in blocks (SMUs)


This is the spatial complement to the partitioning of variances


which simply says that the variance of point values is equal to the
variance of block values plus the variance of points within blocks.
The dispersion variance of composites in the deposit, σ²p, is calculated directly from the composite or blasthole data. It can also be estimated by γ̄(D,D), the sill of the variogram of the data.
The average variance of points within the block, σ²p∈b, is equivalent to the average value of the variogram within the SMU, γ̄(smu,smu).
The average variogram value within a block is estimated by
σ²p∈b = γ̄(smu,smu) = 1/n² ∑∑ γ(hij),  i=1,...,n and j=1,...,n (9.8.2)
Calculation of Affine Correction
Most kriging programs automatically calculate the values for σ²b or σ²p∈b, and sometimes both. If σ²p and σ²p∈b are known, then σ²b can be obtained easily from Krige's relation:
σ²b = σ²p - σ²p∈b (9.8.3)
Once the dispersion variances are known, the affine correction factor, K, can be calculated as follows:
K² = σ²b / σ²p ≤ 1 (9.8.4)
Using the estimated values from the variogram averaging,
K² = [γ̄(D,D) - γ̄(smu,smu)] / γ̄(D,D) = 1 - [γ̄(smu,smu) / γ̄(D,D)] ≤ 1 (9.8.5)
and
Affine correction factor, K = √K² ≤ 1 (9.8.6)
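Equations 9.8.3 through 9.8.6 reduce to a short computation, sketched below (Python; the two inputs are assumed to have been estimated already from the data and the variogram).

```python
import numpy as np

def affine_correction_factor(sigma2_p, gamma_bar_smu):
    """Affine correction factor K: sigma2_p is the composite dispersion
    variance (the sill), and gamma_bar_smu is the average variogram value
    within the SMU. Uses Krige's relation (9.8.3) and Eq. 9.8.6."""
    sigma2_b = sigma2_p - gamma_bar_smu
    return float(np.sqrt(sigma2_b / sigma2_p))   # K <= 1

print(affine_correction_factor(sigma2_p=1.0, gamma_bar_smu=0.34))  # ~0.81
```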
Then, the necessary equation for the affine correction of any panel or block is given by
φ*v(A;z) = φ*(A;zadj) (9.8.7)
where
zadj = adjusted cutoff grade = (z - ma)/K + ma (9.8.8)
This is the same transformation as Equation 9.8.11 below, since σp/σsmu = 1/K.
The larger the SMU, the more spatial averaging occurs within each
SMU, and thus the dispersion variance of the SMU’s will decrease.
The permanence of shape applied during affine correction is quite
reasonable if the block size v (i.e., SMU) is small such that the data
available does not allow further resolution within v. As a rule of
thumb, v should be smaller than 1/3 the average data spacing, and
the relative change in dispersion variance between composite grade
and SMU grade is approximately 30% or less. In other words,
(σ²p - σ²b) / σ²p < 30% (9.8.9)
Figure 9.8.1 illustrates the affine reduction of variance using Equation 9.8.7 above. In this figure, the block (or SMU) pdf model fv(A;z) is obtained by shrinking the shape of f(A;z) around the common mean, ma. The correction preserves the shape of f(A;z) and reduces the variance and spread of fv(A;z). Note that f(A;z) is

the probability density function (pdf) whose cumulative probability


function (cdf) is given by φ*(A;z).
In practice, we are generally interested in the average ore grade
and the proportion of ore within each block. In other words, we
are interested in performing the affine correction to the right of the
economic cutoff grade. Figure 9.8.2 illustrates how one can perform
this affine correction, without actually shrinking the obtained φ*(A;z)
or f(A;z).
During actual mining, we know that we will apply the economic
cutoff zc on the fv(A;z) of SMU’s rather than on f(A;z) of point
samples in Figure 9.8.2. Therefore, we need to compute the area to
the right of zc and also the average grade of this ore using fv(A;z).
The average ore grade is obtained by a simple weighted averaging
of probability times the associated grade from fv(A;z). However, we
only have f(A;z) to use. For this reason, we compute the equivalent economic cutoff grade zc' which will be applied to the f(A;z) curve instead, in order to obtain the correct proportion of area (or tonnage) to the right of the economic cutoff.
After an integration using zc’ and class means of f(A;z), we now
have the correct tonnage (i.e., proportion) but higher average grade.
Consequently, the estimated metal recovery will be larger than
actual. Hence, we must perform another affine correction for the
recovered metal, by proportionately reducing the estimated average
grade obtained earlier.

Figure 9.8.1 Illustration of Affine Reduction of Variance

Figure 9.8.2 Affine Correction for Ore Grade and Tonnage Estimation

Equivalent Cutoff Calculation


To apply affine correction to recovery calculations, one simply
transforms the specified cutoff grade to an equivalent cutoff
grade. When this equivalent cutoff is applied to the point sample
distribution, it provides SMU recoveries. The basic equation is
given by:
(zp - m) / σp = (zsmu - m) / σsmu (9.8.10)
where
zp = the equivalent cutoff grade to be applied to the point (or composite) distribution
zsmu = the cutoff grade applied to the SMU distribution
m = mean of the composite and SMU distributions
σp = square root of the composite dispersion variance
σsmu = square root of the SMU dispersion variance
Equation 9.8.10 is rearranged to get:
zp = (σp / σsmu) zsmu + m [1 - (σp / σsmu)] (9.8.11)
The ratio σp / σsmu is basically the inverse of the affine correction factor K given in Equation 9.8.6. This ratio is ≥ 1.
Numeric Example:
Let the mean of composites = 0.0445, and the specified cutoff
grade zsmu = 0.055. If the ratio σp / σsmu = 1.23, what is the equivalent
cutoff grade?
zp = 1.23 (0.055) + 0.0445 (1 - 1.23) = 0.0574
Therefore, the equivalent cutoff grade to be applied to the
composite distribution is 0.0574. Note that if the specified cutoff
grade is less than the mean, the equivalent cutoff grade becomes less
than the cutoff, and if the specified cutoff grade is greater than the
mean, the equivalent cutoff grade becomes greater than the cutoff.
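Equation 9.8.11 and the numeric example above can be checked with a one-line function (Python; the names are ours):

```python
def equivalent_cutoff(z_smu, mean, ratio):
    """Equivalent composite cutoff (Eq. 9.8.11); ratio = sigma_p / sigma_smu >= 1."""
    return ratio * z_smu + mean * (1.0 - ratio)

print(equivalent_cutoff(0.055, 0.0445, 1.23))  # ~0.0574, as in the text
```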
Advantages and Disadvantages of M.I.K.
Like any method of estimation, multiple indicator kriging also has
some advantages and disadvantages of its own. The advantages of
indicator kriging are the following:
1. It estimates the local recoverable reserves within each panel
or block.
2. It provides an unbiased estimate of the recovered tonnage at
any cutoff of interest.
3. It is non-parametric, i.e., no assumption is required
concerning the distribution of grades.
4. It can handle highly variable data.
5. It takes into account the influence of neighboring data and
continuity of mineralization.

The disadvantages of indicator kriging are the following:


1. It may be necessary to compute and fit a variogram for each
cutoff.
2. Estimators for various cutoff values may not show the
expected order relations.
3. Mine planning and pit design using MIK results can be more
complicated than conventional methods.
4. Correlations between indicator functions of various cutoff values are not utilized. More information could become available through the indicator cross variograms and subsequent cokriging. These form the basis of the Probability Kriging technique.

Change of Support


One of the important aspects of geostatistical ore reserve
estimation is to accurately predict the grade and tonnage of material
above a specified cutoff grade. In mine planning, the change of support from the initial stage of sample collection to 3‑D deposit modeling is key to understanding many of the problems encountered in reserve reconciliation.
The Support Effect
The term support at the sampling stage refers to the characteristics
of the sampling unit, such as the size, shape and orientation of
the sample. For example, channel samples and diamond drill
core samples have different supports. At the modeling and mine
planning stage, the term support refers to the volume of the blocks
used for estimation and production.
It is important to account for the effect of the support in our
estimation procedures, since increasing the support has the effect
of reducing the spread of data values. As the support increases,
the distribution of data gradually becomes more symmetrical. The
only parameter that is not affected by the support of the data is the
mean. The mean of the data should stay the same even if we change
the support. Figure 10.1.1 shows the histograms of data from
different block sizes. The original block size of 1x1 was combined
into 2x2, 5x5 and 20x20 size blocks. The histograms in this figure illustrate the effect of changing block size on the variance of the distribution of data values.

Figure 10.1.1 Histograms of Data Values from 1x1, 2x2, 5x5, and 20x20 Block Sizes.

3‑D mine model blocks have a much larger volume than those
of the data points. Therefore, certain smoothing of block grades is
expected after an interpolation procedure. However, the selection
of the method and the parameters used in the interpolation may
contribute to additional smoothing of the block grades.
Smoothing and Its Impact on Reserves
The search strategy, the parameters and the procedure used
to select data, can play a significant role in the smoothness of the
estimates. As a matter of fact, it is one of the most consequential
steps of any estimation procedure (Arik, 1990; Journel, 1989). The
degree of smoothing depends on several factors, such as the size
and orientation of the local search neighborhood, and the minimum
and maximum number of samples used for a given interpolation.
Of all the methods, the nearest neighbor method does not introduce
any smoothing to the estimates since it assigns all the weight to the
nearest sample value. For the inverse distance weighting method,
increasing the inverse power used decreases the smoothing because,
as the distance power is increased, the estimate approaches that of
the nearest neighbor method. For ordinary kriging, the variogram
parameters used, especially the increase in nugget effect, contribute
to the degree of smoothing.
The immediate effect of smoothing caused by any interpolation
method is that the estimated grade and tonnage of ore above a given
cutoff are biased with respect to reality. As the degree of smoothing
increases, the average grade above cutoff usually decreases. Also
with increased smoothing, the ore tonnage usually increases for
cutoffs below the mean and decreases for cutoffs above the mean.
Figure 10.2.1 illustrates the effect of smoothing on the grade-tonnage
curves by comparing 1x1 size blocks to 5x5 and 20x20 size blocks.

Figure 10.2.1 The Effect of Smoothing on the Grade-Tonnage Curves


Volume‑Variance Relationship


Mining takes place on a much larger volume than those of the
data points. Therefore, certain smoothing of grades is expected. This
is in accordance with the volume‑variance relationship that implies
that as the volume of blocks increases, their variance decreases.
However, mining also takes place on a smaller volume than those
of the 3‑D model blocks that are based on exploration data spacing.
Therefore, the variance of these 3‑D model blocks is lower than what
would normally be observed during the mining of selective mining
unit (SMU) blocks.
Realistic recoverable reserve figures can be obtained if we
determine the grade‑tonnage curves corresponding to SMU
distribution. Since the actual distribution will not be known until
after mining, a theoretical or hypothetical one must be developed
and used. The application of this procedure can minimize the bias on
the estimated proportion and grade of ore above cutoff.
Block Variance
The block variance, usually expressed as σ²(v/D), refers to the variability among grades of a given support (size, shape and orientation). If one treats the grade of each fixed-size block as one
sample, then the block variance is simply the variance of these
samples. For a given size block, there is only one block variance in
the deposit. The block variance, through the "volume-variance" relationship, therefore describes the variability associated with different supports of blocks or samples. Samples of large support volume have a smaller variance than those of smaller volume. In other words, as the block size gets larger, its block variance gets smaller.
Figure 10.3.1 shows the relationship of variance to sample support.
In this figure, the data with a small support, such as the composite
grades, will have the curve A. On the other hand, the data with a
larger support, such as the block grades, will have the curve B.
In Figure 10.3.1, the global means of the two curves are the same.
However, if a sub-interval such as the area to the right of the cutoff
grade is considered, the conditional mean grade of curve A is higher
than the conditional mean grade of curve B. In other words, the
conditional mean grade of ore material having a large support is
always lower than that of a smaller support, whenever an economic
cutoff grade is applied to the distribution.
Krige’s Relationship
Krige’s relationship relates the variance of block grades within a
deposit to point grades within the deposit and within the block. It is
expressed in the following form:
σ²(v/D) = σ²(o/D) − σ²(o/v) (10.3.1)
where
σ²(v/D) = the variance of block grades within the deposit,
σ²(o/D) = the variance of point grades within the deposit,
σ²(o/v) = the variance of point grades within the block.
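To make Krige's relationship concrete, here is a minimal numerical sketch, assuming a spherical variogram with hypothetical parameters and taking σ²(o/D) as the total sill. σ²(o/v) is approximated by averaging the variogram over a grid of points discretizing the block (numpy assumed):

```python
import numpy as np

def spherical(h, nugget=0.1, sill=1.0, a=100.0):
    """Spherical variogram; sill is the total sill (nugget included), a the range."""
    h = np.asarray(h, dtype=float)
    g = np.where(h < a, nugget + (sill - nugget) * (1.5 * h / a - 0.5 * (h / a) ** 3), sill)
    return np.where(h == 0.0, 0.0, g)   # gamma(0) = 0 (nugget is a discontinuity)

def gammabar_in_block(bx, by, n=6, **pars):
    """sigma^2(o/v): average variogram over all point pairs discretizing a bx-by block."""
    xs = (np.arange(n) + 0.5) * bx / n
    ys = (np.arange(n) + 0.5) * by / n
    X, Y = np.meshgrid(xs, ys)
    pts = np.column_stack([X.ravel(), Y.ravel()])
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    return spherical(d, **pars).mean()

# sigma^2(v/D) = sigma^2(o/D) - sigma^2(o/v), taking sigma^2(o/D) as the total sill.
sill = 1.0
for size in (5.0, 10.0, 20.0):
    g_vv = gammabar_in_block(size, size, nugget=0.1, sill=sill, a=100.0)
    print(f"{size:4.0f} m block: sigma2(o/v) = {g_vv:.3f}, sigma2(v/D) = {sill - g_vv:.3f}")
```

Larger blocks give a larger within-block variance σ²(o/v) and hence a smaller block variance σ²(v/D), which is the volume-variance relationship in action.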

This relationship can be used to calculate a variance reduction
factor to address the problem of mining dilution in an ore deposit.

Figure 10.3.1 Relation of Variance to Sample Support (curve A: assay or composite grades; curve B: block grades)

How to Deal with Smoothing


There are a few ways to deal with smoothing. One possible
solution, which yields better estimates of recoveries, is to correct the
estimated grades for smoothing. This can be done by support
correction. Methods available for doing this include the affine
correction and the indirect lognormal correction (Isaaks and
Srivastava, 1989).
Similar or better results for recoverable reserves can be obtained
through conditional simulation. A fine grid of simulated values
at the sample level is blocked according to the required SMU size.
This procedure is very simple, but also assumes perfect selection
(Dagdelen et al., 1997).
The use of higher distance powers in the traditional inverse
distance weighting method is an attempt to reduce the smoothing of
block grades during the interpolation of deposits with skewed grade
distributions. On the geostatistical side, there are methods, such as
lognormal kriging, lognormal short cut, outlier restricted kriging
and several others, which have been developed to get around the
problems associated with the smoothing of ordinary kriging (David,
1977; Dowd, 1982; Arik, 1992). There are also advanced geostatistical
methods such as indicator or probability kriging, which take into
account the SMU size in calculating the recoverable reserves (Verly
and Sullivan, 1985; Journel and Arik, 1988; Deutsch and Journel,
1992). Together, these methods provide the practitioner with a
variety of tools to select from and apply where deemed appropriate,
since each method has advantages as well as shortcomings.
How Much Smoothing is Reasonable?
If we are using a linear estimation technique to interpolate the
block grades, what would be the appropriate degree of smoothing

which would result in "correct" grade and tonnage above a given
cutoff when applied to our estimates? For one thing, if we know
the SMU size that will be applied during mining, we can determine
the theoretical or hypothetical distribution of SMU grades for our
deposit. Once we know this distribution or the grade‑tonnage
curves of SMUs, we can vary our search strategy and interpolation
parameters until we get close to these curves. The disadvantage
of this procedure is that one may end up using a small number of
samples per neighborhood of interpolation. This lack of information
may cause local biases. However, when we are trying to determine
the global and mineable resources at the exploration stage, we are
not usually interested in the local neighborhood. Rather, we are after
annual production schedules and mine plans (Parker, 1980).
Refining the search strategy and the kriging plan to control
smoothing of the kriged estimates works reasonably well,
depending on our goal. This can be accomplished by comparing
the grade‑tonnage curves from the estimated block grades to those
from the SMUs. Since the SMU grades are not known at the time
of exploration, we can determine the theoretical or hypothetical
distribution of SMU grades for our deposit based on a specified
SMU size.
Global Correction for the Support Effect
There are some methods available for adjusting an estimated
distribution to account for the support effect. The most popular ones
are affine correction and indirect lognormal correction. All of these
methods have two features in common:
1. They leave the mean of the distribution unchanged.
2. They change the variance of the distribution by some
“adjustment” factor.
Affine Correction
The affine correction is a very simple correction method. Basically,
it changes the variance of the distribution without changing its
mean by simply squeezing values together or by stretching them
around the mean. The underlying assumption for this method is
that the shape of the distribution does not change with increasing or
decreasing support.
The affine correction transforms the z value of one distribution to
z′ of another distribution using the following linear formula:
z′ = √f · (z − m) + m (10.6.1)
where m is the mean of both distributions and f is the variance
adjustment factor. If the variance of the original distribution is σ²,
the variance of the transformed distribution will be f·σ².
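A minimal sketch of the affine correction, assuming numpy (in practice the variance adjustment factor f would come from a support analysis such as Krige's relationship):

```python
import numpy as np

def affine_correction(z, f):
    """Affine support correction: shrink values toward the mean by sqrt(f) (Eq. 10.6.1)."""
    z = np.asarray(z, dtype=float)
    m = z.mean()                       # the mean is left unchanged
    return np.sqrt(f) * (z - m) + m    # the variance becomes f * var(z)

# Hypothetical check: the mean is preserved and the variance ratio equals f.
z = np.random.default_rng(1).lognormal(size=1000)
zc = affine_correction(z, f=0.6)
print(round(z.mean(), 4), round(zc.mean(), 4), round(zc.var() / z.var(), 4))
```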
Indirect Lognormal Correction
The indirect lognormal correction is a method that borrows the
transformation that would have been used if both the original
distribution and the transformed distribution were both lognormal.

The idea behind this method is that while skewed distributions
may differ in important respects from the lognormal distribution,
change of support may affect them in a manner similar to that
described by two lognormal distributions with the same mean but
different variances.
The indirect lognormal correction transforms the z value of
one distribution to z′ of another distribution using the following
power formula:
z′ = a·z^b (10.6.2)
where a and b are given by the following formulas:
a = [m / √(f·cv² + 1)] · [√(cv² + 1) / m]^b (10.6.3)
b = √[ ln(f·cv² + 1) / ln(cv² + 1) ] (10.6.4)
In these formulas, cv denotes the coefficient of variation. As before,
m is the mean and f is the variance adjustment factor.
One of the problems with the indirect lognormal correction
method is that it does not necessarily preserve the mean if it is
applied to values that are not exactly lognormally distributed. In
that case, the transformed values may have to be rescaled, using the
following equation:
z” = m/m’ * z’ (10.6.5)
where m’ is the mean of the distribution after it has been
transformed by equation 10.6.2.
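The whole procedure (Equations 10.6.2 through 10.6.5) might be sketched as follows, again assuming numpy; note the final rescaling step that restores the mean when the data are not exactly lognormal:

```python
import numpy as np

def indirect_lognormal_correction(z, f):
    """Indirect lognormal support correction with mean rescaling (Eqs. 10.6.2-10.6.5)."""
    z = np.asarray(z, dtype=float)
    m = z.mean()
    cv = z.std() / m                   # coefficient of variation
    b = np.sqrt(np.log(f * cv**2 + 1.0) / np.log(cv**2 + 1.0))        # Eq. 10.6.4
    a = m / np.sqrt(f * cv**2 + 1.0) * (np.sqrt(cv**2 + 1.0) / m)**b  # Eq. 10.6.3
    zp = a * z**b                      # Eq. 10.6.2
    return zp * (m / zp.mean())        # Eq. 10.6.5: rescale to restore the mean
```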

Conditional Simulation


Simulated deposits are computer models that represent a deposit
or a system. These models are used in place of the real system to
represent that system for some purpose. The simulation models are
built to have the same distribution, dispersion characteristics,
and spatial relationships as the grade values in the deposit.
Conditionally simulated models additionally have the same values
at the known sample data locations. The difference between models
of estimation and conditional simulations lies in their objectives.
The Objectives of Simulation
Local and global estimations of recoverable reserves are often
insufficient at the planning stage of a new mine, or a new section
of an operating mine. For the mining engineer, as well as the
metallurgist and chemist, it is often essential to be able to predict the
variations of the characteristics of the recoverable reserves at various
stages in the operation.
For instance, in the processing of low-grade iron ore deposits,
keeping final product within strict quality standards may be a
complex task whenever impurities such as phosphorus are involved.
The blending process and the flexibility of the plant will depend on
the dispersion variance of the grades received at all scales (daily,
monthly, yearly). In this case, the actual grade at each moment is
not really relevant. What is important is the variability of that mill
feed, or the variance of feeding tonnages over a time period. If
kriged estimates are used to forecast that production variance, it
will certainly be underestimated since it is a known fact that kriging
smoothes reality.
Therefore, a detailed definition of an adequate mining control
method is essential. For a preliminary design, it is admissible to
use average values to perform an evaluation. When it comes to
detailed definitions, however, these averages are not sufficient due
to local fluctuations.
If the in situ reality were known, the required dispersions, and
thus the most suitable working methods, could be determined by
applying various simulated processes to this reality. Unfortunately,
the perfect knowledge of this in situ reality is not available at the
planning stages of the operation. The information available at this
stage is usually incomplete, and limited to the grades of a few
samples. The estimations deduced from this information are far
too imprecise or smooth for the exact calculations of dispersions or
fluctuations that are required. One solution is conditional simulation.
What is Conditional Simulation?
In any simulation, a model is employed in place of a real system
to represent that system for some purpose. To simulate an ore
deposit, one has to build a model of the deposit that will reflect
not only the correct grade distribution, but also the correct spatial
relationships of the grade values in the deposit.

In a conditional simulation, there is an additional step where
the model values are conditioned to the experimental data. The
conditioning essentially forces the simulation to pass through the
available data points so that certain local characteristics of the
deposit can be imparted to the model. Therefore, the simulation
is said to be “conditional” if the resulting realizations honor the
hard data values at their locations. This conditioning gives a certain
robustness to the simulation with respect to characteristics of the real
data. If, for example, a sufficient number of data show a local drift,
then the conditional simulations, even though based on a stationary
model, will reflect the local drift in the same zone. These conditional
simulations can be further improved by adding different qualitative
information available from the real deposit, such as geologic
boundaries, fault zones, etc.
For most practical problems, each conditional simulation or
realization can be seen as an exhaustive sampling of a similar field
generated by similar “physical random processes.” Each conditional
simulation typically:
• reproduces a histogram. Usually this is the histogram that is
deemed representative of the total sample domain.
• reproduces the covariance function Cz(h) that is deemed
representative of the same sample domain.
• honors the sample data values at their locations, i.e., zsim(xα) =
z(xα), for all simulations and at all data locations xα.
• contains a vastly larger number of simulated attribute values
relative to the number of sample values or conditioning data.
The ratio may be somewhere between 100:1 and 1000:1.
Figure 11.2.1 illustrates the relationship between the actual sample
values in a deposit and a conditional simulation of that deposit.

Figure 11.2.1 Schematic relationship between the actual sample values in a deposit and a
conditional simulation of that deposit.
As far as the dispersion of the simulated variable is concerned,
there is no difference between the simulated deposit and the real
deposit. The simulated deposit has the advantage of being known

at all points x and not only at the experimental data points xα. This
simulated deposit is also called a “numerical model” of the real deposit.
Simulation or Estimation
If one analyzes block kriging estimates on a regular grid for a
spatial attribute such as grade, one can observe an uneven smoothing
in the block estimates. This smoothing is inversely proportional to
the data density. Such distortion can be
visualized or confirmed in several ways:
1. The experimental variogram of the block estimates is
different from the original sample variogram. The block
variogram from the model has a smaller sill and a larger
range than the sample variogram, indicating an exaggerated
continuity in the estimated values.
2. The histogram of the samples is different from the histogram
of the estimated block values. Relative to the sample
histogram, the block histogram has fewer values in the tails
and a larger proportion close to the mean.
3. Cross validation of the sampling reveals that there is a
tendency of kriging to underestimate values above the
sample mean and to overestimate those below the mean. This
results in a regression line that has a less steep slope than
the ideal slope of 1.0 for the main diagonal. This distortion is
called conditional bias in the estimation.
Geostatistics addresses this smoothing in kriging by means of
stochastic simulation. The choice between kriging and simulation
must be decided based upon what is more relevant for each specific
application: minimum local estimation errors in a mean square sense
or correct spatial continuity. Therefore, estimation and simulation
are complementary tools. Estimation is appropriate for assessing
mineral reserves, particularly global in situ reserves. Simulation aims
at correctly representing spatial variability, and is more appropriate
than estimation for decisions in which spatial variability is a critical
concern and for risk analysis.
Any estimation technique such as kriging gives only a single
"best estimate"; best in some local measure such as unbiasedness
and minimum error variance, without regard to the global features
of the obtained estimates. For each unknown location in the deposit
being estimated, the technique generates a value that is close on
average to the unknown true value. This process is repeated for
every unknown point in the deposit without any consideration of the
spatial dependence that exists between true grades of the deposit. In
other words, the estimated values cannot reproduce the covariance or
the variogram computed from the data. Neither can an estimation
technique reproduce the histogram of the data. The only thing it can
reproduce is the data values at known data locations.
The criteria for measuring the quality of an estimation are
unbiasedness and minimal quadratic error, or estimation variance. There is
no reason, however, for these estimators to reproduce the spatial
variability of the true grades. In the case of kriging, for instance, the

minimization of estimation variance involves a smoothing of the true
dispersions. Similarly, the polygonal method of estimation would
consider the grade as constant all over the polygon of influence of a
sample. Therefore, it would also underestimate the local variability
of true grades. The estimated deposit is, thus, a biased base on which
to study the dispersions of the true grades.
By being close on average, estimation techniques such as kriging
try to avoid large errors and, if well done, should be globally
unbiased. Hence, kriging is a good basis for assessing global ore
reserve estimates. However, precisely because it is only close on
average, the technique produces a global picture that is smoother
than reality and underestimates extreme values. The smoother
estimated surface gives a false optimism, suggesting that the true
reality would be just as smooth.
Conditional simulation, on the other hand, provides the same
mean, histogram, and variogram as the real grades (assuming
that the samples are representative of the reality). Therefore it
identifies the main dispersion characteristics of these true grades.
In general, the objectives of simulation and estimation are not
compatible. It can be seen from Figure 11.3.1 that, even though
the estimation curve is, on average, closer to the real curve, the
simulation curve is a better reproduction of the fluctuations of
the real curve. The estimation curve is preferred for locating
and estimating reserves, while the simulation curve is preferred
for studying the dispersion characteristics of these reserves,
remembering that the real curve is known only at the experimental
data points.


Figure 11.3.1 Illustration of conditional simulation (solid line: true but unknown reality; dotted line: variations approximated by conditional simulation; dashed line: reality smoothed by kriging)

Simulation Algorithms
There are many simulation methods available for use in the
conditional simulation of deposits: the turning bands method,
Gaussian sequential simulation, indicator sequential simulation,
L-U decomposition, probability field simulation, and annealing
techniques. The turning bands method was the first, introduced
in the early 1970s. Since then many new techniques

have been introduced, particularly since the mid-1980s. No single
algorithm is flexible enough to allow the reproduction of the wide
variety of features and statistics encountered in practice.
Gaussian Sequential Simulation
One of the most straightforward algorithms for simulation of
continuous variables is the Gaussian sequential algorithm, which is
based on classical multivariate theory. It assumes all conditional
distributions are normal and determined exactly by the simple kriging
mean and variance. Since most original data are not multivariate
normal, they need to be transformed into normal scores, then
transformed back to the original distribution after the simulation.
The algorithm is computationally fast and easy to implement, and it
has an established record of successful applications.
Because of the unique properties of a multi-variate gaussian
model, performing the Gaussian sequential simulation is perhaps
the most convenient method. It is accomplished following the basic
idea below:
The multi-variate pdf f(x1, x2,...,xn; z1, z2,...,zn), where xi’s denote
the location in the domain A and zi’s denote particular attribute
values at these locations, can be expressed as a product of univariate
conditional distributions as follows:
f(x1, x2,...,xn; z1, z2,...,zn) = f(x1; z1)
× f(x2; z2 | Z(x1) = z1)
× f(x3; z3 | Z(x1) = z1, Z(x2) = z2)
× ...
× f(xn; zn | Z(xα) = zα, α = 1,...,n-1) (11.4.1)

If all univariate conditional distributions in Equation 11.4.1
are known, then a realization z(x) of RF Z(x) can be constructed
by a sequence of random drawings from each of the n univariate
conditional distributions:
1. A realization z1 of the random variable Z(x1) is obtained by
randomly drawing a value from the marginal distribution
f(x1; z1).
2. The realization z1 is used to condition the distribution of
Z(x2).
3. A realization z2 of the random variable Z(x2) is obtained by
randomly drawing a value from the conditional distribution
f(x2; z2 | Z(x1) = z1).
4. The realizations z1 and z2 are used to condition the
distribution of Z(x3).
5. A realization z3 of the random variable Z(x3) is obtained by
randomly drawing a value from the conditional distribution
f(x3; z3 | Z(x1) = z1, Z(x2) = z2).

6. The sequence of random draws and subsequent
conditionings is continued until the last distribution f(xn;
zn | Z(xα) = zα, α = 1,...,n-1) is fully conditioned. Then a
realization zn of the last random variable Z(xn) is randomly
drawn from this distribution.
Note that the number n above, for the n-variate joint pdf,
corresponds to the total number of simulated grid points during
conditional simulation.
For this algorithm to be of any practical use, the complete
sequence of conditional distributions for the given multi-variate
pdf must be known. It can be shown that the univariate conditional
distribution of a stationary, multi-variate gaussian RF model Y(x)
with the covariance CY(h) or variogram γ(h) function is gaussian
with a conditional mean and variance exactly equal to the simple
kriging estimate (or mean) and simple kriging variance.
Since the simulated values ys(x) are sequentially drawn from exact
univariate conditional distributions as shown in Equation 11.4.1, the
covariance CY(h) is reproduced by Ys(x). Unfortunately, however,
most earth science data are not univariate normal, much less multi-
variate normal. Thus, to take advantage of the multi-variate normal
model, a normal scores transformation is typically applied to the
initial sample data z(xα).
y(xα) = φ( z(xα) ) α = 1,...,n (11.4.2)
where φ(.) is a one-to-one (invertible) transformation function and
y(xα)’s are normal with mean=0 and variance=1. The RF model Y(x)
is then assumed multi-variate normal which enables the drawing
of the realizations ys(x), s = 1,...,S from the multi-variate normal cdf
fully characterized by the covariance function CY(h) that is inferred
from the normal data y(xα).
These realizations are then back-transformed into realizations of
Z(x) using the inverse function φ⁻¹(·).
zs(x) = φ⁻¹( ys(x) ) s = 1,...,S (11.4.3)
The resulting RF model Z(x) is said to be multi-φ-normal. In other
words, its φ transform Y(x) is multi-variate normal.
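A minimal sketch of the normal scores transform and its back-transform, assuming numpy and scipy; ties and tail extrapolation are ignored here (np.interp holds values constant beyond the quantile table), which a production implementation would treat more carefully:

```python
import numpy as np
from scipy.stats import norm

def normal_scores(z):
    """Normal scores transform y = phi(z) (Eq. 11.4.2); returns scores and a quantile table."""
    z = np.asarray(z, dtype=float)
    ranks = np.argsort(np.argsort(z))          # rank of each datum (ties not handled)
    p = (ranks + 0.5) / z.size                 # plotting positions in (0, 1)
    y = norm.ppf(p)                            # standard normal quantiles
    return y, (np.sort(z), np.sort(y))

def back_transform(y_sim, table):
    """Inverse transform phi^-1 (Eq. 11.4.3) by interpolation in the quantile table."""
    zq, yq = table
    return np.interp(y_sim, yq, zq)            # constant beyond the table's tails
```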
Another problem in implementing the sequential method in
Equation 11.4.1 comes from the increasing number of conditioning
data at each sequential step. The number of conditioning data keeps
growing from m to a maximum of m + n − 1 values, where m is the
number of conditioning data and n the number of simulation nodes.
The number of simulation nodes n may be as large as 10⁶, which would
require the solution of unreasonably large kriging systems.
Therefore, in practice only those data closest to the current node
being simulated are retained. The rationale for doing this is that
the information contained in data farther from the simulation
node is "screened" by the closer data. The impact of such screened
information is deemed small enough that it can be ignored
without consequence. For example, if only the closest four data are
retained in each sector of a quadrant search, then a simple kriging
system of maximum dimension 16 would have to be solved at each
simulation node.
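Putting the pieces together, here is a deliberately small one-dimensional sketch of sequential Gaussian simulation on normal scores, using simple kriging with a zero mean, an assumed exponential covariance, and retention of only the nearest data; all names and parameters are illustrative:

```python
import numpy as np

def cov_exp(h, sill=1.0, a=10.0):
    """Exponential covariance C(h) = sill * exp(-3|h|/a), a = practical range."""
    return sill * np.exp(-3.0 * np.abs(h) / a)

def sgs_1d(x_data, y_data, x_grid, a=10.0, nmax=8, seed=0):
    """Minimal 1-D sequential Gaussian simulation on normal scores.

    Each node is drawn from its Gaussian conditional distribution (simple
    kriging mean and variance), then added to the conditioning set for all
    later nodes, following the sequence of Equation 11.4.1.
    """
    rng = np.random.default_rng(seed)
    xs, ys = list(x_data), list(y_data)           # conditioning set grows as we go
    out = np.empty(x_grid.size)
    for k in rng.permutation(x_grid.size):        # random path over grid nodes
        x0 = x_grid[k]
        d = np.abs(np.asarray(xs) - x0)
        near = np.argsort(d)[:nmax]               # retain only the closest data (screening)
        xn, yn = np.asarray(xs)[near], np.asarray(ys)[near]
        C = cov_exp(xn[:, None] - xn[None, :], a=a)
        C[np.diag_indices_from(C)] += 1e-10       # tiny jitter guards against singularity
        c0 = cov_exp(xn - x0, a=a)
        w = np.linalg.solve(C, c0)                # simple kriging weights
        sk_mean = float(w @ yn)
        sk_var = max(float(cov_exp(0.0, a=a) - w @ c0), 0.0)
        out[k] = rng.normal(sk_mean, np.sqrt(sk_var))
        xs.append(x0); ys.append(out[k])          # condition later nodes on this draw
    return out

# Hypothetical use on three normal-score data values (off-grid locations):
realization = sgs_1d(np.array([2.3, 15.7, 33.1]), np.array([0.4, -1.1, 0.7]),
                     np.arange(0.0, 50.0, 1.0), a=10.0, seed=42)
```

Back-transforming the simulated normal scores through the quantile table from the previous sketch completes one realization; repeating with different seeds yields multiple equally probable realizations.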
Indicator Sequential Simulation
Indicator sequential simulation allows different patterns of
spatial continuity for different cutoffs. It also allows incorporation
of secondary information and constraint intervals. It is the preferred
method if one is concerned with proportions, categorical variables,
or with the continuity properties of the extremes. However, the
algorithm is computationally intensive and slower.
The basic idea of indicator sequential simulation is exactly
analogous to that of Gaussian sequential simulation. Again, multi-
variate pdf f(x1, x2,...,xn; z1, z2,...,zn) is expressed as a product of
univariate conditional distributions, and these distributions are
approximated using a series of indicator transforms of Z(x) and
simple or ordinary krigings. It has been shown that any kriging
technique that produces a distribution of possible outcomes (e.g.,
indicator kriging, probability kriging, disjunctive kriging) can be
used to obtain the local conditional distributions.
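For reference, the indicator transform underlying the method is simple; a sketch with numpy and arbitrary example cutoffs:

```python
import numpy as np

def indicator_transform(z, cutoffs):
    """Indicator coding i(x; zc) = 1 if z(x) <= zc, else 0, one row per cutoff."""
    z = np.asarray(z, dtype=float)
    return np.array([(z <= zc).astype(float) for zc in cutoffs])

# Hypothetical sample grades and cutoffs:
print(indicator_transform([0.2, 1.5, 0.8, 3.0], cutoffs=[0.5, 1.0, 2.0]))
```

Variograms are then computed and modeled on each row of indicators, giving a separate spatial continuity model per cutoff.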
The main advantages of indicator sequential simulation over
Gaussian sequential simulation are the following:
• It is possible to control N spatial covariances (or indicator
variograms) instead of a single one.
• Not only the hard data values z(xα) but also any number of
soft local data, such as constraint intervals z(xα) ∈ [aα, bα] and
prior probability distributions for the datum value z(xα), can
be utilized. Such additional or soft information may improve
the accuracy of the resulting conditional simulations relative
to simulations generated without soft information.
• It is particularly adept at displaying the spatial connectivity
of extreme values. By contrast, Gaussian sequential
simulation fails to show such connectivity because the
two indicators I(z;x) and I(z;x+h) become independent as the
cutoff value z moves away from the median.
One should keep in mind that the indicator sequential algorithm
requires the calculation of indicator variograms as well as the
construction and solution of one kriging system for each cutoff
value. Therefore, the computational requirements of this simulation
are much greater than those of Gaussian sequential simulation.
Indicator sequential simulation is presently the only conditional
simulation method that is not directly or indirectly based on the
gaussian RF model. The method is non-parametric in the sense that
it need not call for prior estimation of any distributional parameters;
all results can be derived from the data. In cases where the data are
plentiful and the gaussian RF model is shown to be inappropriate, the
indicator RF model provides an alternative.
The gaussian RF model is usually preferred if the problem at hand
has more to do with spatial averages, whereas the indicator RF model is
preferred if one is concerned with proportions, categorical variables,
or with the continuity properties of the extremes.

Turning Bands Method


The very first geostatistical simulation algorithm took a different
approach to obtaining a conditional simulation. The steps for this
method are given below:
1. Krige the grid to obtain Z*.
2. Produce an unconditional simulation (Zucs) that has the
correct histogram and variogram but happens not to honor
the available samples.
3. Sample the unconditional simulation at the locations where
actual sample values exist.
4. Krige the unconditional simulation to obtain Z*ucs.
5. Compute the simulated value:
Zsim(x) = Z*(x) + [Zucs(x) − Z*ucs(x)] (11.4.4)
Z*(x) is the kriged estimate at location x and [Zucs(x) − Z*ucs(x)] is
the simulated error, the difference between the unconditional
simulation and its kriged estimate from the values sampled in step 3
(the same construction as Zs(x) = Y*(x) + (Z(x) − Z*(x)) in the next
section). The unconditional simulation is produced by
spatially averaging uncorrelated values to produce one-dimensional
"bands" that radiate from a common origin, then averaging again by
projecting each point on the simulated grid onto these bands.
Simulated Annealing
Simulated annealing is one of the newest and most popular
simulation methods. The method was initially formulated to solve
the statistical mechanics problem of calculating the variation with
temperature of properties of substances composed of interacting
molecules, such as energy levels in the metallurgical annealing
process of prolonged heating and slow cooling of a piece of metal.
In the application of annealing to geostatistics, one deals with
values of a spatial attribute instead of molecules. The algorithm
presumes that sampling took place at some of the sites to be
considered in the characterization, typically a grid of regular nodes,
so that a partial realization of the random function is available.
The grid values are assigned in two steps:
1. All nodes coinciding with an observation are given the value
of the observation at the node location.
2. Values are assigned to the remaining nodes by drawing at
random from a cumulative frequency distribution function
provided by the user. This is typically the cumulative
frequency distribution function of the samples.
The final solution does not depend on the choice of the initial
solution. Defining the node values this way immediately ensures that
the realization is a conditional simulation and that the realization has
a pre-specified histogram. All that is left to simulated annealing is to
achieve a match between a function of the original observations and
the one for simulated realizations, typically the variogram. In such
a case, matching is accomplished by reducing the weighted sum of
squared differences, G, below a small threshold as follows:
G = ∑ [γ*(h) − γ(h)]² / γ(h)² (11.4.5)

The objective is for the variogram γ*(h) of the simulated
realization to match the pre-specified variogram model γ(h). The
division by the square of the model variogram value at each lag
standardizes the units and gives more weight to closely spaced (low
variogram) values.
The reduction in the objective function is achieved by swapping
pairs of values zs(xi) and zs(xj) chosen at random and observing the
effect on the objective function.
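A simplified sketch of this swapping loop, assuming numpy and a one-dimensional grid. For brevity it accepts only swaps that decrease G (a true annealing schedule would also accept some degrading swaps with a temperature-controlled probability), and it swaps only non-datum nodes so the conditioning data stay fixed:

```python
import numpy as np

def vario_1d(vals, lags):
    """Experimental semivariogram of a regular 1-D grid at the given integer lags."""
    return np.array([0.5 * np.mean((vals[h:] - vals[:-h]) ** 2) for h in lags])

def objective(vals, lags, gamma_model):
    """G = sum_h [gamma*(h) - gamma(h)]^2 / gamma(h)^2  (Equation 11.4.5)."""
    g = vario_1d(vals, lags)
    return float(np.sum((g - gamma_model) ** 2 / gamma_model ** 2))

def anneal_swap(vals, free_nodes, lags, gamma_model, n_iter=20000, seed=0):
    """Greedy swap loop: exchange two non-datum node values, keep the swap if G drops."""
    rng = np.random.default_rng(seed)
    vals = vals.copy()
    G = objective(vals, lags, gamma_model)
    for _ in range(n_iter):
        i, j = rng.choice(free_nodes, size=2, replace=False)
        vals[i], vals[j] = vals[j], vals[i]      # swapping preserves the histogram exactly
        G_new = objective(vals, lags, gamma_model)
        if G_new < G:
            G = G_new                            # accept the improvement
        else:
            vals[i], vals[j] = vals[j], vals[i]  # reject: undo the swap
    return vals, G
```

Because values are only rearranged, never changed, the histogram and the data values at their nodes are honored throughout; only the spatial arrangement evolves toward the target variogram.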
Making a Simulation Conditional
Suppose that we have a set of simulated values Z(x) for each point
of the deposit, obtained from an original set of Y(x), the real grades
known at the sample points xα.
Using the known values Y(xα) at the points xα we can compute
a kriged estimate Y*(x) for any point x, remembering that if x = xα
then Y*(xα) = Y(xα) (the exact interpolation property of kriging).
Now, from the values of Z(x) at the sampling points xα we can
compute a set of kriged estimates Z*(x) for all x.
We now have three sets of values for each point:
Z(x), Z*(x), Y*(x)
and remembering that
Z*(xα) = Z(xα) and Y*(xα) = Y(xα)
we assign to each point x the value of the function
Zs(x) = Y*(x) + (Z(x) - Z*(x)).
The properties of this function are:
Zs(xα) = Y(xα), since
Y*(xα) = Y(xα) and Z*(xα) = Z(xα),
and thus, the conditionality requirement is satisfied.
Further,
E(Zs(x)) = m
since E(Y*(x)) = m
and E(Z(x) - Z*(x)) = 0
and Y and Z are uncorrelated.
This takes care of generating a conditional set of values. The
procedure is summarized in Figure 11.5.1.
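The conditioning step itself reduces to one line once the three sets of values are available; a sketch assuming a hypothetical black-box krige(x_data, values, x_grid) routine that interpolates exactly at the data locations:

```python
def make_conditional(krige, x_data, y_data, x_grid, z_ucs_grid, z_ucs_at_data):
    """Zs(x) = Y*(x) + (Z(x) - Z*(x)) at every grid node.

    krige(x_data, values, x_grid) is an assumed routine returning kriged
    estimates on x_grid that honor the values exactly at x_data.
    """
    y_star = krige(x_data, y_data, x_grid)         # Y*(x): kriging of the real samples
    z_star = krige(x_data, z_ucs_at_data, x_grid)  # Z*(x): kriging of the simulated values
    return y_star + (z_ucs_grid - z_star)          # honors Y(x_alpha) at every datum
```

At a datum, the error term Z(xα) − Z*(xα) vanishes and Y*(xα) = Y(xα), so the output reproduces the data exactly, as the properties above require.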

Figure 11.5.1 Summary of the algorithm generating conditionally simulated grade as the sum of three random variables, all derived from the original set of samples.

Typical Uses of Simulated Deposits


During the past decades, the capital investment required to bring
new ore deposits into production has risen sharply. In countries
with favorable political and fiscal environments, new deposits are
characterized by lower average grades and more stringent economic
limits for mining and milling. Therefore the calculated economic
feasibility should be as independent of the uncertainty inherent in
unknown factors as possible. The unknown factors are numerous:
political and social conditions, stability of the market, and the
quantity and quality of the ore that the future mine will effectively recover.
Unknowns about the ore itself are particularly important, as there
is a sizeable list of operations that have run into critical problems
because the recovered grades and tonnages were inaccurately
forecasted in the feasibility studies. It is therefore very important
to analyze the sensitivity of the principal production figures to the
unknown parameters before deciding on a project.
A conditionally simulated deposit represents a known numerical
model on a very dense grid. As the simulation can only reproduce
known, modeled structures, the simulation grid is limited to the
dimensions of the smallest modeled structure. Various methods of
sampling, selection, mining, haulage, blending, ore control and so
on, can be applied to this numerical model, to test their efficiency
before applying them to the real deposit. Figure 11.6.1 illustrates the
use of conditional simulation to forecast departures from planning in
the mining of a deposit.
Since each conditional simulation provides equally probable,
alternative sets of simulated values of the deposit, all consistent with
the same available information, simulation provides a more complete
look at uncertainty in the estimated values. This uncertainty can

be used in conducting a serious financial risk analysis to weigh the
economic consequences of overestimation and underestimation.

Figure 11.6.1 Use of conditional simulation to forecast departures from planning in the mining of a deposit.
Some of the examples of typical uses of simulated deposits are
as follows:
• Application in grade control to determine dig-lines that are
most likely to maximize the profit or minimize the dollar
loss.
• Comparative studies of various estimation methods and
approaches to mine planning problems.
• Studies of the sampling level and drillhole spacing necessary
for any given objective.
• Studies to investigate the influence of mining machines on
the ore-waste ratios and on the variability in tonnage and
grade of ore produced.
• Analysis of how the size and use of the stockpile affects the
regulation of daily production in terms of both tonnage and
grade.
• Analysis of the fluctuations of the daily mill grades.
• Analysis of the contact dilution at hangingwall or footwall.
• Application for generating models of porosity and
permeability.
• Application in petroleum reservoir production.
• Application to determine the change of support correction
factors.
• Application to blending of stockpiles to stabilize tonnage and
quality of production (grades, hardness, mineralogy).

• Studies to determine the probability of exceeding a
regulatory limit and application in development of emission
control strategy.
• Studies to quantify the variability of impurities or
contaminants in metal or coal delivered to a customer at
different scales and time frames.
• Prediction of recoverable reserves.
It must be understood that the results obtained from simulated
deposits will apply to reality only to the extent to which the
simulated deposit reproduces the essential characteristics of the real
system. Therefore, the more the real deposit is known, the better its
model will be, and the closer the conditional simulation will be to
reality. As the quality of the conditional simulation improves, not
only do the reproduced structures of variability become closer to
those of reality, but so do the qualitative characteristics (geology,
alteration, and so on) that can be introduced into the numerical
model. It must be stressed that simulation cannot replace a good
sampling campaign of the real deposit.

References
Arik, A., 2002a, Area Influence Kriging, Mathematical
Geology, Vol. 34, No. 10 (to be published).
Arik, A., 2002b, “Comparison of Resource Classification
Methodologies With a New Approach,” APCOM
Proceedings, Phoenix, Arizona, pp. 57-64.
Arik, A., 2000, “Performance Analysis of Different Estimation
Methods on Conditionally Simulated Deposits,” SME
Annual Meeting, Salt Lake City, Utah. Preprint 00-
088.
Arik, A., 1999a, “An Alternative Approach to Resource
Classification,” APCOM Proceedings, Colorado
School of Mines, pp. 45-53.
Arik, A., 1999b, “Uncertainty, Confidence Intervals and
Resource Categorization: A Combined Variance
Approach,” Proceedings, ISGSM Symposium, Perth,
Australia
Arik, A., 1998, Nearest Neighbor Kriging: A Solution to
Control the Smoothing of Kriged Estimates. SME
Annual Meeting, Orlando, Florida. Preprint 98‑73.
Arik, A., 1996, “Application of Cokriging to Integrate
Drillhole and Blasthole Data in Ore Reserve
Estimation,” APCOM Proceedings, Computer
Applications in the Mineral Industries, Penn State
University, pp. 107-109.
Arik, A., Banfield, A. F., 1995, “Verification of Computer
Reserve Models,” SME Annual Meeting, Denver,
Colorado. Preprint 95-258.
Arik, A., 1992, “Outlier Restricted Kriging: A New Kriging
Algorithm for Handling of Outlier High Grade Data
in Ore Reserve Estimation,” APCOM Proceedings,
Tucson, Arizona, pp. 181‑188.
Arik, A., 1990, “Effects of Search Parameters on Kriged
Reserve Estimates,” International Journal of Mining
and Geological Engineering, Vol 8, No. 12, pp. 305-
318.
Dagdelen, K., Verly, G., and Coskun B., 1997, “Conditional
Simulation for Recoverable Reserve Estimation,” SME
Annual Meeting, Denver, Colorado. Preprint 97-201.
David, M., 1977, Geostatistical Ore Reserve Estimation,
Elsevier, Amsterdam.
Deutsch, C.V., Journel, A.G., 1992, GSLIB: Geostatistical
Software Library and User’s Guide, Oxford
University Press, New York.
Dowd, P.A., 1982, "Lognormal Kriging, the General Case,"
Mathematical Geology, Vol. 14, No. 5.

Isaaks, E.H., 1996, Geostatistics Training Course Notes.
Isaaks, E.H., Srivastava, R.M., 1989, Applied Geostatistics,
New York, Oxford University Press.
Journel, A.G., Arik, A., 1988, “Dealing with Outlier High
Grade Data in Precious Metals Deposits,” CAMI
Proceedings, Computer Applications in the Mineral
Industry, Balkema, Rotterdam, pp. 161-171.
Journel, A.G., Huijbregts, Ch.J., 1978, Mining Geostatistics,
Academic Press, London.
Kim, Y.C., Knudsen, H.P. and Baafi, E.Y., 1980, Application
of Conditional Simulation to Emission Control
Strategy Development, University of Arizona, Tucson,
Arizona.
Kim, Y.C., Medhi, P.K., Arik, A., 1982, "Investigation of In-Pit
Ore-Waste Selection Procedures Using Conditionally
Simulated Orebodies,” APCOM Proceedings,
Colorado School of Mines, pp. 121‑142.
Parker, H.M., 1980, “The Volume-Variance Relationship:
A Useful Tool for Mine Planning,” Geostatistics
(Mousset-Jones, P.,ed.), McGraw Hill, New York.
Rossi, M.E., Parker, H.M., Roditis, Y.S., 1994, "Evaluation of
Existing Geostatistical Models and New Approaches
in Estimating Recoverable Reserves,” SME Annual
Meeting, Preprint 94-322.
Verly, G.W., Sullivan, J.A., 1985, “Multigaussian and
Probability Kriging, Application to Jerritt Canyon
Deposit,” Mining Engineering, Vol. 37, pp 568-574.



Appendix
International Journal of Mining and Geological Engineering, 1990, 8, 305-318

Effects of search parameters on kriged reserve


estimates
ABDULLAH ARIK
Mining Engineer, Mintec, Inc., Tucson, Arizona, USA

Received 4 March 1990

Summary
Reliable ore reserve estimates for deposits with highly skewed grade distributions are difficult to
obtain. Although some recent geostatistical techniques are available to handle problems with these
estimations, ordinary kriging or conventional interpolation methods are still widely used to estimate
the ore reserves for such deposits. The estimation results can be very sensitive to the search parameters
used during the interpolation of grades with these methods.
This paper compares the ore reserve estimates from ordinary kriging using several cases in which
certain search parameters are varied. The comparisons are extended to different mineralizations to
show the changing effects of these parameters.
Keywords: Geostatistics, kriging, ore reserve estimation.

Introduction

In order to achieve reliable reserve estimates, a good model of a deposit must be built to
represent the deposit as close to reality as possible. The method used in the ore reserve
estimation can be very important in the outcome of good modelling work. The selection
of a method to model an ore deposit depends on geologic considerations, variability and
volume of data, the specific purpose of estimation and the requirements for the accuracy.
This selection may become a difficult task, especially for deposits with highly skewed
grade distributions. Although some recent geostatistical techniques such as indicator or
probability kriging are available to handle the estimation problems in these types of
deposits, ordinary kriging or conventional interpolation methods are still widely used to
calculate the ore reserves for such deposits. The results can be very sensitive to the search
parameters and assumptions used during these interpolations. According to Journel
(1989), 'The definition of the search strategy, next to the prior stationary decision and
equal to the variogram model, is possibly the most consequential step of any
geostatistical study.'
The objective of this paper is to study the effects of varying search parameters on the
estimation results. The method used in this study is the ordinary kriging method;
however, the results should also be applicable to other types of linear interpolation
methods.
Search parameters
In mining practice, one problem is to find the best possible estimator of the mean grade of a
block, using the assay values of the different samples inside or outside the block to be
estimated. Although the estimator itself is the controlling factor in the resultant grade
estimation, the amount and type of data included for a given block also influence the grade of
the block. A common example is given in Fig. 1. The two circles in this figure represent two

Fig. 1. A sample block with two different search strategies in an irregular drill pattern

different search radii for the block at the centre. There are eight holes around the block
numbered from 1 to 8 based on their distances to the block centre. Four of the holes are
within the inner circle, the other four are between the two circles. The grade of the sample at
each hole location is also shown in this diagram. Based on this grade distribution around the
block, it does not make any significant difference on the estimated grade of the block whether
one chooses to use the small or large search radius. However, there are certain cases where
there will be a significant difference between the selection of the small and the large search
distance. For example, if the sample at hole no. 5 had a grade 10-20 times more than the
0.055 value it currently has, then the estimated block-grade using the large search radius
would be much higher than the estimated grade from using the smaller search radius. The
proper handling of such outlier high grade is crucial to ore reserve estimation. Since ordinary
kriging does not consider the grades when assigning weights to the samples, most people with
such problems resort to manipulating the data or the search distances for the blocks around
the very high mineralizations to compensate for the deficiencies of the technique.
Besides the effect of grade distribution around a block, there is also another common case
where search strategy will make significant differences in the reserves. This usually happens
when there is irregular drill spacing with some areas having in-fill holes and some areas not.
For example, referring back to Fig. 1, if we did not have the holes inside the inner circle, using
the larger search radius could mean adding a block of ore to the reserves. The same problems
exist also around the periphery of the drilling campaign, especially if the holes have ore
intersections.
In this paper, the term 'search parameter' will be applied not only to the search distances
but also to any parameter that controls the amount, type and distribution of data included
for a given block. Table 1 gives a list of such parameters most commonly used in reserve
estimations.

Table 1. A list of “search parameters” that control the amount and type of data used to
interpolate the blocks
_____________________________________________________________________
Search distances in X, Y and Z directions
2D or 3D search radius
Elliptical or ellipsoidal search distances
Dip and strike angles for the search plane
Search distances down dip, along strike or vertical to the search plane
Octant or quadrant search parameters
Minimum number of samples allowed
Maximum number of samples allowed
Maximum number of samples allowed from an individual hole
Minimum grade value
Maximum grade value (cut down or cut out)
Maximum distance allowed to the nearest sample value
Maximum extension distance allowed if there is only a single sample
Maximum extension distance allowed for grades higher than a specified value
Geologic codes used to restrict or match similar rock types or mineralizations
_____________________________________________________________________
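As a hypothetical illustration (not from the paper), such a set of search parameters might be collected into a single configuration object so that the cases studied below differ in exactly one field at a time:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SearchParameters:
    """Hypothetical grouping of the search parameters listed in Table 1."""
    radius_x: float                      # search distance in X
    radius_y: float                      # search distance in Y
    radius_z: float                      # search distance in Z (vertical search)
    min_samples: int = 3                 # minimum number of samples allowed
    max_samples: int = 20                # maximum number of samples allowed
    max_per_hole: Optional[int] = None   # cap on samples from an individual hole
    octant_search: bool = False          # balanced directional sampling on/off
    max_per_octant: Optional[int] = None
    max_grade: Optional[float] = None    # cut down or cut out outlier grades
    max_nearest_distance: Optional[float] = None  # farthest allowed nearest sample

# Two cases differing only in horizontal search radius, in the spirit of the study:
base = SearchParameters(radius_x=60.0, radius_y=60.0, radius_z=15.0)
wide = SearchParameters(radius_x=120.0, radius_y=120.0, radius_z=15.0)
```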

Study approach
The approach to this study is to take one bench with blasthole data from an operating mine.
The blasthole locations are then kriged using the exploration drillhole data. Several runs are
made, each time selectively changing one of the search parameters, including the variogram
parameters. Finally, the estimation results are compared with the blasthole grades to see the
effect of varying interpolation parameters on the ore reserves and grades at a specified cutoff
grade.
The study is carried out using two data sets, each from a different mine. Both data sets have
skewed grade distributions, however, one data set is twice as variable as the other. This is to
compare the varying effects of the search parameters on the reserve estimates in different
types of mineralizations.

Case studies on deposit 1


One of the deposits used for this study is a molybdenum deposit. There are 2082 blastholes on
the selected bench. The number of exploration holes intersecting this bench and within 61m
(200 ft) of the blastholes is 110. The bench height is 15.2 m (50 ft). The total number of bench
composites from exploration holes is 513, which includes two benches both above and below
the selected bench. The drillholes are irregularly spaced with spacing varying mostly between
15 to 61 m (50 to 200 ft). The blasthole density is about 8 x 8 m (26 x 26 ft). Fig. 2 shows the
location of the exploration holes and the outer boundary of the blastholes.

Fig. 2. The location of the drillholes and the boundary for the blastholes - Deposit 1
(Molybdenum) (1 ft =0.3048 m)

The coefficient of variation of total molybdenum grades in this deposit is around 1. Figs 3
and 4 give the statistics and the frequency distributions of the blasthole and the drillhole
grades, respectively. Fig. 5 shows a scatter graph of blasthole grades vs. drillhole grades. The
drillholes selected from this scatter graph are within 7.6 m (25 ft) of the blastholes on the
selected bench and the benches above and below it. The statistics and the scatter graph '
results indicate good correlation and no apparent bias between the blasthole and the
drillhole grades.
Drillhole variograms were developed for the intrusives and meta-sediments, the two major
rock types that contain mineralization in this deposit. Blasthole variograms were also
developed to be used in the study.
Using the variogram models developed and the drillhole data, the blasthole locations on
the selected bench are interpolated with ordinary kriging in several cases. In each case, a
search parameter is selectively changed from the previous one. The effect of this change on

Fig. 3. Blasthole data statistics and frequency distribution - Deposit 1 (Molybdenum)

Fig. 4. Drillhole data statistics and frequency distribution - Deposit 1 (Molybdenum)

the ore reserve estimates at 0.02 cutoff grade is then investigated by comparing the estimates
to the blasthole grades. The results of these case studies are given in Table 2. The numbers in
the parentheses in this table represent the percentage difference of the results in each case
from the blasthole grades.

Fig. 5. Blasthole vs. drillhole grades within 25' of distance on three benches - Deposit 1
(Molybdenum)

Case studies on deposit 2


The other deposit used for this study is a gold deposit. There are 1933 blastholes on the
selected bench. The bench height is 6.1m (20ft). The number of exploration holes
intersecting this bench and within 38.1 m (125 ft) of the blastholes is 170. Since most holes in
this deposit are inclined holes, the assays are composited to 6.1 m (20 ft) lengths down the
hole, rather than compositing to the bench height. The number of these composites is 533,
which includes those within 12.2 m (40 ft) above and below the selected bench. The drillholes
are irregularly spaced with spacing varying mostly between 7.6 to 46 m (25 to 150 ft). The
blasthole density is about 4.3 x 4.3 m (14 x 14 ft). Fig. 6 shows the location of the exploration
holes and the outer boundary of the blastholes.
The coefficient of variation of gold grades in this deposit is around 2. Figs 7 and 8 give the
statistics and the frequency distributions of the blasthole and the drillhole grades,
respectively. Fig. 9 shows a scatter graph of blasthole grades vs. drillhole grades. The
drillholes selected from this scatter graph are within 4.6 m (15 ft) of the blastholes on the
selected bench and the benches above and below it. The statistics and the scatter graph
results indicate poor correlation but no apparent bias between the blasthole and the drillhole
grades.

Fig. 6. The location of the drillholes and the boundary for the blastholes - Deposit 2
(Gold)(lft=0.3048m)

Fig. 7. Blasthole data statistics and frequency distribution - Deposit 2 (Gold)



Fig. 8. Drillhole data statistics and frequency distribution - Deposit 2 (Gold)

A variogram using the drillhole data was developed for the case study, without geologic
separation or rock types, since such information was not available.
Using the variogram model developed and the drillhole data, the blasthole locations on
the selected bench are interpolated with ordinary kriging in several cases. In each case, a
search parameter is selectively changed from the previous one, following a similar format
tried in the molybdenum deposit. The effect of this change on the ore reserve estimates at 0.02
cutoff grade is again investigated by comparing the estimates to the blasthole grades. The
results of these case studies are given in Table 3.

Review of the results

There are six columns of information in Tables 2 and 3. Some of the column headings are self-
explanatory; however, the others may need some explanation. The column 'No. of values @
0.02 cutoff' gives the number of samples that are equal to or greater than the 0.02 grade value for
each case. The columns 'Mean grade' and 'Std. dev.' give the average grade and the standard
deviation of the estimated grades at this cutoff, respectively. The column 'Metal quantity'
gives the product of the number of samples and the average grade. Finally, the column 'Corr.
coef.' gives the correlation coefficient between the drillhole grade estimates and the blasthole
grades based on the least square regression.
The numbers in the parentheses in these tables represent the percentage difference of the
results in each case from the blastholes. For example, referring to Table 3, the number of
blasthole grades at 0.02 cutoff is 1151 with an average grade of 0.065. In Case 1, these
numbers are estimated to be 1294 and 0.059, respectively. Hence, the amount of ore is
overestimated by 12%, and the average grade is underestimated by 10%. Using these
percentages, the performance of each case relative to the blasthole grades as well as each
other can be reviewed.

Fig. 9. Blasthole vs. drillhole grades within 15' of distance on three benches - Deposit 2
(Gold)

Horizontal search distance


The maximum distance to include samples for a given block is normally based on the range of
the variogram for the deposit. When this range is clearly defined and the drillhole spacing is
somewhat regular, there is virtually no difficulty in deciding on the search distance to use.
However, if the variogram has a nested structure with two different ranges, or worse if there is
no decent variogram obtainable, then obviously it becomes difficult to decide on the
appropriate search radius. This decision can be more difficult and even crucial on the ore
reserve estimates if the drilling density is irregular, which is not uncommon.
The results of Cases 1 and 2 from both the molybdenum and gold deposits indicate that
when the horizontal distance is increased well beyond the average drillhole density, there is
more smoothing in the estimation results, i.e. higher tonnage at lower grade. However, the
major effect of increased distance, which is not studied here, is that it inflates the ore
reserves by allowing interpolation of blocks outside the drilled areas.
A good horizontal search strategy should include at least a ring of drillholes with enough
samples around the blocks to be estimated. However, it should also have provisions for not
extending the grades of the peripheral holes to the un-drilled areas.
Vertical search distance
In the molybdenum deposit, increasing the vertical search distance caused smoothing in the
reserve estimates as shown by comparing Cases 2 to 4 in Table 2. In the gold deposit,
increasing the search distance by one bench height improved the estimation results.
However, a further increase of vertical search by another bench had a very negative effect.
Although the grade estimate was good, the tonnage estimate was 16% more than the
blastholes. The metal content is thus over-estimated by 14% as shown by comparing Cases 2
and 4 in Table 3.
Since most drilling is vertically oriented, increasing the vertical search distance has more
impact on the number of samples available for a given block, than increasing the horizontal
search distance. For example, if the vertical search is increased by one bench, the maximum
number of samples available from each hole increases from 1 to 3. Thus, if there are 8 holes
around a block, then as many as 24 samples could be available for the block.
If the vertical search is considerable for a given block, then there might be a problem of
having a large portion of the samples for the block coming from the nearest hole, thus
carrying the most weight. This may cause excessive smoothing as well as bias in the reserve
estimates. If the circumstances warrant a large vertical search, then one solution could be to
limit the number of samples used from each individual drillhole.
Number of samples used
The number of samples used to estimate the grade of a block is one of the important 'search
parameters'. Too few samples for the block may cause loss of information as well as increased
kriging variance. Too many samples, on the other hand, will waste computing time. It may
also cause excessive smoothing depending on the variogram parameters used. The
appropriate number of samples should be based on the number of holes available around a
block as well as the number of samples contributed by each hole. The rule of thumb is to use
at least one ring of holes around the block. Therefore, if the drilling pattern is such that there
are five holes around a block in most cases, then the minimum number of samples to be used
should be five if there is no vertical search. This number should be increased to 15 if the
vertical search is increased by one bench.
In the molybdenum deposit, decreasing the number of samples per block from 20 to 5 did
not make a significant difference in the ore reserve estimates as shown by comparing Cases 2
and 5 of Table 2. However, this statement will be misleading without an explanation.
Although Case 2 had the maximum number of samples set to 20, because of no vertical
search, the number of samples for a given block never reached 20. As a matter of fact, the
average number of samples per block for Case 2 was around 8. This explains why there is little
difference between the results from the two cases.
In the gold deposit, decreasing the number of composites from 20 to 10 caused under-
estimation of ore tons and over-estimation of the grade as shown by comparing Cases 3 and
5 of Table 3. Basically, the results from Case 5 are exactly opposite of the results from Case 3.
It is clear that 10 is not an adequate number of samples for the selected search strategy in
Case 3.
Octant search
It is common, especially in precious metals deposits, to have denser drilling in highly
mineralized areas of the deposit. When this happens, it might be necessary to have a balanced
representation of the samples in all directions in space, rather than taking the nearest N
samples for the blocks to be interpolated. This can be achieved by either declustering the
samples before the interpolation or by a simple octant or quadrant search in which the
number of samples in each octant or quadrant is limited to a specified number during the
interpolation.
In both the molybdenum and gold deposits, the octant search did not significantly affect the estimation results, as shown in Case 6 of Tables 2 and 3. In fact, the quantities of metal from the cases with an octant search in both deposits were 2 to 4% lower than the cases without an octant search.
While some interpolation methods may require an octant search or declustering in order
to avoid the misrepresentation of the data around the blocks, this is not necessary for
ordinary kriging because of the well-known 'screening effect' of the algorithm. When a cluster of samples is present during the interpolation of a block in kriging, the covariance between the samples automatically splits the influence among the clustered samples, avoiding over-representation of the cluster.
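As a rough sketch of the octant search mechanics described above, the snippet below bins samples into eight 45-degree sectors around the block in plan view and keeps only the nearest few per sector. The data layout and function name are assumptions for illustration, not the routine of any particular package.

    # A minimal plan-view sketch of an octant search (assumed (x, y, grade) data).
    import math

    def octant_search(block_xy, samples, max_per_octant=2):
        """samples: list of (x, y, grade); returns the retained subset."""
        bx, by = block_xy
        octants = {k: [] for k in range(8)}
        for x, y, grade in samples:
            dx, dy = x - bx, y - by
            dist = math.hypot(dx, dy)
            k = int(math.degrees(math.atan2(dy, dx)) % 360 // 45)  # sector index 0-7
            octants[k].append((dist, x, y, grade))
        keep = []
        for members in octants.values():
            members.sort()                    # nearest first within each octant
            keep.extend(m[1:] for m in members[:max_per_octant])
        return keep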

Anisotropy and elliptical search


The continuity of mineralization may not be the same in all directions within a deposit. This
can be investigated by calculating the variograms at different directions and determining the
range of influence of the samples in these directions. However, anisotropic mineralization is not always obvious from these variograms. For example, in
the molybdenum deposit, the drillhole variograms in both rock types 1 and 2 did not reveal
any anisotropic mineralization. However, the blasthole variograms clearly indicated an
anisotropy with major axis in the N90E direction. When this anisotropy is included in the
variograms during the interpolation, the estimation results are improved as shown by
comparing Cases 2 and 9 of Table 2.
Similarly, the anisotropy was not very clear in the gold deposit with the drillhole
variograms. However, the blasthole variograms indicated an anisotropy with major axis in
the N90E direction. When the anisotropic variogram model is used in the interpolation, the
estimation results again improved over the isotropic case as shown by comparing Cases 3
and 9 in Table 3.
Adding an elliptical search to the interpolation when an anisotropy exists produced mixed results. The quantity of metal was under-estimated by 2% in the molybdenum deposit versus the case without an elliptical search. In the gold deposit, on the contrary, the quantity of metal was over-estimated by 2% when the elliptical search was added.
Since the kriging algorithm has the ability to account for anisotropic mineralization, it is not necessary to use an elliptical search during the interpolation. However, for some other interpolation methods, such as inverse distance weighting, this type of search can be useful.
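For such methods, the anisotropy is usually imposed through an anisotropic ("statistical") distance. The sketch below is one generic way to compute it in plan view; the azimuth convention (clockwise from north) and the function name are assumptions for illustration.

    # A minimal sketch of an anisotropic distance: coordinates are rotated to the
    # major axis and scaled by the search ranges, so a value <= 1 is inside the ellipse.
    import math

    def anisotropic_distance(dx, dy, major_range, minor_range, azimuth_deg):
        a = math.radians(azimuth_deg)            # azimuth assumed clockwise from north
        u = dx * math.sin(a) + dy * math.cos(a)  # separation along the major axis
        v = dx * math.cos(a) - dy * math.sin(a)  # separation along the minor axis
        return math.hypot(u / major_range, v / minor_range)

    # e.g. with a N90E major axis, a point 80 m east scores the same as one 40 m north
    print(anisotropic_distance(80, 0, 100, 50, 90))   # 0.8
    print(anisotropic_distance(0, 40, 100, 50, 90))   # 0.8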

Other parameters
There are several other search parameters, such as those listed in Table 1, that will have some
effect on the estimation results of a grade interpolation. The most important thing in deciding
on a proper search strategy is to consider all the parameters and their effects on the
interpolation as a whole. Sometimes, the effect that one search parameter will have on the
estimation results is cancelled by another parameter that is used in the same run.
Conclusions

The selection of appropriate search parameters to be used in grade interpolation of a deposit


is an important part of the ore reserve estimation process. This selection is especially crucial if
the deposit falls into one or more of the following categories:
(1) Deposits with irregular drill density.
(2) Deposits without adequate drilling.
(3) Deposits with highly skewed grade distributions.
(4) Deposits which lack proper geologic modelling.
The deposits that fall into the first and second categories can be handled by using different
search parameters at different parts of the deposit. In other words, the search parameters are
adjusted based on the amount of data and drill density in each area of the deposit.
The deposits that fall into the third category can be handled by limiting the grade
interpolations to specific mineralizations that are delineated by geologic modelling of the
deposit. This way, over-extension of high grade mineralizations into predominantly low or
median grade mineralizations can be prevented.
The deposits that fall into both the third and the last categories will have severe problems
with ore reserve estimations unless a probabilistic method, such as an indicator or probability kriging algorithm, is used for the reserve estimation. If ordinary kriging or a conventional
linear estimation method is used because of the time and/or resource constraints, then the
search parameters should be carefully selected and some adjustments to the extension of high
grade mineralizations should be provided in the interpolation process. Otherwise, the
estimated tonnage and grade values for the deposit can be extremely erroneous.

References

Buxton, B.E. (1984) Estimation variance of global recoverable reserve estimates, in Proceedings of NATO, Lake Tahoe, Nevada.
David, M. (1977) Geostatistical Ore Reserve Estimation, Elsevier, New York.
David, M. (1988) Dilution and geostatistics, CIM Bulletin 81 (914), June, pp. 29-35.
Journel, A.G. (1989) A democratic research impetus, NACOG Geostatistics Newsletter 3 (3), Summer,
p. 5.
Journel, A.G. and Arik, A. (1988) Dealing with outlier high grade data in precious metals deposits,
Computer Applications in the Mining Industry, Balkema, Rotterdam, pp. 161-71.
Journel, A.G. and Huijbregts, Ch. J. (1978) Mining Geostatistics, Academic Press, London.
Kim, Y.C. (1988) Advanced geostatistics for highly skewed data, (short course notes) Department of
Mining and Geological Engineering, University of Arizona.
Knudsen, H.P., Kim, Y.C. and Mueller, E. (1978) Comparative study of the geostatistical ore reserve
estimation method over the conventional methods, Mining Engineering, January, 30(1), pp. 54-8.

Nearest Neighbor Kriging:


A Solution to Control the Smoothing of Kriged Estimates

ABDULLAH ARIK

MINTEC, INC.

SME PREPRINT 98-73



Abstract. This paper presents a new estimation method referred to as the "Nearest Neighbor Kriging" (NNK). The method is a simple and practical tool to use when ordinary kriging (OK) is no longer reasonable because of its excessive smoothing of grades, especially for estimating recoverable ore reserves for deposits with highly skewed grade distributions. NNK is robust and globally unbiased like OK. However, it is more advantageous than OK because it can control the smoothing and conditional bias which have been the major pitfalls of OK in many grade estimation cases. The paper details the methodology of NNK and its application to a gold deposit.

Introduction

An ore reserve block model is the basis for various decisions in mine development and production. Accurate prediction of the mining reserves is essential for a successful mining operation. Estimating recoverable ore reserves for deposits with highly skewed grade distributions is especially a challenge to the mining engineers and geologists. They require a considerable amount of work to make sure that representative block grade distributions can be obtained to estimate the reserves with certain confidence. There are many advanced geostatistical methods available to tackle some of the problems associated with grade estimates in these types of deposits. However, the ordinary kriging or traditional inverse distance weighting methods are still widely used to estimate the ore reserves for such deposits. One reason for this is the fact that the alternative methods offered are either too complex or too exhaustive for the practitioners, who often do not have the expertise or the time to apply these methods.

The objective of this paper is to present a new estimation method which will be referred to as "the Nearest Neighbor Kriging" (NNK). The paper will review some of the interpolation procedures and the potential problems in reserve estimations with linear estimation techniques, such as ordinary kriging, in deposits with skewed grade distributions. This discussion will be followed by the methodology of NNK. Finally, its application to a gold deposit, and a comparison of grade-tonnage curves with other methods, will be presented.

Linear Estimation Techniques

All available techniques involve some weighting of neighboring measurements to estimate the value of the variable at an interpolation point or block. Among the traditional techniques are the polygonal or nearest neighbor method, inverse distance weighting, and triangulation methods. Among the geostatistical techniques, the most popular one is ordinary kriging. To ensure unbiasedness, all these methods require that the weights assigned to the sample data add up to one:

Σ wi = 1,   i = 1, n

Then for all methods, the estimate is a weighted linear combination of the sample data z:

estimate = Σ wi zi,   i = 1, n

Having arrived at some weighting scheme depending on the method, the decision remains as to what criteria to use in order to identify data points that should contribute to the interpolation. This is the selection of a search strategy that is appropriate for the method used. Here, considerable divergence exists in practice, involving the use of fixed numbers, observations within a specified radius, quadrant and octant searches, elliptical or ellipsoidal searches with anisotropic data, and so on. Since varying these parameters may affect the outcome of the estimation considerably, the definition of the search strategy is therefore one of the most consequential steps of any estimation procedure (Arik, 1990; Journel, 1989).

Smoothing and Its Impact on Reserves

The search strategy can play a significant role in the smoothness of the estimates. The degree of smoothing depends on several factors such as the size and orientation of the local search neighborhood, and the minimum and maximum number of samples used for a given interpolation. Of all the methods, the nearest neighbor method does not introduce any smoothing to the estimates since it assigns all the weight to the nearest sample value. For the inverse distance weighting method, increasing the inverse power used decreases the smoothing, because as the distance power is increased the estimate approaches that of the nearest neighbor method. For ordinary kriging, the variogram parameters used, especially an increase in the nugget effect, contribute to the degree of smoothing.

The immediate effect of smoothing caused by any interpolation method is that the estimated grade and tonnage of ore above a given cutoff are biased with respect to reality. As the degree of smoothing increases, the average grade above cutoff usually decreases. Also, with increased smoothing, the ore tonnage usually increases for cutoffs below the mean, and decreases for cutoffs above the mean.

Volume-variance Relationship

Mining takes place on a much larger volume than those of the data points. Therefore, certain smoothing of grades is expected. This is in accordance with the volume-variance relationship, which implies that as the volume of blocks increases, their variance decreases. However, mining also takes place on a smaller volume than those of the 3-D model blocks that are based on exploration data spacing. Therefore, the variance of these 3-D model blocks is lower than what would normally be observed during the mining of selective mining unit (SMU) blocks.

Realistic recoverable reserve figures can be obtained if we determine the grade-tonnage curves corresponding to the SMU distribution. Since the actual distribution will not be known until after mining, a theoretical or hypothetical one has to be developed and used. The application of this procedure can minimize the bias on the estimated proportion and grade of ore above cutoff (Rossi et al, 1994).

How to Deal With Smoothing

There are a few ways to achieve this objective. One possible solution to obtain better recoveries is to correct for the smoothness of the estimated grades. This can be done by support correction. There are methods available for this, such as the affine correction or the indirect lognormal correction (Isaaks and Srivastava, 1989).

Similar or better results for recoverable reserves can be obtained by conditional simulation. A fine grid of simulated values at the sample level is blocked according to the required SMU size. This procedure is very simple but also assumes perfect selection (Dagdelen et al, 1997).

The use of higher distance powers in the traditional inverse distance weighting method is an attempt to reduce the smoothing of block grades during the interpolation of deposits with skewed grade distributions. On the geostatistical side, there are methods, such as lognormal kriging, the lognormal short cut, outlier restricted kriging and several others, which have been developed to get around the problems associated with the smoothing of ordinary kriging (David, 1977; Dowd, 1982; Arik, 1996). There are also advanced geostatistical methods such as indicator or probability kriging which take into account the SMU size to calculate the recoverable reserves (Verly and Sullivan, 1985; Journel and Arik, 1988; Deutsch and Journel, 1992). Each of these methods provides the practitioners with a variety of tools from which they can select, and apply where they see it appropriate, since each method has its own advantages as well as shortcomings.

How Much Smoothing is Reasonable?

If we are using a linear estimation technique to interpolate the block grades, what would be the appropriate degree of smoothing which would result in "correct" grade and tonnage above a given cutoff when applied to our estimates? For one thing, if we know the SMU size to be applied during mining, we can determine the theoretical or hypothetical distribution of SMU grades for our deposit. Once we know this distribution or the grade-tonnage curves of SMU's, we can vary our search strategy and interpolation parameters until we get close to these curves. The disadvantage of this procedure is that one may end up using a small number of samples per neighborhood of interpolation. The lack of information may cause local biases. However, when we are trying to determine the global and minable resources at the exploration stage, we are not usually interested in the local neighborhood. Rather, we are after annual production schedules and mine plans (Parker, 1980).

Nearest Neighbor Kriging

Ordinary kriging (OK) is not designed for grade estimation in highly variable deposits. But why do practitioners keep using this method when it is not suited for their case? This is mainly because OK is robust, practical, and much easier to use and understand than some advanced methods offered as an alternative. Therefore, people go to great lengths to justify the usage of OK because they are comfortable with it and its results. Whether they lack the time, resources or the expertise to use another "better" suited method is, of course, debatable. Nonetheless, it does not change the fact.

The nearest neighbor kriging (NNK), as the author calls it, is a new method which combines the strengths of the nearest neighbor, inverse distance weighting and ordinary kriging methods. It is a method where the value of the nearest neighbor sample is emphasized in determining the value of the blocks. This emphasis is directly proportional to the variability of the deposit. The method is essentially no more than a modified OK algorithm in which the OK weight assigned to the nearest neighbor sample is increased by a certain proportion while the weights of the other samples in the neighborhood are lowered in the same proportion. Therefore, the sum of the resulting NNK weights is preserved to be one, to satisfy the unbiasedness condition.

The proportion used in adjusting the OK weights to determine the NNK weights will be referred to as the smoothing correction factor. A reasonable value to use for this factor is
the ratio of the SMU block variance to the sample variance, which is no other than the variance correction factor. The SMU block variance can be obtained simply using the well-known Krige's relation:

σ²p = σ²b + σ²p∈b

This is the spatial complement to the partitioning of variances, which simply says that the variance of point values is equal to the variance of block values plus the variance of points within blocks. σ²p is calculated directly from the composite data, whereas σ²p∈b is calculated by integrating the variogram over the block b. Once those two values are known, then σ²b can be obtained easily:

σ²b = σ²p − σ²p∈b

Most kriging programs automatically calculate the values for σ²b or σ²p∈b, sometimes both. We can then use the ratio of the block variance to the point variance to determine the correction factor:

f = σ²b / σ²p

The factor f has a value between 0 and 1. The NNK weights are then obtained by adjusting the OK weights (wt_ok) as follows:

Weight of the nearest sample = wt_ok + (1 − wt_ok) × f
Weights of all other samples = wt_ok × (1 − f)

Thus, the NNK estimate is equal to the OK estimate at the lower end when f = 0, and is equal to the polygonal estimate at the extreme end when f = 1.

An Example

Let us take the hypothetical case shown in Figure 1. For simplicity, there are only five data points for the sample block to be interpolated. Using a spherical variogram model with an isotropic range of 100, sill of 1.0, and nugget of 0.2, both OK and NNK runs were made. In the NNK run, a correction factor f equal to 0.5 was used. Table 1 summarizes the results obtained. As one can see from this table, the OK estimate is much lower than the NNK estimate.

In this example, a high grade sample was the nearest neighbor. Since OK weights are sample independent, we get the same weights even if we change the data values at the given points. So, since the NNK weights emphasize the nearest data value more, if the nearest value was a low grade, the resulting NNK block grade would have reflected that value more. This minimizes over-estimating of low grade areas, and under-estimating of high grade areas, that result from smooth estimates in highly variable deposits.

One may argue that the weight assigned to the nearest sample can be increased by decreasing the nugget of the variogram, or simply by decreasing the number of samples used. That is correct. However, there would be cases where that would not work as effectively as NNK. Furthermore, the loss of information from using fewer samples can also be questioned. For example, if we used a zero nugget effect for the variogram in our example with five samples, the interpolated grade from OK would increase from .2207 to .2535. Alternatively, if we used only the three closest samples, we would get .2678 as our OK estimate. By going to the extreme, we could have used only two samples and no nugget effect; then we would get .3281. This is now closer to the NNK estimate. However, all that came at the expense of losing important information from other samples.

Many different scenarios are of course likely for any given block. For example, what happens if all the samples are the same distance from the block center? In that case, the estimate defaults to the OK grade unless the statistical distances for the samples are different. In other words, the nearest sample should then be based on the statistical (or anisotropic) distance, not on the true distances.

Table 1. The weights assigned to five data points from OK and NNK methods, and the resulting estimated grades for the sample block.

No   Grade   Distance   OK Wts   NNK Wts
1    .461    12.0       .307     .653
2    .095    19.0       .221     .111
3    .133    20.0       .200     .100
4    .119    22.4       .146     .073
5    .114    23.3       .126     .063
Estimated Grade         .221     .341

Like any method, the NNK results can be improved through the selection of a proper search strategy and kriging plan. But, as an additional advantage, this method utilizes a smoothing correction factor in adjusting the weights in order to give realistic reserve estimates which better reflect the variability of block grades expected at the time of mining.

Application to a Gold Deposit

An NNK exercise was carried out in a structurally controlled gold deposit and the results were compared to those from OK, IDW and the polygonal or nearest neighbor method.

The gold mineralization in this deposit occurs in a specific geologic unit which is striking north and dipping about 50 degrees west. The mineralization zone is therefore distinct. This zone is identified and digitized on each section of the model. However, there are waste and low grade bands within this zone which occur right within the high grade areas. These bands are too hard to map to be separated from the high grades.

The area studied is a mined-out portion of the mine, about 20,000 square meters in plan, on 10 benches. The bench height is 5 m. The block size is 5 x 10 m., corresponding to the easting and the northing of the model. There were 233 five-meter composites intersecting these benches in the study area. The average grade of these composites was 4.145 grams with a coefficient of variation of 1.044. The kriging plan that was used for the entire deposit had the following parameters.

Variogram Parameters:
Spherical model, nugget = 4, sill = 9.5
Ranges: 100 m. down dip (50 degrees west),
        75 m. along strike,
        8 m. up/down the dip/strike plane.

Search Parameters:
Ellipsoidal search as defined by the variogram.
Minimum number of composites: 2
Maximum number of composites: 12

The blast holes in the area have spacings of 4-5 m. The first thing done in the study was to average the values of those blast holes that fall into the block model blocks. By using a minimum of two holes, the blocks containing the blast hole averages were identified and flagged. There were a total of 5500 blast hole blocks. Since the block size was small enough, it was also considered to be the size of the SMU for simplicity.

Block Model and Reconciliation Results

The entire deposit was already interpolated and the grades were assigned to 3-D blocks using OK, inverse distance weighting of power 3 (ID3), and the polygonal method (POLY). In addition to these methods, NNK grade assignment was done using the same kriging plan as for OK. The smoothing correction factor used was 0.5, applied to the nearest sample only. Table 2 gives the gold grade statistics of all 5500 mined-out blocks at zero cutoff from the different interpolation methods. The coefficient of variation (standard deviation divided by the mean) of the OK block values is the lowest among the methods used. Compared to the SMU's (or the blast hole averages), the variance of the OK blocks is 28% lower. On the other hand, the variance of the NNK blocks is comparable to that of the SMU's.

Figure 2 shows the grade-tonnage curves from POLY, OK, NNK and SMU (or the blast hole averages). ID3 curves were not plotted to maintain the clarity of the graph. However, they plot somewhere between the OK and NNK curves. Table 3 summarizes the grade and tonnage recoveries at a 1-gram gold cutoff, and Table 4 at a 3-gram cutoff, from all the different methods applied.

Table 2. Gold grade statistics of the 5500 mined-out blocks at zero cutoff.

Type   Average Grade   Standard Deviation   C.V. (s/m)   Max Grade
SMU    4.171           2.753                .66          32.09
POLY   4.180           4.014                .96          34.32
ID3    4.181           2.428                .58          32.52
OK     4.182           2.303                .55          14.42
NNK    4.173           2.788                .67          24.37

Table 3. The grade and tonnage recoveries at 1-gram gold cutoff.

Type   % Above   Average Grade   C.V. (s/m)   Grams Above
SMU    96.1      4.277           .64          411.02
POLY   87.9      4.689           .86          412.16
ID3    97.0      4.300           .60          417.10
OK     99.0      4.179           .46          413.72
NNK    96.6      4.284           .64          413.83

Figure 2. Grade-tonnage curves from different methods

Table 4. The grade and tonnage recoveries at 3-gram gold cutoff.

Type   % Above   Average Grade   C.V. (s/m)   Grams Above
SMU    59.7      5.608           .48          334.80
POLY   51.0      6.599           .66          336.55
ID3    65.6      5.293           .45          347.22
OK     69.6      4.996           .35          347.72
NNK    60.9      5.514           .51          335.80

In order to put these results into perspective, another table was prepared showing the percent differences of these methods on grade, tonnage and metal recoveries as compared to the SMU base case. These are summarized in Table 5, using the results at the 3-gram cutoff in Table 4. The results of this study indicate that the NNK recoveries were the best to reconcile with those of the SMU's in all three categories, namely grade, tonnage and metal content.

Table 5. The comparison of grade, tonnage and metal recoveries at 3-gram gold cutoff from the different methods, using the SMU values as the base case.

Type   % Tonnage Difference   % Grade Difference   % Metal Difference
SMU    —                      —                    —
POLY   -14.57                 +17.67               +0.52
ID3    +9.88                  -5.62                +3.71
OK     +16.58                 -10.91               +3.86
NNK    +2.01                  -1.68                +0.30

Conclusions

NNK is a new interpolation method modified from the OK algorithm. It is practical, robust and globally unbiased like OK. It is designed to help solve the smoothing problem of OK encountered in resource estimation of highly variable deposits. The method is another tool for the practitioners who may find it useful in predicting more realistic recoverable reserves from their models developed at different stages of mining.

NNK is not a method to replace the advanced geostatistical methods. Rather, it should be looked at as an alternative method to try in those mines which use more traditional techniques such as the polygonal, IDW or OK methods. Having a smoothing correction factor which can be varied from one area or one geologic unit to another, like the powers of IDW, NNK can offer the flexibility and practicality that practitioners are looking for in achieving their objectives.

Each deposit is unique and therefore the application and performance of any method may differ from one to another. By adopting the strong points of the polygonal, IDW and OK techniques, NNK has turned out to be a good performer in the deposit studied. Since it is only a few lines of coding change in the OK algorithm, the author hopes that some practitioners will find easy access to the method, apply it to their problems and compare the results with those from their current methods in order to find out more about its performance.

References

Arik, A., 1990, "Effects of Search Parameters on Kriged Reserve Estimates," International Journal of Mining and Geological Engineering, Vol. 8, No. 12, pp. 305-318.

Dagdelen, K., Verly, G., Coskun, B., 1997, Conditional Simulation for Recoverable Reserve Estimation, SME Annual Meeting, Preprint #97-201.

David, M., 1977, Geostatistical Ore Reserve Estimation, Elsevier, Amsterdam.

Deutsch, C.V., Journel, A.G., 1992, GSLIB: Geostatistical Software Library and User's Guide, Oxford University Press, New York.

Dowd, P.A., 1982, Lognormal Kriging, The General Case, Mathematical Geology, Vol. 14, No. 5.

Isaaks, E.H., Srivastava, R.M., 1989, Applied Geostatistics, New York, Oxford University Press.

Journel, A.G., Arik, A., 1988, "Dealing with Outlier High Grade Data in Precious Metals Deposits," Computer Applications in the Mining Industry, Balkema, Rotterdam, pp. 161-171.

Parker, H.M., 1980, The Volume-Variance Relationship: A Useful Tool for Mine Planning, Geostatistics (Mousset-Jones, P., ed.), McGraw Hill, New York.

Rossi, M.E., Parker, H.M., Roditis, Y.S., 1994, Evaluation of Existing Geostatistical Models and New Approaches in Estimating Recoverable Reserves, SME Annual Meeting, Preprint #94-322.

Verly, G.W., Sullivan, J.A., 1985, Multigaussian and Probability Kriging, Application to Jerritt Canyon Deposit, Mining Engineering, Vol. 37, pp. 568-574.
XXVI APCOM, September 16-20, 1996, University Park, Pennsylvania (Chapter 17)

APPLICATION OF COKRIGING TO INTEGRATE DRILLHOLE AND


BLASTHOLE DATA IN ORE RESERVE ESTIMATION

ABDULLAH ARIK
MINTEC, INC., 2590 N. Alvernon Way, Tucson, AZ 85712

ABSTRACT

An ore reserve block model is the basis for various decisions in mine development and production. The block model is initially built using the information from the drillhole data only. Once the mine is put in operation, the information from the blasthole data becomes available. Since the spacing of the blastholes is much closer than that of the exploration drillholes, the blasthole data gathered from the mine over the years become quite extensive, containing valuable information about the deposit.

In practice, the information from the blastholes is the basis of determining the ore and waste boundaries between and within mining blocks. In an ore reserve model, the valuation of blocks on and close to the pit face can obviously be improved by using blasthole data from nearby drilled and mined-out blocks in addition to using the widely spaced drillhole data. The ideal basis on which to integrate blasthole and drillhole data is to use cokriging.

INTRODUCTION

Success in mining is dependent upon sound decisions, adequate engineering and operational control, and all are dependent upon accurate prediction of the mining reserves. In order to achieve reliable reserve estimates, a good model of the in-situ resources must be built to represent the deposit as close to reality as possible. The method used in the reserve estimation can be very important in the outcome of good modeling work.

A block model is initially built using the information from the exploration drillhole data only. After the start of mine operations, the information from the blasthole data becomes available. Since the spacing of the blastholes is much closer than that of the exploration drillholes, the blasthole data gathered from the mine over the years become quite extensive, containing valuable information about the deposit. Therefore, not using the blasthole data for routine ore reserve valuations cannot be justified.

The objective of this paper is twofold: The first objective is to briefly explain the cokriging algorithm, then to show how the blasthole data can be integrated for use with the drillhole data via cokriging. The second objective is to document the performance of cokriging over ordinary kriging based on the reserve estimates and their comparisons to "true" grades for a mined-out area in a porphyry copper deposit.

COKRIGING

Definition

Cokriging is an extension of ordinary kriging. It is basically the kriging of several variables simultaneously. It can be used when there are one or more secondary variables that are spatially cross-correlated with the primary variable. The commonly used ordinary kriging utilizes only the spatial correlation between samples of a single variable to obtain the best linear unbiased estimate of this variable. In addition to this feature, cokriging also utilizes the cross-correlations between several variables to further improve the estimate.

Therefore, cokriging can be defined as a method for estimation that minimizes the variance of the estimation error by exploiting the cross-correlation between two or more variables. The estimates are derived using secondary variables as well as the primary variable (Isaaks, 1989; Kim, 1988).

Reasons For Cokriging

Cokriging is particularly suitable when the primary variable has not been sampled sufficiently. The precision of the estimation may then be improved by considering the spatial correlations between the primary variable and the other, better-sampled variables (Journel, 1978). Therefore, having extensive data from blastholes as the secondary variable with the widely spaced exploration data as the primary variable is an ideal case for cokriging.

Cokriging Equation

The cokriging system of equations for two variables, the exploration drillhole data being the primary variable and the blasthole data being the secondary variable, is given in Table 1.

Steps Required for Cokriging

Since cokriging uses multiple variables, the amount of work involved prior to cokriging itself is a function of the number of variables used. For cokriging of drill and blasthole data of the same item, the following steps are required.
• The regularization of blasthole data into a specified block size. This block size could be the same as the size of the model blocks to be valued, or a discrete sub-division of such blocks. One thus establishes a new data base of average blasthole block values.
• Variogram analysis of drillhole data.
• Variogram analysis of blasthole data.
• Cross-variogram analysis between drill and blasthole data. This is done by pairing each drillhole value with all blasthole values.
• Selection of search and interpolation parameters.
• Cokriging.
Table 1. Cokriging System of Equations for Two Variables. Primary Variable is Drillhole Data. Secondary Variable is Blasthole Data.
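As a rough sketch of the regularization step listed above, the averaging might be done as follows; the (x, y, grade) layout and block indexing are assumptions for illustration, not the code used in the study.

    # A minimal sketch of blasthole regularization: average blastholes into
    # model-sized blocks, keeping a block value only where a minimum number
    # of holes falls inside it (the study used 20 m blocks and min. 4 holes).
    from collections import defaultdict

    def regularize(blastholes, block_size=20.0, min_holes=4):
        """blastholes: list of (x, y, grade); returns {(ix, iy): mean grade}."""
        bins = defaultdict(list)
        for x, y, grade in blastholes:
            bins[(int(x // block_size), int(y // block_size))].append(grade)
        return {ij: sum(g) / len(g) for ij, g in bins.items() if len(g) >= min_holes}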

APPLICATION TO A COPPER DEPOSIT

A cokriging exercise was carried out in a typical porphyry copper deposit and the results were compared to those from ordinary kriging. The area selected was a mined-out section of the deposit, in order to see the performance of both methods by comparing the corresponding estimated grades from each method to the "true" block values.

The deposit had 19 different rock types which were outlined on each bench. Some of these rock types were combined together when the copper mineralization was similar. Therefore, there were a total of eight rock groups to study. A cokriging block model was built for this study using a 20m x 20m block size, with a bench height of 15m. The model consisted of only eight benches, enough to cover the mined-out area. On plan, the actual area covered was about 1000m by 1500m.

The blasthole data used for the variogram analysis and in the interpolation of the block grades came from the benches above this model. However, no blastholes were taken from the bench immediately above the model. In other words, a one-bench gap was left between the blastholes used and the top of the model.

The blasthole data have been averaged into 20m x 20m blocks for every bench. The average values of blastholes in these blocks with a minimum of four holes were retained for the study. The drillhole data base came from exploration holes which are spaced on average some 50m or more apart. The total copper variograms were developed for each rock group for both drill and blasthole data. Similarly, the cross variograms between the two were developed for cokriging.

Ordinary kriging was performed using only the drillhole data. Ordinary cokriging was performed using the drillhole data as the primary variable, and the blasthole data as the secondary variable. Each rock group was kriged and cokriged independently, using only the data that belong to this group, and using the corresponding variogram and interpolation parameters.

Correlations

Correlations between the estimated grades and the "true" block values, in the form of the average blasthole values, have been analyzed using least squares linear regression. Table 2 summarizes the results for each rock type group. The table lists the number of blocks interpolated, the correlation coefficients from ordinary kriging (OK) and those from cokriging (CK). The percent change in the correlation coefficient from OK to CK is given in the last column. As can be seen from this table, the correlations between the block estimates and the "true" grades of the blocks have improved from 1.6% to 16.1% in different rock types when CK was used instead of OK. Combining all rock types over-shadowed the results obtained in individual rock types. There was only a 2.2% improvement of the CK estimates over the OK estimates. However, part of the reason for this was that the lower benches in the model did not get much influence from the blasthole data, making the CK estimates almost the same as the OK estimates.

In rock type 1, which is one of the major rock types in this deposit, the overall improvement in the correlation coefficient of the CK over the OK estimates was 5%. Table 3 summarizes the results for this rock type broken down by benches. Since the blastholes used in cokriging came from benches above the modeled area, one would expect better correlations for the upper benches of the model than for those benches at the bottom. The results from this table confirm this. The top two benches show a 6.4% improvement whereas the fifth and the sixth benches show only a 1.6% improvement in correlations when CK is used instead of OK. The last two benches have very few blocks interpolated and the results are almost identical, showing the ever decreasing effect of blastholes in the estimates as the blocks get farther and farther away from the blastholes.

Table 2. Comparison of OK and CK by Rock Type, Based on the Linear Regression Between the Estimates and the "True" Grade of Blocks.
Table 3. Comparison of OK and CK by Bench for Rock Type 1, Based on Linear Regression Between the Estimates and "True" Grade of Blocks.

Another way to check the performance of an estimation method is to look at the distribution of estimation errors. When the "true" grades of the blocks are known, the estimation error is the difference between the "true" and the estimated grade of the block. Figure 1 shows histograms of estimation errors from the OK and CK methods overlaid for rock type 1. It also gives the means and standard deviations of the errors from these methods.

For an unbiased estimator, the expected value of the mean of estimation errors is zero. One can see from Figure 1 that the means of the estimation errors are 0.022 and 0.007 for OK and CK, respectively. Although the CK mean is smaller than the OK mean, they both are pretty close to zero. Therefore, both are unbiased.

In the same figure, the standard deviations of the errors are 0.703 and 0.651 for OK and CK, respectively. The smaller the standard deviation, the more desirable is the estimator, since more errors will be closer to zero. Therefore, one can see the improvement in block estimates by using CK over OK in rock type 1. This is displayed in the figure, where the CK errors, shown in solid bars, are higher in frequency than the OK errors around zero, but lower in frequency than the OK errors away from zero.

Figure 1. Histograms of Estimation Errors from OK and CK in Rock Type 1.

REMARKS

Since the cokriging is done for the same item, say gold, the two data sets, drillholes and blastholes, have to be compared for the same section of the mine to test for any global bias. If any global bias is evident, the reason for it must be investigated and the necessary adjustments made to one or both data sets.

The block variances for the relevant model block size, as calculated from the drillhole and blasthole variograms, have to be compared and should theoretically agree. If they do not, the two corresponding covariograms and the cross-variogram should be compared. Except for the short lags, these should also be very similar and, if possible, a single covariogram model should be fitted. The corresponding variogram and cross-variogram models should be defined with any necessary adjustments for obvious differences at short lags. These should then give matching estimates of the block variance (Krige, 1995).

Cokriging program search routines should be very flexible, to include separate search parameters for drillhole and blasthole data. Trial sets of kriging runs, using blasthole and drillhole data separately as well as together, should first be carried out for a limited number of blocks to determine the practical limit for extrapolation of blasthole data and the best drill and blasthole data patterns to use. This is to ensure, as much as possible, conditional unbiasedness, and error variances significantly lower than the block variance.

CONCLUSIONS

In an ore reserve model, the valuation of blocks on and close to the pit face can obviously be improved by using blasthole data from nearby drilled and mined-out blocks in addition to using the widely spaced drillhole data. One of the methods to integrate the blasthole data with the exploration drillhole data is to use cokriging, which was demonstrated in this paper. From the case study done on the performance of CK with respect to OK, the following results were conclusive.
• Improvements in correlation coefficients from as low as 1% to as high as 16% have been achieved in different rock types when the estimates are compared to "true" block grades.
• Cokriging was especially effective when the blocks were closer to the blasthole data used.
The level of correlations between the estimates and the "true" block grades, the observed error variances, and the slopes of the regression lines have confirmed the advantages of the cokriging over the ordinary kriging estimates in this particular application.

ACKNOWLEDGEMENTS

The author would like to acknowledge the valuable notes and assis-
tance he has received from Prof. Danie Krige of South Africa.

REFERENCES
Isaaks, E.H., Srivastava, R.M., 1989, Applied Geostatistics,
New York, Oxford University Press.
Journel, A.G., Huijbregts, Ch.J., 1978, Mining Geostatistics,
London, Academic Press.
Kim, Y.C., 1988, Advanced Geostatistics for Highly Skewed Data,
(Short Course Notes), Department of Mining and Geological
Engineering, The University of Arizona.
Krige, D.G., 1995, Private Communication.
Uncertainty, Confidence Intervals and Resource Categorization:
A Combined Variance Approach

Abdullah Arik
MINTEC, INC.
Tucson, Arizona, USA

ABSTRACT
The classification of ore resources into different categories has always been a topic of research
and debate. However, currently there is no standard practice of resource classification. It seems
that each company or practitioner has its own methodology of classifying resources. This is
partly because there is almost no established and easy way of measuring the confidence in the
estimates. Therefore, their categorization becomes a subject of debate.
In this paper, a new approach is proposed to measure the uncertainty of the estimated block
grades using a “Combined Variance.” This variance has two components: One is the traditional
kriging variance, the other is the local variance of the weighted average. With this approach of
using “Combined Variance,” one essentially takes into account the local variation, as well as the
spatial data configuration around the block being estimated. An example is given to compute this
measure with two different data sets of the same configuration for a single block. A case study
for a gold deposit is presented to demonstrate how this new measure of variability can be helpful
for estimating local and global confidence intervals for grade and tonnage. Finally, the use of
“Relative Confidence Bound” index is demonstrated to categorize the resources or reserves at
different cutoffs.

INTRODUCTION
There is yet no easy or practical way of measuring the confidence in the estimates that can in
turn be used as a guide in classifying the resources. Therefore, currently it is hard to find a
standard practice of ore resource classification. Rather each company or practitioner has adopted
a methodology to categorize the resources within some guidelines, applying that method from
deposit to deposit with some refinement.
The problem of assessing uncertainty in grade estimates has been deliberated since kriging was
first introduced. Use of kriging variance was originally touted as all that was needed in order to
quantify the confidence in grade estimates. Since then, the use of kriging variance for such a
purpose has been questioned, and techniques such as the jackknife and conditional simulation
have been proposed to develop a more meaningful measurement of uncertainty in the estimates.

The objectives of this paper are threefold. The first objective is to propose an alternative measure
to assess the uncertainty of the estimated block grades. We will review the traditional kriging
variance and its limits as an indicator for uncertainty. This discussion will be followed by the
methodology of computing the proposed “Combined Variance.” The second objective of the
paper is to use the “Combined Variance” in determining confidence intervals for the grade
estimates. The last objective of the paper is to demonstrate the application of this new variance to
a gold deposit, and its use in categorizing the ore reserves or resources.

KRIGING VARIANCE
As an estimation technique, ordinary kriging distinguishes itself by its attempt to produce a set of
estimates for which the variance of the errors is minimized. Using the probabilistic random
function (RF) model, one can express the error variance as a function of RF model parameters
σ², Cij, and the weights wi (Isaaks and Srivastava, 1989):

σ²k = σ² + Σi Σj wi wj Cij − 2 Σi wi Cio,   i, j = 1, n [Eq. 1]

where n is the number of samples, σ² is the overall sample variance, Cij are the sample-to-sample covariances, Cio are the sample-to-block covariances, and wi are the weights assigned to the samples. The minimized error variance through the use of the Lagrange multipliers technique is usually referred to as the ordinary kriging variance:

σ²ok = σ² − Σi wi Cio + µ,   i = 1, n [Eq. 2]
where µ is the Lagrange multiplier.
The kriging variance computed for a given point or block being estimated is essentially
independent of the data values used in the estimation. It is purely a function of the spatial
distribution and configuration of data points, and the version of the kriging used. The only link
between the kriging variance and data values is through the variogram, which is global rather
than local in its definition. For this reason, the kriging variance does not always give an accurate
reflection of the local variation. For instance, using the same variogram and the same data points
around the block being estimated will give the same kriging variance, regardless of the values of
the data points.

COMBINED VARIANCE: AN ALTERNATIVE


The kriging variance was not intended to specifically address the question of assessing
uncertainty. It is an intermediate that is used to find an optimum set of weights. Although the
kriging variance is a good measure of the spatial configuration of data points, it fails to reflect the local variation of the data values since it does not depend on them directly. Therefore, any alternative measure
of variance should include a provision to measure the local variability.
Here, the author suggests the determination of a "Combined Variance" (σ²cv) to tackle the question of assessing uncertainty. This variance is a combination of the kriging variance and the variance of the weighted average block value based on the data values used.

In an ordinary kriging program, the first component of σ²cv, the kriging variance (σ²k), is already computed. We will compute the second component of σ²cv, the local variance of the weighted average (σ²w), as follows:

σ²w = Σi wi² (Z0 − zi)²,   i = 1, n (n > 1) [Eq. 3]

where n is the number of data used, wi are the weights corresponding to each datum, Z0 is the block estimate, and zi are the data values. If there is only one datum, σ²w is set to σ²k.

The Combined Variance (σ²cv) is then calculated as follows:

σ²cv = √(σ²k × σ²w) [Eq. 4]
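A minimal sketch of Eqs. 3 and 4 follows. It is checked against the first data set of the example in the next section, taking the weights, grades, kriged estimate and kriging variance as given there; everything else is assumed.

    # A minimal sketch of the Combined Variance (Eqs. 3 and 4).
    import math

    def combined_variance(s2k, weights, values, z0):
        """s2cv = sqrt(s2k * s2w), with s2w per Eq. 3."""
        s2w = sum(w * w * (z0 - z) ** 2 for w, z in zip(weights, values))
        return math.sqrt(s2k * s2w), s2w

    weights = [0.307, 0.221, 0.200, 0.146, 0.126]
    first_set = [0.500, 0.095, 0.133, 0.119, 0.114]   # grades of samples 1-5
    s2cv, s2w = combined_variance(0.065, weights, first_set, z0=0.237)
    print(round(s2w, 6), round(s2cv, 5))   # ≈ 0.008474  ≈ 0.02347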

AN EXAMPLE OF COMBINED VARIANCE


Let us take the hypothetical case shown in Figure 1. For simplicity, there are only five data
points for the sample block to be interpolated. Using a spherical variogram model with an
isotropic range of 100, sill of 0.6, and nugget of 0.1, an ordinary kriging estimation was made.
The resulting grade from this run is 0.237, and the kriging variance is 0.065.
Let us now change the value of the first data point from 0.5 to 0.1, and keep everything else the same. This time the grade estimate is 0.110. The kriging variance remains the same at 0.065, since we have not changed the variogram parameters or the data configuration. Similarly, the kriging weight for each point stays the same as in the first run. Table 1 summarizes the results. The same table gives the local variance of the weighted average (σ²w) for both data sets used. We will use the σ²w values to compute the Combined Variance (σ²cv).

Figure 1. Sample data for a block

Using a kriging variance of 0.065 for both the first and the second data sets, and applying Eq. 4, we get

σ²cv = √(0.065 × 0.008474) = 0.02347 for the first set, and
σ²cv = √(0.065 × 0.000045) = 0.00171 for the second set.
The above variances are reflective of the magnitude of local variability and the uncertainty for
the block estimates that we have in this example.

Table 1. First and second sample data set results
(Note: only the grade of Sample #1 was changed)

No   Grade (zi)     Distance (d)   Weight (wi)   |Z0 − zi| first/second   wi²(Z0 − zi)² first/second
1    .500 / .100    12.0           .307          .263 / .010              .006519 / .0000094
2    .095           19.0           .221          .142 / .016              .000985 / .0000125
3    .133           20.0           .200          .104 / .023              .000433 / .0000212
4    .119           22.4           .146          .118 / .009              .000297 / .0000017
5    .114           23.3           .126          .123 / .004              .000240 / .0000002

Block grade using the first set (Z0) = .237; σ²w = .008474 (first set)
Block grade using the second set (Z0) = .110; σ²w = .0000450 (second set)

APPLICATION TO A GOLD DEPOSIT


A study was performed on a gold deposit. The study area was approximately 1000x1000m in
plan. The nominal drillhole spacing in the deposit was 50 m. The block size used for the model
built for the deposit was 12.5x12.5m with a bench height of 5m. The average grade of the 5-m
composites within the mineralized zone was 1.51 g/t, with a coefficient of variation of 0.57. The
variogram model used in the ordinary kriging plan for the mineralized zone was an exponential
model with a nugget of 0.138 and sill of 0.893. The ellipsoidal search (150x90x15m) was based
on the variogram ranges with the major axis along 45-degree east. Minimum 3 and maximum 15
composites were used for the interpolation.
The average of the kriged block grades was 1.50 g/t, with a coefficient of variation of 0.41.
For each block, both the kriging variance and the Combined Variance were computed. Figure 2
shows the contour plot of the kriging variance at the lower and upper quartiles on bench 225 of
the model. In this figure, the drillhole intercepts on the bench were also plotted to show the
composite gold values on this bench. This figure is a prime example for demonstrating how
useless kriging variance can be for assessing uncertainty in the estimates. For example, the
kriging variance naturally gets lower when the blocks are close to the drillholes. Therefore, what
the kriging variance contours at the lower quartile indicate in Figure 2 is nothing more than the
locations of the drillholes. Similarly, the kriging variance contoured at the upper quartile is
basically showing the periphery of the drillholes. It is obvious that this information is not very
helpful at all.
Now let us look at the same plots generated for the Combined Variance. Figure 3 shows the
contour plot of the Combined Variance at the lower quartile value on bench 225 of the model. If
we zoom in on a portion of this plot, shown in Figure 4, we can see that areas surrounded by similar-grade composites have lower Combined Variances, as we would expect.

Figure 2. Contour plot of kriging variance of the blocks at the lower and upper quartiles on bench 225, shown with the drillhole intercepts on this bench

Similarly, Figure 5 shows the contour plot of the Combined Variance at the upper quartile value on bench 225 of the model. If we also zoom in on a portion of this plot, shown in Figure 6, we can see that high-grade areas surrounded by lower-grade composites have higher Combined Variances. Thus, any time there is greater variability in the surrounding data, the blocks will have higher Combined Variances, reflecting the uncertainty in the estimates.

CONFIDENCE INTERVALS
Confidence intervals are perhaps the most familiar way of reporting uncertainty. A confidence
interval accounts for our inability to determine the unknown block grade exactly. However, the
confidence intervals for individual blocks are not very easy to derive. There are many factors
contributing to the width and degree of symmetry of these intervals, such as data errors,
estimation errors and modeling errors. Although it is important to be able to establish a range
within which the unknown true value is likely to fall, the confidence intervals may not be very helpful if they are used on a per-block basis. They will rather be more useful for ranking the uncertainty associated with the estimates relative to each other.
Many variables encountered in resource analysis have positively skewed distributions. Even
though not all positively skewed distributions are lognormal, most of these distributions can be
described adequately by a lognormal model depending on one’s objective. For example, Figure 7
shows the log-scale cumulative probability plot of composite gold grades in the deposit studied.
This is a typical case one may encounter, where the majority of the grades plot on a straight line, with the exception of low grades and a few outlier values. Since we are interested in grades above an ore cutoff, it seems appropriate that a lognormal model can serve as an approximation for the distribution of grades within an ore block.

Figure 3. Contour plot of Combined Variance of kriged blocks at the lower quartile value on bench 225, shown with the drillhole intercepts on this bench

Figure 4. Contour plot of Combined Variance of kriged blocks at the lower quartile value, zoomed in to show that similar composite values generate lower Combined Variance

Figure 5. Contour plot of Combined Variance of kriged blocks at the upper quartile value on bench 225, shown with the drillhole intercepts on this bench

Figure 6. Contour plot of Combined Variance of kriged blocks at the upper quartile value, zoomed in to show that variable composite grades generate higher Combined Variance

The mean of this model is taken as the kriged estimate (mk) for the block, and the variance is represented by the Combined Variance (σ²cv). Then we can calculate the corresponding log mean (α) and variance (β²) of the lognormal distribution as follows:

α = log mk − β²/2 [Eq. 5]
β² = log [1 + (σ²cv / mk²)] [Eq. 6]

We will first compute the confidence intervals in the log space. At the 95% confidence level, we get:

Lower CI = αL = α − 2√(β²/n) [Eq. 7]
Upper CI = αU = α + 2√(β²/n) [Eq. 8]

The n in these equations is the number of composites used to estimate the block. Now we compute the confidence intervals in terms of the original variables:

Lower CI = mL = exp(αL + β²/2) [Eq. 9]
Upper CI = mU = exp(αU + β²/2) [Eq. 10]
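Eqs. 5 through 10 chain together mechanically. A small sketch is shown below, purely to illustrate the order of the computations; the inputs (mk, σ²cv, n) are invented, not figures from the study.

    # A minimal sketch of the lognormal confidence interval (Eqs. 5-10).
    import math

    def lognormal_ci(mk, s2cv, n):
        """95% confidence interval for a block grade, per Eqs. 5-10."""
        beta2 = math.log(1.0 + s2cv / mk ** 2)        # Eq. 6
        alpha = math.log(mk) - beta2 / 2.0            # Eq. 5
        half = 2.0 * math.sqrt(beta2 / n)             # Eqs. 7-8
        m_lo = math.exp(alpha - half + beta2 / 2.0)   # Eq. 9
        m_hi = math.exp(alpha + half + beta2 / 2.0)   # Eq. 10
        return m_lo, m_hi

    print(lognormal_ci(mk=1.50, s2cv=0.40, n=10))     # hypothetical inputs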

Relative Confidence Bound Index:


To classify the ore reserves or resources into proven, probable and possible categories, we will use what the author terms a Relative Confidence Bound (RCB) Index, as follows:

RCB Index = 0.5 × (mU − mL) / mk   (mk > 0) [Eq. 11]

where mL and mU are the lower and upper confidence limits, respectively, and mk is the block grade computed from kriging. This index will be useful for ranking the uncertainty associated with the estimates relative to each other.

Figure 7. Log-scale cumulative probability plot of composite gold grades

Figure 8 shows the resulting histogram and statistics of the RCB index for gold, using the kriged blocks whose gold grades are ≥ 0.6 g/t. These index values display a skewed distribution, like the gold grades.

Figure 8. Distribution and statistics of the Relative Confidence Bound (RCB) Index for gold ≥ 0.6 g/t.

RESOURCE CATEGORIZATION
In order to classify the resources into proven, probable and possible categories, we need to
determine two values: One value to separate proven/probable, and another value to separate
probable/possible categories. As a rule of thumb, we will use the median RCB index of the
kriged blocks above the specified cutoff for the proven/probable limit. Similarly, we will use
twice the median index for the probable/possible limit. The median and twice the median RCB
index values for 0.6 g/t gold cutoff are 0.16 and 0.32, respectively. The corresponding RCB
index values for the 1.5 g/t cutoff are 0.12 and 0.24, respectively. The logic behind this is that, in the majority of cases, the RCB index distribution will be skewed. By using the median and twice the median, we will be able to determine the blocks in the tail of the distribution for the possible category.
In summary, we will categorize our reserves or resources into proven, probable and possible
categories using the Relative Confidence Bound (RCB) index as follows:
Proven Resources: blocks with RCB index values ≤ the median RCB index
Probable Resources: blocks between the median and twice the median RCB index
Possible Resources: blocks with RCB index values > twice the median RCB index
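A short Python sketch of Eq. 11 and this categorization rule, assuming the confidence limits and kriged grades are already available per block (array and function names are illustrative):

import numpy as np

def rcb_index(m_low, m_high, mk):
    # Relative Confidence Bound index (Eq. 11); defined for mk > 0
    return 0.5 * (np.asarray(m_high) - np.asarray(m_low)) / np.asarray(mk)

def categorize(rcb):
    # Proven/probable/possible split at the median and twice the median
    med = np.median(rcb)
    return np.where(rcb <= med, "proven",
                    np.where(rcb <= 2 * med, "probable", "possible"))

# Illustrative lower CI, upper CI and kriged grade for three blocks
rcb = rcb_index([0.8, 0.5, 0.2], [1.6, 1.9, 1.5], [1.2, 1.1, 0.7])
print(rcb, categorize(rcb))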

REVIEW OF THE RESULTS
Tables 2 and 3 report the resource categories at 0.6 and 1.5 g/t gold cutoffs, respectively. The
results are based on both the Relative Confidence Bound Index (RCBI) and the “Distance to the
nearest composite” (DTNC). The reason for the inclusion of the latter method is to see how this
new categorization compares to one of the conventional classification methods.
In order to prepare the “Distance to the nearest composite” results, we used a similar
methodology to determine the range of values to use for the classification of the resources.
Therefore, we applied the median distance (55m) to separate the proven/probable categories, and twice
the median distance (110m) to separate the probable/possible categories for the 0.6 g/t cutoff. The
corresponding distance values for the 1.5 g/t cutoff were 51 and 102m, respectively.
Besides the tons and grade, also reported in Tables 2 and 3 are the average distance to the closest
composite and the average number of composites used in the interpolation. First we look at the
results from the RCB index categorization. We can see from these results that the average
distance increases from the proven to the probable category, and from the probable to the possible
category. Conversely, the average number of composites used per block decreases in the same
order. This is what we should expect; it is an inherent feature of this method, arising from the use
of the kriging variance in the calculation of the Relative Confidence Bound Index.
Comparing the results from the two methods in these tables, we find that the tonnage figures are
more similar in each category than the grades. In general, the RCB index classification resulted
in a much greater spread in grade between the different ore categories. The proven category had a
much higher grade than the probable category, and the probable category had a much higher grade
than the possible category. This was to be expected because of the definition of the RCB index. The
question is whether all this makes sense. We can see that, as the ore grade approaches the cutoff
used, a block becomes less likely to be classified as proven. For this reason, relatively
marginal-grade ore will end up in the possible category, indicating its higher uncertainty with
respect to the cutoff used.
Of course, the major assumption here is that the ore resource calculations were done correctly to
represent the in-situ grade distribution in the deposit properly. Then, knowing that one is actually
accounting for the local variability of the grades in determining the resource categories, the RCB
index technique should provide practitioners with more presentable resource estimates and categories.

CONCLUSIONS
The concept of Combined Variance can be very useful in assessing uncertainty in the blocks.
Furthermore, the Relative Confidence Bound (RCB) index can be an effective tool for
classifying ore reserves or resources into proven, probable and possible categories. Both the
Combined Variance and RCB index are easy to compute and implement.

Table 2. Resource categories at a 0.6 g/t gold cutoff using the Relative Confidence Bound Index (RCBI)
vs. “Distance to the nearest composite” (DTNC) methods

Ore Category     Tons x 1000       Grade g/t       Average Distance    Ave. # of comps
                 RCBI     DTNC     RCBI    DTNC    RCBI     DTNC       RCBI    DTNC
Proven 15,911 14,996 1.754 1.581 47.4 35.8 9.1 8.3
Probable 11,148 13,409 1.292 1.476 62.1 72.2 6.6 7.5
Possible 2,503 1,157 1.054 1.240 81.6 124.0 4.1 4.0
Proven + Prob. 27,059 28,405 1.564 1.532 53.4 53.0 8.1 7.9
Total 29,562 29,562 1.492 1.492 55.8 55.8 7.8 7.8

Table 3. Resource categories at a 1.5 g/t gold cutoff using the Relative Confidence Bound Index (RCBI)
vs. “Distance to the nearest composite” (DTNC) methods

Ore Category     Tons x 1000       Grade g/t       Average Distance    Ave. # of comps
                 RCBI     DTNC     RCBI    DTNC    RCBI     DTNC       RCBI    DTNC
Proven 6,900 6,807 2.102 2.055 44.3 33.7 9.9 9.0
Probable 5,889 6,386 1.920 1.965 57.0 67.1 7.6 8.5
Possible 825 397 1.820 1.836 81.7 117.7 4.2 4.3
Proven + Prob. 12,789 13,217 2.018 2.011 50.1 50.0 8.8 8.8
Total 13,614 13,614 2.006 2.006 52.0 52.0 8.6 8.6

One of the drawbacks of using the Combined Variance and RCB index approach is that it
requires the calculation of the kriging variance. Therefore, this approach is easiest to apply when
ordinary kriging is used to assign the block grades. It is possible, however, to compute a
Combined Variance even for a conventional method such as Inverse Distance Weighting (IDW),
as long as the kriging variance is first computed using the same interpolation parameters. The
IDW program must then be modified to read in the kriging variance for each block to calculate
the resulting Combined Variance. Alternatively, the IDW program can be modified to write out
the first component of the Combined Variance [Eq. 3]. This value is then multiplied by the
kriging variance; the square root of this product gives the Combined Variance. It should be noted
that the kriging variance should be based on the actual variogram parameters. If normalized
parameters are used, then the kriging variance must be multiplied by the variance of the samples
to give the correct Combined Variance.

This technique of resource categorization using the Relative Confidence Bound (RCB) index has
been applied to other types of deposits, including copper, molybdenum, zinc, and silver. The
results have been more sensible than those from conventional methods, such as the “Distance to the
nearest composite.” This is because the RCB index technique not only accounts for the local
variability, but also incorporates all the advantageous criteria for classifying resources, such as
the distance, the number and configuration of the composites around the block, and the
spatial correlation of the composites.
Another nice property of the RCB index is that, like the coefficient of variation that can be used to
compare different distributions, it can be helpful for comparing the estimations in different deposits.
Generally, a higher mean or median RCB index value in a deposit is an indication of insufficient
drilling or of high local variations in the values of the samples used for the block
estimation.

REFERENCES
Adisoma, G.S., Hester, M.G., 1996, “Grade Estimation and Its Precision in Mineral Resources:
The Jacknife Approach,” Mining Engineering, Vol. 48, No. 2, pp 84-88.
Arik, A., 1990, “Effects of Search Parameters on Kriged Reserve Estimates,” International
Journal of Mining and Geological Engineering, Vol 8, No. 12, pp. 305-318.
Crozel, D., David, M., 1985, “Global Estimation Variance: Formulas and Calculation,”
Mathematical Geology, Vol. 17, No. 8, pp 785-796.
Dagdelen, K., Verly, G., Coskun, B., 1997, Conditional Simulation for Recoverable Reserve
Estimation, SME Annual Meeting , Preprint #97-201.
Davis, B.M., 1992, Confidence Interval Estimation for Mineable Reserves, SME Annual
Meeting, Preprint #92-39.
Froidevaux, R., 1984, “Conditional Estimation Variances: An Empirical Approach,”
Mathematical Geology, Vol. 16, No. 4, pp 327-350.
Isaaks, E.H., Srivastava, R.M., 1989, Applied Geostatistics, New York, Oxford University Press.
Journel, A.G., Arik, A., 1988, “Dealing with Outlier High Grade Data in Precious Metals
Deposits,” Proceedings, Computer Applications in the Mineral Industry, Balkema,
Rotterdam, pp. 161-171.
Mercks, J.W., 1997, “Applied Statistics in Mineral Exploration,” Mining Engineering, Vol. 49,
No. 2, pp 78-82.
Parker, H.M., 1980, “The Volume-Variance Relationship: A Useful Tool for Mine Planning,”
Geostatistics (Mousset-Jones, P.,ed.), McGraw Hill, New York.

MineSight® in the Foreground

Modeling Zonal Anisotropy


What is zonal anisotropy?
Anisotropy exists if the structural character of the mineralization of an ore deposit differs in
different directions. Studying the variography of such mineralization can help you detect anisotropy.
If the nugget and the sill values of the variograms in different directions are the same but the
ranges are different, then one is dealing with geometric anisotropy. If the nugget and range of the
variograms are generally the same, but their sills are different in various directions, then one is
dealing with zonal anisotropy. This situation is encountered in deposits in which the mineralization
is layered or stratified (see Picture 1). In the example of Picture 1, the variation inside each layer
(horizontal direction) is small (maximum difference is 5), but along the vertical direction the
maximum difference is 35.

Picture 1. Vertical section.

Zonal anisotropy is much more difficult to handle during estimation than geometric anisotropy.
Quite often, combinations of geometric and zonal anisotropy are encountered and can be very
difficult to interpret. One way to deal with zonal anisotropy is to partition the data into zones and
analyze each zone separately. Another way to handle zonal anisotropy is to use nested variogram
structures, which is discussed next.

Nested Structure in General
A variogram function can often be modeled by combining several variogram functions:

g(h) = g1(h) + g2(h) + ... + gn(h)

For example, there might be two structures displayed by a variogram. The first structure may
describe the correlation on a short scale; the second structure may describe the correlation on a
much larger scale. These two structures can be defined by a nested variogram model. Nested
variograms can also be used to solve the zonal anisotropy modeling problem of having to define
the same sill values in all directions in M624V1 (the kriging routine).

Example
Table 1 describes three variograms in 3 directions where zonal anisotropy is detected (assume the
same variography along the minor and vertical axes for simplicity):

Table 1 Initial (single) variography


In order to be used in M624V1, this information (Table 1) needs to be modified as shown in Table 2:

Table 2 Modified (nested) variography

One needs to calculate range1 for the minor and vertical axes, as well as range2 for the major
axis, so that the new variograms (Table 2) are as close as possible to the initial ones (Table 1).
In order to do so:
- Start M300V1 (variogram modeling).
- Save the initial variograms as overlays (see Picture 2).

Picture 2 Overlays of single variograms

- Create new nested models, trying to match the initial variograms (overlays) visually. Make sure
you click on the nested option from the menu.

For the minor/vertical axis:
Enter the nugget (type in 0.1).
Enter a second point (digitize).
Type in a third point (0.8 350).
Fix the sill to 0.6 and edit the model. Adjust range1 so it fits the overlay (see Picture 3).

Picture 3 Nested minor axis on top of overlay of single minor axis.

For the major axis:
Enter the nugget (type in 0.1).
Type in the second point (0.6 350).
Type in the third point (0.8 20000).
See Picture 4.

Picture 4 Nested major axis on top of overlay of single major axis.

The huge range of 20000 is only used so we can form a nested variogram that matches
(mathematically) the single model. If you want to use the variogram ranges as an indication for
the search distances, you should still use search distances around the 350 range.


The final variogram should now be:

Table 3 Final (nested) variography

The information from Table 3 can now be entered in the kriging procedure (p62401.dat) as in
the panel shown to the right. Notice that the sill needs to be entered in increments. The direction
of the major axis is assumed to be at 0 degrees (no dip or plunge).
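As a sanity check on a nested model like this, a short Python sketch can evaluate g(h) = nugget + g1(h) + g2(h) along one direction. The nugget, sill increments and ranges below are illustrative placeholders in the spirit of Tables 2 and 3, not the workbook's actual values:

import math

def spherical(h, c, a):
    # Single spherical structure with sill increment c and range a
    if h >= a:
        return c
    r = h / a
    return c * (1.5 * r - 0.5 * r**3)

def nested_variogram(h, nugget, structures):
    # g(h) = nugget + sum of spherical structures given as (sill, range) pairs
    return nugget + sum(spherical(h, c, a) for c, a in structures)

# A short-range structure plus a very long-range one: along the major axis
# the 20000 range keeps the second structure's contribution small over
# practical distances, which is how the direction-dependent sill is mimicked.
for h in (50, 150, 350, 1000):
    print(h, round(nested_variogram(h, 0.1, [(0.5, 350), (0.2, 20000)]), 3))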


Technical Support Tips

Ellipsoidal Searches in MEDSYSTEM®

(Used in M620V1, M620V2, M621V1, M624IK, M624V1)

Differences in the structural character of the mineralization of an ore deposit along various
directions are described by anisotropy. Variograms along different directions can determine the
existence of anisotropy.

If anisotropy exists, an ellipse should be used to define a search neighborhood. The ellipse is
centered on the point or the center of the block being estimated. The ellipse should be oriented
with its major axis parallel to the direction of maximum continuity.

Once you have determined the orientation of the anisotropy axes and the length of those axes,
you should:

1. Assign PAR1, PAR2 and PAR3 (x, y and z search distances, respectively) so the ellipse is
included inside the parallelopiped formed by PAR1, PAR2 and PAR3.
2. Assign PAR4 (maximum allowed search distance) equal to the length of the major axis.
3. Include the ellipse search using the lengths and orientation of the axes.

A search will be performed in the following fashion:

1. Composites inside the box formed by PAR1, PAR2 and PAR3 are preselected.
2. Selection is further limited to inside the parallelopiped formed by the major, minor and
vertical ellipse axes.
3. The points inside the ellipse:
x² * (RY/RX)² + y² + z² * (RY/RZ)² ≤ PAR4²
are finally kept.

IOP6 (zero for real or one for adjusted) will determine if adjusted or real distances will be used
and/or reported after the initial ellipsoidal search.

In the kriging algorithms (M624IK, M624V1), the ellipsoidal search and IOP6 apply only to the
selection of the composites, and not to the distances used in variography. Kriging weights are
calculated from the variogram parameters.

In the IDW algorithms (M620V1, M620V2, M621V1), IOP6 will make a difference in the
calculation of weights (weights are based either on true or adjusted distances).

The following example shows how the adjusted distances are calculated (a 2-D search is
assumed for simplicity). Let us assume that two samples exist:

— comp1 along the minor axis of an ellipse (40m from the block), and
— comp2 inside the parallelogram defined by the major and minor axes but outside the ellipse
(50m from the x and y axes; 70.7m direct distance from the block).

A search of 100m (major), 50m (minor), 0m (vertical) with no rotation is used.

The adjusted distance for comp1 would be:
(100/50)² * 40² = a² ⇒ a = 80m

The adjusted distance for comp2 would be:
(100/50)² * 50² + (100/100)² * 50² = b² ⇒ b = 111.8m

If PAR4 equals 100 (=RY), comp1 will be included in the calculations, whereas comp2 will not.
If PAR4 were smaller than 80, comp1 would not have been considered either. If PAR4 were
greater than 111.8, comp2 would also have been used.
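A minimal Python sketch of this adjusted-distance calculation for axis-aligned offsets (the function name is illustrative; the vertical range is given a dummy value of 1 because the 2-D example has no z component):

import math

def adjusted_distance(dx, dy, dz, rx, ry, rz):
    # Scale each offset by (major range / that axis's range), matching
    # x2*(RY/RX)2 + y2 + z2*(RY/RZ)2 <= PAR42 with y along the major axis
    return math.sqrt((dx * ry / rx) ** 2 + dy ** 2 + (dz * ry / rz) ** 2)

# comp1: 40m along the minor axis -> adjusted distance 80m
print(adjusted_distance(40, 0, 0, rx=50, ry=100, rz=1))
# comp2: 50m from both axes -> adjusted distance ~111.8m
print(adjusted_distance(50, 50, 0, rx=50, ry=100, rz=1))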



SME Annual Meeting
Feb. 28-Mar. 1, Salt Lake City, Utah

Preprint 00-88

PERFORMANCE ANALYSIS OF DIFFERENT ESTIMATION METHODS ON CONDITIONALLY SIMULATED


DEPOSITS

A. Arik
Mintec, Inc
Tucson, AZ, USA

ABSTRACT

An accurate estimation of mining block grades continues to be a challenging task for ore reserve
practitioners. Two of the factors that affect the accuracy of the estimates are the estimation method
used and the variability of the mineralization in the deposit.
The objectives of the study are twofold: firstly, to see how different estimation methods perform
on a given deposit; secondly, to determine the impact of the particular nature of the mineralization
on the results. The comparison of several estimation methods for prediction accuracy is performed
using the conditional simulation results as the basis.

INTRODUCTION

The necessity for accurate resource estimates has always been important. With the decreasing
profit margins and increasingly large investments required to start up mines today, this need
becomes very critical. Yet some of the resource modeling done by practitioners lacks sound
reasoning on the choice of the method used for their deposits.
Two of the factors that affect the accuracy of the estimates are the estimation method used and
the variability of the mineralization in the deposit.
The objective of the study is to look into these two factors and see how they affect the estimates
and the resulting recoverable resources. In order to achieve this objective, we will use
conditionally simulated deposits as the basis of our in situ resources.

STUDY APPROACH

A small blasthole data set from a copper deposit was selected for the study. The size of the study
area is 300 by 300 feet. The sampling grid for the blastholes is 25 by 25 feet. The location of the
144 blastholes and the assay grades are given in Figure 1. The summary statistics and the
histogram of the blasthole data are presented in Figure 2. This is the same data set used in a
previously published paper (Dagdelen, Verly and Coskun, 1997).
A smaller data set containing 36 samples is selected from the original data set to represent
exploration holes. The location of these holes and the assay grades are given in Figure 3. The
summary statistics and the histogram of the exploration data are presented in Figure 4.
Using the blasthole data as the basis, a sequential Gaussian simulation on a 1 by 1 grid was
generated containing 90,000 values. The variogram model used for the normal scores was an
anisotropic exponential model with a range of 174 feet in the E-W direction and 126 feet in the
N-S direction. It had a nugget of 0.035 and a sill value of 1.0. The simulation was repeated 30
times using different random number seeds; thus we had 30 realizations for the same area. The
summary statistics and the histogram of the simulated data from one realization are presented in
Figure 5.
A selective mining unit (smu) size of 10 by 10 feet was assumed for the sake of this study, even
though the actual smu size in the mining of this deposit would be different. Based on the smu
size, the recoverable resources at three different cutoffs were computed for each realization. Then
the average of the 30 values at each cutoff was taken as the in situ resources, or the “ground
truth.” These resources thus became the basis to compare the recoverable resources with the
results from the estimation methods employed for the study.

Figure 1. Blasthole locations and their values - Deposit 1

Figure 2. Histogram and statistics of blastholes - Deposit 1

Figure 4. Histogram and statistics of exploration data - Deposit 1

Figure 5. Histogram and statistics of conditional simulation data on a 1x1 grid - Deposit 1

ESTIMATION PROCEDURE AND THE TECHNIQUES USED
A block model with 50 by 50 feet blocks was set
up. Using the 36 exploration data as the input data,
a total of nine different estimation methods were
tried out to estimate these blocks so that their results
can be compared to in situ resources. The following
estimation methods were employed:
Ordinary Kriging (OK): Based on the exploration
data variogram parameters and a defined search
strategy, ordinary kriging estimates were obtained.
The variogram model used was an isotropic expo-
nential model with 90 feet range, 0.002 nugget and
0.048 sill value. A 90-feet circular search was ap-
plied to each block with a maximum of 12 closest
composites used for the interpolation.
OK – Adjusted: Using the indirect lognormal cor-
rection method, the ordinary kriged grades were
corrected for 10 by 10 feet smu size.
Inverse Distance Squared (ID2): Using the same search strategy defined for kriging, inverse
distance weighting estimates were obtained.

Figure 3. Exploration hole locations and their values - Deposit 1
Nearest Neighbor Kriging (NNK): This is a modified version of OK where more emphasis is
given to the nearest sample using a variance correction factor (Arik, 1998). In this study, a
correction factor of 0.3 was used.

Multiple Indicator Kriging (MIK): Based on nine indicator cutoffs, but using the median
indicator variogram for all cutoffs because of lack of data, MIK estimates were obtained. An
indirect lognormal correction factor for a 10 by 10 smu size was applied to get recoverable
resources.

Outlier Restricted Kriging (ORK): This is also a modified version of OK. It requires two steps.
The first step is the determination of the outlier cutoff. An indicator value of 1 is assigned to the
data equal to or greater than this cutoff. All other data have indicators of 0. Based on these
indicators, the probability of occurrence of outliers is assigned to each block. This step is very
similar to Indicator kriging.
The second step is handled internally by the program, which solves a modified kriging matrix to
allocate appropriate weights to sample data based on the assigned probabilities in each block
(Arik, 1992). In this study, an outlier cutoff of 1.0 was used. That means there was only one
outlier in our data. To assign the probabilities, the ID2 method was used with a 50 feet search
radius.

Uniform Conditioning (UC): This method is essentially a modified version of disjunctive
kriging. Its basic difference from disjunctive kriging is that the mean grade of the block is
obtained from OK to ensure the estimation is locally well constrained. The proportions of ore
within the block are conditional to that kriged value.
Similar to disjunctive kriging, UC requires the Gaussian anamorphosis of data and calculation of
polynomials at each data point. Variogram analysis is based on the Gaussian transformed data.
Recoverable resources are calculated on the Gaussian model and the dispersion variances
accommodating the change of support for the smu size (Guibal and Touffait, 1982).

Conditional Simulation (CSIM): Using exploration data as the conditioning data, a sequential
Gaussian simulation on a 2 by 2 grid was generated. The simulated grades were combined into
10 by 10 smu sizes. Recoverable resources were calculated based on the corresponding smu
values. The simulation was repeated 30 times with different random number seeds. This resulted
in 30 realizations to give us 30 different results for recoverable resources at the specified cutoffs.
The average of the 30 simulations was retained for the comparison.

Polygon: The value of the nearest exploration data to the centroid of the block. If there was more
than one sample within the block, the average of the samples was assigned to the block to avoid
missing any sample.

COMPARISON OF RESULTS – DEPOSIT 1

The recoverable resources from different estimation methods were compared to the in situ
results. Three different cutoffs were selected for comparison. The cutoff selection strategy was to
have one cutoff near the mean of the deposit, one below the mean and one above the mean.
Therefore, the cutoffs 0.4, 0.5 and 0.6 were used.
Table 1 summarizes the results of the comparison. In this table, the percent tons, grade above
cutoff and the quantity of metal are given at each cutoff corresponding to the different
estimations. The “% Difference” gives the difference in value of the estimation from the in situ
“reality” in percent.

COMPARISON OF RESULTS – DEPOSIT 2

One of the objectives of this study was to determine how the variability of the deposit affects the
estimates and the resulting recoverable resources. The coefficient of variation of the blasthole
samples in the first deposit was 0.39. We want to perform the same exact study on another
deposit where the coefficient of variation of the samples is about twice that value.
In order to eliminate as many factors as possible that may affect the outcome of the estimation, it
was decided to use the same sample locations as the first deposit, but make the assay values more
variable.
To increase the variability of the assay values, a volume-variance correction was applied to the
original blasthole assays given in Figure 1. Using the indirect lognormal correction method, the
distribution of these assay values was stretched to give us the new blasthole data shown in
Figure 6. The new blasthole data set for Deposit 2 now has a coefficient of variation of 0.78. The
summary statistics and the histogram of the simulated data from one realization using these
blastholes as conditioning data are presented in Figure 7.
The exploration holes were selected from this new data set at the same locations as the first
deposit. From then on, the study was duplicated with the appropriate changes in variogram and
interpolation parameters for each method. For example, the variogram nugget and sill of the new
exploration data were 0.03 and 0.24, respectively. The outlier cutoff grade for ORK was raised
to 2.0.
The major difference for Deposit 2 from Deposit 1 for recoverable resources was the cutoff
grades applied.
Instead of using the 0.4, 0.5 and 0.6 cutoffs, the cutoffs 0.3, 0.5 and 0.7 were used. The reason for
this was that, since the distribution of the original samples was stretched to achieve the higher
variability in this deposit, we wanted the recoverable resource percentages at each cutoff to be as
similar to Deposit 1 as possible. This is to eliminate the possible sensitivity of the results to
variable proportions while determining the percent differences in recoverable resources. Table 2
summarizes the results of the comparison in Deposit 2.

Figure 6. Histogram and statistics of blastholes - Deposit 2

Figure 7. Histogram and statistics of conditional simulation data on a 1x1 grid - Deposit 2

REVIEW OF THE RESULTS

There were nine different estimators used for the study. In order to evaluate the performance of
each method in estimating the in situ resources, a ranking system of 1 to 9 was used for each
cutoff, 1 being the best estimate, 9 being the worst. In the case of any ties in rank, the next rank
or ranks were skipped. The overall rank of the estimator was calculated based on its average rank
in a total of 9 categories. Table 3 and Table 4 summarize the results of the ranking of the
estimators based on their performance in Deposit 1 and Deposit 2, respectively.
Finally, using their average rank for performance, these nine estimators were grouped into three
simplified categories: good, average and poor performers. Thus the top three performers were
included in the “Good” performance category, the next three performers were included in the
“Average” category, and the bottom three performers were included in the “Poor” performance
category.
In Deposit 1, Outlier Restricted Kriging (ORK), Uniform Conditioning (UC), and Nearest
Neighbor Kriging (NNK) performed well, closely followed by Conditional Simulation (CSIM),
Adjusted or Corrected Ordinary Kriging (OK – Adj.) and Multiple Indicator Kriging (MIK). In
the poor category, the Inverse Distance Squared method (ID2) was followed by Ordinary Kriging
(OK) and the Polygon estimator. Table 5 summarizes the results of the simplified ranking of the
estimators based on their performance in Deposit 1.
In Deposit 2, Adjusted or Corrected Ordinary Kriging (OK – Adj.), Conditional Simulation
(CSIM), and Outlier Restricted Kriging (ORK) performed well. Average performers were
Uniform Conditioning (UC), Nearest Neighbor Kriging (NNK) and Multiple Indicator Kriging
(MIK). The “poor” category did not change. This time Ordinary Kriging (OK) was followed by
the Inverse Distance Squared method (ID2) and the Polygon estimator. Table 6 summarizes the
results of the simplified ranking of the estimators based on their performance in Deposit 2.
It is interesting that the three estimators without any change of support correction (OK, ID2 and
Polygon) performed poorly. There were no surprises there. This is an indication of the
importance of considering the support when calculating the recoverable resources. Especially
worth noting is the enhancement in the performance of OK once the change of support correction
is done.
On the other hand, MIK was anticipated to perform better than average. This could be
attributable to the number of samples available. The 36 samples used seem to be insufficient to
apply MIK properly in this case study. Since the MIK technique computes the grade and
proportion of ore at a cutoff by adding the contribution of each class above that cutoff, it
becomes rather critical to have the correct class means, especially for the classes in the tail of the
distribution (Journel and Arik, 1988).
Outlier Restricted Kriging (ORK) performed well in both deposits. Once the outlier probabilities
are assigned properly, this method proved that it could outperform some of the complex
methods, especially at cutoffs near or below the mean of the deposit.
The Uniform Conditioning (UC) technique, although not as popular as MIK, perhaps because of
its complexity, performed rather well. However, it had a tendency to overestimate the tonnage at
the higher cutoffs in the deposits studied.
The Nearest Neighbor Kriging (NNK) method did well also. Because the method is practical and
simple, one may want to consider it as an alternative to complex methods when quick resource
estimates are desired. NNK results usually fall somewhere between OK and Polygon estimates,
depending on the variance correction factor used.
The performance of Conditional Simulation (CSIM) was very sound. Especially in Deposit 2,
which was more variable, it was one of the top performers. Furthermore, CSIM results revealed
additional information no other estimator could provide: a possible range of recoverable
resources at any cutoff. Remember that we used the average of 30 simulations to compare with
the in situ resources. Thus we actually had 30 different results at each cutoff, one from each
individual simulation. The distribution of these 30 simulations can inform us of the possible lows
and highs expected for the recoverable resources at specified cutoffs. It also allows us to establish
confidence intervals for our resources.

CONCLUSIONS

This study has demonstrated another application of conditional simulation. Nine different
methods were tested for their performance in estimating the recoverable resources by comparing
their results to the in situ “reality” based on the conditional simulation.
Local estimation accuracy of the methods has not been studied. Also, the small data set used for
the study may have affected the performance of some methods, such as MIK. It may have also
caused bigger fluctuations in the recoverable resource “% Difference” calculations. A much
larger data set and a bigger study area may alleviate these problems.
Nevertheless, the study has demonstrated the significance of considering the support change,
namely the selective mining unit size, when calculating the recoverable resources. The methods
used without the support correction did not perform well, even in a deposit with a low coefficient
of variation.

REFERENCES

Arik, A., 1998, “Nearest Neighbor Kriging: A Solution to Control the Smoothing of Kriged
Estimates,” SME Annual Meeting, Preprint #98-73.
Arik, A., 1992, “Outlier Restricted Kriging: A New Kriging Algorithm for Handling of Outlier
High Grade Data in Ore Reserve Estimation,” APCOM Proceedings, pp. 181-187.
Dagdelen, K., Verly, G., Coskun, B., 1997, “Conditional Simulation for Recoverable Reserve
Estimation,” SME Annual Meeting, Preprint #97-201.
Guibal, D. and Touffait, Y., 1982, “Grade-Tonnage Relationships: Their Use in Predicting Future
Reserves and Estimating Global Recoverable Reserves of a Deposit,” APCOM Proceedings,
pp. 535-543.
Journel, A. G. and Arik, A., 1988, “Dealing with Outlier High Grade Data in Precious Metals
Deposits,” Proceedings, Computer Applications in the Mineral Industry, Balkema, Rotterdam,
pp. 161-171.



Table 1. Comparison of resources at different cutoffs - Deposit 1


0.4 cutoff 0.5 cutoff 0.6 cutoff
Ton% Grade Metal Ton% Grade Metal Ton% Grade Metal
1. In-situ 81.6 .555 45.29 48.2 .631 30.41 25.1 .710 17.83

OK 88.9 .539 47.92 55.6 .589 32.75 16.7 .683 11.41


% Difference +8.9 -2.9 +5.8 +15.4 -6.7 +7.7 -33.5 -3.8 -36.0

OK - adjusted 83.3 .547 45.57 52.8 .598 31.57 19.4 .693 13.44
% Difference +2.1 -1.4 +0.6 +9.5 -5.2 +3.8 -22.7 -2.4 -24.6

ID2 77.8 .559 43.49 55.6 .598 33.25 19.4 .713 13.83
% Difference -4.7 +0.7 -4.0 +15.4 -5.2 +9.3 -22.7 +0.4 -22.4

NNK 77.8 .564 43.88 52.8 .619 32.68 25.0 .703 17.58
% Difference -4.7 +1.6 -3.1 +9.5 -1.9 +7.5 -0.4 -1.0 -1.4

MIK 75.3 .581 43.75 50.0 .638 31.90 25.6 .739 18.92
% Difference -7.7 +4.7 -3.4 +3.7 +1.1 +4.9 +2.0 +4.1 +6.1

ORK 83.3 .544 45.32 50.0 .608 30.40 19.4 .710 13.77
% Difference +2.1 -2.0 +0.1 +3.7 -3.6 0.0 -22.7 0.0 -22.8

UC 78.9 .572 45.13 48.6 .634 30.81 28.3 .705 19.95


% Difference -3.3 +3.1 -0.3 +0.8 +0.5 +1.3 +12.7 -0.7 +11.9

CSIM 77.8 .570 44.35 46.0 .657 30.22 24.4 .759 18.52
% Difference -4.7 +2.7 -2.1 -4.6 +4.1 -0.6 -2.8 +6.9 +3.9

Polygon 72.2 .610 44.04 36.1 .769 27.76 30.6 .816 25.0
% Difference -11.5 +9.9 -2.8 -25.1 +21.9 -8.7 +21.9 +14.9 +40.2




Table 2. Comparison of resources at different cutoffs - Deposit 2


0.3 cutoff 0.5 cutoff 0.7 cutoff
Ton% Grade Metal Ton% Grade Metal Ton% Grade Metal
1. In-situ 81.7 .579 47.30 42.43 .749 31.78 19.5 .946 18.45

OK 88.9 .567 50.41 47.2 .693 32.71 16.7 .902 15.06


% Difference +8.8 -2.1 +6.6 +11.2 -7.5 +2.9 -14.4 -4.7 -18.4

OK - adjusted 83.3 .571 47.56 41.7 .727 30.32 16.7 .960 16.03
% Difference +2.0 -1.4 +0.5 -1.7 -2.9 -4.6 -14.4 +1.5 -13.1

ID2 86.1 .570 49.08 47.2 .714 33.70 13.9 1.048 14.57
% Difference +5.4 -1.5 +3.8 +11.2 -4.8 +6.0 -28.7 +10.8 -21.0

NNK 80.6 .602 48.52 38.9 .819 31.86 13.9 1.270 17.65
% Difference -1.3 +4.0 +2.6 -8.3 +9.3 +0.3 -28.7 +34.2 -4.3

MIK 76.4 .627 47.90 36.9 .863 31.84 19.2 1.087 20.87
% Difference -6.5 +9.8 +1.3 -13.0 +15.2 +0.2 -1.5 +14.9 +13.1

ORK 86.1 .565 48.65 41.7 .732 30.52 16.7 .994 16.60
% Difference +5.4 -2.4 +2.8 -1.7 -2.3 -4.0 -14.4 +5.1 -10.0

UC 78.9 .620 48.92 44.4 .749 33.26 23.8 .928 22.09


% Difference -3.4 +7.1 +3.4 +4.6 0.0 +4.6 +22.0 -1.9 +19.7

CSIM 77.8 .597 46.45 40.5 .783 31.71 20.6 .996 20.52
% Difference -4.8 +3.1 -1.8 -4.5 +4.5 -0.2 +5.6 +5.3 +11.2

Polygon 69.4 .714 49.55 30.6 1.136 34.76 22.2 1.315 29.19
% Difference -15.1 +23.3 +4.8 -27.9 +51.7 +9.4 +13.8 +39.0 +58.2

Table 3. Performance ranking of estimators on a 1-9 scale at different cutoffs - Deposit 1


0.4 cutoff 0.5 cutoff 0.6 cutoff Avg.
Ton% Grade Metal Ton% Grade Metal Ton% Grade Metal Rank
OK 8 6 9 7 8 7 9 6 8 7.6
OK - adj. 1 2 3 5 6 4 6 5 7 4.3
ID2 4 1 8 7 6 9 6 2 5 5.3
NNK 4 3 6 5 3 6 1 4 1 3.7
MIK 7 8 7 2 2 5 2 7 3 4.8
ORK 1 4 1 2 4 1 6 1 6 2.9
UC 3 7 2 1 1 3 4 3 4 3.1
CSIM 4 5 4 4 5 2 3 8 2 4.1
Polygon 9 9 5 9 9 8 5 9 9 8.0




Table 4. Performance ranking of estimators on a 1-9 scale at different cutoffs - Deposit 2

0.3 cutoff 0.5 cutoff 0.7 cutoff Avg.


Ton% Grade Metal Ton% Grade Metal Ton% Grade Metal Rank
OK 8 3 9 6 6 4 4 3 6 5.4
OK - adj. 2 1 1 1 3 6 4 1 4 2.6
ID2 5 2 7 6 5 8 8 6 8 6.1
NNK 1 6 4 5 7 3 8 8 1 4.8
MIK 7 8 2 8 8 1 1 7 4 5.1
ORK 5 4 5 1 2 5 4 4 2 3.6
UC 3 7 6 4 1 6 7 2 7 4.8
CSIM 4 5 3 3 4 1 2 5 3 3.3
Polygon 9 9 8 9 9 9 3 9 9 8.2

Table 5. Simplified performance ranking of estimators - Deposit 1

Performance Estimation Method (Average Rank)

Good ORK(2.9), UC(3.1), NNK(3.6)

Average CSIM(4.1), OK – adjusted(4.3), MIK(4.8)

Poor ID2(5.3), OK(7.6), Polygon(8.0)

Table 6. Simplified performance ranking of estimators - Deposit 2

Performance Estimation Method (Average Rank)

Good OK – adjusted(2.6), CSIM(3.3), ORK(3.6)

Average UC(4.8), NNK(4.8), MIK(5.1)

Poor OK(5.4), ID2(6.1), Polygon(8.2)



30th APCOM International Symposium Proceedings, Phoenix, Arizona, 2002

Comparison of Resource Classification Methodologies


With a New Approach
Abdullah Arik1

ABSTRACT
The resource classification methodologies have always been a topic of research and debate. There
are both traditional and Geostatistical methods used in practice. Because of the uniqueness of each
ore deposit, it seems almost impossible to have a single industry-accepted norm for resource
classification. Therefore, some practitioners have adapted the methodologies to suit the deposits
they are working on, but many simply use the methods they are comfortable with. Another reason
for the existence of varying classification methodologies is the lack of an easy and established way
to measure the confidence in the estimates. Therefore, their classification becomes a subject of
debate.
In this paper, a new approach is offered to classify the resources using a “Resource
Classification Index.” The approach using this index takes into account the local variation, the
spatial data configuration around the block being estimated, as well as the traditional distance
measures. A case study for a gold deposit is presented to demonstrate how this new index can be
used to classify the resources. The results from this approach are compared to those from two
popular classification methodologies.

INTRODUCTION
Over the past decades, a number of resource/reserve classification codes have been developed to
standardize the classification principles and reporting guidelines in the mineral industry. These
codes or principles can be found in the Society for Mining, Metallurgy and Exploration (SME)
guides, the U.S. Bureau of Mines and the U.S. Geological Survey Circulars, the U.S. Securities
and Exchange Commission (SEC) guidelines, and the Canadian Institute of Mining, Metallurgy
and Petroleum (CIM) bulletins. There are also the SAMREC code from South Africa and the
JORC Code from Australia developed for the same purpose. The reference material to most of
these codes and guides can be found relatively easily by going to the appropriate web site of the
organization on the internet.
The basic concepts of the resource and reserve reporting guidelines revolve mainly around
geological assurance, technical factors and economic viability. The following categories are
essentially the norm for the resources:

• Measured Mineral Resources
• Indicated Mineral Resources
• Inferred Mineral Resources

For the reserves, which are the mineable portion of the resources, the following categories and
terminologies are commonly used:

1 Mintec, Inc., Tucson, Arizona
• Proven Mineral Reserves
• Probable Mineral Reserves
• Possible Mineral Reserves

The classification of resources and reserves must conform to the standard codes of reporting.
However, what classification scheme or practice should be utilized is up to the practitioner, as
long as he or she uses acceptable criteria to satisfy both the regulatory authority and internal
company planning needs.
The objective of this paper is to compare two of the most popular classification criteria to a
new resource classification approach. This new approach, through the use of a “Resource
Classification Index,” takes into account the local variation, the spatial data configuration around
the block being estimated, as well as the traditional distance measures and other pertinent criteria.
The application of this method will be demonstrated in a gold deposit, and the results will be
compared to those from the selected traditional and Geostatistical methods.

POPULAR CLASSIFICATION CRITERIA


Although various criteria or measures exist for classifying the resources, only two of the most
popular ones will be discussed. One of these methods is the traditional measure of “Distance” to
the nearest drill hole or sample. The other method is the Geostatistical classification approach
based on the distribution of the kriging variance.

Distance Classification
In its simplest form, this classification is based on the distance from the centroid of the block to be
interpolated to the nearest sample used in the interpolation. Obviously the closer the sample to a
block, the more confidence is assumed about the interpolated value of the block.
The decision of what distances to use to classify the resources is usually based on the geology,
drilling density and/or the variogram ranges.
Variations of this method with additional requirements of minimum number of samples and
drilling density measures are possible. In some cases, the “distance to the nearest sample” is
replaced by the “average distance of all the samples” used in the interpolation of a block. The
advantage of the distance classification is in its simplicity and its availability with any estimation
method.

Kriging Variance Classification


In this method, the cumulative probability distribution (cdf) or the histogram of the kriging
variance is used to categorize the resources. The cdf of the kriging variance is divided into classes
based on some criteria or visual inspection. These classes correspond to resource categories and
are mainly dependent on major changes in the shape of the cdf, or are based on some
preconceived percentiles.
There are no clear-cut rules of choosing different classes from the cdf or histogram of the
kriging variance. Unless there are obvious break points in the cdf curve, a rule of thumb for this
method could be to consider the median kriging variance and the 95th percentile as the cutoff
values for Measured/Indicated and Indicated/Inferred categories. However, if there are production
data or previous experience with the deposit available to calibrate the categories, that may be a
preferable and justifiable alternative.

AN ALTERNATIVE MEASURE FOR RESOURCE CLASSIFICATION


The existing resource classification methodologies have some advantageous points, and they
provide more or less the necessary tools for the practitioners to use. Yet they all have one common
shortcoming: they fail to incorporate and use all the information or criteria available in the
resource classification scheme. Distance to the nearest sample is important, but so are the
minimum number of composites used and the kriging variance. How about the local variability of
the samples? Or data quality or drilling type considerations?
Here, the author suggests classifying the resources using an index. This index will be referred
to as the “Resource Classification Index” and will take into account the local variation, the spatial
data configuration around the block being estimated, as well as the traditional distance measures,
and any other information that is considered important.

RESOURCE CLASSIFICATION INDEX


The “Resource Classification Index” (RCI) can have several components, but its major
component is the “Combined Variance” (σ2cv). This variance is a combination of the kriging
variance and the variance of the weighted average block value based on the data values used
(Arik, 1999a). The advantage of this variance is that it is not only a function of the spatial
distribution and configuration of the data points, but also a function of the variability of the
samples used in the block estimation.

Combined Variance
In an ordinary kriging program, the first component of σ2cv, the kriging variance (σ2k), is already
computed. One can refer to any Geostatistical textbook (e.g. Journel and Huijbregts, 1978) for the
calculation of this variance. We compute the second component of σ2cv, the local variance of the
weighted average (σ2w) as follows:

σ2w = ∑ w2i * (Z0 - zi)2 i = 1, n (n>1) [Eq. 1]

where n is the number of data used, wi are the weights corresponding to each datum, Z0 is the
block estimate, and zi are the data values. If there is only one datum, σ2w is set to σ2k.
The Combined Variance (σ2cv) is then calculated as follows (Arik, 1999b):

σ2cv = √(σ2k * σ2w) [Eq. 2]

It should be noted that the kriging variance must be based on the original variogram
parameters so that it would be in the same scale as the local variance. If the variogram parameters
are normalized, or relative parameters are used, the necessary adjustments must be made.
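A brief Python sketch of Eqs. 1 and 2, assuming the kriging weights, composite values, block estimate and kriging variance for a block are at hand (names and numbers are illustrative):

import math

def combined_variance(weights, values, z0, kriging_var):
    # Eq. 1: local variance of the weighted average; Eq. 2: Combined Variance
    n = len(values)
    if n <= 1:
        return kriging_var  # with a single datum, sigma2_w is set to sigma2_k
    sigma2_w = sum(w**2 * (z0 - z)**2 for w, z in zip(weights, values))
    return math.sqrt(kriging_var * sigma2_w)

# Four composites with their weights, a block estimate of 0.95 and a
# kriging variance of 0.20 (all values illustrative)
print(combined_variance([0.4, 0.3, 0.2, 0.1], [1.2, 0.8, 0.7, 1.5], 0.95, 0.20))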

Resource Classification Index


To classify the resources into measured, indicated and inferred categories, the use of a Resource
Classification Index (RCI) is suggested as follows:

RCI = √(σcv / mk ) * C (mk > 0) [Eq. 3]

where σcv is the square root of the Combined Variance, mk is the block grade computed from
kriging, and C is a calibration factor. Note that C is outside the square root sign.
The Calibration Factor (C) depends on which criteria we would like to include in the
computation of the Resource Classification Index. Each component of this factor is optional, and
other components can be added, as it seems appropriate. The following Calibration Factor (C) is
only a suggestion:

C = exp(d) / [exp(n) * exp(q) * exp(t)] [Eq. 4]

The exponents in the above equation are as follows:


d = Dist/Dismax
n = Nused/Nmax
q = Nquad/4 or Noctant/8
t = Nddh/Nused
These variables are explained below:
Dist = Distance to the nearest sample from the centroid of the block.
Dismax = This may be the distance used to determine the Indicated/Inferred, or the
Measured/Indicated resource categories in a traditional scheme, depending on how
much emphasis one wants to give on “Distance.”
Nused = Actual number of composites used to interpolate the block.
Nmax = Maximum number of composites allowed to interpolate the blocks.
Nquad = Number of quadrants with data (assuming a quadrant search is used) for a block.
Noctant = Number of octants with data (assuming an octant search is used).
Nddh = Number of diamond drill hole composites used for the block (assuming there are
data quality problems with different types of drilling).

The advantage of the calibration factor is that it could be modified and customized for each
deposit depending on the weight that must be given to certain measures.
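A hedged Python sketch of Eqs. 3 and 4, assuming the per-block quantities listed above were stored during interpolation; this illustrates the formulas and is not a MineSight routine (the drilling-type term is shown as optional):

import math

def rci(sigma_cv, mk, dist, dismax, nused, nmax, nquad, nddh=None):
    # Calibration factor (Eq. 4); each exponential term weights one criterion
    c = math.exp(dist / dismax) / (math.exp(nused / nmax) * math.exp(nquad / 4.0))
    if nddh is not None and nused > 0:
        c /= math.exp(nddh / nused)  # optional data-quality term
    # Resource Classification Index (Eq. 3), defined for mk > 0
    return math.sqrt(sigma_cv / mk) * c

# Example: sigma_cv = 0.05, block grade 0.02, nearest composite at 40 ft with
# dismax = 50, 10 of a maximum 16 composites, data in 3 of 4 quadrants
print(rci(0.05, 0.02, 40, 50, 10, 16, 3))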

APPLICATION TO A GOLD DEPOSIT


A study was performed on a gold deposit. The project was in imperial units. Therefore, all
references for measurements in the remainder of this paper will be in feet, and the gold grades will
be in ounces per ton if they are not specified. The study area was approximately 3000x3000 in
plan. The nominal drillhole spacing in the deposit was 50. The block size used for the model built
for the deposit was 20x20 with a bench height of also 20. The bench composites within the
broadly defined mineralized zone had a skewed, lognormal-looking distribution with an average
grade of 0.016. These composites had a very high coefficient of variation of 3.66.
There were four different zones with different trend planes for mineralization. These zones
had varying nested spherical variogram models to be used in the ordinary kriging. An ellipsoidal
search (120x90x60) was based on the variogram ranges with the major axis placed along strike
direction. The azimuth and dip of the zones varied. Minimum 3 and maximum 16 composites were
used for the interpolation of the blocks with 3 maximum composites from each individual drill
hole. Furthermore, a quadrant search was applied with maximum 4 composites per quadrant.
During the kriging interpolation, the kriged grade (AUK), standard deviation of kriging
variance (KSTD) and standard deviation of Combined Variance (CSTD) were computed and
stored in each block. The number of composites used for the blocks (NUSED), the distance to the
nearest composite (DIST), and the number of quadrants with data (NQUAD) were also stored into
the model blocks. Cumulative probability plots of DIST and KSTD were generated using only the
blocks whose estimated kriged grades (AUK) were equal to or greater than 0.01. Therefore, the
blocks with grades lower than 0.01, well below the lowest ore cutoff grade of 0.013 for this deposit,
were not considered in any classification scheme applied.

Classification Based on DIST variable


Traditionally, 50 and 90 feet were used to classify the resources into Measured, Indicated and
Inferred categories in this deposit. Based on the DIST cdf calculated from the model values,
these distances correspond approximately to the 50th and 95th percentiles. Table 1 reports the resulting
resources classified based on DIST variable at 0.01 gold cutoff.

Classification Based on KSTD variable


In order to classify the resources based on the standard deviation of the kriging variance, the
KSTD values corresponding to the 50th and 95th percentiles were determined using the KSTD cdf
calculated from the model values. These values were 0.036 and 0.065, respectively.
Therefore, 0.036 was used to separate Measured/Indicated, and 0.065 was used to separate
Indicated/Inferred categories. Table 2 reports the corresponding resources based on KSTD
variable at 0.01 gold cutoff.
Classification Based on RCI variable
The Resource Classification Index was computed for each block using a Calibration factor, C, as
follows:

C = exp(d) / [exp(n) * exp(q)] [Eq. 5]

In this equation, d = Dist/50, n = Nused/16, and q = Nquad/4 were used. The choice of using a
“dismax” of 50 instead of 90 was based merely on our preference to increase the weight of the
distance variable. Figure 1 shows the cumulative probability plot of RCI values based on model
blocks whose AUK grades are greater than or equal to 0.01.
In order to classify the resources based on the Resource Classification Index, the RCI values
corresponding to the 50th and 95th percentiles were determined using the RCI cdf in Figure 1.
These values were 1.35 and 4.35, respectively. Therefore, 1.35 was used to separate the
Measured/Indicated categories, and 4.35 was used to separate the Indicated/Inferred categories.
Table 3 reports the corresponding resources based on the RCI variable at a 0.01 gold cutoff.
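The percentile-based thresholding itself can be sketched in a few lines of Python, assuming the RCI (or KSTD, or DIST) values of all blocks above cutoff are held in an array (numpy's percentile function is used; names are illustrative):

import numpy as np

def classify_by_percentiles(values, p_mid=50, p_high=95):
    # Split at the 50th and 95th percentiles of the classification variable
    v = np.asarray(values, dtype=float)
    t_mid, t_high = np.percentile(v, [p_mid, p_high])
    labels = np.where(v <= t_mid, "Measured",
                      np.where(v <= t_high, "Indicated", "Inferred"))
    return labels, (t_mid, t_high)

# For this study's RCI values, the two thresholds came out near 1.35 and 4.35.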

REVIEW OF THE RESULTS


Tables 1 through 3 summarize the resources classified at a 0.01 gold cutoff based on the three
different criteria used in this study. These tables report the tonnage and the average grade of blocks from
each methodology. For the purposes of comparison, they also report the average distance to the
closest composite, the average number of composites used in the interpolation (NCOMP), and the
average standard deviation of kriging variance (KSTD).
Since the Measured, Indicated and Inferred categories are based on the 50th and 95th
percentiles, the tonnages in each category from different methodologies had to be the same.
However, as a result of not being able to pick the 50th and 95th percentiles precisely, there are
some small tonnage differences. These differences will be ignored in the comparison of the results
since they are not critical.
Comparing the results from the three methods in these tables, we find that the results are very
similar if we just look at the “Measured+Indicated” category. However, in the individual
categories, there are more noticeable differences. In general, the RCI classification resulted in
much greater spread in grade between different categories. The Measured category had much
higher grade than Indicated category, and the Indicated category had much higher grade than the
Inferred category. This was to be expected because of the definition of RCI and its inverse
relationship with the block grades. Therefore, the relatively marginal-grade ore will more likely
end up outside the Measured category, indicating its higher uncertainty with respect to the cutoff
used.
Figures 2, 3 and 4 display the plan views of resource categories based on DIST, KSTD, and
RCI variables, respectively, on a portion of 4280 bench. This particular portion of the bench has
relatively high-grade gold intercepts, but no outliers. Through the visual comparisons, one can see
that the views from DIST and RCI variables are similar to each other with some exceptions. The
Measured category blocks in RCI classification are more continuous whereas DIST classification
resulted in choppiness as the drilling density increased. KSTD classification, on the other hand,
gives a different view, with most of the blocks included in the Indicated category.
A comparison of the Inferred categories by different methods displayed in these figures
reiterates the similarity of DIST and RCI results with minor differences. For example, there are
two centrally located single “Indicated” blocks in Figure 2, one on the left side, and the other on
the right, based on DIST classification. These blocks were included in the Indicated category by
RCI method in Figure 4. Since we used a Calibration Factor in our RCI calculation to give more
weight to the DIST variable, obtaining comparable results to the DIST classification method in the
“Measured+Indicated” category was not surprising. However, knowing to have incorporated all
the variables of interest in our classification scheme and then come up with sensible results was
reassuring. That is one of the obvious strengths of the RCI approach.
Table 1 Resources at a 0.01 cutoff using DIST variable
Category              K-Tons   Ave. Grade   Ave. Distance   Ave. NCOMP   Ave. KSTD
Measured 144,099 .0273 32.71 9.74 .0318
Indicated 109,645 .0248 65.99 8.34 .0441
Inferred 13,754 .0238 100.38 4.88 .0529
Measured+Indicated 253,744 .0262 47.09 9.13 .0371
Total 267,498 .0261 49.83 8.92 .0379

Table 2 Resources at a 0.01 cutoff using KSTD variable


Category              K-Tons   Ave. Grade   Ave. Distance   Ave. NCOMP   Ave. KSTD
Measured 140,992 .0252 39.76 10.18 .0278
Indicated 112,429 .0267 59.73 7.66 .0458
Inferred 14,077 .0300 71.61 6.25 .0772
Measured+Indicated 253,421 .0259 48.62 9.06 .0358
Total 267,498 .0261 49.83 8.92 .0379

Table 3 Resources at a 0.01 cutoff using RCI variable


Category              K-Tons   Ave. Grade   Ave. Distance   Ave. NCOMP   Ave. KSTD
Measured 141,178 .0304 34.15 10.52 .0305
Indicated 112,451 .0218 63.65 7.51 .0448
Inferred 13,869 .0178 97.31 3.97 .0584
Measured+Indicated 253,628 .0265 47.23 9.19 .0368
Total 267,498 .0261 49.83 8.92 .0379

Figure 1. Cumulative probability plot of RCI values from the model


Figure 2 Resource categories at a 0.01 cutoff using DIST variable on Bench 4280

Figure 3 Resource categories at a 0.01 cutoff using KSTD variable on Bench 4280

Figure 4 Resource categories at a 0.01 cutoff using RCI variable on Bench 4280

Legend for Figures 2-4:


Dark Gray blocks – Measured Ore
Dashed Gray blocks – Indicated Ore
Black Blocks – Inferred Ore
White Blocks – Below 0.01 Cutoff
Big Black Dots – Drill hole intercepts
CONCLUSIONS
The Resource Classification Index (RCI) can be an effective tool for classifying resources or
reserves into specific categories for reporting. The main advantage of this index is in the fact that
it combines several desirable classification measures into one. Furthermore, with the use of a
customized Calibration Factor, it can be practically suited to different deposits.
The RCI values are easy to compute and implement. However, since it requires the
calculation of the kriging variance, it can only be available for deposits whose grades were
interpolated with ordinary kriging. This may not be too much of a drawback considering the fact
that most practitioners perform ordinary kriging interpolation anyway, at least as an alternative to
compare the resources, even if the interpolation method they use for the deposit is another method,
such as Inverse Distance Weighting (IDW) or Indicator Kriging (IK).
The RCI value calculation will obviously be more robust when there is a sufficient number of
samples for a given block. As an absolute minimum, three composites should be used. For blocks
that need to be interpolated using fewer composites than the required minimum, one may apply
the simple “Distance” criterion for the classification and shift their category one down. For
example, blocks that are below the minimum required composite limit and yet fall distance-wise
into the Measured category may be downgraded to the Indicated category, and so on.
This technique of resource classification using the RCI has been applied to other types of
deposits, including copper, molybdenum, zinc, and silver. The results have been comparable to,
if not more sensible than, those of popular methods, such as classification based on the kriging
variance alone. This is because the RCI technique not only accounts for the spatial correlation
of the composites, but also incorporates all the advantageous criteria for classifying resources:
the distance, the number and configuration of the composites around the block, and the local
variability of the composites.
Another useful property of the RCI is that, like the coefficient of variation used to
compare different distributions, it can help compare the estimations in different parts of
the deposit. Generally, relatively high RCI values in parts of the deposit indicate
insufficient drilling, high local variability, or the presence of marginal ore.
Furthermore, the distribution of the calculated Combined Variance is extremely useful in itself.
Of course, the major assumption here is that the ore resource calculations were done correctly
and represent the in-situ grade distribution of the deposit properly. Knowing that one is then
actually accounting for all the variables of interest in determining the resource categories, the
RCI technique should provide practitioners with a more defensible resource classification.

REFERENCES
Arik, A., 1999a, “An Alternative Approach to Resource Classification,” APCOM Proceedings,
Computer Applications in the Mineral Industries, Colorado School of Mines, pp. 45-53.
Arik, A., 1999b, “Uncertainty, Confidence Intervals and Resource Categorization: A Combined
Variance Approach,” Proceedings, ISGSM Symposium, Perth, Australia.
CIM, 1996, “Mineral Resource/Reserve Classification: Categories, Definitions and Guidelines,”
Canadian Institute of Mining, Metallurgy and Petroleum (CIM) Bulletin, Vol. 89, No. 1003,
pp. 39-44.
JORC, 1999, Australasian Code for Reporting of Identified Mineral Resources and Ore Reserves
(The JORC Code), The Joint Ore Reserves Committee of the AusIMM, AIG and Minerals Council
of Australia.
Journel, A. G., Huijbregts, Ch. J., 1978, Mining Geostatistics, Academic Press, New York.
SAMREC, 2000, South African Code for Reporting of Mineral Resources and Ore Reserves (The
SAMREC Code). The South African Mineral Resource Committee under the Auspices of the
South African Institute of Mining and Metallurgy.
SME, 1999, A Guide for Reporting Exploration Information, Mineral Resources, and Mineral
Reserves. The Society for Mining, Metallurgy and Exploration (SME).
USBM and USGS, 1980, Principles of Resource/Reserve Classification for Minerals by the U.S.
Bureau of Mines and the U.S. Geological Survey, U.S. Geological Survey Circular 831.
[Mathematical Geology, Vol. 34, No. 7, 2002]

Area Influence Kriging


Abdullah Arik

MINTEC, INC.
3544 E. Ft. Lowell Rd,
Tucson, AZ 85716

aa@mintec.com

ABSTRACT
This paper presents a modified ordinary kriging technique referred to as “Area Influence Kriging”
(AIK). The method is a simple and practical tool for more accurate prediction of global recoverable ore
reserves in any type of deposit. AIK performs well even in deposits with skewed grade distributions, where the
ordinary kriging (OK) results are unreasonably smooth. It is robust and globally unbiased like OK.
The AIK method is not intended to replace OK, which is a better estimator of the average grade of the
blocks. Rather, it aims to complement OK by performing well in the prediction of recoverable resources, which
has been a major pitfall of OK in many resource estimation cases. The paper details the methodology of AIK
with a couple of examples. It also reports the results from its application to a gold deposit.
KEY WORDS: kriging, recoverable ore reserves, global recoverable resources, area of influence,
reconciliation

INTRODUCTION
An ore reserve block model is the basis for various decisions in mine development and production. Accurate
prediction of the mining reserves is essential for a successful mining operation. Estimating recoverable ore
reserves for deposits with highly skewed grade distributions is an especial challenge to mining engineers and
geologists: a considerable amount of work is required to make sure that representative block grade distributions
are obtained and the reserves estimated with a certain confidence. There are many advanced geostatistical methods
available to tackle some of the problems associated with grade estimates in these types of deposits. However,
ordinary kriging and even traditional inverse distance weighting methods are still widely used to estimate the
ore reserves for such deposits. One reason for this is that the alternative methods on offer are either too
complex or too exhaustive for practitioners, who often do not have the expertise or the time to apply them to
their problem.
The objective of this paper is to present a new estimation technique which will be referred to as “Area
Influence Kriging” (AIK). The paper will review some of the potential problems in reserve estimations with linear
estimation techniques, such as the ordinary kriging, in deposits with skewed grade distributions. This discussion
will be followed by the methodology of AIK. Finally, its application to a gold deposit, and the comparison of the
prediction performance of AIK to OK will be presented based on the blast hole data in a mined-out area of the
deposit.
LINEAR ESTIMATION TECHNIQUES
All available techniques involve some weighting of neighboring measurements to estimate the value of
the variable at an interpolation point or block. Among the traditional techniques are the polygonal or nearest
neighbor method, inverse distance weighting, and triangulation methods. Among the geostatistical techniques,
the most popular one is ordinary kriging. To ensure unbiasedness, all these methods require that the weights
assigned to the sample data add up to one:

    Σ w_i = 1,  i = 1,…,N        (1)

Then, for all methods, the estimate is a weighted linear combination of the sample data z:

    Estimate = Σ w_i z_i,  i = 1,…,N        (2)
Having arrived at some weighting scheme depending on the method, the decision remains as to what
criteria to use in order to identify the N data points that should contribute to interpolation. This is the selection
of a search strategy that is appropriate for the method used. Here, considerable divergence exists in practice,
involving the use of fixed numbers, observations within a specified radius, quadrant and octant searches, elliptical
or ellipsoidal searches with anisotropic data, and so on. Since the varying of the parameters may affect the
outcome of estimation considerably, the definition of the search strategy is therefore one of the most consequential
steps of any estimation procedure (Journel, 1989; Arik, 1990).
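To make Eqs. (1) and (2) concrete, the following small sketch (illustrative only, not
MineSight code) computes an inverse distance weighted estimate in which the weights are
normalized so that they add up to one:

    import numpy as np

    def idw_estimate(distances, grades, power=2.0):
        # Inverse distance weights, normalized to sum to 1 (Eq. 1); the
        # estimate is the weighted linear combination of the data (Eq. 2).
        w = 1.0 / np.asarray(distances, dtype=float) ** power
        w /= w.sum()
        return float(w @ np.asarray(grades, dtype=float))

    # Example: three samples at distances 10, 20, 40 with grades 0.5, 0.2, 0.1
    print(idw_estimate([10.0, 20.0, 40.0], [0.5, 0.2, 0.1]))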

Smoothing and Its Impact on Reserves


The search strategy can play a significant role in the smoothness of the estimates. The degree of smoothing
depends on several factors, such as the size and orientation of the local search neighborhood and the minimum and
maximum number of samples used for a given interpolation. Of all the methods, the polygonal method does
not introduce any smoothing to the estimates, since it assigns all the weight to the nearest sample value. For the
inverse distance weighting method, increasing the distance power decreases the smoothing, because as the
power is increased the estimate approaches that of the polygonal method. For ordinary kriging,
the variogram parameters used, especially an increase in the nugget effect, contribute to the degree of smoothing. Of
course, regardless of the estimation method used, one must always consider the dimension or the support of the
blocks being estimated, since it will have an effect on the smoothing or the variability of the blocks.
The immediate effect of smoothing caused by any interpolation method is that the estimated grade and
tonnage of ore above a given cutoff are biased with respect to reality. As the degree of smoothing increases, the
average grade above cutoff usually decreases. Also with increased smoothing the ore tonnage usually increases
for cutoffs below the mean, and decreases for cutoffs above the mean.
Realistic recoverable reserve figures can be obtained if we determine the grade-tonnage curves
corresponding to SMU (Selective Mining Unit) distribution. Since the actual distribution will not be known until
after mining, a theoretical or hypothetical one has to be developed and used. The application of this procedure can
minimize the bias on the estimated proportion and grade of ore above cutoff (Rossi, Parker and Roditis, 1994).

How to Deal With Smoothing


There are a few ways to achieve this objective. One possible solution to obtain better recoveries is to correct
for smoothness of the estimated grades. This can be done by support correction. There are methods available for
this, such as the affine correction or the indirect lognormal correction (Isaaks and Srivastava, 1989).
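For illustration, here is a minimal sketch of the affine correction (the names are
hypothetical; see Isaaks and Srivastava, 1989, for the method itself): the estimates are
rescaled about their mean so that their variance matches a target SMU variance, while the
mean and the shape of the distribution are preserved.

    import numpy as np

    def affine_correction(estimates, target_variance):
        # Rescale values about the mean so the corrected variance equals the
        # target (e.g., the theoretical SMU variance); the mean is preserved.
        z = np.asarray(estimates, dtype=float)
        m = z.mean()
        f = target_variance / z.var()   # variance adjustment factor
        return m + np.sqrt(f) * (z - m)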
Similar or better results for recoverable reserves can be obtained by conditional simulation. A fine grid of
simulated values at the sample level is blocked according to the required SMU size. This procedure is very simple
but also assumes perfect selection (Dagdelen, Verly and Coskun, 1997).
The use of higher distance powers in the traditional inverse distance weighting method is an attempt to
reduce the smoothing of block grades during the interpolation of deposits with skewed grade distributions. On
the geostatistical side, methods such as lognormal kriging, the lognormal short cut, nearest neighbor kriging,
and several others have been developed to get around the problems associated with the smoothing of
ordinary kriging (David, 1977; Dowd, 1982; Arik, 1998). There are also advanced geostatistical methods, such as
indicator or probability kriging, that take the SMU size into account to calculate the recoverable reserves (Verly
and Sullivan, 1985; Journel and Arik, 1988; Deutsch and Journel, 1992). Each of these methods provides
practitioners with a variety of tools from which they can select and apply where they see fit, since each
method has its own advantages as well as shortcomings.
Refining the search strategy and the kriging plan to control smoothing of the kriged estimates works
reasonably well depending on our goal. However, there is another alternative. Here the author would like to
introduce a new technique.

AREA INFLUENCE KRIGING


Ordinary kriging (OK) is not designed for grade estimation in highly variable deposits, but
practitioners keep using the method even when it may not suit their case. This is mainly because OK
is practical, and much easier to use and understand than some of the advanced methods offered as
alternatives. It is also robust, especially when the kriging neighborhood is kept small. Therefore most
people are comfortable with the use of OK and its results.
The polygonal or nearest neighbor method simply assigns the value of the nearest datum to the
point or block being estimated.
The Area Influence Kriging (AIK) method, as the author calls it, combines aspects
of the polygonal and ordinary kriging methods. It is a technique in which a sample value is considered the
primary starting point for the grade of the blocks within its area of influence. The weights assigned to the samples
for a given block outside the area of influence of the nearest sample then control the resulting grade for the block.
Basically, in its simplest form, AIK has the following weighting scheme:

    w_1 = 1
    Σ w_j = 0,  j = 2,…,N        (3)

In this scheme w_1, the weight of the nearest sample, is equal to 1, and the sum of the weights of
all other samples is equal to 0; N is the number of samples. The sum of the resulting AIK weights is
therefore one, satisfying the unbiasedness condition.

AIK MATRICES
An OK program can easily be modified for AIK. The OK matrices will not be repeated here since they can
be found in any major geostatistical textbook; rather, the expanded AIK matrices will be provided. Let us
first recall the OK system expressed in matrix form:

    [K] · [λ] = [M]        (4)

where [K] is the matrix of sample covariances, [λ] is the matrix of unknown weights, and [M] is the matrix of
sample-to-block covariances. We expand the matrix [K] by one row and one column of indicators I(xj) that
tell us whether a sample is the nearest to the block or not. The expanded matrix, with n+2 rows and columns,
is designated [K1]:

           | C(v1,v1)  …  C(v1,vn)   1   I(x1) |
           |     .      .      .     .     .   |
    [K1] = | C(vn,v1)  …  C(vn,vn)   1   I(xn) |
           |     1     …      1      0     0   |
           |   I(x1)   …    I(xn)    0     0   |

The indicator function I(xj) is based solely on the nearest datum:

    I(xj) = 1, if z(xj) is the nearest datum to the block
          = 0, otherwise                     j = 1,…,N        (5)

The column matrices [λ] and [M] are expanded likewise:

           | λ1   |             | C(v1,V)    |
           |  .   |             |     .      |
    [λ] =  | λn   |    [M1] =   | C(vn,V)    |
           | -µ1  |             |     1      |
           | -µ2  |             | Ф(A;z(x1)) |

The function Ф(A;z(x1)) is the proportion of the area of influence of the nearest datum. For example,
Ф(A;z(x1)) equal to 1 indicates that the nearest datum has 100% influence over its area.
The above AIK system, for unbiased kriging of order 1, can then be expressed in matrix notation as

    [K1] · [λ] = [M1]        (6)

This system always yields a kriging weight for the nearest datum equal to Ф(A;z(x1)). The block grade
is calculated by adding the products of the data values and the corresponding kriging weights (Arik, 1992).
The choice of Ф(A;z(x1)) strongly affects the performance of AIK. Therefore, instead of using a rather
subjective and arbitrary value, it may be better to do a sensitivity analysis over Ф(A;z(x1)) values and
compare the grade-tonnage curves of the estimates obtained in the different cases to the theoretical or expected
results from SMU grades or from historical blast hole data. The application of this procedure to select
Ф(A;z(x1)) can minimize the bias on the estimated proportion and grade of ore above cutoff.
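As the text notes, an OK program needs only minor changes for AIK. The sketch below builds
and solves the expanded system (6) with numpy for point support; it is an illustration under
assumed names and a simple isotropic spherical covariance model, not MineSight code.

    import numpy as np

    def spherical_cov(h, sill=1.0, nugget=0.15, a=100.0):
        # Covariance C(h) = sill - gamma(h) for a spherical variogram model;
        # C(0) = sill by convention.
        h = np.asarray(h, dtype=float)
        gamma = np.where(h < a,
                         nugget + (sill - nugget) * (1.5*h/a - 0.5*(h/a)**3),
                         sill)
        return np.where(h == 0.0, sill, sill - gamma)

    def aik_weights(samples, block, phi=1.0):
        # Solve [K1][lambda] = [M1] (Eq. 6). samples: (n, 2) coordinates;
        # block: (2,) location; phi: Ф(A;z(x1)), influence of the nearest datum.
        samples = np.asarray(samples, dtype=float)
        block = np.asarray(block, dtype=float)
        n = len(samples)
        d_ij = np.linalg.norm(samples[:, None, :] - samples[None, :, :], axis=2)
        d_i0 = np.linalg.norm(samples - block, axis=1)

        ind = np.zeros(n)                    # indicator column of Eq. (5):
        ind[np.argmin(d_i0)] = 1.0           # 1 for the nearest datum only

        K1 = np.zeros((n + 2, n + 2))
        K1[:n, :n] = spherical_cov(d_ij)     # sample covariances
        K1[:n, n] = K1[n, :n] = 1.0          # unbiasedness row and column
        K1[:n, n + 1] = K1[n + 1, :n] = ind  # indicator row and column

        M1 = np.concatenate([spherical_cov(d_i0), [1.0, phi]])
        w = np.linalg.solve(K1, M1)
        return w[:n]    # the last two entries are the Lagrange multipliers

The constraint rows force the weights to sum to one and the nearest sample to receive
weight phi; the block grade is then the dot product of the weights with the sample grades.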

SOME EXAMPLES
Let us take a hypothetical case with 10 data points and two sample blocks to be interpolated. Figure 1 shows the
configuration of these data with respect to the two sample blocks selected. The arithmetic average of the point
data is 0.217, with a coefficient of variation (standard deviation divided by the mean) of 0.64. The blocks used are
10 x 10 m in size; the vertical dimension is ignored for simplicity. Of the blocks to be interpolated,
Block #1 has a high-grade datum closest to it; in contrast, Block #2 has a low-grade datum closest to it. The intent
is to show what happens to the estimated grades under different sample configurations around the blocks.
Using an arbitrary spherical variogram model with an isotropic range of 100, a sill of 1.0, and a nugget of
0.15, both OK and AIK interpolations were performed. The value of Ф(A;z(x1)) was assumed to be 1 in the AIK
case. Tables 1 and 2 summarize the results obtained. As one can see from these tables, the OK estimates are much
smoother than the AIK estimates, being closer to the average.
Now let us look at what happens to the values within the areas of influence of the samples. Here we
select the two data points with values 0.570 and 0.133. Figure 2 displays the block estimates from AIK. There are
eight blocks within the area of influence of the datum 0.570 (with values 0.590, 0.578, 0.589, 0.574, 0.568,
0.569, 0.556, and 0.557), and six blocks within the area of influence of the datum 0.133 (with values 0.173,
0.150, 0.134, 0.125, 0.115, and 0.116).
One can observe from this example that, within the area of influence of a given datum, some blocks have
values higher than the value of this datum while others have lower values, depending on the configuration of the
surrounding data and their values. The mean of the blocks within the area of influence of the datum is virtually
the same as the value of the datum. This is the advantage of AIK over the polygonal method: instead of
block values that are exactly the same as the datum throughout its area of influence, we now have a
distribution of blocks whose mean is none other than the value of the datum.

Table 1. The weights assigned to 10 data points from OK and AIK methods, and the resulting estimated
grades for the sample Block #1

No   Distance   Grade    OK Wts   AIK Wts
1    11.2       0.570    0.394    1.000
2    20.6       0.133    0.221    0.125
3    25.5       0.095    0.127   -0.012
4    25.5       0.190    0.124    0.045
5    29.2       0.152    0.063   -0.073
6    33.5       0.114    0.044    0.037
7    36.1       0.171    0.035    0.029
8    36.1       0.209    0.014   -0.125
9    42.7       0.304   -0.009   -0.037
10   44.7       0.228   -0.012    0.011
Estimated Grade          0.307    0.557

Table 2. The weights assigned to 10 data points from OK and AIK methods, and the resulting estimated
grades for the sample Block #2

No   Distance   Grade    OK Wts   AIK Wts
1    11.2       0.133    0.400    1.000
2    20.6       0.570    0.218    0.121
3    25.5       0.095    0.139    0.067
4    28.3       0.171    0.093   -0.049
5    29.2       0.190    0.087    0.024
6    30.4       0.114    0.055   -0.061
7    36.1       0.228    0.012   -0.137
8    38.1       0.152    0.016    0.020
9    44.7       0.209   -0.006    0.009
10   47.2       0.304   -0.014    0.008
Estimated Grade          0.229    0.173
APPLICATION TO A GOLD DEPOSIT
OK and AIK exercises were carried out in a structurally controlled gold deposit that is mostly mined-out.
The results from both methods were compared to those from the available blast holes.
The gold mineralization in this deposit occurs in specific geologic units. There are three distinct
areas in the deposit, and mineralization has a general trend, with a certain strike and dip, in each of them. The
broad mineralization zones were identified and digitized on each section of the model. High-grade veins within
these zones occur within much lower grade areas; these bands are too intermingled to be mapped and
separated from the low- to medium-grade disseminated mineralization.
The area studied is a mined-out portion of the mine, about 2000 x 2000 ft in plan, covering more than 40 benches.
The block size is 20 x 20 ft, and the bench height is also 20 ft. There are 4743 bench composites within the study
area. The average grade of these data before any declustering is 0.0167 ounces per ton (opt), with a coefficient
of variation of 2.56. Figure 3 shows the histogram and statistics of the exploration hole data within the mined-out
area. The average drill hole spacing in the deposit is about 65 ft, with drilling directed towards the mineralized
structures at 50 to 100 ft spacing.
The kriging plan used for the entire deposit had search distances of 90, 75, and 60 ft along the strike,
dip, and vertical directions, based on variogram ranges. The strike and dip angles were different in each of the three
areas. Covariograms with spherical models were used, with about a 20% nugget effect. The minimum and maximum
numbers of composites used were 3 and 20, respectively, with a maximum of 5 composites per quadrant.
The blast holes in the area were spaced 12-15 ft apart; 166,300 blast holes were used in the study.
The average grade of these blast holes is 0.0148 opt, with a coefficient of variation of 2.73. Figure 4 shows the
histogram and statistics of the blast holes. The arithmetic averages of the blast holes falling into the 20 x 20 ft
model blocks were computed. These values, called “Blast Hole Averages” (BHAVG), were the basis of
the reconciliation study. There were 68,598 blocks used in the study, each containing a minimum of two blast
holes. The average grade of these blocks is 0.0149 opt, with a coefficient of variation of 2.12. Figure 5 shows
the histogram and statistics of the Blast Hole Averages.

Block Model and Reconciliation Results


The blocks with Blast Hole Averages were interpolated using the OK and AIK methods, applying the same
kriging plan for both. The value of Ф(A;z(x1)) was assumed to be 1 in the AIK application. Figures 6 and 7 show
the histograms and statistics of the estimated block grades from OK and AIK, respectively. Detailed reserve
recoveries were reported at two cutoffs, 0.015 and 0.030 opt. Tables 3 and 4 compare the OK
and AIK grade and tonnage recoveries to the Blast Hole Averages (BHAVG) at these cutoffs; the tonnage and metal
recoveries are given as percentages. The total tonnage for the entire mined-out area represented by the 68,598 blocks is
219,513,600 tons, using an average tonnage factor of 2.5 cubic feet per ton.
One can see the smoothing effect of the OK method by comparing the coefficients of variation of the OK
block values to those of the Blast Hole Averages at different cutoffs: they are less than half of what the Blast Hole
Averages suggest. At the 0.015 opt cutoff, the metal recovery of OK is reasonable because the low average grade
estimate is compensated by a higher tonnage estimate. At the 0.030 opt cutoff, however, the metal recovery of OK
is nearly 12% less than that of the Blast Hole Averages. In contrast, the recoveries suggested by the AIK blocks
consistently reconcile well with those of the Blast Hole Averages at these cutoffs.
Figure 8 shows the grade-tonnage curves from the BHAVG, OK, and AIK block values. As can be seen from
this figure, both the tonnage and the grade curves of AIK follow the curves of BHAVG, whereas the OK curves
indicate extreme smoothing. Thus, the recoverable reserves expected from OK at this deposit would not reconcile
well with what is to be mined.

Table 3. Comparison of OK and AIK grade and tonnage recoveries at 0.015 opt gold cutoff to Blast Hole
Averages

Type    % Tons Above   Average Grade   C.V. (σ/m)   % Metal Above
BHAVG   23.7           .0426           1.51         67.4
OK      31.8           .0304           0.69         64.5
AIK     24.4           .0427           1.28         69.5

Difference from BHAVG (tons and metal in percentage points, grade in %):
OK      8.1            -28.6           —            -2.9
AIK     0.7            0.2             —            2.1

Table 4. Comparison of OK and AIK grade and tonnage recoveries at 0.030 opt gold cutoff to Blast Hole
Averages

Type    % Tons Above   Average Grade   C.V. (σ/m)   % Metal Above
BHAVG   9.1            .0788           1.18         47.6
OK      10.6           .0509           0.51         35.9
AIK     9.8            .0762           0.98         49.6

Difference from BHAVG (tons and metal in percentage points, grade in %):
OK      1.5            -35.4           —            -11.7
AIK     0.7            -3.3            —            2.0

CONCLUSIONS
Area Influence Kriging (AIK) is a new interpolation technique modified from the Ordinary Kriging (OK)
algorithm. It is practical, robust, and globally unbiased like OK. It is intended as a new tool to help practitioners
predict more realistic global recoverable reserves from models developed at different stages
of mining. AIK eliminates the smoothing problem of OK encountered in the resource estimation of highly
variable deposits, such as the one studied in this paper.
AIK is not a method to replace the advanced geostatistical methods that can estimate the local recoverable
reserves, i.e., the distribution of grades within blocks or panels. Nor is it intended to replace OK. Rather, it
should be viewed as an additional tool that can be used along with OK in the ore reserve estimation process.
It may help practitioners fine-tune their OK estimates, or apply corrections to them, by providing alternative
results to compare against and contemplate.
Because of the use of the Ф(A;z(x1)) value as a pre-condition or assumption, the AIK technique may sacrifice
the local accuracy and precision of the estimates in favor of predicting more accurate recoverable estimates for the
global resources. However, since the choice of Ф(A;z(x1)) affects the performance of AIK and its results, this
can be used to advantage. Accordingly, instead of using a rather subjective and arbitrary value, one might
consider doing a sensitivity analysis over varying Ф(A;z(x1)) values, and then compare the grade-tonnage curves
of the estimates obtained in the different cases to the theoretical or expected ones from SMU grades or from
historical blast hole data. The application of this procedure can minimize the bias on the estimated proportion and
grade of ore above cutoff. Thus, by taking advantage of the favorable points of the polygonal and OK methods,
AIK may offer the flexibility and practicality that mining engineers and geologists are looking for in achieving
their objectives.
AIK has turned out to be a good performer in predicting the global recoverable reserves in the deposit
studied. However, it should be noted that each deposit is unique, and the application and performance
of any method may differ from one deposit to another. Since only a few lines of code need to change in an OK
algorithm to produce AIK estimates, the author hopes that some practitioners will find easy access to the method,
apply it to their problems, and compare the results with those from their current methods or from geostatistical
simulation techniques in order to find out more about its performance.

REFERENCES
Arik, A., 1990, “Effects of Search Parameters on Kriged Reserve Estimates,” International Journal of Mining
and Geological Engineering, Vol 8, No. 12, pp. 305-318.
Arik, A., 1992, “Outlier Restricted Kriging: A New Kriging Algorithm for Handling of Outlier High Grade Data
in Ore Reserve Estimation,” APCOM Proceedings, Port City Press, Baltimore, Maryland, pp. 181-187.
Arik, A., 1998, “Nearest Neighbor Kriging: A Solution to Control the Smoothing of Kriged Estimates,” SME
Annual Meeting, Preprint #98-73.
Dagdelen, K., Verly, G., Coskun, B., 1997, “Conditional Simulation for Recoverable Reserve Estimation,” SME
Annual Meeting, Preprint #97-201.
David, M., 1977, Geostatistical Ore Reserve Estimation, Elsevier, Amsterdam.
Deutsch, C.V., Journel, A.G., 1992, GSLIB: Geostatistical Software Library and User’s Guide, Oxford
University Press, New York.
Dowd, P.A., 1982, “Lognormal Kriging: The General Case,” Mathematical Geology, Vol. 14, No. 5.
Isaaks, E.H., Srivastava, R.M., 1989, Applied Geostatistics, New York, Oxford University Press.
Journel, A.G., 1989, “A Democratic Research Impetus,” NACOG Geostatistics Newsletter 3, Summer, p. 5.
Journel, A.G., Arik, A., 1988, “Dealing with Outlier High Grade Data in Precious Metals Deposits,”
Proceedings, Computer Applications in the Mineral Industry, Balkema, Rotterdam, pp. 161-171.
Parker, H.M., 1980, “The Volume-Variance Relationship: A Useful Tool for Mine Planning,” Geostatistics
(Mousset-Jones, P.,ed.), McGraw Hill, New York.
Rossi, M.E., Parker, H.M., Roditis, Y.S., 1994, “Evaluation of Existing Geostatistical Models and New
Approaches in Estimating Recoverable Reserves,” SME Annual Meeting, Preprint #94-322.
Verly, G.W., Sullivan, J.A., 1985, “Multigaussian and Probability Kriging, Application to Jerritt Canyon
Deposit,” Mining Engineering, Vol. 37, pp 568-574.
Figure 1. The 10 sample data with the Blocks #1 and #2 to be interpolated

Figure 2. Block values from AIK within the area of influence of the sample data 0.570 and 0.133
Figure 3. Histogram and statistics of exploration hole data within the mined-out area

Figure 4. Histogram and statistics of the blast hole data


Figure 5. Histogram and statistics of blast hole data averaged into model blocks

Figure 6. Histogram and statistics of Ordinary Kriging (OK) block values within the mined-out area
Figure 7. Histogram and statistics of Area Influence Kriging (AIK) block values within the mined-out
area

Figure 8. Grade-tonnage curves of Blast hole Averages (BHAVG), Area Influence Kriging (AIK), and
Ordinary Kriging (OK) resource estimates
MineSight® in the Foreground

Relative Elevation in Interpolation


(Selecting initial search parameters when interpolating based on relative elevation)

Interpolation programs in MineSight® can use “relative coordinates” instead of the actual Easting,
Northing, and Elevation when performing the final ellipsoidal search and computing distances for inverse
distance weights or kriging. Among relative coordinates, the one most often used is relative elevation. This
article discusses using relative elevation in interpolation. Relative elevation is helpful for interpolating
model items where mineralization follows a specific surface.

We will look at a rather common example: the relative elevation is the distance to a given surface. When
this option is selected, the program uses composite and block elevations relative to the surface instead of the
actual elevations. This methodology is similar to “unrolling a surface”. A new procedure, relev.dat, and a
program, gnrelev.exe, are currently in testing and will soon be available. They are designed to calculate and
store distances to a triangulated surface in both composite and model files.

When using relative elevations, you must be very careful in setting the search parameters PAR1, PAR2,
PAR3, PAR4, and PAR20. The ellipsoidal search parameter PAR4 is applied to the relative coordinates. If you
are using relative elevation, you can specify PAR20 as a preliminary limit on relative Z when selecting the
composites to be used in the interpolation of each block; by default, PAR20 = PAR3. Before the search
ellipsoid is applied, the set of composites is limited to those with an X-coordinate within PAR1, a Y-coordinate
within PAR2, and a relative Z value within PAR20 of the corresponding block values. When selecting PAR1,
PAR2, and PAR20, make sure that the box defined by these parameters is big enough to contain the search
ellipsoid.

However, before processing each given row, the interpolation program (e.g., m620v1, m624v1, etc.) loads
an “initial pool of composites”. Only those composites with actual Eastings, Northings, and Elevations within
PAR1/PAR2/PAR3 distances from the row are selected (Picture 1).

Picture 1

This makes the selection of PAR3 very important. It should be large enough to ensure that all composites
with a relative elevation within PAR20 of the relative elevations of the blocks on a row are included.
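A schematic sketch of this two-stage selection, simplified to a single block rather than a
whole row (the names and the callable surface are assumptions for illustration, not the
program's actual logic):

    def initial_pool(composites, block, par1, par2, par3, par20, surf):
        # composites: iterable of (x, y, z) tuples; block: (x0, y0, z0);
        # surf: a callable returning the surface elevation at (x, y).
        x0, y0, z0 = block
        zrel0 = surf(x0, y0) - z0            # relative elevation of the block
        pool = []
        for (x, y, z) in composites:
            # Stage 1: preliminary box on actual coordinates (PAR1/PAR2/PAR3)
            if abs(x - x0) > par1 or abs(y - y0) > par2 or abs(z - z0) > par3:
                continue
            # Stage 2: relative-Z limit (PAR20) before the ellipsoidal search
            if abs((surf(x, y) - z) - zrel0) <= par20:
                pool.append((x, y, z))
        return pool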



The safest bet, though not necessarily optimal, is to set:
PAR3 = PAR20 + (elevation range of the surface on the whole bench).
However, the larger PAR3, the larger is the pool of composites to be searched through for each block. This
can visibly increase interpolation time.
You may want to bring PAR1 and PAR2 into play. Whether it is possible to improve the PAR3 estimate
depends on how large the values of PAR1 and PAR2 are, and how gradually the surface elevation changes.
For each particular model block, we are interested in the surface elevation change on the distances at most
PAR1 and PAR2 in Easting and Northing from block centers. If we know that over any rectangular area of size
PAR1 x PAR2 the surface elevation changes not more than some value A, then we can use a better estimate for
PAR3:
PAR3 = PAR20 + A.
In Picture 2, composites with Eastings (Northings) and relative elevation within PAR1 (PAR2) and PAR20
distances from corresponding values at the block centers are marked in purple.

Picture 2


For the surface shown in Picture 3, the elevation range over the whole bench is 250. If we take PAR1 and
PAR2 into consideration, we may reduce the elevation range estimate needed for the PAR3 computation.
For instance, if we use PAR1 = 100 and PAR2 = 75, we can limit the areas of interest to rectangles of size 100 x 75.
Manual inspection of the surface shows that the surface range over any such rectangle does not exceed 80.
Thus, we can reduce the value of PAR3 from (PAR20 + 250) to (PAR20 + 80).

Picture 3

Fine print: computation of the PAR3 estimate.

Let the surface defining relative elevations be given by the equation z = surf(x,y), and let the relative elevation
be the “vertical distance” to the surface: zrel = surf(x,y) – z.
Each composite has actual coordinates (x, y, z) and a relative z coordinate zrel. Each model block has actual
coordinates (x, y, z) and a relative elevation item zrel. For blocks on one bench the value of z is fixed (it is the toe
or midpoint, depending on interpolation choices), but zrel can differ from block to block (because the function surf
depends on x and y).
Consider a composite with coordinates x, y, z and zrel = surf(x,y) – z, and a model block with coordinates
x0, y0, z0 and zrel0 = surf(x0,y0) – z0. We know that |x – x0| < PAR1, |y – y0| < PAR2, and |zrel – zrel0| < PAR20,
and we want to bound |z – z0|.
Start by inverting the equation for zrel: z = surf(x,y) – zrel and z0 = surf(x0,y0) – zrel0. Then

    (z – z0) = (surf(x,y) – surf(x0,y0)) – (zrel – zrel0), so
    |z – z0| <= |zrel – zrel0| + |surf(x,y) – surf(x0,y0)|.

Our goal is to choose a PAR3 that guarantees that all composites with a zrel value within PAR20 of zrel0 have an
actual elevation coordinate z within PAR3 of the block elevation z0. We can guarantee this if
PAR3 >= PAR20 + |surf(x,y) – surf(x0,y0)|.
The most straightforward estimate for the difference in surface values is the elevation range of the surface on
the whole bench:

    PAR3 = PAR20 + (max(surf(x,y)) – min(surf(x,y))).

However, this may result in an unnecessarily large value of PAR3. If you have some additional information about
the surface and take the PAR1 and PAR2 values into account, you may improve the PAR3 estimate. Note that the
composites of interest have x and y coordinates within PAR1 and PAR2 distances of x0 and y0; therefore, you are
interested only in the change of surface elevation while the x coordinate varies within PAR1 and the y coordinate
within PAR2, and this change can be less than the overall surface elevation change. For instance, if you know that
the overall slope of the surface at all points is less than some value S, then you can set:

    PAR3 = S * sqrt(PAR1 ** 2 + PAR2 ** 2) + PAR20.
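If the surface is available as a regular grid of elevations, the improved bound can be
computed directly by scanning PAR1 x PAR2 windows for the largest elevation range A and
setting PAR3 = PAR20 + A. A small illustrative sketch (assumed inputs, not a MineSight
utility):

    import numpy as np

    def par3_estimate(surf_grid, dx, dy, par1, par2, par20):
        # surf_grid: 2-D array of surface elevations on a grid with cell
        # sizes dx (Easting) and dy (Northing).
        ni = max(1, int(round(par1 / dx)))   # window width in cells
        nj = max(1, int(round(par2 / dy)))   # window height in cells
        a_max = 0.0
        for i in range(surf_grid.shape[0] - ni + 1):
            for j in range(surf_grid.shape[1] - nj + 1):
                window = surf_grid[i:i + ni, j:j + nj]
                a_max = max(a_max, float(window.max() - window.min()))
        return par20 + a_max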

MineSight® in the Foreground

Variograms in MineSight® Data Analyst (MSDA)


MineSight® Data Analyst (MSDA) is a package of statistical and geostatistical programs, roughly a
superset of the MineSight® M300 and M400 series. It includes histograms, scatterplots, cumulative
probability plots, variograms, 3-D variogram modeling, and custom (user-defined) reports. MSDA supports
all MineSight® drillhole, blasthole, and block model files, as well as ODBC compliant databases and text
files.

This article discusses, in preliminary detail, MSDA’s applications to variogram modeling in MineSight®
projects. Future articles will delve into the many intricate details of MSDA.

Start MSDA and open a project. For the purposes of this article, we will assume the project is using a
MineSight® composites file. MSDA permits the user to display the filenames of the project’s variograms,
histograms, scatterplots, etc., for selection and viewing (Figure 1). These files, when created, are displayed
in the lower panel, under MSDA Files. In this instance, we will only display Variograms and, therefore, we
will select this option only. Upon selection, a check mark will be automatically placed beside the word
Variogram.

Figure 1

In order to create a set of variograms, select Variogram from the Tools menu. Once you select this option,
a new panel (Variogram Parameters) pops up (Figure 2).

Figure 2
A number of options are available in the Variogram Parameters panel. Basic items such as Lag distance,
Number of lags, Tolerance, the data to be modeled, etc., can be entered where appropriate in the General
tab. The desired type of variogram to be built can be selected by opening the drop-down list in the box
labeled Type (Figure 3); the variogram choices include Normal, Covariance, Pairwise Relative, Relative
(local mean), and Correlogram. The item that will control the variogram (TOTCU) is also selected from a
drop-down list (Figure 3).
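For reference, the sketch below shows how a “Normal” experimental variogram of the kind
controlled by these parameters could be computed for the global (direction-free) case; it is
an illustration under assumed names, not MSDA's implementation:

    import numpy as np

    def normal_variogram(coords, values, lag, nlags, tol):
        # gamma(k) = average of 0.5*(z_i - z_j)^2 over pairs whose separation
        # distance is within tol of k*lag, for k = 1..nlags.
        coords = np.asarray(coords, dtype=float)
        values = np.asarray(values, dtype=float)
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
        iu, ju = np.triu_indices(len(values), k=1)     # each pair once
        dist = d[iu, ju]
        sq = 0.5 * (values[iu] - values[ju]) ** 2
        gammas, pairs = [], []
        for k in range(1, nlags + 1):
            sel = np.abs(dist - k * lag) <= tol
            gammas.append(float(sq[sel].mean()) if sel.any() else np.nan)
            pairs.append(int(sel.sum()))
        return np.array(gammas), np.array(pairs)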


Figure 3

The Filter tab (Figure 4) is where, for example, the different rock types that will control the variograms
are specified. This window becomes active when the Use Application Filter box is checked. A complete set
of variograms will be built using composites for each specific set of rock types, i.e., for each line in the
filters box. If you are building 30 different variograms, i.e., 30 different directions, and you specify five
different rock types, MSDA will build 150 variograms (30 for each rock type). Each filter (line) contains the
rock type or types, a title suffix, and a file suffix. When MSDA creates the variograms for a given filter, it
selects composites of the specified rock type or types, appends the title suffix to the main chart title, and
appends the file suffix to the variogram files.

Figure 4

Basic information on variogram orientations is in the Directions tab (Figure 5).

Figure 5

Details on azimuths, dips, and bandwidths are specified in the Directions tab. In the example, the initial
variogram has an azimuth of zero and a Window of 22.5 degrees. Variograms will be built at 45-degree
azimuth increments in a total of four directions. Likewise, the initial dip is horizontal (0 degrees) and varies
at 30-degree increments for a total of four steps. The search Window is +/- 15 degrees wide. The bandwidths
are activated by putting a check mark in the corresponding box, after which the appropriate bandwidth can
be added. The horizontal bandwidth in this example is 60 feet; no vertical bandwidth was added.

There are options for coordinate rotation if rotations need to be applied to the variograms. The Rotation
type option includes several methods, the most common being GSLIB and MEDS.

The Title and labels tab, as its name implies, is where the description of the variograms is entered
(Figure 6). Besides the title, labels for the X and Y axes plus a description of the variogram can be added
here.

The four buttons at the bottom of each of the tabs


in the Variogram Parameters panel have specific
meanings and can be activated from any of the tabs.
The Cancel button, obviously, cancels the operation
and returns the user to the start of the Variogram
operation without saving the entries made. The Done
button does the same thing as the Cancel button, but
saves the changes that were made to the contents of
the panels.
The Queue and Build Now buttons execute the
run provided the Variogram file(s) root name has
been specified (Figure 8) by the user (cu bhs in the
example).

Figure 6
The last tab, Options (Figure 7), has three addi-
tional parameters that can be activated by placing a
check mark on the corresponding boxes and entering
the project-related values. Implementation of these
options depend upon the needs of the project. Normal
variograms can be transformed to log using the equa-
tion shown on the first option. The second option is
used to bracket the values of the item for which the
variogram is being generated. The types of vario-
grams that can be normalized are Normal and Cova-
riance only, and can be specified in the third option
in the last panel. Finally, the boxes for Constant for
relative estimator, Minimum distance to accept Figure 8
pairs, and Consider vertical if within must be filled
appropriately. The Build Now button executes the run immedi-
ately while the Queue option saves the run for even-
tual execution.
Upon execution of the run, the variograms are built
and their file names appear in the lower-left window
of the main panel (Figure 9). Before proceeding it is
recommended to review the file-names and make
sure that all of the requested variograms plus two (at
the bottom of the list) are present. These last two var-
iograms are global variograms in all directions (e. g.,
cu bhs_global.var), and a global variogram (cu
bhs_hrz_global.var) in the horizontal direction.
(continued on page 7)

Figure 7


Figure 9

A variogram filename such as cu bhs_90_30.var consists of a root (cu bhs) chosen by the user, the
azimuth (90), the dip (30), and the file extension var. This is the file naming convention used in MSDA.

Clicking on the Open button after highlighting one of the variograms will bring up a panel where
additional information on the selected variogram, as well as access to its related information, is made
available to the user (Figure 10). In the first tab (Chart), the Auto-Fit button was pressed to fit a model to
the variogram (red line).

Figure 10

The parameters for the variogram model can be modified manually by dragging the markers on the
curve, or by keying parameters into the dialog at the upper right corner. When you modify the model by
dragging the markers, MSDA instantly updates the parameters in the dialog, and vice versa. MSDA supports
models with up to three structures. One can also build a best-fit model by simply clicking on the Auto-Fit
button. A number of options are available, such as tolerance, weight by number of points, and so on.

Specific information pertaining to each data point can be readily accessed and viewed by placing the
mouse pointer directly over any desired data point. This shows such basic items as the lag range, the value,
the number of pairs, the average distance, the drift, and the mean grade and standard deviation for the point
selected.

The Global Stats tab (Figure 11) displays the basic statistics of the variogram. The panel is
self-explanatory; however, it is important to note the number of samples in the data source vs. the number of
actual valid samples used in the run. It also shows the exact boundaries of the volume of sampled material
covered by the variogram.

Figure 11

The Lag Stats tab shows a summary of the variogram (Figure 12) that includes the range, number of
pairs, distance, drift, etc. The corresponding mean of the pairs involved, as well as its standard deviation, is
also shown.
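A fitted model with up to three structures is simply a nugget plus the sum of the structure
contributions. A minimal sketch for nested spherical structures (illustrative names, not the
MSDA model code):

    def nested_spherical(h, nugget, structures):
        # structures: list of (sill_contribution, range) pairs, at most three.
        gamma = nugget
        for c, a in structures:
            r = min(h / a, 1.0)
            gamma += c * (1.5 * r - 0.5 * r ** 3)
        return gamma

    # Example: nugget 0.1 with two spherical structures
    print(nested_spherical(50.0, 0.1, [(0.5, 60.0), (0.4, 150.0)]))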


Figure 12

Detailed variogram modeling options can be started from the main panel with the command Tools |
Variogram 3D Modeling. The available variograms will be displayed as in Figure 13. If there are no
variograms, they have to be highlighted in the project directory (use Microsoft® Windows Explorer), then
dragged and dropped into the File area.

Figure 13

Users can drag one or more variograms from their MSDA project into the 3D Modeler’s list at the lower
right corner. To automatically fit a 3-D variogram model to this collection of variograms, select Auto-Fit
from the 3D Model menu. As with the Auto-Fit tool for the individual variograms discussed earlier, several
options are available, such as tolerance, weight by number of points, and so on. However, the default values
typically work well.

Figure 14

The variogram contours can now be viewed on any plane by selecting the Rose tab in the variogram
panel. This shows the trends of the mineralization represented by the variograms on that plane. The
properties of the points, contour lines, etc., can be modified and tailored to the user’s specifications.

Figure 15

The variogram contours can also be individualized for report or presentation purposes. Figure 16 shows
an example. A variety of other modifications, as well as printing or exporting to a bitmap, can be
implemented using the icons in the Toolbar.


Figure 16

The orientation of the various structures of the variogram model can also be displayed in plan and
cross-sections (Figure 17). To access this option, use the command 3D Model | Display Structures on
Planes. In this example there is only one variogram structure; therefore, there are no displays for the second
and third structures. Also, because we generated only the horizontal variograms and set the vertical range to
a small distance, the EW and NS section views display flat ellipses with no dip.

Figure 17

There are many more helpful options about variography in MSDA that were not covered in this article.
Additional descriptions of the tools related to variogram modeling, as well as of other MSDA options, will
be published in future newsletters.

MSDA Custom Reports and 3-D Variogram Modeling

Introduction

MineSight® Data Analyst (MSDA) is a new data analysis package for MineSight®, replacing the existing
m300 and m400 series programs and adding significant new functionality. In this paper we will be
presenting two new MSDA modules:
• Custom Reports
• 3-D Variogram Modeling
Custom Reports
Custom Reports is a program which allows users to design and build their own reports using any of the
data sources supported by MSDA, i.e., MineSight® assay, composite, blasthole, and block model files,
ODBC compliant databases, and text files. Users have complete control over the choice of fields, statistics,
filters, layout, and style of the report. Each report is an HTML file which may be viewed in a web browser,
Microsoft® Word, or Microsoft® Excel. A sample is shown in Figure R-1.

Figure R-1
Functional Overview
Custom Reports works in two stages, as shown in Figure R-2. The first stage is Report Builder, which
creates a report matrix file of values according to the user’s criteria. For example, one may build a report
matrix based on the mean, median, and standard deviation of cu, pb, zn, and mo data, by copper cutoff grade
and rock type. The actual selection of statistics, fields, and filters (criteria) is entirely up to the user. Report
Builder computes all combinations of these values; in other words, it computes all statistics of all fields
according to all filters (criteria) and stores the results in a report matrix file (.mrb) in XML format. If the
data source is large, e.g., a large block model, and the criteria are complicated, this step may be quite time
consuming due to the large number of computations involved.
The second stage is Report Viewer, in which the user extracts data from the report file
and creates a readable HTML file. For example, using the report file mentioned in the

previous paragraph, we may ask for four tables, one per metal, with the mean and median grade displayed
by rock type in rows and cutoff grade in columns. The Report Viewer runs essentially instantaneously, since
all of the results are simply extracted from the report file. It is common to do many different report views
using a single report file. Report style options are available to change color, text size, font, etc., for titles,
column and row labels, headers and footnotes, and so on. Since each report view is an HTML file, it can be
easily displayed, printed, opened in Microsoft® Word, and so on.

Figure R-2
A second option for Report Viewer is to display a chart rather than an HTML report. For example, based
on the same report matrix file, we could display a graph of copper grade by cutoff with one series (curve)
per rock type. This may be considered just another view of the report file. Like the HTML reports, charts
are created essentially instantaneously.
Report Builder
In the Report Builder stage the user specifies the report content and builds a report
matrix file with an .mrb extension. This step is straightforward, but can be time
consuming to run if the data source is large and the filters are complicated.
There are two fundamental types of report which one may build: univariate and
bivariate. For univariate reports, one must specify:
• General info such as titles and chart style file (Figure R-3).
• Weighting info, used when you wish to weight statistics, e.g., by length or by tonnage.
When this option is used, the total weight (e.g., total tonnage) is available on the
report (Figure R-4).
• A list of fields and field expressions (Figure R-5).
• A list of statistics, such as mean and standard deviation (Figure R-6).



• One or more filters, i.e., selection criteria such as rock type and cutoff grade (Figures R-7 to R-9).

Each of these specifications is discussed below in more detail.

Figure R-3

Figure R-4




Figure R-5

Figure R-6




Figure R-7

Figure R-8




Figure R-9

Bivariate reports are similar to univariate reports except that:


• You must specify two lists of fields rather than one. For instance, you may specify
Fields A = cu, pb, zn, mo and Fields B = cu, pb. Custom Reports would then calculate
bivariate statistics for every field in A against every field in B.
• The choice of statistics is different, e.g., you can select things such as best fit line and
correlation coefficient (r).
Definitions—At this point, it is convenient to introduce a couple of definitions:
• Dimension: This refers to a set of fields, statistics, or filters. A single report may have
up to five dimensions, e.g., a report with the following four dimensions would be
typical:
o Fields (cu,pb,zn,au,ag,etc.)
o Statistics (count, mean, median, max, etc.)
o Rock Types
o Cutoff Grades
• Axis: An axis refers to a page, row, row group, column, or column group in the final
HTML file. The content of the axes is specified in the Report Viewer step.
Defining Fields
Univariate reports require exactly one dimension for fields. Bivariate reports require two
field dimensions and Report Builder calculates the bivariate stats for all fields in the first
field dimension against all fields in the second field dimension. The dialog used to enter
field info was shown in Figure R-5.
To add a field to the list, click on the Add button and the Field Definition dialog will
pop up. Enter the field label which is to appear on the report. You can then enter either a
simple field or a field expression. For a simple field, select the field from the combo box,
and enter a cutoff value (*). For a field expression, key in an expression such as cu + 8.25 * mo, where
cu and mo must be valid field labels in the data source. Buttons are also available to Edit and Remove
fields.
(*) Cutoff Values: Cutoff values are used in two ways in MSDA and Custom Reports, and a bit of
explanation is needed here.
• Cutoff Type I: When a cutoff value is used as part of a filter (see Defining Filters), it is

designed to filter entire records. For example, if we specify a filter with a cutoff grade
of 0.35% cu, then a record would be wholly accepted if it had cu >= 0.35%, and wholly
rejected otherwise. This is probably the case with which most users are familiar. For
example, if your ore cutoff is 0.35% cu, and you want ore stats for several metals, this
is the approach you would use. This is also the approach used throughout MSDA and
MineSight® and most other mining programs.
• Cutoff Type II: In certain circumstances, a user may want a report with statistics for
multiple fields wherein each field has its own cutoff. This is the type we use in the
individual field specification, as shown in Figure R-5. For example, if we say that the cu
cutoff = 0.3% and the pb cutoff = 0.4%, then mean cu would be the mean cu above 0.3% cu,
and mean pb would be the mean pb above 0.4% pb.
Note: A useful by-product of Cutoff Type II is that we can filter out values from a
database or text file which are missing or too low to be useful. For example, suppose
we are reading a text file in which all fields have many zero values, representing
uninterpolated blocks that we want to exclude from our statistics. In such a case, we
can simply enter cutoff values such as 0.001 to exclude the zeros from the statistics. Note
that this is very different from setting cutoff values of 0.001 for fields in the filter
definition, which would reject the entire record whenever one field was below cutoff.
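A tiny worked illustration of the difference between the two cutoff types, on hypothetical
records (this is not the Custom Reports implementation):

    # Three hypothetical records
    records = [
        {"cu": 0.45, "pb": 0.30},
        {"cu": 0.20, "pb": 0.55},
        {"cu": 0.40, "pb": 0.10},
    ]

    # Cutoff Type I (filter): whole records pass or fail on cu >= 0.35, and
    # the pb statistics are then computed on the surviving records.
    passing = [r for r in records if r["cu"] >= 0.35]
    mean_pb_type1 = sum(r["pb"] for r in passing) / len(passing)   # 0.20

    # Cutoff Type II (per field): each field keeps only its own values
    # above its own cutoff.
    cu_vals = [r["cu"] for r in records if r["cu"] >= 0.3]   # [0.45, 0.40]
    pb_vals = [r["pb"] for r in records if r["pb"] >= 0.4]   # [0.55]
    mean_cu_type2 = sum(cu_vals) / len(cu_vals)              # 0.425
    mean_pb_type2 = sum(pb_vals) / len(pb_vals)              # 0.55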
When defining a list of fields, please note that all of these fields will be included in the
report matrix file (.mrb) created by Report Builder. However, during the Report Viewer
stage, one can select any subset of the fields for inclusion in the final HTML file. Therefore,
it is usually prudent to include more fields rather than less. If you aren’t sure whether or
not you will need a field, it’s usually best to include it.
Defining Statistics
Selecting report statistics is a simple matter of checking the desired statistics from a list
of about twenty that are supported by Custom Reports (see Figure R-6). Certain statistics
are only relevant for bivariate reports, e.g., correlation coefficient (r), and these are
suppressed from the list when the user chooses the univariate report type. Some statistics
such as median and quartiles require that extended metadata be set in MSDA (*). These
statistics are suppressed from the list if they are not available.
(*) Note: Extended metadata includes results such as min. and max. of all fields. It is an
optional calculation performed elsewhere in MSDA.
Defining Filters
Filters refer to selection criteria such as rock type and cutoff grade. There are two ways
to define a filter:
• Basic: The user defines a list of categories, cutoffs, or bins on a specified field (see
Figures R-7 and R-8).
• Custom: The user enters an SQL statement to define a filter (see Figure R-9).
The basic filter is the preferred option wherever possible, since it is easier to work with
and the processing time is much faster. In this case, you choose a field on which to filter,
such as rock type, then list the values, cutoffs or bins. For example, to filter on rock types
3,6,10,11,12,13,14,21, you would choose the List option and enter 3,6,10:14,21. If you
wished to use cu cutoff grades of 0.1, 0.2, 0.3, and 0.4, you would choose Cutoffs and enter
0.1,0.2,0.3,0.4. If you choose Bins rather than Cutoffs, the previous example would
report bins 0.1 to 0.2, 0.2 to 0.3 and 0.3 to 0.4.



When a basic filter is not flexible enough, you can enter a custom filter using statements which are
essentially SQL “where” clauses, such as “cu >= 0.3 and rock = 1 or rock = 2” (see Figure R-9). In
the Labels textbox, enter a label for each filter value, one per line. In the Filters textbox, enter the SQL
“where” clauses corresponding to each filter type. For example, in Figure R-9 we have defined four Ore
Types: Waste, SP1, SP2, and Ore. The definition of Waste is cu < 0.3, and so on.

Note on performance: It is desirable (but not mandatory) to put filter dimensions ahead of field and
statistics dimensions in the Report Builder dialog since the performance is generally faster.
Other Options
One may save the current report builder parameters with a specified name, e.g., My
Favorites, and restore them at a later date. This may be done by using functions on the
File menu or the toolbar.
Report Viewer
The Report Viewer allows one to extract results from the report matrix file (.mrb) and
write them to an HTML file for viewing and printing. To start the Report Viewer, simply
Open the report file (.mrb) in MSDA Manager, or double click on it. A small dialog will
pop up asking you to choose between Report (HTML) mode or Chart mode. These will be
discussed below.
Report Mode
When a report file is opened in this mode, the HTML Report dialog pops up as shown
in Figure R-10. The first and most important tab is the Layout tab. In this tab you select an
axis for each of the dimensions in the report file. For example, in Figure R-10 the report file
has five dimensions: field, statistic, ore type, rock type, and elevation. We are placing fields
in row groups, then for each field we are placing statistics in rows. Elevations will be laid
out across the columns, and finally, for each unique combination of rock type and ore type,
we will generate a separate page, i.e., a separate HTML table with a page break. The specific
fields, statistics, ore types, etc., can be chosen by clicking on the appropriate Choose button
and selecting the ones you want from the list of those in the report file (see Figure R-11).

Figure R-10




Figure R-11

Note regarding Total and All: Custom Reports automatically adds two items to each
filter dimension, Total and All. The former is the total (union) of all specified filter items,
whereas the latter selects all records regardless of the value of the filter. For instance, if we
ask for rock types 1, 2, and 3, then Total would capture all records from rock types 1 + 2
+ 3, whereas All would capture all records regardless of rock type (including rock type 4,
5, etc., and even records with missing rock types). In other words, All basically collapses
the dimension. If you have a table of cutoff by rock type, then the row for rock type = All
would reflect data filtered on cutoff only.
The second tab in the HTML Report dialog is the Options tab. This tab allows one to
place headers, footers, etc., on the report. Headers may contain special fields such as [date]
and [page], which are replaced by the appropriate values when the report is created. The
options on this tab are straightforward.
The final tab is the Style tab, as shown in Figure R-12. To set the style of a particular
report component, such as Title, Cell Value, Row Label, Column Label, Footnote, etc.,
simply select the component in the combo box at the top of the page, then use the font
color and justification buttons to edit it. Standard Microsoft® Windows color and font
dialogs are used. When you have made the style the way you like it (a sample is shown),
click on the Set button beside the report component at the top of the dialog.

Figure R-12



Custom Reports can be used to build a very wide array of reports on all data sources
supported by MSDA, from simple summary tables with a few rows and columns, to
massive reports with complex filters, detailed tables using row and column groups, and
custom fonts, colors, and text sizes. The following figures show selected pages from two
report views of the same report file.

Figure R-13

Figure R-14

Chart Mode
If you open the report file in Chart Mode, you are presented with a dialog similar to
that shown in Figure R-15. One can rapidly display charts using different combinations
of dimensions in the upper left corner. Each dimension in the report can be used in one of
four ways:
• None: The dimension is ignored. Valid for filter fields only.
• Points: The values of the dimension are laid out across the x-axis. In Figure R-15, we
lay out Elevation by points. Only one dimension can be assigned to Points.
• Series: Each value of the dimension forms a series on the chart, i.e., a curve, a set of
bars, or slices of a pie, depending on the chart type. In Figure R-15, we assigned Rock
Type by series. Note that the Legend shows each of the Rock Types (1,2,3,4,5). Only
one dimension can be assigned to Series.
• Chart: A specified value of the dimension is used for the chart. In Figure R-15, we
assigned Field = Copper, Statistic = Mean, and Ore Type = Ore to Chart. This means
that all of the chart statistics refer exclusively to the mean grade of copper ore.
As with all MSDA charts, significant options are available to dress up the style, and style
templates may be prepared, saved, and restored. For report charts, it is usually interesting
to explore different chart gallery types such as scatter, curve, bar, pie, and so on.

Figure R-15

3-D Variogram Modeling (Var3D)


MSDA includes a powerful new tool known as Var3D for viewing directional
variograms and building 3-D variogram models. In this section we discuss the capabilities
and basic operation of Var3D and show some illustrative examples.
The principal features and characteristics of Var3D are as follows. Each of these will be
discussed in detail in subsequent sections.
• Users define a list of individual experimental variograms by dragging and dropping
MSDA variograms into Var3D (up to 100). A typical variogram set consists of about
30 variograms spaced 30 degrees apart in azimuth and dip.
• A Standard View shows variograms, individual variogram models, and the
intersection of the 3-D variogram model along a specified direction (+/- window).
• A Rose View shows contours of the variogram 3-D model on a specified plane (+/-
window), as well as variogram range lines and error circles.



o Definition: A variogram range line is a ray from the origin in the direction of the
variogram whose length is equal to the range of the variogram.
o Definition: An error circle is a circle whose size is proportional to the difference
between the model and the experimental data at each lag point.
• An Auto-fit tool is available to calculate the best fit 3-D variogram model. Var3D
supports up to three structures.
• A dialog shows the best fit parameters. Users may manually edit these parameters to
modify the 3-D variogram model if desired.
The Variogram 3-D Manager Window
Var3D runs in a separate window called Variogram 3D Manager. This window may
be opened by choosing Tools, then Variogram 3D Modeling…, from the main MSDA
Manager window.
Figure V-1 shows Variogram 3D Manager in Standard View mode. The following key
features can be seen in this view:
• A list of all of the variograms which we wish to view and/or model.
• The current direction and window angle, i.e., azimuth 90, dip 0, with a 30 degree
window.
• A multi-tabbed dialog which we use to set auto-fit parameters for the 3-D variogram
model and to view or edit the resulting model parameters.
• The chart shows all the variograms from the current list which are at the current
direction +/- window, as well as the intersection of the 3-D model in this direction.
• A menu bar is available from which the user may access tools and settings.

Figure V-1

Figure V-2 shows Variogram 3D Manager in Rose View mode. The following key
features can be seen in this view:

• A list of all of the variograms which we wish to view and/or model.
• The current plane and window angle, i.e., "EN" (horizontal) +/- 5 degrees.
• A multi-tabbed dialog which we use to set auto-fit parameters for the 3-D variogram
model and to view or edit the resulting model parameters.
• The chart shows:
o contours of the 3-D variogram model on the current plane
o range lines for each variogram that lies on the current plane (+/- window)
o Note: other details not visible in Figure V-2 may also be shown; these will all be
discussed in more detail below.
• A menu bar is available from which the user may access tools and settings.

Figure V-2

Loading the Experimental Variograms


In order to work with Var3D one needs to identify the experimental variograms with
which one wishes to work. This is done very simply by dragging MSDA variograms
into the list at the lower right corner of Variogram 3D Manager. Typically one loads
variograms which are spaced about 30 degrees apart in azimuth and dip, although this is
certainly not a requirement.
The variogram list offers the following features:
• Easy to add and remove variograms.
• Right click on a variogram to get detailed property information.
• One can uncheck variograms to suppress them from the charts (note: unchecked
variograms are still used for auto-fit calculations).
• The variogram text is color coded so that you can identify individual variograms on
the charts.



The Standard View
One can display variograms in a Standard View by choosing the Standard Chart tab in
Variogram 3D Manager (see Figure V-1). This view shows several things:
• all variograms which lie at the current variogram direction +/- window
• the models of each of the individual variograms
• the intersection of the 3-D variogram model at the current direction
The contents and appearance of the Standard View can be fine-tuned using the
Variogram Settings dialog, available from the Variogram 3D Manager Settings menu. In
this dialog you can turn individual variogram models on or off, request that the variogram
be automatically re-sized to fit all variograms, change the chart style, etc. One can also
explicitly set the X and Y limits of the variogram chart using the Variogram Chart Limits
dialog, available from the Variogram 3D Manager View menu.
The current direction control for the Standard View is defined in the dialog shown
in Figure V-3. It is possible to define the current direction via an azimuth and dip, or by
selecting one of the principal directions (E, N, Z). One may apply a window to either the
azimuth and/or the dip by checking the checkbox at the left side, and filling in the "+/-"
field. Alternatively, one can use a circular window, i.e., a cone-shaped window about the
current direction. In Figure V-3, we are using a circular window of width 15 degrees, i.e.,
15 degrees from the current direction to the edge of the cone. Finally, and perhaps most
practically, one
can set the direction by simply selecting a variogram in the variogram list, then clicking on
the Set to Selected Variogram button.

Figure V-3

When working with the Standard View, it is important to remember that it has all of the
properties and benefits of any other MSDA chart, e.g., there is extensive right-click support
to modify the appearance of the chart (colors, fonts, titles, etc.), add a legend box, copy to
bitmap, print, etc. In addition, if you hover over any lag point with your cursor, a box will
pop up showing the lag properties, e.g., lag distance, number of pairs, value, etc.
Figures V-4 through V-6 show the Standard View as it appears with a variety of settings
illustrating some of the features discussed above.




Figure V-4

Figure V-5




Figure V-6

The Rose View


One can display variograms in a Rose View by choosing the Rose Chart tab in
Variogram 3D Manager (see Figure V-2). This view shows several things:
• Contours of the 3-D variogram model on the current plane; these contours are ellipses
when the 3-D variogram model has only one structure, but they are more complicated
for two and three structure models.
• Range lines for each variogram which lies on the current plane (+/- window); recall,
a range line is a ray from the origin in the direction of the variogram whose length is
equal to the range of the variogram
• Error circles at each lag point on each variogram which lies on the current plane
(+/- window). The size of the circle is proportional to the difference between the actual
experimental value of the lag and the value predicted at that point by the 3-D variogram
model; if the difference is < 5%, a small cross is used, indicating good agreement.
The contents and appearance of the Rose View can be fine-tuned using the Rose Chart
Settings dialog, available from the Variogram 3D Manager Settings menu. In this dialog
you can:
• Select which items you want to show on the Rose View, from amongst contours,
variogram range lines, error circles, etc. as discussed in the previous paragraph.
• Define the contour increments in terms of the variogram value. In Figure V-7, there
will be 10 contours from 0.0 to 0.2, i.e., 0.02, 0.04, etc.
• The Contours – Advanced section lets you control the contouring engine. Usually
the defaults work well, but you can use these items to make the contouring more
precise (but slower) or less precise (but faster). Reducing the angle step enhances the
smoothness of the contours. Reducing tolerance and/or increasing max. iterations



improves the accuracy of the contours. But remember, improved smoothness and
accuracy mean longer refresh times.
• You may attach a style file to modify the chart's style.

Figure V-7
The current plane control for the Rose View is defined in the dialog shown in Figure
V-8. It is possible to define the current plane via three rotation angles in any of the
supported conventions (GSLIB, GSLIB-MS, MEDS, Sage), or by selecting one of the
principal directions (EN, EZ, NZ). One may assign a window by checking Use window
of and assigning an angle, e.g., in Figure V-8 we are using a window of 5 degrees. At any
time, one may also rotate the plane by setting a rotation angle and an axis about which to
rotate, then clicking Rotate, e.g., one could rotate the current plane by 10 degrees about
the Easting axis (E). The icon shows the orientation of the current plane, e.g., in Figure
V-8 the current plane is horizontal (Z is shown coming out of the view towards the user).

Figure V-8
When working with the Rose View, it is important to remember that it has all of the
properties and benefits of any other MSDA chart, e.g., there is extensive right-click
support to modify the appearance of the chart (colors, fonts, titles, etc.), add a legend box,
copy to bitmap, print, etc.
Figures V-9 through V-12 show the Rose View as it appears with a variety of settings,
illustrating some of the features discussed above.




Figure V-9

Figure V-10

Figure V-11

Figure V-12



Using the 3-D Variogram Model Auto-Fit Function
Var3D includes a tool for automatically creating a 3-D variogram model from the
currently loaded experimental variogram data. The 3-D variogram model is a least
squares best fit to the experimental variogram data. A numerical method known as the
Downhill Simplex Method is used to minimize the squared-error objective, or at least to
find something reasonably close to the minimum.
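As a rough illustration of the principle (a sketch, not the Var3D implementation), the
following Python snippet fits a single spherical structure to experimental points by
pair-weighted least squares using the Nelder-Mead downhill simplex routine from SciPy;
the sample lags, starting model, and tolerances are all assumptions:

    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical experimental variogram: lag distances, gamma values, pair counts
    lags = np.array([10., 20., 30., 40., 50., 60.])
    gams = np.array([0.28, 0.45, 0.55, 0.60, 0.61, 0.62])
    pairs = np.array([120, 340, 560, 610, 580, 420])

    def spherical(h, nugget, sill, rng):
        """Spherical model rising from the nugget to the total sill at range rng."""
        hr = np.minimum(h / rng, 1.0)
        return nugget + (sill - nugget) * (1.5 * hr - 0.5 * hr**3)

    def rms(params):
        nugget, sill, rng = params
        if nugget < 0 or sill <= nugget or rng <= 0:  # reject invalid models
            return 1e9
        resid = spherical(lags, nugget, sill, rng) - gams
        return np.sqrt(np.average(resid**2, weights=pairs))  # weight lags by pairs

    start = [0.1, 0.6, 40.0]  # a valid starting model (cf. Initialize Model, below)
    fit = minimize(rms, start, method='Nelder-Mead',
                   options={'xatol': 1e-4, 'fatol': 1e-6, 'maxiter': 2000})
    print('nugget=%.3f sill=%.3f range=%.1f RMS=%.4f'
          % (fit.x[0], fit.x[1], fit.x[2], fit.fun))

Pinning a parameter, as described below, amounts to holding it constant and letting the
simplex search over the remaining ones.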
Building a 3-D Variogram Model consists of three very simple steps:
• Enter the auto-fit parameters
• Initialize the 3-D Model
• Run Auto-Fit
These steps are discussed in more detail below. After creating a model, you may accept
it, or re-run auto-fit beginning where the previous one left off (perhaps with modified
parameters) in order to fine tune the best fit model. At any time you may manually
modify any of the model parameters, e.g., you could change the range or sill of one or
several structures.
As the auto-fit program runs, the progress is shown in a progress bar at the bottom of
the screen, together with the current RMS error and the number of iterations. When the
auto-fit program finishes, you can see some results on the Stats tab (Figure V-14). The
Status indicates None (no model) or Manual (user made one or more manual adjustments
to the model) or Auto-Fit (the model was created using the auto-fit function). The RMS
error is the Root Mean Square difference between the 3D variogram model and the
experimental variogram data.
Figure V-13 shows the parameters which we enter to control the auto-fit. The first three
options allow us to weight data from each lag according to the number of underlying pairs,
select only those lags with a specified minimum number of pairs, and select only those
lags with lag distance less than a specified distance. The tolerance and max. iterations
apply to the Downhill Simplex Method and won’t be explained in detail here. However,
the defaults usually work well. Improved accuracy may be achieved by reducing the
tolerance and/or increasing the number of iterations, at a cost of increased processing time.
The Temperature is an advanced function which is beyond the scope of this paper and is
generally not useful; it should be set to 0.0 by most users, effectively turning off this option.

Figure V-13 Figure V-14

When the auto-fit parameters have been entered, the next step is to initialize the
variogram model—the auto-fit needs a valid model with which to start. The easiest way to
do this is as follows:



• On the Model tab, General sub-tab, enter a rotation convention, e.g., GSLIB, MEDS,
Sage (see Figure V-15). All rotation angles arising out of the auto-fit will be computed
in terms of the specified convention.
• On the Model tab, Structures sub-tabs #1, #2, #3, check the structures you want, and
the model type for each structure (see Figure V-16). E.g., you may say that you want
two structures; the first is spherical and the second is exponential.
• You now need to enter all of the ranges and sills to define a valid starting model.
You could enter each of these by hand, but in practice one generally uses one of the
following methods:
o Select Initialize Model from the 3D Model menu to set all of the ranges, sills, and
rotations to 0.0.
o After running auto-fit once, the parameters will be contained within the dialogs
shown in Figures V-15 and V-16. It is very common to keep these parameters
and re-run auto-fit again, perhaps with smaller tolerance, etc., in order to fine
tune the best fit model. In other words, your initial model for run #2 is the best fit
result from run #1.
• Var3D lets you pin (fix) any of the parameters by checking the adjacent checkbox
(see Figures V-15 and V-16). E.g., one could set the nugget to 0.1, check the adjacent
checkbox, then run auto-fit. The result would be the best fit 3-D variogram model,
subject to nugget = 0.1.

Figure V-15 Figure V-16

The final step is to simply run the auto-fit program. This is done by selecting Auto-Fit
from the 3D Model menu. As the auto-fit progresses, the progress, current RMS Error, and
number of iterations are shown at the bottom of the screen. A typical auto-fit takes about
10 seconds, though this depends on many variables and could be longer.
Examining and Editing the 3-D Variogram Model Parameters
When the auto-fit program finishes, the best fit model parameters are stored in the
dialog tabs shown in Figures V-15 and V-16. This model is also shown on the Standard
View (i.e., the intersection of model with current direction) and the Rose View (i.e., the
intersection of model with current plane). Users are free to change any of the parameters
by simply keying in the new value. The model will be updated in the Standard and Rose
Views as soon as you tab out of the field.



Displaying the Individual Structures
Var3D can display the individual structures from the current model as they intersect
each of the major planes (EN, EZ, NZ). Since the range of each structure is an ellipsoid at
some orientation, its intersection with each major plane is an ellipse. To view these
ellipses, select Display Structures on Planes… from the 3D Model menu. A sample is
shown in Figure V-17.

Figure V-17

Exporting to MineSight® Kriging
To export the current 3-D variogram model to a file which can be read by the MineSight®
kriging program, select Export Variogram Model… from the File menu. A couple of
formats are available, as shown in the export dialog (Figure V-18).

Figure V-18



MineSight® Data Analyst (MSDA) Practical Examples
MineSight® Data Analyst (MSDA) is a very dynamic tool created to help MineSight®
users evaluate resources. This tool, while easy to use, allows for a thorough analysis of
the available data in a very quick and efficient manner. This workshop will show some
examples of how this new statistics package can be used to help with some of the most
common requirements in a geostatistical study.
MetaData
The first step when approaching any geostatistical analysis is familiarization with
the dataset. At this point, we usually need to get general information about the dataset,
as a whole, as well as more detailed information with relation to each of the available
individual items. This kind of information is often referred to as MetaData. MSDA offers a
convenient tool, which gives a general description of our dataset.
After connecting to the Data Source, select the option Data | Metadata | Add
Extended. This option allows the program to read the data source and obtain basic
information about it. This information is then viewable using the option Data |
Metadata | Show.
If you are working with MineSight® data, you will find general information about
the project and the model or drillhole file under review. In addition, you will also find
a general description of the items available in your dataset, which includes such useful
information as the number of valid records for each item, the actual minimum and
maximum for each item, and the number of distinct values for integer items.

Using the results for our sample data set and looking at the MetaData information, we
can easily determine that the maximum value for CU is 7.86%, and that the geology items
ROCK, ALT, and MIN are defined as integers with four or five distinct values each.
The MetaData information for each item can also be accessed via the Info button located
on the MSDA panels where an item must be specified.


In addition, we can display actual data by using the option Data | Explorer.

Univariate Analysis
Following the first assessment of the dataset, we would likely continue with a univariate
description of our data. For this analysis we can use tools such as Histograms, Cumulative
Probability Plots, and Custom Reports.
We could, for instance, use Histograms to determine the distribution of the Geology
items as well as the grade items.


We can easily display the cumulative curve and/or the grade tonnage curves for each of
the Histograms created. This is done from the View menu of the Histogram window.


It is also possible to apply a weighting item or factor to the histogram primary item
when appropriate. In the case of drillhole or composite data, we could weight our grade
item by the length item, so the histogram is built based on the interval lengths instead of
just the number of samples. In the same way, for the case of a model file, we could weight
our grade item by a density item and a volume factor. Our results will then be based on
tonnage values rather than on just the number of blocks.
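Conceptually, weighting simply replaces "one sample, one vote" with weighted statistics. A
minimal sketch with hypothetical composites:

    import numpy as np

    cu = np.array([0.42, 0.75, 0.31, 1.10, 0.58])  # composite grades (hypothetical)
    length = np.array([2.0, 1.5, 3.0, 1.0, 2.5])   # interval lengths used as weights

    print(cu.mean())                       # unweighted: each composite counts once
    print(np.average(cu, weights=length))  # weighted: long intervals count more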

Furthermore, we can also try analyzing the distribution of CU using a filter item. We
can define a filter item and its corresponding values from the Filter tab included in the
Histogram Parameters window. In this case, several histograms, one for each filter
value, will be created, each with a distinct name.


It is also possible to show several histograms at the same time for easy comparison of
different populations. This last feature is available from the External Histograms menu
inside the Histogram window.


On the other hand, in addition to analyzing our data through histograms, we could
also visualize information in the form of reports. The ability to create custom reports is a
very powerful tool in MSDA, allowing us to display and arrange the necessary information
in any way needed. These reports are created with the Custom Report option from the
Tools menu. With this tool, we can calculate a variety of different statistics in relation to
our items and also apply weighting factors and various filters to our data.
Using some of the options available for filters in Custom Reports, we could classify
our data by geology codes, cutoffs, or any other classification based on any of the items
available in our data source file. In addition, when using the Field Expression option
associated with fields, we have the option to create a calculated field ‘on the fly’ to be
included in our reports. The following are examples of typical custom reports and their
special setting options.

Report showing values of Mean Cu for each domain defined by ROCK and ALT


Report showing main statistics for different intervals of CU


Report showing main statistics for different grade items and different ore types in the block model.

Correspondingly, the same information contained in the reports can also be displayed
as graphs when using the Chart option associated with the reports. This kind of display can
help us to better understand and visualize our data.

Figure showing mean values for CU, MO, and EQCU for different ROCK types.

Finally, we may also want to check the Cumulative Probability Plots for the main grade
values to see how close our populations are to a Normal or Lognormal distribution.

From the Settings menu in the Cumulative Probability Plot window, we can
choose between the standard Normal or the Lognormal display. In addition, from
this menu we can also control the extent of the chart tails by choosing any of the three
percentage ranges available. Changing the range display will help us better identify and
describe outlier values.
Bivariate Analysis
Should we need to perform a bivariate analysis, we can use such tools as Scatter Plots
and Custom Reports and Graphs.
When working with Scatter Plots, there are some additional features available to help
with the interpretation of results. From the View menu in the ScatterPlot window,
there are three options available. The Best Fit Line option will show the straight line
that best fits our data set; slope and intercept values are shown in the summary statistics
window. The X=Y option helps when comparing two fields which are assumed equal or
nearly so, such as a grade item in a block model interpolated by two different methods.
Finally, the Conditional Expectation Line option shows a line representing a moving
window average of Y with respect to a predefined range of X. To set up the width of this
moving window on the X-axis, select one of the options from the Setting | Conditional
Expectation Window menu. The values under this menu are measured as a percent of the
length of the X-axis.
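A conditional expectation line of this kind is essentially a moving-window mean of Y along X.
A bare-bones sketch of the idea (the window fraction and the data are assumptions, not the
MSDA algorithm itself):

    import numpy as np

    def conditional_expectation(x, y, win_frac=0.05):
        """Mean of y within a sliding x-window of width win_frac * (span of x)."""
        half = win_frac * (x.max() - x.min()) / 2.0
        centers = np.sort(x)
        means = np.array([y[np.abs(x - c) <= half].mean() for c in centers])
        return centers, means

    rng = np.random.default_rng(7)
    cu = rng.lognormal(-1.0, 0.5, 500)           # hypothetical CU grades
    mo = 0.05 * cu + rng.normal(0.0, 0.01, 500)  # correlated MO grades
    xs, ce = conditional_expectation(cu, mo)     # points to draw over the scatter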


If display of this information in a report is preferred, create a Bivariate Report. This
type of report includes additional statistical measures, such as the Coefficient of Variation
or Best Fit Line parameters.
To create this kind of report, we need to specify a slightly different setup to account for
the different calculations. The main difference between Univariate Reports and Bivariate
Reports is that Bivariate Reports need two tabs, or dimensions, for Field definitions. With
two dimensions for Fields, MSDA can compare one group of fields against the other and
calculate all the special bivariate parameters.

Report showing bivariate statistics for CU, MO, AS, and PB.
Information from the previous report can be displayed as a graph by selecting the Chart
option when opening the report. An example is shown below.


Correlation Coefficient between CU and MO for different Rock codes

Variograms
Finally, for our spatial description we will use directional variograms and the MSDA
tools for modeling in 3-D space.
We create a series of variograms for different directions by entering the appropriate
parameters in the Variogram panels.
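Each experimental point on these plots is, in essence, half the average squared difference of
all data pairs separated by (roughly) the lag distance. A bare-bones 1-D sketch of that
calculation, leaving out the azimuth/dip and banding tolerances the panels control:

    import numpy as np

    def experimental_variogram(x, z, lag_dists, tol):
        """Classical semivariogram: gamma(h) = mean of (zi - zj)^2 / 2 over pairs near h."""
        sep = np.abs(x[:, None] - x[None, :])                 # pairwise separations
        dif2 = (z[:, None] - z[None, :]) ** 2                 # pairwise squared differences
        upper = np.triu(np.ones(sep.shape, dtype=bool), k=1)  # count each pair once
        gams = []
        for h in lag_dists:
            mask = upper & (np.abs(sep - h) <= tol)
            gams.append(0.5 * dif2[mask].mean() if mask.any() else np.nan)
        return np.array(gams)

    x = np.arange(0.0, 100.0, 5.0)  # hypothetical 1-D sample coordinates
    z = np.sin(x / 20.0) + np.random.default_rng(1).normal(0, 0.1, x.size)
    print(experimental_variogram(x, z, lag_dists=[5, 10, 15, 20], tol=2.5))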


Each variogram can then be modeled with up to three nested structures, either manually
by setting up some of the parameters or by using the Auto-fit tool.

After spending time with the individual variograms and determining the range, sill, and
nugget for each direction, we can try modeling all the directions in 3-D space. For this step,
use the Variogram 3-D Modeling option available under the Tools menu.

From the Standard tab of the Variogram 3-D Manager window, several variograms
and the associated models can be displayed at the same time. We can choose between
displaying all the variograms available in the list or a selection of variograms based on the
filter options available at the bottom of that window.

A Rose diagram can also be displayed showing the different ranges intersecting a
particular plane. This display is obtained from the Rose tab of the Variogram 3-D
Manager window.


We can finally use the Auto-fit tool to determine a 3-D model to fit our data. This option
is available under the 3-D Model menu. When using the Auto-fit function, we can choose
to fix some of the parameters of the final model by toggling on the corresponding
checkboxes in the upper right corner of the Manager window. The parameters which we can
control when running the Auto-fitting process are the number of nested structures, model
type, nugget, sill, rotation angles, and ranges for the main axes of the model.
The contours of the model, as well as the error with respect to the experimental data, can
also be displayed in the Rose tab. This display option is controlled from the Setting | Rose
Chart menu.

To use the parameters obtained from this modeling process in MineSight® 3-D, we
have the option of exporting the 3-D variogram model to an ASCII file using File |
Export Variogram model. This ASCII file can then be imported into MineSight® 3-D for
visualization purposes or it can be used with the interpolation procedures.

GRADE VARIABILITY INDEX FOR THE ESTIMATES

Abdullah Arik
Mintec, Inc., USA
APCOM 2007, Santiago, Chile

ABSTRACT
In order to achieve reliable resource estimates, a good model of a mineral deposit must
be built to represent the deposit as close to reality as possible. Many factors affect the
outcome of a modeling work. The methodology used and the subsequent selection of
data to estimate the blocks are some of these factors. When estimating the grades of
the blocks in the model, the definition of the search strategy becomes one of the most
consequential steps that affect the outcome of the interpolation results. It will then be
useful to quantify in some manner the reliability of the resulting block grade estimates
from the interpolation. To achieve this goal, this paper introduces the concept of a
“Grade Variability Index” as a measure of how well the nearby composites match to the
calculated block value. This index may thus help quantify the effects of varying search
parameters in the linear estimation methods. It can be a useful tool for assessing the
inherent variability in the estimates and to help identify the areas with higher variability
of the data used to estimate the blocks in the deposit.

INTRODUCTION
The need for accurate ore resource estimates has always been important. With the
increasingly large investments required to open and operate mines today, this need becomes
even more vital. Therefore a good and reliable model of a mineral deposit must be built to
represent the deposit as close to reality as possible. Obviously the knowledge of the geology
of the deposit and its mineralization are helpful for building a representative geologic
model. With the knowledge from geostatistical analysis of the drill hole data along with
this geologic model, one can then construct a grade model of the deposit that is fairly
reasonable.
One area that can have a significant effect on the outcome of the resource estimates is the
methodology selected for the grade assignment for the block model. This includes not only
the method used but also its parameters that define what data should be included for each
block. In fact, the definition of the search strategy is one of the most consequential steps
that affect the outcome of the grade interpolation results. Therefore anything that can help
the resource practitioners select the appropriate search strategy will result in their
calculating the resources more accurately.
The objectives of this paper are twofold. The first objective is to propose a parameter
called "Grade Variability Index" for the estimated grades to assess which interpolation
strategy introduces more or less grade mixtures into the estimates. The second objective of
the paper is to demonstrate the application of this parameter to a copper deposit, and its
use to validate the refinement of the kriging interpolation parameters to estimate the
resources.
GRADE VARIABILITY INDEX FOR THE ESTIMATE
In estimating the grade of a block for a mineral deposit, we use a set of samples in and
around the block to be estimated. In the case of linear estimators, such as ordinary kriging,
we have the following form:

Z* = Σ wi zi,   i = 1, ..., n   (1)

where Z* is the estimate of the grade of a block, zi refers to the sample data values, wi is
the corresponding weight assigned to zi, and n is the number of samples. Depending on the
method of estimation and the samples used, our estimate for the blocks may vary. It will
then be useful to quantify in some manner the degree of variability in our estimate.
One suggestion here is to compute the inherent variability of the samples used for the
"ore" blocks, the blocks that are equal to or greater than a specific cutoff. If a block is
considered "ore," then the proportion of the samples used for the block that lie below the
cutoff should give us some idea how much of the estimate comes from "waste" samples and
is thus "diluted." So, when the block grade is equal to or greater than a specific cutoff, we
will determine the sum of weighted differences, or the "variability," between the block
estimate and the sample values below the cutoff (zc). We will call this the Weighted
Differences of Samples from the estimate (WDS):

WDS = Σ wj (Z* - zj),   j = 1, ..., nb   (Z* ≥ zc and zj < zc)   (2)

where nb is the number of data values below zc, zj are the data values below zc, and wj are
the weights corresponding to these data.
We will compute the Grade Variability Index (GVI) for the estimate by dividing the WDS
by the estimate itself (Z*). The GVI is expressed as a percent of the estimate and is
calculated as follows:

GVI = (WDS / √n) / Z* * 100,   (n > 1)   (3)

If all the samples used for the block are "ore" (zi ≥ zc), then there is no "dilution" and
GVI = 0.
The Grade Variability Index (GVI) can be computed at zero cutoff with all the samples
used for the estimate. In areas with similar grade estimates, relatively higher GVI values
indicate the variability of the samples (more grade mixtures) used to calculate the block
values.

AN EXAMPLE OF GRADE VARIABILITY INDEX
A single block of size 5x5m, shown in Figure 1, was interpolated using ordinary kriging in
three different cases. In the first case, a spherical search with a 15m search radius was
used. In the second case, an ellipsoidal search was used. The search distance was 15m in
the major axis direction and 12m in the minor axis direction. The major axis of the
ellipsoid had an azimuth of 150° as shown in the figure. The spherical variogram models
used in these kriging runs had the same ranges as the search distances used in each run.
Therefore, the first model was isotropic and the second model was anisotropic. A 20%
nugget effect was used for both variograms. The number of data used was limited to the
nearest 6 samples for simplicity.

Figure 1. Kriged block showing sample data and the spherical and ellipsoidal search outlines

Table 1 displays the data values used and the corresponding weights obtained from the
kriging of the first case (spherical search and isotropic variogram model). The resulting
block grade estimate from this kriging run is 1.052. The Grade Variability Index (GVI)
computed at a 1.0 cutoff value is 0.803. There are two samples below the cutoff, samples
no. 3 and no. 6. These "waste" samples are marked with an asterisk in the table.

Table 1: Kriging results - First case (spherical search)

No.   Grade (zj)   Statistical Distance   Weight (wj)
1     1.060         5.22                  0.326
2     1.100         5.31                  0.333
3*    0.990         6.56                  0.246
4     1.150         9.58                  0.026
5     1.040        10.82                  0.037
6*    0.880        11.08                  0.032
Block grade (Z*) = 1.052     GVI (at zc = 1) = 0.803

Similarly, Table 2 displays the data values used and the corresponding weights obtained
from the kriging of the second case (ellipsoidal search and anisotropic variogram model).
The resulting block grade from this kriging run is 1.058. The Grade Variability Index (GVI)
computed at a 1.0 cutoff value is 0.766. There are again two samples below the cutoff,
samples no. 3 and no. 6, marked with an asterisk.

Table 2: Kriging results - Second case (ellipsoidal search - 150°)

No.   Grade (zj)   Statistical Distance   Weight (wj)
1     1.100         5.36                  0.356
2     1.060         5.79                  0.310
3*    0.990         7.98                  0.218
4     1.150         9.61                  0.056
5     1.040        12.54                  0.032
6*    0.880        13.55                  0.028
Block grade (Z*) = 1.058     GVI (at zc = 1) = 0.766

Coincidentally, the six samples selected in these two cases were identical. This was done on
purpose to make the comparison simpler and clearer. Thus, the only variable in the
calculation of the Grade Variability Index (GVI) is the set of kriging weights assigned to
the samples. The spherical search case resulted in a GVI value of 0.803 whereas the
ellipsoidal search case produced 0.766. This is an improvement of about 5% in GVI,
obtained by simply trying out a different search and variogram model in kriging. There was
also a slight increase in the estimate, from 1.052 to 1.058.
To emphasize the effect of the search plan on the estimates, a third case was tried. In this
case, all parameters were kept the same as in the second case, except the azimuth used for
the mineralization direction. The major axis of the ellipsoid was oriented at 60°, along with
the variogram rotation angle. In other words, this ellipsoid was perpendicular to the
ellipsoid outline shown in Figure 1. The outline of this ellipsoid is not shown in the figure,
but the sample data values used and the corresponding weights obtained from the kriging
are reported in Table 3.

Table 3: Kriging results - Third case (ellipsoidal search - 60°)

No.   Grade (zj)   Statistical Distance   Weight (wj)
1     1.060         6.02                  0.328
2     1.100         6.61                  0.305
3*    0.990         6.82                  0.268
4*    0.880        11.44                  0.057
5     1.040        11.94                  0.044
6     1.150        11.95                 -0.002
Block grade (Z*) = 1.042     GVI (at zc = 1) = 0.908

Again in the third case, the six samples selected were identical with those in the previous
cases. However, in this case the two "waste" samples had more influence on the grade of
the block because of their weights. The resulting block grade from this kriging run is 1.042.
The Grade Variability Index (GVI) computed at a 1.0 cutoff value is 0.908. The block is
still "ore" but at a slightly lower grade than in the previous cases. This case also resulted
in a much higher GVI, indicating that the estimated block grade was inherently more
variable. This confirms and quantifies the adverse effect of selecting a wrong anisotropy
direction for kriging on the estimated grade.
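To make the arithmetic concrete, the following short Python sketch (written for this
workbook, not taken from the paper) reproduces the Table 1 and Table 2 results from
equations (1) through (3):

    import math

    def gvi(grades, weights, zc):
        """Grade Variability Index per equations (1)-(3)."""
        n = len(grades)
        z_star = sum(w * z for w, z in zip(weights, grades))  # eq. (1)
        if z_star < zc or n <= 1:
            return z_star, None  # GVI is computed for "ore" blocks only
        wds = sum(w * (z_star - z)  # eq. (2): weighted differences below cutoff
                  for w, z in zip(weights, grades) if z < zc)
        return z_star, (wds / math.sqrt(n)) / z_star * 100.0  # eq. (3)

    case1 = ([1.060, 1.100, 0.990, 1.150, 1.040, 0.880],
             [0.326, 0.333, 0.246, 0.026, 0.037, 0.032])
    case2 = ([1.100, 1.060, 0.990, 1.150, 1.040, 0.880],
             [0.356, 0.310, 0.218, 0.056, 0.032, 0.028])

    for grades, weights in (case1, case2):
        print(gvi(grades, weights, zc=1.0))
    # -> roughly (1.052, 0.81) and (1.058, 0.76); the paper's 0.803 and 0.766
    #    differ slightly because the published weights are rounded.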

APPLICATION TO A COPPER DEPOSIT
A study was performed on a major porphyry copper deposit with an area approximately
2700x2700m in plan and 1000m vertically. The block size used for the 3-D model built for
the deposit was 20x20m with a bench height of 17m. The assay information came from over
100,000m of mostly diamond drilling. The nominal drill hole spacing in the deposit was
less than 50m.
The current copper (Cu) variogram model used in the ordinary kriging plan for the
mineralized zone was a spherical model with a nugget of 0.1 and sill of 0.6. Because of
good correlations between Cu and gold (Au) values, the Cu variogram was used in the Au
kriging interpolation also. The ellipsoidal search (225x85x325m) was based on the
variogram ranges in the major, minor, and vertical axes. A minimum of 3 and a maximum
of 10 composites were used for the interpolation of the blocks. The deposit was divided into
five (5) sectors or domains in a clockwise manner. Each domain had the same variogram
model with the exception of the azimuth of the major axis, which was adjusted for the
direction of mineralization. The search ellipsoids used for the kriging of the domains
followed the variogram ranges and rotation.
The same variogram models have been used in the kriging runs for the resource
estimation at this mine for several years. Currently, almost half of the deposit has been
mined out, and there have been new drilling programs since the last study. Part of the
reason for keeping the status quo in the resource estimation procedures was that the
reconciliation of the predicted and mined tonnage and grade of ore has been reasonably
kept below 5%.
A new geostatistical study was carried out to update the variograms used in the kriging
in order to incorporate the data from new drilling. For the existing drill holes, only the
composite data below a certain elevation were included in the study, to maintain the
relevant data for the remaining portion of the deposit. Also, correlograms were calculated
instead of the standard variograms.
The new variogram model determined for use in the ordinary kriging plan for the
mineralized zone was a spherical model with a nugget of 0.21 and sill of 1.0. The updated
ellipsoidal search (225x104x312m) was based on the new variogram ranges. No change was
made in the number of composites used for the interpolation of the blocks. Also, the five
established domains used for the azimuth adjustment for the direction of mineralization
were kept the same.

REVIEW OF THE RESULTS
Table 4 shows the current ore resources at a 0.27% Cu cutoff as Case 1. Both Cu and Au
grades are reported. The Grade Variability Index (GVI) for the estimated grades was
calculated at the 0.27% Cu cutoff.
Similarly, the new ore resources at the same cutoff using the updated variogram and
search parameters are shown in the same table as Case 2. The "ore" blocks from the
current model were used to make the comparison of the current and the new resources
compatible and simpler. That way the tonnage in both the current and the new resources
remained the same.
The results of the comparison are reported as the difference between Case 2 and Case 1.
Finally, the percent improvement of Case 2 over Case 1 is calculated and reported in the
same table.

Table 4: Resource summary from Case 1 and 2, and the results of their comparison

Case               Tonnage (x1000)   Cu(%)   Au(g/t)   Grade Variability Index (GVI)
Current (1)        494.950           0.455   0.476     2.528
New (2)            494.950           0.458   0.479     2.360
Difference (2-1)   -                 0.003   0.003    -0.168
Improvement %      -                 0.7%    0.6%      6.7%

A 6.7% improvement (reduction) in the Grade Variability Index (GVI) value indicates
that the new variogram and search parameters helped reduce the grade mixtures in the
block grade estimates. This was achieved by selecting comparatively more relevant
composites for the blocks and by better validation of the continuity of the mineralization
through the variogram. As a result, there was also a slight increase in the Cu and Au grade
estimates overall.
Finally, the grade-tonnage curves of the Case 1 and Case 2 kriging results were compared
against the polygonal estimate of the blocks. Figure 2 shows these curves. The results
confirm that the Case 2 kriging plan reduced the smoothing of the grades as compared to
Case 1 kriging. This difference in variability was quantified by GVI.

Figure 2: Grade-tonnage curves of Case 1 and Case 2 kriging, and the polygonal estimate

CONCLUSIONS
The Grade Variability Index (GVI) can be a useful tool in assessing the inherent
variability in the "ore" blocks. For example, consider a block which is surrounded by all
"waste" composites except one outlier high grade composite. If this block becomes "ore"
because of this high grade composite within its search volume, it will have a relatively high
GVI. Conversely, a block where most composites around it are "ore" composites will have a
low GVI. Therefore, it is possible to determine the areas in a deposit with high GVI values
and thereby identify the areas where the mixtures of grade in the data used are more
evident.
Another advantage of the Grade Variability Index is that it can be helpful for comparing
different estimation schemes in a deposit, to refine the parameters used, to reduce
"smoothing" of the grades due to sample selection, and to quantify the progress. Oftentimes
the resource estimate practitioners wonder what the effect of changing a certain
interpolation or variogram parameter will be on the results. It is possible that GVI can help
verify whether the change will have a positive effect on the resource estimates, and
quantify the improvement if the interpolation is done properly. In other words, GVI does
provide a relative indication of which strategy introduces more or less grade mixtures into
the estimate.
The Grade Variability Index is easy to implement in any linear estimation algorithm,
including conventional methods such as Inverse Distance Weighting (IDW). It is simple and
versatile. However, it can be a useful tool only if practitioners clearly understand its
applications and limitations. For example, simply increasing the number of composites used
to interpolate the blocks will tend to reduce the GVI values for the estimated blocks in a
deposit. Yet the relative scaling of GVI values does not necessarily signify that the
estimation with more composites is better suited for the deposit. The GVI should also not
be blindly used to give an indication of which estimate is the right one, because higher
grades are not always more appropriate. One has to consider the overall interpolation
scheme and its suitability before thinking about what GVI can offer.

REFERENCES
Adisoma, G.S., Hester, M.G., 1996. “Grade Estimation and
Its Precision in Mineral Resources: The Jacknife Approach,”
Mining Engineering, Vol. 48, No. 2, pp. 84-88.
Arik, A., 1990. “Effects of Search Parameters on Kriged
Reserve Estimates,” International Journal of Mining and
Geological Engineering, Vol. 8, No. 12, pp. 305-318.
Arik, A., 1999. “Uncertainty, Confidence Intervals and
Resource Categorization: A Combined Variance
Approach,” Proceedings, ISGSM Symposium, Perth,
Australia.
Isaaks, E.H., Srivastava, R.M., 1989. Applied Geostatistics,
New York, Oxford University Press.
Journel, A.G., Arik, A., 1988. “Dealing with Outlier High
Grade Data in Precious Metals Deposits,” Proceedings,
Computer Applications in the Mineral Industry,
Balkema, Rotterdam, pp. 161-171.
Conditional Simulation Overview and Applications

Introduction
Simulated deposits are computer models that represent a deposit or a system. These
models are used in place of the real system for some purpose.
The simulation models are built to have the same distribution, dispersion characteristics,
and spatial relationships of the grade values in the deposit. Conditionally simulated
models additionally have the same values at the known sample data locations. They are
conditioned to the original sample values. The difference between models of estimation
and the conditional simulations lies in their objectives.
The Objectives of Simulation
Local and global estimations of recoverable reserves are often insufficient at the planning
stage of a new mine or a new section of an operating mine. For the mining engineer, as well
as the metallurgist and chemist, it is often essential to be able to predict the variations of the
characteristics of the recoverable reserves at various stages in the operation.
For instance, in the processing of low-grade iron ore deposits, keeping final product
within strict quality standards may be a complex task whenever impurities such as
phosphorus are involved. The blending process and the flexibility of the plant will depend
on the dispersion variance of the grades received at all scales (daily, monthly, yearly).
Therefore, a detailed definition of an adequate mining control method is essential. For
a preliminary design, it is admissible to use average values to perform an evaluation.
When it comes to detailed definitions, however, these averages are not sufficient due to
local fluctuations.
If the in-situ reality was known, the required dispersions, and thus the most suitable
working methods, could be determined by applying various simulated processes to this
reality. Unfortunately, the perfect knowledge of this in-situ reality is not available at
the planning stages of the operation. The information available at this stage is usually
incomplete and limited to the grades of a few samples. The estimations deduced from this
information are far too imprecise or smooth for the exact calculations of dispersions that
are required.
Simulation or Estimation
Estimation and simulation are complementary tools. Estimation is appropriate for
assessing mineral reserves, particularly global in-situ reserves. Simulation aims at correctly
representing spatial variability and is more appropriate than estimation for decisions in
which spatial variability is a critical concern. It is equally valuable for risk analysis.
The objective of estimation, such as kriging, is to obtain the “average” values that are
as close as possible to the true but unknown values. The result is a single and smooth
representation of the grade spatial distribution in the deposit.
Conditional simulation, on the other hand, provides the same mean, histogram, and
variogram as the real grades (assuming that the samples are representative of the reality).
Therefore, it identifies the main dispersion characteristics of these true grades.
Conditional simulation provides a range of non-smoothed representations of the grades
in the deposit. These representations are called the realizations; each one is equally likely
to be a possible “deposit” which could have given the original set of samples.
In general, the objectives of simulation and estimation are not compatible. It can be
seen from Figure 1 that, even though the estimation curve is, on average, closer to the
real curve, the simulation curve is a better reproduction of the fluctuations of the real
curve. The estimation curve is preferable to locate and estimate reserves, while the

simulation curve is preferred for studying the dispersion characteristics of these reserves,
remembering that the real curve is known only at the experimental data points.

Figure 1. Real, simulated and estimated profiles (thick solid line=reality, normal solid line=conditional
simulation, dashed line=kriging, o=conditioning data)

Typical Uses of Simulated Deposits


A conditionally simulated deposit represents a known numerical model on a very dense
grid. As the simulation can only reproduce known, modeled structures, the simulation
grid is limited to the dimensions of the smallest modeled structure. Various methods of
sampling, selection, mining, haulage, blending, ore control, and so on, can be applied to
this numerical model to test their efficiency before applying them to the real deposit.
Some of the examples of typical uses of simulated deposits are as follows:
• Application in grade control to determine dig-lines that are most likely to maximize
the profit or minimize the dollar loss.
• Comparative studies of various estimation methods and approaches to mine
planning problems.
• Studies of the sampling level necessary for any given objective.
• Application for generating models of porosity and permeability.
• Application in petroleum reservoir production.
• Studies to determine the probability of exceeding a regulatory limit and application
in development of emission control strategy.
• Studies to quantify the variability of impurities or contaminants in metal or coal
delivered to a customer at different scales and time frames.
• Prediction of recoverable reserves.
In addition to simulating grade, categorical variables, such as lithology, alteration,
or mineralization can be simulated. It must be understood that the results obtained from
simulated deposits will apply to reality only to the extent to which the simulated deposit
reproduces the essential characteristics of the real system. Therefore, the more the real
deposit is known, the better its model will be, and the closer the conditional simulation
will be to reality. As the quality of the conditional simulation improves, not only do the
reproduced structures of the variability become closer to those of reality, but so also will
the qualitative characteristics (geology, alteration, and so on) that can be introduced into
the numerical model. It must be stressed that simulation cannot replace a good sampling
campaign of the real deposit.


Sample Run Using Sequential Gaussian Simulation
The available simulation algorithms in MineSight® are the Sequential Gaussian
Simulation (SGS) and Sequential Indicator Simulation (SIS). They are currently accessed
through a non-standard sub-menu within the MineSight® Compass™ program. However,
technical support is only available to users who have received the training or those who are
knowledgeable on the subject.
Figure 2 shows a snapshot of the MineSight® Compass™ Procedure Manager when
the Conditional Simulation group is selected. This is currently a non-standard
menu that is accessed through csim.mnu or csim2.mnu files that are prepared for
Conditional Simulation. Note that these menu functions can be further customized or
modified if necessary.

Figure 2. Conditional Simulation sub-menu functions


The SGS is done using the normal scores, i.e., the data transformed into a normal
distribution. The normal scores transformation results are usually output to an ASCII file,
but they can be loaded into MineSight® composite files. The current SGS program can
handle the normal score transformation of the original grades and the
"back-transformation" of the results.
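For intuition, the normal scores transform simply maps the ranked data onto standard
normal quantiles. A minimal sketch, ignoring the despiking of tied values mentioned under
Planned Updates below:

    import numpy as np
    from scipy.stats import norm, rankdata

    def normal_scores(values):
        """Rank the data, convert ranks to plotting positions, then to Gaussian quantiles."""
        ranks = rankdata(values)         # 1..n, ties receive averaged ranks
        p = (ranks - 0.5) / len(values)  # cumulative positions in (0, 1)
        return norm.ppf(p)               # standard normal quantiles

    cu = np.array([0.12, 0.45, 0.31, 2.10, 0.58, 0.97])
    print(normal_scores(cu))             # scores with the same ordering as cu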
Before a conditional simulation run, a variogram analysis of the normal scores data is
necessary. The parameters of the normal scores variograms are used for the simulation.
Although the simulation results can be output to an ASCII file, it is often necessary to set up
a 3-D block model to store the results. The block size used for the conditional simulation
model is usually much smaller than the Selective Mining Unit (SMU) size since it should
represent the deposit at point scale. If a 3-D block model is available, the simulation results
can be stored directly. Once the simulated values are in the model file, the standard
MineSight® programs are used to verify, display, plot and summarize the results. The
simulated grade of the blocks can be combined into any size of SMU for further analysis.
Figure 3 shows a plan view of bench 2600 from a sample Conditional Simulation exercise.


Figure 3. Conditional Simulation of 2600 bench on sample project

Planned Updates
The capabilities that are planned to be added to the simulation package are listed below:
• Despiking
• Relative elevations for use in both SGS and SIS
• Direct model access in SIS
• Local means or probabilities for the SIS simple kriging
• SGS with a co-located variable
We encourage your feedback on this subject, which allows Mintec, Inc. to assess any
problems clients may be facing and suggest useful solutions.

References
Arik, A., 2003, “Kriging vs. Simulation: How optimum is our pit design?” 4th CAMI
Symposium Proceedings, Calgary, Alberta, Canada.
Arik, A., 2001, “Performance Analysis of Different Estimation Methods on Conditionally
Simulated Deposits,” SME Transactions, Vol. 310.
Arik, A., 1999, “Uncertainty, Confidence Intervals and Resource Categorization: A
Combined Variance Approach,” ISGSM Symposium, Perth, W. Australia.
Dagdelen, K., Verly, G., and Coskun B., 1997, “Conditional Simulation for Recoverable
Reserve Estimation,” SME Annual Meeting, Preprint 97-201.
David, M., 1977, Geostatistical Ore Reserve Estimation, Elsevier, Amsterdam.
Davis, B.M., 1992, Confidence Interval Estimation for Mineable Reserves, SME Annual
Meeting, Preprint #92-39.
Isaaks, E.H., Srivastava, R.M., 1989, Applied Geostatistics, New York, Oxford University
Press.


Journel, A.G. and Huijbregts, Ch.J., 1978, Mining Geostatistics, Academic Press, London.
Kim, Y.C., Knudsen, H.P. and Baafi, E.Y., 1980, Application of Conditional Simulation to
Emission Control Strategy Development, University of Arizona, Tucson, Arizona.
Lloyd, C.D., McKinley, J.M. and Ruffell, A.H., 2003, "Conditional Simulation of Sandstone
Permeability," Proceedings of IAMG 2003.

