Sources of Geological Modelling Uncertainty Investigated
1. Hard data from drilling, mapping, and underground exposure.
The relative importance of interpretation versus hard and soft data varies, but generally, the more specific and operational the model, the more it relies on hard data. However, even in the best-informed models (e.g. grade control block models), the quantity of hard data is very small: a typical metalliferous grade control pattern samples only between 0.1% and 0.01% of the deposit. Early-stage models may have two or more orders of magnitude less data.
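To see why the sampled fraction is so small, consider the geometry of a drill pattern. The figures below (hole spacing and diameter) are hypothetical, chosen only to illustrate the order of magnitude:

```python
import math

def sampled_fraction(hole_spacing_m, hole_diameter_m):
    """Volume fraction of rock sampled by vertical holes on a square grid.

    Each hole of diameter d on an s x s grid samples an area of
    pi * (d/2)**2 out of every s * s cell, independent of depth.
    """
    hole_area = math.pi * (hole_diameter_m / 2.0) ** 2
    return hole_area / hole_spacing_m ** 2

# A hypothetical 5 m x 5 m blasthole pattern drilled with 115 mm holes:
print(f"{sampled_fraction(5.0, 0.115):.4%}")  # roughly 0.04% of the rock mass
```

Widening the pattern to 10 m spacing drops this by a factor of four, which is how early-stage models end up orders of magnitude less informed.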
Implicit modelling
Implicit modelling describes the distribution of a target variable by a unique mathematical function derived directly from the underlying data and high-level user-specified parameters. This approach may be applied to discrete variables such as lithology, to continuous variables such as geochemical grades, or to binary indicators of continuous variables.
Implicit modelling creates a unique solution from any set of input data and a given set of parameters (either a geometry or a grade interpolant). The choice of parameters is clearly an important consideration in matching the character of the output model to the phenomenon being described. In many situations, the hard data alone will simply be insufficient to directly support creation of a geologically reasonable model.
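Leapfrog's interpolant is proprietary, but the general idea of a single global function fitted to scattered data can be sketched with a radial basis function interpolant. Everything below (the biharmonic kernel, the invented drillhole composites) is an illustrative assumption, not Leapfrog's actual algorithm:

```python
import numpy as np

def rbf_fit(points, values):
    """Solve for the weights of a biharmonic (phi(r) = r) radial basis
    interpolant: one global function honouring every data point."""
    r = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    return np.linalg.solve(r, values)

def rbf_predict(points, weights, query):
    """Evaluate the fitted interpolant at arbitrary locations."""
    r = np.linalg.norm(query[:, None, :] - points[None, :, :], axis=-1)
    return r @ weights

# Hypothetical drillhole composites: (x, y, z) in metres, grades in g/t.
rng = np.random.default_rng(1)
pts = rng.uniform(0.0, 100.0, size=(25, 3))
grades = 1.0 + 0.02 * pts[:, 2] + rng.normal(0.0, 0.1, 25)

weights = rbf_fit(pts, grades)
block_centres = np.array([[50.0, 50.0, 50.0], [10.0, 90.0, 20.0]])
print(rbf_predict(pts, weights, block_centres))
```

The key property the text relies on: for a fixed kernel and parameter set, the same inputs always produce the same function, so the model is a unique, repeatable consequence of the data plus the parameter choices.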
So how do we make the output of a data-driven model look like our interpreted understanding of the geological phenomenon? The solution is to add 'hypothesised data' until the model adequately describes our geological interpretation.
Using Leapfrog® implicit modelling, three different types of hypothesised data may be added to geometric models: structural disks (which identify the location and orientation of a geological contact), polylines (which identify the location and facing direction of a contact at polyline node points), and curved polylines (the same as polylines, but with more points to which orientations can be added). In grade interpolation models, polyline contours may be added at a given grade threshold, effectively acting as assay information.
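The mechanics of the contour case can be sketched as follows. All coordinates and grades are invented, and the explicit `origin` flag is simply one way to mimic the hard/hypothesised separation described below, not a Leapfrog data structure:

```python
import numpy as np

# Hard data: assay composites from drillholes (all values hypothetical).
hard_pts = np.array([[0.0, 0.0, 10.0], [50.0, 0.0, 12.0], [0.0, 50.0, 11.0]])
hard_grades = np.array([2.1, 0.3, 1.8])

# Hypothesised data: nodes digitised along an interpreted 1.0 g/t contour
# polyline, entered at the threshold grade -- effectively extra 'assays'.
contour_pts = np.array([[20.0, 10.0, 11.0], [25.0, 20.0, 11.0], [30.0, 30.0, 11.0]])
contour_grades = np.full(len(contour_pts), 1.0)

# Combine the two sets for interpolation, but keep a flag recording each
# point's origin so the hypothesis can be re-examined as drilling proceeds.
pts = np.vstack([hard_pts, contour_pts])
grades = np.concatenate([hard_grades, contour_grades])
origin = np.array(["hard"] * len(hard_pts) + ["hypothesised"] * len(contour_pts))

print((origin == "hard").sum(), "hard points,",
      (origin == "hypothesised").sum(), "hypothesised points")
```

Because the hypothesised points never lose their flag, they can be filtered out or revised at any time without touching the drillhole database.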
This process of 'making up' data to force a geological interpretation is exactly the same as in more traditional CAD-based sectional approaches. The user 'draws' an interpretation (usually a sectional polygon), which is then triangulated into a wireframe. The polygon mixes hard data points (drillhole contact locations) with interpreted locations. If the data or interpretation changes (e.g. new hard data are added), the drawn inputs need to be modified and the process repeated.
One of the clear advantages of implicit modelling is the separation between hard and hypothesised data. If new drilling information is added, it is immediately incorporated and can be examined to decide whether the hypothesis is robust (i.e. the new data confirm the interpretation). Otherwise, the geological interpretation will need to be changed and/or the modelling parameters and hypothesised data modified.
This incremental modelling approach is well aligned with the scientific method. Geological models (including geometry and grade models) represent hypotheses that summarise and explain the available information. Before a new drillhole is drilled, the model provides a prediction; when the hole is drilled, it directly tests that prediction.
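That predict-then-test loop can be reduced to a one-line check. The grades and the 15% tolerance below are purely illustrative, not a recommended acceptance criterion:

```python
# Does a newly drilled hole confirm the model's prediction at its location?

def hole_confirms_model(predicted, assayed, rel_tol=0.15):
    """True if the new hole's assay falls within rel_tol (relative)
    of the model's pre-drilling prediction."""
    return abs(assayed - predicted) <= rel_tol * abs(predicted)

print(hole_confirms_model(1.20, 1.31))  # close agreement: hypothesis survives
print(hole_confirms_model(1.20, 0.60))  # large miss: revise the interpretation
```

A failed test does not say *how* to fix the model, only that the interpretation, the parameters, or the hypothesised data need revisiting.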
'Fit for purpose' means that the model meets or exceeds the user's needs, but there are often multiple, conflicting needs. For predictive models, 'fit for purpose' generally means that the prediction lies within an acceptable tolerance.
For a numeric model of grades, 'fit for purpose' is a fairly straightforward concept to define and quantify, e.g. that the predicted copper grade of a grade control pattern will be within +/-5% of the reconciled mill grade 90% of the time.
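That criterion can be checked directly against reconciliation history. The monthly figures below are invented for illustration; only the +/-5% / 90% thresholds come from the text:

```python
# Test the 'fit for purpose' criterion: predicted grades within +/-5%
# of the reconciled mill grade at least 90% of the time.

def fit_for_purpose(predicted, reconciled, rel_tol=0.05, required=0.90):
    """True if enough predictions land within rel_tol of reconciliation."""
    hits = sum(abs(p - r) <= rel_tol * r for p, r in zip(predicted, reconciled))
    return hits / len(predicted) >= required

# Hypothetical predicted vs reconciled grades for ten periods (g/t):
pred = [1.02, 0.98, 1.10, 1.00, 0.97, 1.04, 0.99, 1.01, 1.03, 0.96]
mill = [1.00] * 10
print(fit_for_purpose(pred, mill))  # 9 of 10 periods within 5% -> True
```

Framing the requirement this way makes it auditable: the same function, run on next quarter's reconciliation, either passes or fails.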
But for geological models represented by geometric shapes, the purposes may be manifold: from illustrating an exploration concept for planning a drill program, to creating the deposit-scale interpretation underpinning resource estimates, to defining a single domain volume. These clearly lie along a spectrum, from situations where hard data input is low and parametric choices and interpretation dominate, through to situations with abundant hard data and few interpretive possibilities or parametric choices.
Data adequacy — sample spacing, distribution and orientation: the orientation of drilling with respect to key structures, the sample spacing relative to the volume of interest (SMU), and the spacing relative to important geological features and the grade distribution. These may not be known until after the fact and are often decided by comparison with analogue deposits; there is high value in obtaining close-spaced data at early stages.
This web content is a summary of the article "Sources of geological modelling uncertainty investigated. What role does the data play?", originally published in Unearthing 3D implicit modelling.